Dataset schema: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
1707.01702
2726438116
Consider the following variant of the set cover problem. We are given a universe @math and a collection of subsets @math where @math . For every element @math we need to find a set @math such that @math . Once we construct and fix the mapping @math , a subset @math of the universe is revealed, and we need to cover all elements from @math with exactly @math . The goal is to find a mapping such that the cover @math is as cheap as possible. This is an example of a universal problem, where the solution has to be created before the actual instance to deal with is revealed. Such problems appear naturally in settings where we need to optimize under uncertainty and it may be too expensive to start constructing a good solution only once the input begins to be revealed. A rich body of work has been devoted to investigating the approximability of such problems under the regime of worst-case analysis or when the input instance is drawn randomly from some probability distribution. There one typically compares the quality of the produced solution with the optimal offline solution. In this paper we consider a different viewpoint: what if we compared our approximate universal solution against an optimal universal solution that obeys the same rules as we do? We show that under this viewpoint it is possible to achieve improved approximation algorithms for the stochastic version of universal set cover. Our result is based on rounding a proper configuration IP that captures the optimal universal solution, and on tools from submodular optimization. The same basic approach also leads to improved approximation algorithms for other related problems.
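To make the universal mechanics above concrete, here is a minimal Python sketch (the toy instance and all names are ours, not the paper's): a mapping from elements to covering sets is fixed up front, and once a subset is revealed, the induced cover and its cost are fully determined by that mapping.

```python
# Minimal sketch of the universal set cover setup described above.
# The universe, set costs, and mapping are illustrative.

universe = {1, 2, 3, 4}
sets = {                      # collection of subsets with per-set costs
    "A": ({1, 2}, 1.0),
    "B": ({2, 3}, 1.0),
    "C": ({3, 4}, 2.5),
}

# Universal solution: before anything is revealed, every element e is
# mapped to some set f(e) that contains it.
f = {1: "A", 2: "A", 3: "B", 4: "C"}
assert all(e in sets[f[e]][0] for e in universe)

def cover_cost(X):
    """Cost of covering the revealed subset X with exactly the sets f(e), e in X."""
    used = {f[e] for e in X}          # the induced cover
    return sum(sets[s][1] for s in used)

# Once X is revealed, the cover is forced by f; only its cost varies with X.
print(cover_cost({1, 2}))     # -> 1.0  (only set A is used)
print(cover_cost({2, 3, 4}))  # -> 4.5  (A, B and C)
```

The adversarial element is visible even at this scale: a mapping that is cheap for one revealed subset may be expensive for another, which is why the paper benchmarks against the best universal mapping rather than the offline optimum.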
One can of course consider the (offline) stochastic version of optimization problems. For example, @math -stage stochastic set cover is studied in @cite_32 @cite_41 , with an improved approximation factor (independent of @math ) later given in @cite_21 .
{ "cite_N": [ "@cite_41", "@cite_21", "@cite_32" ], "mid": [ "", "2019875157", "2002285507" ], "abstract": [ "", "We present improved approximation algorithms in stochastic optimization. We prove that the multi-stage stochastic versions of covering integer programs (such as set cover and vertex cover) admit essentially the same approximation algorithms as their standard (non-stochastic) counterparts; this improves upon work of Swamy & Shmoys that shows an approximability which depends multiplicatively on the number of stages. We also present approximation algorithms for facility location and some of its variants in the 2-stage recourse model, improving on previous approximation guarantees.", "Combinatorial optimization is often used to \"plan ahead,\" purchasing and allocating resources for demands that are not precisely known at the time of solution. This advance planning may be done because resources become very expensive to purchase or difficult to allocate at the last minute when the demands are known. In this work we study the tradeoffs involved in making some purchase allocation decisions early to reduce cost while deferring others at greater expense to take advantage of additional, late-arriving information. We consider a number of combinatorial optimization problems in which the problem instance is uncertain---modeled by a probability distribution---and in which solution elements can be purchased cheaply now or at greater expense after the distribution is sampled. We show how to approximately optimize the choice of what to purchase in advance and what to defer." ] }
1707.01423
2726289236
Many practical problems are characterized by a preference relation over admissible solutions, where preferred solutions are minimal in some sense. For example, a preferred diagnosis usually comprises a minimal set of reasons that is sufficient to cause the observed anomaly. Alternatively, a minimal correction subset comprises a minimal set of reasons whose deletion is sufficient to eliminate the observed anomaly. Circumscription formalizes such preference relations by associating propositional theories with minimal models. The resulting enumeration problem is addressed here by means of a new algorithm taking advantage of unsatisfiable core analysis. Empirical evidence of the efficiency of the algorithm is given by comparing the performance of the resulting solver, CIRCUMSCRIPTINO, with HCLASP, CAMUS MCS, LBX and MCSLS on the enumeration of minimal models for problems originating from practical applications. This paper is under consideration for acceptance in TPLP.
The modification strategy described above is actually the one implemented by many MaxSAT algorithms based on unsatisfiable core analysis @cite_6 . In Algorithm , the unsatisfiable core analysis is performed according to @cite_23 . This design choice is motivated by the fact that the fresh variables @math can later be assumed true in order to trivially satisfy the cardinality constraint and the implications introduced by the unsatisfiable core analysis, a feature not required for computing a single solution to a given MaxSAT instance. The algorithm can also be adapted to use different unsatisfiable core analysis techniques, in particular those of @cite_17 and @cite_23 .
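As an illustration of the core-guided pattern referenced here, the following is a hedged Fu–Malik-style sketch written with the PySAT library (our choice of toolkit; the function, the tiny instance, and the pairwise at-most-one encoding are illustrative assumptions, not the algorithm of the paper): selector variables are assumed, each unsatisfiable core is relaxed with fresh variables, and a cardinality constraint over the fresh variables is added.

```python
# Hedged sketch of a generic core-guided (Fu-Malik-style) MaxSAT loop.
from pysat.card import CardEnc, EncType
from pysat.solvers import Glucose3

def core_guided_maxsat(hard, soft, n_vars):
    top = n_vars
    sel2clause = {}                      # selector literal -> soft clause
    for c in soft:
        top += 1
        sel2clause[top] = list(c)
    cost = 0
    while True:
        clauses = list(hard)
        for sel, c in sel2clause.items():
            clauses.append(c + [-sel])   # falsifying the selector disables c
        with Glucose3(bootstrap_with=clauses) as s:
            if s.solve(assumptions=list(sel2clause)):
                return cost, s.get_model()
            core = [l for l in s.get_core() if l in sel2clause]
        # Relax every soft clause in the core with a fresh variable, and
        # allow at most one of the fresh variables to be true.
        fresh = []
        for sel in core:
            top += 1
            fresh.append(top)
            sel2clause[sel] = sel2clause[sel] + [top]
        hard = hard + CardEnc.atmost(lits=fresh, bound=1,
                                     encoding=EncType.pairwise).clauses
        cost += 1                        # each core certifies one more violation

# x1 is hard, the single soft clause wants -x1: optimal cost is 1.
print(core_guided_maxsat(hard=[[1]], soft=[[-1]], n_vars=1))
```

The fresh variables play exactly the role the paragraph describes: assuming them true satisfies the relaxation machinery trivially, which matters for enumeration but not for finding a single optimum.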
{ "cite_N": [ "@cite_23", "@cite_6", "@cite_17" ], "mid": [ "2407856297", "2029958553", "" ], "abstract": [ "Core-guided algorithms proved to be effective on industrial instances of MaxSAT, the optimization variant of the satisfiability problem for propositional formulas. These algorithms work by iteratively checking satisfiability of a formula that is relaxed at each step by using the information provided by unsatisfiable cores. The paper introduces a new core-guided algorithm that adds cardinality constraints for each detected core, but also limits the number of literals in each constraint in order to control the number of refutations in subsequent satisfiability checks. The performance gain of the new algorithm is assessed on the industrial instances of the 2014 MaxSAT Evaluation.", "Maximum Satisfiability (MaxSAT) is an optimization version of SAT, and many real world applications can be naturally encoded as such. Solving MaxSAT is an important problem from both a theoretical and a practical point of view. In recent years, there has been considerable interest in developing efficient algorithms and several families of algorithms have been proposed. This paper overviews recent approaches to handle MaxSAT and presents a survey of MaxSAT algorithms based on iteratively calling a SAT solver which are particularly effective to solve problems arising in industrial settings. First, classic algorithms based on iteratively calling a SAT solver and updating a bound are overviewed. Such algorithms are referred to as iterative MaxSAT algorithms. Then, more sophisticated algorithms that additionally take advantage of unsatisfiable cores are described, which are referred to as core-guided MaxSAT algorithms. Core-guided MaxSAT algorithms use the information provided by unsatisfiable cores to relax clauses on demand and to create simpler constraints. Finally, a comprehensive empirical study on non-random benchmarks is conducted, including not only the surveyed algorithms, but also other state-of-the-art MaxSAT solvers. The results indicate that (i) core-guided MaxSAT algorithms in general abort in less instances than classic solvers based on iteratively calling a SAT solver and that (ii) core-guided MaxSAT algorithms are fairly competitive compared to other approaches.", "" ] }
1707.01423
2726289236
The algorithm implemented by @cite_13 is specifically conceived to address minimal correction subset enumeration, which is also considered in our experimental analysis. It adds a cardinality constraint to the input theory in order to compute models of bounded size; the bound is iteratively increased until all minimal correction sets have been computed. It turns out that this algorithm cannot run smoothly on an incremental solver: in fact, some of the learned clauses have to be eliminated whenever the bound of the cardinality constraint changes. Such a drawback also affects the more general algorithm introduced by DBLP:conf/ecai/FaberVCG16: an external solver is used to enumerate cardinality-minimal solutions of the input problem, and blocking clauses are then added to the theory so that the external solver can be invoked again to enumerate cardinality-minimal solutions of the new theory; the process is repeated until the theory becomes unsatisfiable.
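The bound-increasing strategy with blocking can be sketched in a self-contained way; the brute-force version below is our own illustration (real implementations delegate the checks to an incremental SAT solver, which is exactly where the clause-elimination drawback arises): the cardinality bound k grows from 0, each size-k removal set that restores consistency is reported, and supersets of reported sets are blocked.

```python
# Self-contained brute-force sketch of the enumeration strategy described
# above; suitable only for tiny instances.
from itertools import combinations, product

def satisfiable(clauses, n_vars):
    """Naive SAT check: try every assignment."""
    for assign in product([False, True], repeat=n_vars):
        value = lambda lit: assign[abs(lit) - 1] == (lit > 0)
        if all(any(value(l) for l in c) for c in clauses):
            return True
    return False

def enumerate_mcs(clauses, n_vars):
    """Yield all minimal correction subsets, by increasing cardinality."""
    found = []
    for k in range(len(clauses) + 1):
        for idx in combinations(range(len(clauses)), k):
            cand = set(idx)
            # "Blocking clauses": supersets of a known MCS are not minimal.
            if any(mcs <= cand for mcs in found):
                continue
            rest = [c for i, c in enumerate(clauses) if i not in cand]
            if satisfiable(rest, n_vars):
                found.append(cand)
                yield cand

# x1 and -x1 clash: removing either one restores consistency.
clauses = [[1], [-1], [1, 2]]
print(list(enumerate_mcs(clauses, n_vars=2)))   # -> [{0}, {1}]
```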
{ "cite_N": [ "@cite_13" ], "mid": [ "2020169768" ], "abstract": [ "Much research in the area of constraint processing has recently been focused on extracting small unsatisfiable \"cores\" from unsatisfiable constraint systems with the goal of finding minimal unsatisfiable subsets (MUSes). While most techniques have provided ways to find an approximation of an MUS (not necessarily minimal), we have developed a sound and complete algorithm for producing all MUSes of an unsatisfiable constraint system. In this paper, we describe a relationship between satisfiable and unsatisfiable subsets of constraints that we subsequently use as the foundation for MUS extraction algorithms, implemented for Boolean satisfiability constraints. The algorithms provide a framework with which many related subproblems can be solved, including relaxations of completeness to handle intractable instances, and we develop several variations of the basic algorithms to illustrate this. Experimental results demonstrate the performance of our algorithms, showing how the base algorithms run quickly on many instances, while the variations are valuable for producing results on instances whose complete results are intractably large. Furthermore, our algorithms are shown to perform better than the existing algorithms for solving either of the two distinct phases of our approach." ] }
1707.01423
2726289236
Finally, subset minimality is among the preferences natively supported in the language of @cite_0 @cite_25 @cite_22 , a versatile framework built on top of @cite_26 . The algorithm implemented by this framework is also iterative, meaning that better and better models are computed until an inconsistency arises. Differently from other iterative algorithms, however, the improvement on the current model is enforced by means of a preference program, which can also be specified by the user in the case of custom preferences. The framework was not tested in our experiments because its performance is clearly bounded by that of the underlying ASP solver, and therefore by its heuristic algorithm in the setting considered in this paper.
{ "cite_N": [ "@cite_0", "@cite_22", "@cite_26", "@cite_25" ], "mid": [ "2155378065", "2579855366", "2619622334", "2212328841" ], "abstract": [ "In this paper we describe asprin, a general, flexible, and extensible framework for handling preferences among the stable models of a logic program. We show how complex preference relations can be specified through user-defined preference types and their arguments. We describe how preference specifications are handled internally by so-called preference programs, which are used for dominance testing. We also give algorithms for computing one, or all, optimal stable models of a logic program. Notably, our algorithms depend on the complexity of the dominance tests and make use of multi-shot answer set solving technology.", "We introduce a comprehensive framework for computing diverse (or similar) solutions to logic programs with preferences. Our framework provides a wide spectrum of complete and incomplete methods for solving this task. Apart from proposing several new methods, it also accommodates existing ones and generalizes them to programs with preferences. Interestingly, this is accomplished by integrating and automating several basic ASP techniques - being of general interest even beyond diversification. The enabling factor of this lies in the recent advance of multi-shot ASP solving that provides us with fine-grained control over reasoning processes and abolishes the need for solver modifications and wrappers that were indispensable in previous approaches. Our framework is implemented as an extension to the ASP-based preference handling system asprin. We use the resulting system asprin 2 for an empirical evaluation of the diversification methods comprised in our framework.", "We introduce a new flexible paradigm of grounding and solving in Answer Set Programming (ASP), which we refer to as multi-shot ASP solving, and present its implementation in the ASP system clingo. Multi-shot ASP solving features grounding and solving processes that deal with continuously changing logic programs. In doing so, they remain operative and accommodate changes in a seamless way. For instance, such processes allow for advanced forms of search, as in optimization or theory solving, or interaction with an environment, as in robotics or query-answering. Common to them is that the problem specification evolves during the reasoning process, either because data or constraints are added, deleted, or replaced. This evolutionary aspect adds another dimension to ASP since it brings about state changing operations. We address this issue by providing an operational semantics that characterizes grounding and solving processes in multi-shot ASP solving. This characterization provides a semantic account of grounder and solver states along with the operations manipulating them. The operative nature of multi-shot solving avoids redundancies in relaunching grounder and solver programs and benefits from the solver's learning capacities. clingo accomplishes this by complementing ASP's declarative input language with control capacities. On the declarative side, a new directive allows for structuring logic programs into named and parameterizable subprograms. The grounding and integration of these subprograms into the solving process is completely modular and fully controllable from the procedural side. 
To this end, clingo offers a new application programming interface that is conveniently accessible via scripting languages.", "asprin offers a framework for expressing and evaluating combinations of quantitative and qualitative preferences among the stable models of a logic program. In this paper, we demonstrate the generality and flexibility of the methodology by showing how easily existing preference relations can be implemented in asprin. Moreover, we show how the computation of optimal stable models can be improved by using declarative heuristics. We empirically evaluate our contributions and contrast them with dedicated implementations. Finally, we detail key aspects of asprin’s implementation." ] }
1707.01428
2725281132
Computer science is experiencing an AI renaissance, in which machine learning models are expediting important breakthroughs in academic research and commercial applications. Effective training of these models, however, is not trivial, due in part to hyperparameters: user-configured values that parametrize learning models and control their ability to learn from data. Existing hyperparameter optimization methods are highly parallel but make no effort to balance the search across heterogeneous hardware or to prioritize searching high-impact spaces. In this paper, we introduce a framework for massively Scalable Hardware-Aware Distributed Hyperparameter Optimization (SHADHO). Our framework calculates the relative complexity of each search space and monitors performance on the learning task over all trials. These metrics are then used as heuristics to assign hyperparameters to distributed workers based on their hardware. We demonstrate that our framework scales to 1400 heterogeneous cores and that it achieves a 1.6x speedup over a standard distributed hyperparameter optimization framework in the time needed to find an optimal set of hyperparameters.
Hyperparameter optimization methods typically address the problem of how to choose the next hyperparameter value to search. Manual tuning and grid search @cite_45 are popular methods because they are easy to implement; however, they rely upon domain knowledge and skip over many values in continuous domains. Bergstra and Bengio @cite_9 argued for replacing these practices with random search, which is just as easy to implement and does not require discretizing the search space. This scheme searches the hyperparameter spaces more thoroughly, but it has no mechanism for narrowing the scope of the search. Random search is the basis for Hyperopt @cite_46 @cite_2 , a widely-used open-source hyperparameter optimization framework.
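A minimal sketch of the contrast drawn here, with a toy objective standing in for validation loss (all names and values are illustrative): grid search commits to a discretization in advance, while random search with the same trial budget draws directly from the continuous space.

```python
# Toy comparison of grid search vs. random search over a mixed space.
import math
import random

def objective(lr, units):
    """Stand-in for validation loss after training a model."""
    return (math.log10(lr) + 2.5) ** 2 + (units - 300) ** 2 / 1e4

# Grid search discretizes each dimension in advance...
grid = [(lr, u) for lr in (1e-4, 1e-3, 1e-2) for u in (64, 128, 256)]
best_grid = min(grid, key=lambda p: objective(*p))

# ...while random search draws directly from the continuous space.
random.seed(0)
samples = [(10 ** random.uniform(-5, -1), random.randint(32, 512))
           for _ in range(9)]          # same budget of 9 trials
best_rand = min(samples, key=lambda p: objective(*p))
print(best_grid, best_rand)
```

Holding the trial budget fixed is the point of the comparison: random search can hit values between the grid points but, as noted above, nothing steers later draws toward promising regions.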
{ "cite_N": [ "@cite_46", "@cite_9", "@cite_45", "@cite_2" ], "mid": [ "1437335841", "", "2098368939", "2189149359" ], "abstract": [ "Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.", "", "Multiclass SVMs are usually implemented by combining several two-class SVMs. The one-versus-all method using winner-takes-all strategy and the one-versus-one method implemented by max-wins voting are popularly used for this purpose. In this paper we give empirical evidence to show that these methods are inferior to another one-versus-one method: one that uses Platt's posterior probabilities together with the pairwise coupling idea of Hastie and Tibshirani. The evidence is particularly strong when the training dataset is sparse.", "Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. The paper closes with some discussion of ongoing and future work." ] }
1707.01428
2725281132
Guided approaches to hyperparameter optimization are also popular, notably genetic algorithms and Bayesian optimization strategies. Genetic algorithms @cite_5 @cite_35 @cite_28 have historically been applied to hyperparameter optimization; however, they become prohibitively expensive as the number of hyperparameters increases @cite_31 . Sequential Model-Based Bayesian Optimization @cite_29 , Tree-structured Parzen Estimators @cite_31 , Gaussian-process-based estimation @cite_13 , and Sequential Model-based Algorithm Configuration @cite_34 are popular Bayesian optimization strategies, implemented in a number of open-source and proprietary frameworks @cite_2 @cite_30 @cite_33 @cite_7 @cite_13 @cite_39 @cite_23 . These methods use previous hyperparameter values and their corresponding evaluations as priors for approximating viable hyperparameter values, and as such can become stuck in local minima.
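The following is a hedged sketch of the sequential model-based loop these methods share, using a random-forest surrogate in the spirit of SMAC-like tools (the surrogate, the acquisition rule, and the toy objective are our illustrative assumptions, not any cited system): past trials are fit by a model whose predictions guide the next trial, which is also how the search can latch onto a local minimum.

```python
# Hedged sketch of a sequential model-based optimization loop.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def objective(x):                       # stand-in for a validation loss
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))     # a few random warm-up trials
y = objective(X[:, 0])

for _ in range(15):
    surrogate = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    cand = rng.uniform(-2, 2, size=(256, 1))
    # Use the spread of per-tree predictions as a cheap uncertainty proxy.
    per_tree = np.stack([t.predict(cand) for t in surrogate.estimators_])
    mean, std = per_tree.mean(axis=0), per_tree.std(axis=0)
    pick = cand[np.argmin(mean - std)]  # lower-confidence-bound acquisition
    X = np.vstack([X, [pick]])
    y = np.append(y, objective(pick[0]))

print(X[np.argmin(y)], y.min())         # best hyperparameter found
```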
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_33", "@cite_7", "@cite_28", "@cite_29", "@cite_39", "@cite_23", "@cite_2", "@cite_5", "@cite_31", "@cite_34", "@cite_13" ], "mid": [ "191027614", "1986490585", "2400200933", "", "", "1510052597", "", "", "2189149359", "", "", "2143192733", "2950182411" ], "abstract": [ "We present Optunity, a Python library which bundles various strategies to solve hyperparameter tuning problems. The library provides general purpose algorithms, ranging from undirected search methods to adaptive methods based on refinement strategies, heuristics and evolutionary computing. Optunity aspires to become a Swiss army knife to solve tuning problems of any nature. Its design focuses on code clarity, flexibility and ease of use.", "The problem of model selection for support vector machines (SVMs) is considered. We propose an evolutionary approach to determine multiple SVM hyperparameters: The covariance matrix adaptation evolution strategy (CMA-ES) is used to determine the kernel from a parameterized kernel space and to control the regularization. Our method is applicable to optimize non-differentiable kernel functions and arbitrary model selection criteria. We demonstrate on benchmark datasets that the CMA-ES improves the results achieved by grid search already when applied to few hyperparameters. Further, we show that the CMA-ES is able to handle much more kernel parameters compared to grid-search and that tuning of the scaling and the rotation of Gaussian kernels can lead to better results in comparison to standard Gaussian kernels with a single bandwidth parameter. In particular, more flexibility of the kernel can reduce the number of support vectors.", "Bayesian optimization is an elegant solution to the hyperparameter optimization problem in machine learning. Building a reliable and robust Bayesian optimization service requires careful testing methodology and sound statistical analysis. In this talk we will outline our development of an evaluation framework to rigorously test and measure the impact of changes to the SigOpt optimization service. We present an overview of our evaluation system and discuss how this framework empowers our research engineers to confidently and quickly make changes to our core optimization engine", "", "", "In many engineering optimization problems, the number of function evaluations is severely limited by time or cost. These problems pose a special challenge to the field of global optimization, since existing methods often require more function evaluations than can be comfortably afforded. One way to address this challenge is to fit response surfaces to data collected by evaluating the objective and constraint functions at a few points. These surfaces can then be used for visualization, tradeoff analysis, and optimization. In this paper, we introduce the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering. We then show how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule. The key to using response surfaces for global optimization lies in balancing the need to exploit the approximating surface (by sampling where it is minimized) with the need to improve the approximation (by sampling where prediction error may be high). 
Striking this balance requires solving certain auxiliary problems which have previously been considered intractable, but we show how these computational obstacles can be overcome.", "", "", "Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. The paper closes with some discussion of ongoing and future work.", "", "", "We benchmark a sequential model-based optimization procedure, SMAC-BBOB, on the BBOB set of blackbox functions. We demonstrate that with a small budget of 10xD evaluations of D-dimensional functions, SMAC-BBOB in most cases outperforms the state-of-the-art blackbox optimizer CMA-ES. However, CMA-ES benefits more from growing the budget to 100xD, and for larger number of function evaluations SMAC-BBOB also requires increasingly large computational resources for building and using its models.", "Machine learning algorithms frequently require careful tuning of model hyperparameters, regularization terms, and optimization parameters. Unfortunately, this tuning is often a \"black art\" that requires expert experience, unwritten rules of thumb, or sometimes brute-force search. Much more appealing is the idea of developing automatic approaches which can optimize the performance of a given learning algorithm to the task at hand. In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). The tractable posterior distribution induced by the GP leads to efficient use of the information gathered by previous experiments, enabling optimal choices about what parameters to try next. Here we show how the effects of the Gaussian process prior and the associated inference procedure can have a large impact on the success or failure of Bayesian optimization. We show that thoughtful choices can lead to results that exceed expert-level performance in tuning machine learning algorithms. We also describe new algorithms that take into account the variable cost (duration) of learning experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization on a diverse set of contemporary algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks." ] }
1707.01428
2725281132
Tuning a neural network architecture for a particular learning problem presents a different set of challenges from standard hyperparameter tuning, including selecting neural network layers and the connections between them. The optimal neural network architecture is typically determined by iteratively building up the network and observing performance on the dataset. Historically, this iterative procedure has been carried out by trial and error @cite_0 @cite_41 , in which one parameter of one layer is manually varied at a time. Several automated methods, including construction and pruning methods @cite_18 , particle swarm optimization @cite_47 , and genetic algorithms @cite_5 , select new layers based on observed performance. Zoph et al. @cite_16 @cite_11 developed a method that chooses optimal neural network models using reinforcement learning; however, they report using several hundred GPUs to conduct this search in both published case studies. Neural networks are also being applied on a smaller scale for architecture selection @cite_27 @cite_43 , utilizing a network trained to generate candidate architectures for evaluation.
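As a toy rendering of the "build up and observe" procedure described above (not a reconstruction of any cited method), the sketch below greedily deepens a scikit-learn MLP as long as validation accuracy improves; the candidate widths, synthetic data, and stopping rule are all illustrative assumptions.

```python
# Hedged sketch of greedy constructive architecture search.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def val_score(layers):
    """Train one candidate architecture and report validation accuracy."""
    clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=500, random_state=0)
    return clf.fit(X_tr, y_tr).score(X_val, y_val)

layers = (8,)
best = val_score(layers)
while True:
    # Try appending one more layer at a few candidate widths.
    trials = {layers + (w,): val_score(layers + (w,)) for w in (8, 32, 128)}
    cand, score = max(trials.items(), key=lambda kv: kv[1])
    if score <= best:          # stop once extra depth no longer helps
        break
    layers, best = cand, score

print(layers, best)
```

Each loop iteration retrains several candidate networks, which is precisely why the reinforcement-learning approaches cited above consumed hundreds of GPUs at scale.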
{ "cite_N": [ "@cite_18", "@cite_41", "@cite_0", "@cite_43", "@cite_27", "@cite_5", "@cite_47", "@cite_16", "@cite_11" ], "mid": [ "2495425901", "", "2115755783", "2464772092", "2748513770", "", "", "2553303224", "2964081807" ], "abstract": [ "State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.", "", "Neural networks currently play a major role in the modeling, control and optimization of polymerization processes and in polymer resin development. This paper is a brief tutorial on simple and practical procedures that can help in selecting and training neural networks and addresses complex cases where the application of neural networks has been successful in the field of polymerization.", "We present new algorithms for adaptively learning artificial neural networks. Our algorithms (AdaNet) adaptively learn both the structure of the network and its weights. They are based on a solid theoretical analysis, including data-dependent generalization guarantees that we prove and discuss in detail. We report the results of large-scale experiments with one of our algorithms on several binary classification tasks extracted from the CIFAR-10 dataset. The results demonstrate that our algorithm can automatically learn network structures with very competitive performance accuracies when compared with those achieved for neural networks found by standard approaches.", "Designing architectures for deep neural networks requires expert knowledge and substantial computation time. We propose a technique to accelerate architecture selection by learning an auxiliary HyperNet that generates the weights of a main model conditioned on that model's architecture. By comparing the relative validation performance of networks with HyperNet-generated weights, we can effectively search over a wide range of architectures at the cost of a single training run. To facilitate this search, we develop a flexible mechanism based on memory read-writes that allows us to define a wide range of network connectivity patterns, with ResNet, DenseNet, and FractalNet blocks as special cases. We validate our method (SMASH) on CIFAR-10 and CIFAR-100, STL-10, ModelNet10, and Imagenet32x32, achieving competitive performance with similarly-sized hand-designed networks. 
Our code is available at this https URL", "", "", "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.", "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset." ] }
1707.01428
2725281132
Existing heuristic scheduling solutions are not well suited to scheduling a model search or hyperparameter optimization process because they assume that the task space is modeled as a dependency graph. Hyperparameter optimization is better modeled as a Bag-of-Tasks (BoT) application: mapping an effectively infinite set of independent tasks onto a finite set of resources. Heuristic scheduling for BoT applications typically involves monitoring resource utilization @cite_12 @cite_36 to minimize the cost of running the tasks, or scaling hardware to the set of tasks in the case of elastic cloud services @cite_37 . This work, by contrast, schedules based on properties of the tasks (i.e., model training sessions) themselves to match the available hardware, focusing on larger models with less certainty in their performance.
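To illustrate scheduling on task properties rather than resource utilization, here is a deliberately simple sketch of our own (it is not SHADHO's actual heuristic): tasks are ordered by an estimated complexity such as parameter count and dealt onto workers ordered by relative speed, so the heaviest training sessions land on the fastest hardware.

```python
# Illustrative property-based assignment of training tasks to hardware.
from itertools import cycle

tasks = [  # (task id, estimated complexity, e.g. parameter count)
    ("small-mlp", 1e5), ("resnet", 2.5e7), ("wide-cnn", 8e6), ("tiny-svm", 1e3),
]
workers = [("gpu-0", 100.0), ("gpu-1", 100.0), ("cpu-0", 4.0)]  # relative speed

def assign(tasks, workers):
    """Greedy: descending complexity onto a cycle of descending-speed workers."""
    ordered = sorted(tasks, key=lambda t: -t[1])
    lanes = cycle(sorted(workers, key=lambda w: -w[1]))
    return {t[0]: next(lanes)[0] for t in ordered}

print(assign(tasks, workers))
# -> resnet/gpu-0, wide-cnn/gpu-1, small-mlp/cpu-0, tiny-svm/gpu-0
```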
{ "cite_N": [ "@cite_36", "@cite_37", "@cite_12" ], "mid": [ "2752790924", "2119733057", "2660889552" ], "abstract": [ "Bag-of-Tasks (BoT) applications consisting of multiple tasks widely exist in numerous fields. As customers use cloud resources in a pay-as-you-go way, they are willing to execute BoT applications on clouds. When the private cloud has insufficient available resources to afford all tasks, the cloud provider has to outsource some tasks to public clouds with resource-used costs. The key challenge here is how to schedule tasks on hybrid clouds to minimize makespan given a limited budget. We study and formulate this problem as an Integer Programming problem. Accordingly, we propose an effective heuristic (EH) including two phases (task sequencing and task scheduling). EH uses a Longest Task First method (LTF) to generate a task sequence. A Task Assignment method (TA) is established to schedule all tasks in the obtained sequence one by one. Experimental results demonstrate that the proposed EH outperforms the baseline (RoundRobin) significantly.", "The use of utility on-demand computing infrastructures, such as Amazon's Elastic Clouds [1], is a viable solution to speed lengthy parallel computing problems to those without access to other cluster or grid infrastructures. With a suitable middleware, bag-of-tasks problems could be easily deployed over a pool of virtual computers created on such infrastructures. In bag-of-tasks problems, as there is no communication between tasks, the number of concurrent tasks is allowed to vary over time. In a utility computing infrastructure, if too many virtual computers are created, the speedups are high but may not be cost effective; if too few computers are created, the cost is low but speedups fall below expectations. Without previous knowledge of the processing time of each task, it is difficult to determine how many machines should be created. In this paper, we present an heuristic to optimize the number of machines that should be allocated to process tasks so that for a given budget the speedups are maximal. We have simulated the proposed heuristics against real and theoretical workloads and evaluated the ratios between number of allocated hosts, charged times, speedups and processing times. With the proposed heuristics, it is possible to obtain speedups in line with the number of allocated computers, while being charged approximately the same predefined budget.", "Bag-of-Tasks (BoT) applications consisting of multiple tasks widely exist in numerous fields. As customers use cloud resources in a pay-as-you-go way, they are willing to execute BoT applications on clouds. Cloud providers and customers establish contracts in which applications' due dates are specified. If an application cannot be finished before the due date, the cloud provider should pay a tardiness penalty. When the private cloud has insufficient available resources to afford all customer-submitted BoT applications, the cloud provider has to outsource some tasks to public clouds with resource-used costs. The key challenge here is how to schedule tasks on hybrid clouds to minimize the total cost, including all applications' tardiness penalties and the cost of using public clouds' resources. We study and formulate this problem as an Integer Programming. Accordingly, we propose an effective greedy heuristic (GH) including two phases (task ordering and task scheduling). 
GH uses an Earlier Latest Start Time First method (ELSTF) for task ordering with the result that a task sequence is obtained. A Task Dispatching method (TD) is established for the task scheduling, in which each task in the obtained task sequence is scheduled one by one. Experimental results demonstrate that the proposed GH outperforms the baseline (RoundRobin) remarkably. ELSTF and TD are also verified to be effective." ] }
1707.01176
2731386152
Portmanteaus are a word formation phenomenon where two words are combined to form a new word. We propose character-level neural sequence-to-sequence (S2S) methods for the task of portmanteau generation that are end-to-end-trainable, language independent, and do not explicitly use additional phonetic information. We propose a noisy-channel-style model, which allows for the incorporation of unsupervised word lists, improving performance over a standard source-to-target model. This model is made possible by an exhaustive candidate generation strategy specifically enabled by the features of the portmanteau task. Experiments find our approach superior to a state-of-the-art FST-based baseline with respect to ground truth accuracy and human evaluation.
generate new words to describe a product given its category and properties. However, their method is limited to hand-crafted rules, as compared to our data-driven approach. Also, their focus is on brand names. have proposed an approach to recommend brand names based on brand product descriptions. However, they consider only a limited number of features, like memorability and readability. devise an approach to generate portmanteaus, which requires user-defined weights for attributes like . Generating a portmanteau from two root words can be viewed as a S2S problem. Recently, neural approaches have been used for S2S problems @cite_2 such as MT. and have shown that character-level neural sequence models work as well as word-level ones for language modelling and MT. propose S2S models for multi-source MT, which have multi-sequence inputs, similar to our case.
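A toy sketch of the exhaustive-generation-plus-noisy-channel idea from the abstract (the scoring functions below are crude stand-ins for the paper's neural models, and the corpus string is a placeholder word list): candidates are every prefix of one root glued to every suffix of the other, scored by a "language model" term plus a "channel" term that rewards faithfulness to both roots.

```python
# Hedged sketch of exhaustive portmanteau candidate generation with a
# noisy-channel-style score p(candidate) + p(roots | candidate).
import math

def candidates(w1, w2):
    """Every prefix of w1 glued to every non-empty suffix of w2."""
    return {w1[:i] + w2[j:] for i in range(1, len(w1) + 1)
                            for j in range(len(w2))}

def lm_score(word, corpus="smog brunch motel spork chortle"):
    """Toy character-bigram 'language model' score."""
    bigrams = [corpus[k:k + 2] for k in range(len(corpus) - 1)]
    return sum(math.log1p(bigrams.count(word[k:k + 2]))
               for k in range(len(word) - 1))

def channel_score(cand, w1, w2):
    """Toy channel term: reward overlap with both roots, penalize length."""
    return len(set(cand) & set(w1)) + len(set(cand) & set(w2)) - len(cand)

w1, w2 = "breakfast", "lunch"
best = max(candidates(w1, w2),
           key=lambda c: lm_score(c) + channel_score(c, w1, w2))
print(best)   # one plausible blend of "breakfast" and "lunch"
```

The search space is small enough (|w1| x |w2| candidates) that exhaustive scoring is feasible, which is the structural property the paper exploits for its noisy-channel model.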
{ "cite_N": [ "@cite_2" ], "mid": [ "2550147980" ], "abstract": [ "We formulate sequence to sequence transduction as a noisy channel decoding problem and use recurrent neural networks to parameterise the source and channel models. Unlike direct models which can suffer from explaining-away effects during training, noisy channel models must produce outputs that explain their inputs, and their component models can be trained with not only paired training samples but also unpaired samples from the marginal output distribution. Using a latent variable to control how much of the conditioning sequence the channel model needs to read in order to generate a subsequent symbol, we obtain a tractable and effective beam search decoder. Experimental results on abstractive sentence summarisation, morphological inflection, and machine translation show that noisy channel models outperform direct models, and that they significantly benefit from increased amounts of unpaired output data that direct models cannot easily use." ] }
1707.01170
2726628052
We present a novel approach enabling interactive visualization of volumetric Locally Refined B-splines (LR-splines). To this end we propose a highly efficient algorithm for direct visualization of scalar and vector fields given by an LR-spline. For the case of scalar fields a volume rendering approach is designed, along with methods for the necessary adaptive sampling distance, on-the-fly trimming with a surface geometry given by a STereoLithography (STL)-file, and local volume illumination based on on-the-fly evaluation of the derivative of the underlying LR-spline function. For vector fields we design a two-stage algorithm consisting of the computation of stream lines with adaptive step size and their rendering with tubes. In both cases, the common basic ingredient to achieve interactive frame rates is an acceleration structure based on a k-d forest together with suitable data structures. The algorithms are designed to fully utilize modern graphics processing unit (GPU) capabilities. Important applications where LR-spline volumes emerge are given for instance by approximation of large-scale simulation and sensor data, and Isogeometric Analysis (IGA). For the first case -- approximation of large three dimensional point clouds with an associated field -- we provide an extension of the multilevel B-spline approximation (MBA) algorithm to the case of volumetric LR-splines. We showcase interactive rendering achieved by our approach on different representative use cases, stemming from simulations of wind flow around a telescope, Magnetic Resonance (MR) imaging of a human brain, and simulations of a fluidized bed used for mixing and coating particles in industrial processes.
The visualization of a scalar field is commonly done by modeling the scalar field as a participating medium, where a modifiable transfer function specifies how field values are mapped to emitted color and transparency. In the simplest case, where the field consists of discrete samples over a regular grid, an abundance of results is available; see, e.g., Levoy @cite_15 for an early example or Engel et al. @cite_0 for an overview. Octrees allow a voxel grid to have different resolutions across the domain, which reduces the amount of data needed for a given scene. This is used, e.g., in GigaVoxels @cite_14 , which offers real-time rendering of several billion voxels. GigaVoxels is only effective when the locations of significant regions of empty space are known a priori.
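For reference, the participating-medium model over a regular grid can be sketched in a few lines of numpy (the grid contents, transfer function, and step size are illustrative assumptions): each ray accumulates emitted color weighted by opacity using standard front-to-back compositing with early ray termination.

```python
# Minimal sketch of emission-absorption volume rendering on a regular grid.
import numpy as np

rng = np.random.default_rng(0)
field = rng.random((32, 32, 32))        # discrete samples on a regular grid

def transfer(v):
    """Map a field value to (rgb emission, opacity)."""
    color = np.array([v, 0.2, 1.0 - v])
    alpha = 0.05 * v                     # denser where the field is larger
    return color, alpha

def render_ray(origin, direction, step=0.5, n_steps=64):
    rgb, a = np.zeros(3), 0.0
    p = np.array(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(n_steps):
        i, j, k = np.clip(p.astype(int), 0, 31)   # nearest-neighbour sample
        c, alpha = transfer(field[i, j, k])
        rgb += (1.0 - a) * alpha * c              # front-to-back compositing
        a += (1.0 - a) * alpha
        if a > 0.99:                              # early ray termination
            break
        p += step * d
    return rgb, a

print(render_ray(origin=(0, 16, 16), direction=(1, 0, 0)))
```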
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_14" ], "mid": [ "", "2119231080", "2166103081" ], "abstract": [ "", "The application of volume-rendering techniques to the display of surfaces from sampled scalar functions of three spatial dimensions is discussed. It is not necessary to fit geometric primitives to the sampled data; images are formed by directly shading each sample and projecting it onto the picture plane. Surface-shading calculations are performed at every voxel with local gradient vectors serving as surface normals. In a separate step, surface classification operators are applied to compute a partial opacity of every voxel. Operators that detect isovalue contour surfaces and region boundary surfaces are examined. The technique is simple and fast, yet displays surfaces exhibiting smooth silhouettes and few other aliasing artifacts. The use of selective blurring and supersampling to further improve image quality is described. Examples from molecular graphics and medical imaging are given. >", "We propose a new approach to efficiently render large volumetric data sets. The system achieves interactive to real-time rendering performance for several billion voxels. Our solution is based on an adaptive data representation depending on the current view and occlusion information, coupled to an efficient ray-casting rendering algorithm. One key element of our method is to guide data production and streaming directly based on information extracted during rendering. Our data structure exploits the fact that in CG scenes, details are often concentrated on the interface between free space and clusters of density and shows that volumetric models might become a valuable alternative as a rendering primitive for real-time applications. In this spirit, we allow a quality performance trade-off and exploit temporal coherence. We also introduce a mipmapping-like process that allows for an increased display rate and better quality through high quality filtering. To further enrich the data set, we create additional details through a variety of procedural methods. We demonstrate our approach in several scenarios, like the exploration of a 3D scan (81923 resolution), of hypertextured meshes (163843 virtual resolution), or of a fractal (theoretically infinite resolution). All examples are rendered on current generation hardware at 20--90 fps and respect the limited GPU memory budget." ] }
1707.01068
2730328371
Social dilemmas are situations where individuals face a temptation to increase their payoffs at a cost to total welfare. Building artificially intelligent agents that achieve good outcomes in these situations is important because many real-world interactions include a tension between selfish interests and the welfare of others. We show how to modify modern reinforcement learning methods to construct agents that act in ways that are simple to understand, nice (begin by cooperating), provokable (try to avoid being exploited), and forgiving (try to return to mutual cooperation). We show both theoretically and experimentally that such agents can maintain cooperation in Markov social dilemmas. Our construction does not require training methods beyond a modification of self-play; thus, if an environment is such that good strategies can be constructed in the zero-sum case (e.g., Atari), then we can construct agents that solve social dilemmas in this environment.
There is a recent surge of interest in using deep RL to construct agents that can obtain high payoffs in multi-agent environments. Much of this literature focuses either on zero-sum environments or on coordination games without an incentive to defect, and uses self-play to construct agents that can achieve good outcomes. One example closer to our problem is @cite_13 , which applies deep RL to bargaining, a task that can be thought of as an imperfect-information general-sum game. We show that applying this self-play approach naively does not produce agents that can solve social dilemmas (see the Appendix for more discussion).
{ "cite_N": [ "@cite_13" ], "mid": [ "2625113742" ], "abstract": [ "Much of human dialogue occurs in semi-cooperative settings, where agents with different goals attempt to agree on common decisions. Negotiations require complex communication and reasoning skills, but success is easy to measure, making this an interesting task for AI. We gather a large dataset of human-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other's reward functions must reach an agreement (or a deal) via natural language dialogue. For the first time, we show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states. We also introduce dialogue rollouts, in which the model plans ahead by simulating possible complete continuations of the conversation, and find that this technique dramatically improves performance. Our code and dataset are publicly available (this https URL)." ] }
1707.01068
2730328371
Finally, Crandall et al. @cite_10 study how to construct machines for social dilemma games. This work is the closest to ours in the recent literature, but differs in that it focuses specifically on cooperation with a human partner in simple games, using cheap talk (English) communication drawn from an existing set of messages. Given the importance of communication in human interactions, incorporating explicit signaling into amTFT-like strategies is an important and interesting direction for future work.
{ "cite_N": [ "@cite_10" ], "mid": [ "2604175534" ], "abstract": [ "Since Alan Turing envisioned artificial intelligence, technical progress has often been measured by the ability to defeat humans in zero-sum encounters (e.g., Chess, Poker, or Go). Less attention has been given to scenarios in which human–machine cooperation is beneficial but non-trivial, such as scenarios in which human and machine preferences are neither fully aligned nor fully in conflict. Cooperation does not require sheer computational power, but instead is facilitated by intuition, cultural norms, emotions, signals, and pre-evolved dispositions. Here, we develop an algorithm that combines a state-of-the-art reinforcement-learning algorithm with mechanisms for signaling. We show that this algorithm can cooperate with people and other algorithms at levels that rival human cooperation in a variety of two-player repeated stochastic games. These results indicate that general human–machine cooperation is achievable using a non-trivial, but ultimately simple, set of algorithmic mechanisms." ] }
1707.01184
2731450672
Sentiment analysis is the Natural Language Processing (NLP) task dealing with the detection and classification of sentiments in texts. While some tasks deal with identifying the presence of sentiment in the text (Subjectivity analysis), other tasks aim at determining the polarity of the text, categorizing it as positive, negative or neutral. Whenever there is a presence of sentiment in the text, it has a source (people, group of people or any entity) and the sentiment is directed towards some entity, object, event or person. Sentiment analysis tasks aim to determine the subject, the target and the polarity or valence of the sentiment. In our work, we try to automatically extract sentiment (positive or negative) from Facebook posts using a machine learning approach. While some work has been done on code-mixed social media data and on sentiment analysis separately, our work is the first attempt (as of now) which aims at performing sentiment analysis of code-mixed social media text. We have used extensive pre-processing to remove noise from raw text. A Multilayer Perceptron model has been used to determine the polarity of the sentiment. We have also developed the corpus for this task by manually labeling Facebook posts with their associated sentiments.
Research regarding emotion and mood analysis in text is becoming more common, in part due to the availability of new sources of subjective information on the web. The work of @cite_11 was one of the very first in the area of sentiment classification. They focused on the actual taxonomy and isolation of terms with an emotional connotation.
{ "cite_N": [ "@cite_11" ], "mid": [ "2146459173" ], "abstract": [ "A set of approximately 500 words taken from the literature on emotion was examined. The overall goal was to develop a comprehensive taxonomy of the affective lexicon, with special attention being devoted to the isolation of terms that refer to emotions. Within the taxonomy we propose, the best examples of emotion terms appear to be those that (a) refer to internal, mental conditions as opposed to physical or external ones, (b) are clear cases of stares, and (c) have affect as opposed to behavior or cognition as a predominant (rather than incidental) referential focus. Relaxing one or another of these constraints yields poorer examples or nonexamples of emotions; however, this gradedness is not taken as evidence that emotions necessarily defy classical definition." ] }
1707.01184
2731450672
Sentiment analysis is the Natural Language Processing (NLP) task dealing with the detection and classification of sentiments in texts. While some tasks deal with identifying the presence of sentiment in the text (Subjectivity analysis), other tasks aim at determining the polarity of the text, categorizing it as positive, negative or neutral. Whenever there is a presence of sentiment in the text, it has a source (people, group of people or any entity) and the sentiment is directed towards some entity, object, event or person. Sentiment analysis tasks aim to determine the subject, the target and the polarity or valence of the sentiment. In our work, we try to automatically extract sentiment (positive or negative) from Facebook posts using a machine learning approach. While some work has been done on code-mixed social media data and on sentiment analysis separately, our work is the first attempt (as of now) which aims at performing sentiment analysis of code-mixed social media text. We have used extensive pre-processing to remove noise from raw text. A Multilayer Perceptron model has been used to determine the polarity of the sentiment. We have also developed the corpus for this task by manually labeling Facebook posts with their associated sentiments.
Identifying the semantic polarity (positive vs. negative connotation) of words has been done using different approaches. Some knowledge-based works explicitly attempted to find features indicating that subjective language is being used. @cite_1 made use of corpus statistics, @cite_26 used linguistic tools such as WordNet @cite_24 , and @cite_7 used a lexicon-based classifier. The work of @cite_6 on classifying reviews was based on an unsupervised learning technique: they computed the mutual information between document phrases and words like “excellent” and “poor”, using statistics gathered by a search engine. In their work on automatic classification of sentiment in online domains, @cite_15 evaluated the performance of different classifiers on movie reviews, demonstrating that standard machine learning techniques outperform human-produced baselines.
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_1", "@cite_6", "@cite_24", "@cite_15" ], "mid": [ "1565863475", "2075718943", "", "", "1951269370", "2166706824" ], "abstract": [ "Subjectivity tagging is distinguishing sentences used to present opinions and evaluations from sentences used to objectively present factual information. There are numerous applications for which subjectivity tagging is relevant, including information extraction and information retrieval. This paper identifies strong clues of subjectivity using the results of a method for clustering words according to distributional similarity (Lin 1998), seeded by a small amount of detailed manual annotation. These features are then further refined with the addition of lexical semantic features of adjectives, specifically polarity and gradability (Hatzivassiloglou & McKeown 1997), which can be automatically learned from corpora. In 10-fold cross validation experiments, features based on both similarity clusters and the lexical semantic features are shown to have higher precision than features based on each alone.", "This paper presents a novel way for assessing the affective qualities of natural language and a scenario for its use. Previous approaches to textual affect sensing have employed keyword spotting, lexical affinity, statistical methods, and hand-crafted models. This paper demonstrates a new approach, using large-scale real-world knowledge about the inherent affective nature of everyday situations (such as \"getting into a car accident\") to classify sentences into \"basic\" emotion categories. This commonsense approach has new robustness implications.Open Mind Commonsense was used as a real world corpus of 400,000 facts about the everyday world. Four linguistic models are combined for robustness as a society of commonsense-based affect recognition. These models cooperate and compete to classify the affect of text. Such a system that analyzes affective qualities sentence by sentence is of practical value when people want to evaluate the text they are writing. As such, the system is tested in an email writing application. The results suggest that the approach is robust enough to enable plausible affective text user interfaces.", "", "", "Current WordNet-based measures of distance or similarity focus almost exclusively on WordNet’s taxonomic relations. This effectively restricts their applicability to the syntactic categories of noun and verb. We investigate a graph-theoretic model of WordNet’s most important relation—synonymy—and propose measures that determine the semantic orientation of adjectives for three factors of subjective meaning. Evaluation against human judgments shows the effectiveness of the resulting measures.", "We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging." ] }
1707.01184
2731450672
Sentiment analysis is the Natural Language Processing (NLP) task dealing with the detection and classification of sentiments in texts. While some tasks deal with identifying the presence of sentiment in the text (Subjectivity analysis), other tasks aim at determining the polarity of the text, categorizing it as positive, negative or neutral. Whenever there is a presence of sentiment in the text, it has a source (people, group of people or any entity) and the sentiment is directed towards some entity, object, event or person. Sentiment analysis tasks aim to determine the subject, the target and the polarity or valence of the sentiment. In our work, we try to automatically extract sentiment (positive or negative) from Facebook posts using a machine learning approach. While some work has been done on code-mixed social media data and on sentiment analysis separately, our work is the first attempt (as of now) which aims at performing sentiment analysis of code-mixed social media text. We have used extensive pre-processing to remove noise from raw text. A Multilayer Perceptron model has been used to determine the polarity of the sentiment. We have also developed the corpus for this task by manually labeling Facebook posts with their associated sentiments.
Typically, methods for sentiment analysis produce lists of words with polarity values assigned to each of them. This approach has been successfully employed for applications such as product review analysis and opinion mining @cite_18 @cite_20 @cite_4 @cite_15 @cite_21 @cite_19 @cite_8 . @cite_27 reported high accuracy in classifying emotions in online chat conversations by using phonemes extracted from a voice reconstruction of the conversations. @cite_14 investigated discriminating terms for emotion detection in short text, while @cite_3 described a system for identifying affect in short fiction stories using the statistical association between words in the text and a set of keywords. In another work, @cite_28 used distant supervision to build their corpus.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_8", "@cite_28", "@cite_21", "@cite_3", "@cite_19", "@cite_27", "@cite_15", "@cite_20" ], "mid": [ "1489003673", "1500704448", "40549020", "38739846", "2144378002", "2148506018", "180236776", "2168625136", "1231377927", "2166706824", "2115023510" ], "abstract": [ "The internet has made it feasible to tap a continuous stream of public sentiment from the world wide web, quite literally permitting one to \"feel the pulse\" of any issue under consideration. We present a methodology for real time sentiment extraction in the domain of finance. With the advent of the web, there has been a sharp increase in the influence of individuals on the stock market via web-based trading and the posting of sentiment to stock message boards. While it is importantto capture this \"sentiment\" of small investors, as yet, no index of sentiment has been compiled. This paper comprises (a) a technology for extracting small investor sentiment from web sources to create an index, and (b) illustrative applications of the methodology. We make use of computerized natural language and statistical algorithms for the automated classification of messages posted on the web. We design a suite of classification algorithms, each of different theoretical content, with a view to characterizing the sentiment of any single posting to a message board. The use of multiple methods allows imposition of voting rules in the classification process. It also enables elimination of \"fuzzy\" messages which are better off uninterpreted. A majority rule across algorithms vastly improves classification accuracy, but also leads to a natural increase in the number of messages classified as \"fuzzy\". The classifier achieves an accuracy of 62 (versus a random classification accuracy of 33 ), and compares favorably against human agreement on message classification, which was 72 . The technology is computationally efficient, allowing the access and interpretations of thousands of messages within minutes. Our illustrative applications show evidence of a strong link between market movements and sentiment. Based on approximately 25,000 messages for the last quarter of 2000, we found evidence that sentiment is based on stock movements.", "We present an empirically verified model of discernable emotions, Watson and Tellegen’s Circumplex Theory of Affect from social and personality psychology, and suggest its usefulness in NLP as a potential model for an automation of an eight-fold categorization of emotions in written English texts. We developed a data collection tool based on the model, collected 287 responses from 110 non-expert informants based on 50 emotional excerpts (min=12, max=348, average=86 words), and analyzed the inter-coder agreement per category and per strength of ratings per subcategory. The respondents achieved an average 70.7 agreement in the most commonly identified emotion categories per text. The categories of high positive affect and pleasantness were most common in our data. Within those categories, the affective terms “enthusiastic”, “active”, “excited”, “pleased”, and “satisfied” had the most consistent ratings of strength of presence in the texts. The textual clues the respondents chose had comparable length and similar key words. 
Watson and Tellegen’s model appears to be usable as a guide for development of an NLP algorithm for automated identification of emotion in English texts, and the non-expert informants (with college degree and higher) provided sufficient information for future creation of a gold standard of clues per category.", "Microblogging today has become a very popular communication tool among Internet users. Millions of users share opinions on different aspects of life everyday. Therefore microblogging web-sites are rich sources of data for opinion mining and sentiment analysis. Because microblogging has appeared relatively recently, there are a few research works that were devoted to this topic. In our paper, we focus on using Twitter, the most popular microblogging platform, for the task of sentiment analysis. We show how to automatically collect a corpus for sentiment analysis and opinion mining purposes. We perform linguistic analysis of the collected corpus and explain discovered phenomena. Using the corpus, we build a sentiment classifier, that is able to determine positive, negative and neutral sentiments for a document. Experimental evaluations show that our proposed techniques are efficient and performs better than previously proposed methods. In our research, we worked with English, however, the proposed technique can be used with any other language.", "Opinion mining (OM) is a recent subdiscipline at the crossroads of information retrieval and computational linguistics which is concerned not with the topic a document is about, but with the opinion it expresses. OM has a rich set of applications, ranging from tracking users’ opinions about products or about political candidates as expressed in online forums, to customer relationship management. In order to aid the extraction of opinions from text, recent research has tried to automatically determine the “PN-polarity” of subjective terms, i.e. identify whether a term that is a marker of opinionated content has a positive or a negative connotation. Research on determining whether a term is indeed a marker of opinionated content (a subjective term) or not (an objective term) has been, instead, much more scarce. In this work we describe SENTIWORDNET, a lexical resource in which each WORDNET synset s is associated to three numerical scoresObj(s), Pos(s) and Neg(s), describing how objective, positive, and negative the terms contained in the synset are. The method used to develop SENTIWORDNET is based on the quantitative analysis of the glosses associated to synsets, and on the use of the resulting vectorial term representations for semi-supervised synset classification. The three scores are derived by combining the results produced by a committee of eight ternary classifiers, all characterized by similar accuracy levels but different classification behaviour. SENTIWORDNET is freely available for research purposes, and is endowed with a Web-based graphical user interface.", "Sentiment Classification seeks to identify a piece of text according to its author's general feeling toward their subject, be it positive or negative. Traditional machine learning techniques have been applied to this problem with reasonable success, but they have been shown to work well only when there is a good match between the training and test data with respect to topic. 
This paper demonstrates that match with respect to domain and time is also important, and presents preliminary experiments with training data labeled with emoticons, which has the potential of being independent of domain, topic and time.", "This paper illustrates a sentiment analysis approach to extract sentiments associated with polarities of positive or negative for specific subjects from a document, instead of classifying the whole document into positive or negative. The essential issues in sentiment analysis are to identify how sentiments are expressed in texts and whether the expressions indicate positive (favorable) or negative (unfavorable) opinions toward the subject. In order to improve the accuracy of the sentiment analysis, it is important to properly identify the semantic relationships between the sentiment expressions and the subject. By applying semantic analysis with a syntactic parser and sentiment lexicon, our prototype system achieved high precision (75-95%, depending on the data) in finding sentiments within Web pages and news articles.", "A door closer including a housing for attachment to the door frame, the housing carrying a rotatably mounted operating arm for attachment to the door, the housing containing spring means which is stressed axially in response to axial movement of a spring seating within the housing, the spring seating being coupled to the pivot for the operating arm through a linkage system.", "The evaluative character of a word is called its semantic orientation. Positive semantic orientation indicates praise (e.g., "honest", "intrepid") and negative semantic orientation indicates criticism (e.g., "disturbing", "superfluous"). Semantic orientation varies in both direction (positive or negative) and degree (mild to strong). An automated system for measuring semantic orientation would have application in text classification, text filtering, tracking opinions in online discussions, analysis of survey responses, and automated chat systems (chatbots). This article introduces a method for inferring the semantic orientation of a word from its statistical association with a set of positive and negative paradigm words. Two instances of this approach are evaluated, based on two different statistical measures of word association: pointwise mutual information (PMI) and latent semantic analysis (LSA). The method is experimentally tested with 3,596 words (including adjectives, adverbs, nouns, and verbs) that have been manually labeled positive (1,614 words) and negative (1,982 words). The method attains an accuracy of 82.8% on the full test set, but the accuracy rises above 95% when the algorithm is allowed to abstain from classifying mild words.", "", "We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging.", "The web contains a wealth of product reviews, but sifting through them is a daunting task.
Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful." ] }
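The entry above notes that sentiment methods typically produce lists of words with polarity values. A minimal lexicon-based scorer in that spirit is sketched below; the tiny lexicon, the one-token negation window, and the neutral threshold are all illustrative assumptions rather than any cited system's actual settings.

```python
# A toy polarity lexicon; real resources (e.g., SentiWordNet, cited
# above) assign graded scores to many thousands of entries.
LEXICON = {
    "good": 1.0, "great": 2.0, "excellent": 2.0, "delight": 1.5,
    "bad": -1.0, "poor": -1.5, "terrible": -2.0, "noisy": -0.5,
}
NEGATORS = {"not", "no", "never"}

def polarity(text, neutral_band=0.5):
    """Sum lexicon scores over the tokens, flipping the sign of a word
    that directly follows a negator (a deliberately crude window)."""
    tokens = text.lower().split()
    score = 0.0
    for i, tok in enumerate(tokens):
        if tok in LEXICON:
            sign = -1.0 if i > 0 and tokens[i - 1] in NEGATORS else 1.0
            score += sign * LEXICON[tok]
    if abs(score) < neutral_band:
        return "neutral", score
    return ("positive" if score > 0 else "negative"), score

print(polarity("the acting was excellent but the script was not good"))
print(polarity("a poor and terrible experience"))
```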
1707.01184
2731450672
Sentiment analysis is the Natural Language Processing (NLP) task dealing with the detection and classification of sentiments in texts. While some tasks deal with identifying the presence of sentiment in the text (Subjectivity analysis), other tasks aim at determining the polarity of the text, categorizing it as positive, negative or neutral. Whenever there is a presence of sentiment in the text, it has a source (people, group of people or any entity) and the sentiment is directed towards some entity, object, event or person. Sentiment analysis tasks aim to determine the subject, the target and the polarity or valence of the sentiment. In our work, we try to automatically extract sentiment (positive or negative) from Facebook posts using a machine learning approach. While some work has been done on code-mixed social media data and on sentiment analysis separately, our work is the first attempt (as of now) which aims at performing sentiment analysis of code-mixed social media text. We have used extensive pre-processing to remove noise from raw text. A Multilayer Perceptron model has been used to determine the polarity of the sentiment. We have also developed the corpus for this task by manually labeling Facebook posts with their associated sentiments.
Sentiment analysis of social media text has received a lot of interest from the research community in recent years with the rise to prominence of Facebook and Twitter. @cite_22 used context-dependent sentiment words in their work, and @cite_12 suggested combining learning-based and lexicon-based techniques using a centroid classifier. @cite_10 used positive and negative emoticons to classify tweet polarity. They showed that machine learning algorithms (Naive Bayes, Maximum Entropy, and SVM) have accuracy above 80%.
{ "cite_N": [ "@cite_10", "@cite_22", "@cite_12" ], "mid": [ "2021097538", "1964613733", "1999030760" ], "abstract": [ "Television broadcasters are beginning to combine social micro-blogging systems such as Twitter with television to create social video experiences around events. We looked at one such event, the first U.S. presidential debate in 2008, in conjunction with aggregated ratings of message sentiment from Twitter. We begin to develop an analytical methodology and visual representations that could help a journalist or public affairs person better understand the temporal dynamics of sentiment in reaction to the debate video. We demonstrate visuals and metrics that can be used to detect sentiment pulse, anomalies in that pulse, and indications of controversial topics that can be used to inform the design of visual analytic systems for social media events.", "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly", "In this work, we propose a novel scheme for sentiment classification (without labeled examples) which combines the strengths of both \"learn-based\" and \"lexicon-based\" approaches as follows: we first use a lexicon-based technique to label a portion of informative examples from given task (or domain); then learn a new supervised classifier based on these labeled ones; finally apply this classifier to the task. The experimental results indicate that proposed scheme could dramatically outperform \"learn-based\" and \"lexicon-based\" techniques." ] }
1707.01182
2335338794
We consider the problem of performing predecessor searches in a bounded universe while achieving query times that depend on the distribution of queries. We obtain several data structures with various properties: in particular, we give data structures that achieve expected query times logarithmic in the entropy of the distribution of queries but with space bounded in terms of universe size, as well as data structures that use only linear space but with query times that are higher (but still sublinear) functions of the entropy. For these structures, the distribution is assumed to be known. We also consider individual query times on universe elements with general weights, as well as the case when the distribution is not known in advance.
It is a classical result that predecessor searches in bounded universes can be performed in time @math . This was first achieved by van Emde Boas trees @cite_4 , and later by @math -fast tries @cite_12 and by Mehlhorn and Näher @cite_11 . Of these, van Emde Boas trees use @math space, while the other two structures use @math space.
{ "cite_N": [ "@cite_4", "@cite_12", "@cite_11" ], "mid": [ "2152506638", "2090021115", "2089439745" ], "abstract": [ "", "Abstract Let S denote a set of N records whose keys are distinct nonnegative integers less than some initially specified bound M. This paper introduces a new data structure, called the y- fast trie , which uses Θ ( N ) space and Θ (log log M) time for range queries on a random access machine. We will also define a simpler but less efficient structure, called the x- fast trie .", "Abstract In this paper we show how to implement bounded ordered dictionaries, also called bounded priority queues, in O(log log N ) time per operation and O( n ) space. Here n denotes the number of elements stored in the dictionary and N denotes the size of the universe. Previously, this time bound required O( N ) space." ] }
1707.01182
2335338794
We consider the problem of performing predecessor searches in a bounded universe while achieving query times that depend on the distribution of queries. We obtain several data structures with various properties: in particular, we give data structures that achieve expected query times logarithmic in the entropy of the distribution of queries but with space bounded in terms of universe size, as well as data structures that use only linear space but with query times that are higher (but still sublinear) functions of the entropy. For these structures, the distribution is assumed to be known. We also consider individual query times on universe elements with general weights, as well as the case when the distribution is not known in advance.
Beame and Fich obtained matching upper and lower bounds for this problem using @math space @cite_0 . By paying an additional @math factor in the first half of this bound, the space can be improved to @math @cite_0 . Pătraşcu and Thorup later effectively settled this line of research with a set of time-space trade-offs @cite_8 .
{ "cite_N": [ "@cite_0", "@cite_8" ], "mid": [ "1986633843", "2095681182" ], "abstract": [ "We obtain matching upper and lower bounds for the amount of time to find the predecessor of a given element among the elements of a fixed compactly stored set. Our algorithms are for the unit-cost word RAM with multiplication and are extended to give dynamic algorithms. The lower bounds are proved for a large class of problems, including both static and dynamic predecessor problems, in a much stronger communication game model, but they apply to the cell probe and RAM models.", "We develop a new technique for proving cell-probe lower bounds for static data structures. Previous lower bounds used a reduction to communication games, which was known not to be tight by counting arguments. We give the first lower bound for an explicit problem which breaks this communication complexity barrier. In addition, our bounds give the first separation between polynomial and near linear space. Such a separation is inherently impossible by communication complexity.Using our lower bound technique and new upper bound constructions, we obtain tight bounds for searching predecessors among a static set of integers. Given a set Y of n integers of l bits each, the goal is to efficiently find PREDECESSOR (x) = max (y ∈ Y | y ≤ x). For this purpose, we represent Y on a RAM with word length b using S ≥ nl bits of space. Defining a = lg S n, we show that the optimal search time is, up to constant factors: min(logbn, lgl-lg n n, lg(l a) lg(a lg n * lg l a), lg (l a) lg (lg (l a) lg (lg n a)).In external memory (b > l), it follows that the optimal strategy is to use either standard B-trees, or a RAM algorithm ignoring the larger block size. In the important case of b = l = γ lg n, for γ > 1 (i.e. polynomial universes), and near linear space (such as S = n • lgO(1) n), the optimal search time is Θ(lg l). Thus, our lower bound implies the surprising conclusion that van Emde Boas' classic data structure from [FOCS'75] is optimal in this case. Note that for space n1+e, a running time of O(lg l lg lg l) was given by Beame and Fich [STOC'99]." ] }
1707.01182
2335338794
We consider the problem of performing predecessor searches in a bounded universe while achieving query times that depend on the distribution of queries. We obtain several data structures with various properties: in particular, we give data structures that achieve expected query times logarithmic in the entropy of the distribution of queries but with space bounded in terms of universe size, as well as data structures that use only linear space but with query times that are higher (but still sublinear) functions of the entropy. For these structures, the distribution is assumed to be known. We also consider individual query times on universe elements with general weights, as well as the case when the distribution is not known in advance.
Departing from the bounded universe model for a moment and considering only biased search, perhaps the earliest such data structure is the optimum binary search tree @cite_16 , which is constructed to be the best possible static binary search tree for a given distribution. Optimum binary search trees take a large amount of time to construct; in linear time, however, it is possible to construct a binary search tree that answers queries in time that is within a constant factor of optimal @cite_6 . Even if the distribution is not known in advance, it is still possible to achieve the latter result (e.g., @cite_2 @cite_5 ).
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_6", "@cite_2" ], "mid": [ "2130055503", "2068373264", "2092885083", "2154937603" ], "abstract": [ "The splay tree, a self-adjusting form of binary search tree, is developed and analyzed. The binary search tree is a data structure for representing tables and lists so that accessing, inserting, and deleting items is easy. On an n -node splay tree, all the standard search tree operations have an amortized time bound of O (log n ) per operation, where by “amortized time” is meant the time per operation averaged over a worst-case sequence of operations. Thus splay trees are as efficient as balanced trees when total running time is the measure of interest. In addition, for sufficiently long access sequences, splay trees are as efficient, to within a constant factor, as static optimum search trees. The efficiency of splay trees comes not from an explicit structural constraint, as with balanced trees, but from applying a simple restructuring heuristic, called splaying , whenever the tree is accessed. Extensions of splaying give simplified forms of two other data structures: lexicographic or multidimensional search trees and link cut trees.", "", "We discuss two simple strategies for constructing binary search trees: \"Place the most frequently occurring name at the root of the tree, then proceed similary on the subtrees \"and\" choose the root so as to equalize the total weight of the left and right subtrees as much as possible, then proceed similarly on the subtres.\" While the former rule may yield extremely inefficient search trees, the latter rule always produces nearly optimal trees.", "We present a dynamic comparison-based search structure that supports insertions, deletions, and searches within the unified bound. The unified bound specifies that it is quick to access an element that is near a recently accessed element. More precisely, if w(y) distinct elements have been accessed since the last access to element y, and d(x,y) denotes the rank distance between x and y among the current set of elements, then the amortized cost to access element x is O(minylog[w(y)+d(x,y)+2]). This property generalizes the working-set and dynamic-finger properties of splay trees." ] }
1707.01182
2335338794
We consider the problem of performing predecessor searches in a bounded universe while achieving query times that depend on the distribution of queries. We obtain several data structures with various properties: in particular, we give data structures that achieve expected query times logarithmic in the entropy of the distribution of queries but with space bounded in terms of universe size, as well as data structures that use only linear space but with query times that are higher (but still sublinear) functions of the entropy. For these structures, the distribution is assumed to be known. We also consider individual query times on universe elements with general weights, as well as the case when the distribution is not known in advance.
Performing biased searches in a bounded universe is essentially unexplored, except for the case where the elements of @math are drawn from @math rather than the queries @cite_9 . In that result, @math need not be known, but must satisfy certain smoothness constraints, and a data structure is given that supports @math query time with high probability and @math worst-case query time, using @math bits of space, which can be reduced to @math space at the cost of a @math query time (with high probability). It is worth noting that this data structure is also dynamic.
{ "cite_N": [ "@cite_9" ], "mid": [ "1760345700" ], "abstract": [ "We solve the dynamic Predecessor Problem with high probability (whp) in constant time, using only @math bits of memory, for any constant @math . The input keys are random wrt a wider class of the well studied and practically important class of @math -smooth distributions introduced in and:mat . It achieves O(1) whp amortized time. Its worst-case time is @math . Also, we prove whp @math time using only @math bits. Finally, we show whp @math time using O(n) space." ] }
1707.01182
2335338794
We consider the problem of performing predecessor searches in a bounded universe while achieving query times that depend on the distribution of queries. We obtain several data structures with various properties: in particular, we give data structures that achieve expected query times logarithmic in the entropy of the distribution of queries but with space bounded in terms of universe size, as well as data structures that use only linear space but with query times that are higher (but still sublinear) functions of the entropy. For these structures, the distribution is assumed to be known. We also consider individual query times on universe elements with general weights, as well as the case when the distribution is not known in advance.
A related notion is to try to support query times related to the distribution in a less direct way. For example, finger searches can be supported in time @math , where @math is the number of keys stored between a finger pointing at a stored key and the query key @cite_3 . There is also a data structure that supports such searches in expected time @math for a wide class of input distributions @cite_15 . Finally, a query time of @math , where @math is the difference between the element queried and the element returned, can also be obtained @cite_13 .
{ "cite_N": [ "@cite_15", "@cite_13", "@cite_3" ], "mid": [ "2114644203", "2177407650", "2149710566" ], "abstract": [ "We present a new finger search tree with O(loglogd) expected search time in the Random Access Machine (RAM) model of computation for a large class of input distributions. The parameter d represents the number of elements (distance) between the search element and an element pointed to by a finger, in a finger search tree that stores n elements. Our data structure improves upon a previous result by Andersson and Mattsson that exhibits expected O(loglogn) search time by incorporating the distance d into the search time complexity, and thus removing the dependence on n. We are also able to show that the search time is O(loglogd+ϕ(n)) with high probability, where ϕ(n) is any slowly growing function of n. For the need of the analysis we model the updates by a “balls and bins” combinatorial game that is interesting in its own right as it involves insertions and deletions of balls according to an unknown distribution.", "Given a bounded universe 0 , 1 , ? , U - 1 , we show how to perform predecessor searches in O ( log log Δ ) expected time, where Δ is the difference between the element being searched for and its predecessor in the structure, while supporting updates in O ( log log Δ ) expected amortized time, as well. This unifies the results of traditional bounded universe structures (which support predecessor searches in O ( log log U ) time) and hashing (which supports membership queries in O ( 1 ) time). We also show how these results can be applied to approximate nearest neighbour queries and range searching.", "We introduce exponential search trees as a novel technique for converting static polynomial space search structures for ordered sets into fully-dynamic linear space data structures. This leads to an optimal bound of O(slog n log log n) for searching and updating a dynamic set X of n integer keys in linear space. Searching X for an integer y means finding the maximum key in X which is smaller than or equal to y. This problem is equivalent to the standard text book problem of maintaining an ordered set. The best previous deterministic linear space bound was O(log n log log n) due to Fredman and Willard from STOC 1990. No better deterministic search bound was known using polynomial space. We also get the following worst-case linear space trade-offs between the number n, the word length W, and the maximal key U Our results are generalized to finger searching and string searching, providing optimal results for both in terms of n." ] }
1707.01182
2335338794
We consider the problem of performing predecessor searches in a bounded universe while achieving query times that depend on the distribution of queries. We obtain several data structures with various properties: in particular, we give data structures that achieve expected query times logarithmic in the entropy of the distribution of queries but with space bounded in terms of universe size, as well as data structures that use only linear space but with query times that are higher (but still sublinear) functions of the entropy. For these structures, the distribution is assumed to be known. We also consider individual query times on universe elements with general weights, as well as the case when the distribution is not known in advance.
Other problems in bounded universes can also be solved in similar ways. A priority queue that supports insertion and deletion in time @math , where @math is the difference between the successor and predecessor (in terms of priority) of the query, is known @cite_7 , as well as a data structure for the temporal precedence problem, wherein the older of two query elements must be determined, that supports query time @math , where @math is the temporal distance between the given elements @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_7" ], "mid": [ "2136838521", "1977421134" ], "abstract": [ "In this paper we refer to the Temporal Precedence Problem on Pure Pointer Machines. This problem asks for the design of a data structure, maintaining a set of stored elements and supporting the following two operations: insert and precedes. The operation insert(a) introduces a new element a in the structure, while the operation precedes(a, b) returns true iff element a was inserted before element b temporally. In (11) a solution was provided to the problem with worst-case time complexity O(log log n) per operation and O(n log log n) space, where n is the number of elements inserted. It was also demonstrated that the precedes operation has a lower bound of �( log log n) for the Pure Pointer Machine model of computation. In this paper we present two simple solutions with linear space and worst-case constant insertion time. In addition, we describe two algorithms that can handle the precedes(a, b) operation in O(log log d) time, where d is the temporal distance between the elements a and b.", "Many computer algorithms have embedded in them a subalgorithm called a priority queue which produces on demand an element of extreme priority among elements in the queue. Queues on unrestricted priority domains have a running time of Θ(nlogn) for sequences ofn queue operations. We describe a simple priority queue over the priority domain 1,⋯,N in which initialization, insertion, and deletion takeO(loglogD) time, whereD is the difference between the next lowest and next highest priority elements in the queue. In the case of initialization,D=Θ(N). Finding a least element, greatest element, and the neighbor in priority order of some specified element take constant time. We also consider dynamic space allocation for the data structures used. Space can be allocated in blocks of size Θ(N 1 p ), for small integerp." ] }
1707.01217
2770645414
Domain adaptation aims at generalizing a high-performance learner on a target domain by utilizing the knowledge distilled from a source domain which has a different but related data distribution. One solution to domain adaptation is to learn domain invariant feature representations while the learned representations should also be discriminative in prediction. To learn such representations, domain adaptation frameworks usually include a domain invariant representation learning approach to measure and reduce the domain discrepancy, as well as a discriminator for classification. Inspired by Wasserstein GAN, in this paper we propose a novel approach to learn domain invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL). WDGRL utilizes a neural network, denoted by the domain critic, to estimate the empirical Wasserstein distance between the source and target samples and optimizes the feature extractor network to minimize the estimated Wasserstein distance in an adversarial manner. The theoretical advantages of the Wasserstein distance for domain adaptation lie in its gradient property and promising generalization bound. Empirical studies on common sentiment and image classification adaptation datasets demonstrate that our proposed WDGRL outperforms the state-of-the-art domain invariant representation learning approaches.
Domain adaptation is a popular subject in transfer learning @cite_1 . It concerns covariate shift between two data distributions, usually labeled source data and unlabeled target data. Solutions to domain adaptation problems can be mainly categorized into three types: i) instance-based methods, which reweight or subsample the source samples to match the distribution of the target domain, so that training on the reweighted source samples yields classifiers with transferability @cite_26 @cite_18 @cite_8 ; ii) parameter-based methods, which transfer knowledge through shared or regularized parameters of source and target domain learners, or by combining multiple reweighted source learners to form an improved target learner @cite_9 @cite_3 ; and iii) the last, but most popular and effective, feature-based methods, which can be further categorized into two groups @cite_13 . Asymmetric feature-based methods transform the features of one domain to more closely match another domain @cite_5 @cite_24 @cite_20 , while symmetric feature-based methods map different domains to a common latent space where the feature distributions are close.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_8", "@cite_9", "@cite_1", "@cite_3", "@cite_24", "@cite_5", "@cite_13", "@cite_20" ], "mid": [ "2118045473", "2112483442", "2008635359", "2122084318", "2165698076", "2312004824", "1869602175", "2009668020", "2395579298", "" ], "abstract": [ "Domain adaptation algorithms seek to generalize a model trained in a source domain to a new target domain. In many practical cases, the source and target distributions can differ substantially, and in some cases crucial target features may not have support in the source domain. In this paper we introduce an algorithm that bridges the gap between source and target domains by slowly adding to the training set both the target features and instances in which the current algorithm is the most confident. Our algorithm is a variant of co-training [7], and we name it CODA (Co-training for domain adaptation). Unlike the original co-training work, we do not assume a particular feature split. Instead, for each iteration of co-training, we formulate a single optimization problem which simultaneously learns a target predictor, a split of the feature space into views, and a subset of source and target features to include in the predictor. CODA significantly out-performs the state-of-the-art on the 12-domain benchmark data set of [4]. Indeed, over a wide range (65 of 84 comparisons) of target supervision CODA achieves the best performance.", "We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice.", "Automatic facial action unit (AFA) detection from video is a long-standing problem in facial expression analysis. Most approaches emphasize choices of features and classifiers. They neglect individual differences in target persons. People vary markedly in facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) and behavior. Individual differences can dramatically influence how well generic classifiers generalize to previously unseen persons. While a possible solution would be to train person-specific classifiers, that often is neither feasible nor theoretically compelling. The alternative that we propose is to personalize a generic classifier in an unsupervised manner (no additional labels for the test subjects are required). We introduce a transductive learning method, which we refer to Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific biases. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. To evaluate the effectiveness of STM, we compared STM to generic classifiers and to cross-domain learning methods in three major databases: CK+, GEMEP-FERA and RU-FACS. STM outperformed generic classifiers in all.", "Recent work has demonstrated the effectiveness of domain adaptation methods for computer vision applications. 
In this work, we propose a new multiple source domain adaptation method called Domain Selection Machine (DSM) for event recognition in consumer videos by leveraging a large number of loosely labeled web images from different sources (e.g., Flickr.com and Photosig.com), in which there are no labeled consumer videos. Specifically, we first train a set of SVM classifiers (referred to as source classifiers) by using the SIFT features of web images from different source domains. We propose a new parametric target decision function to effectively integrate the static SIFT features from web images/video keyframes and the spacetime (ST) features from consumer videos. In order to select the most relevant source domains, we further introduce a new data-dependent regularizer into the objective of Support Vector Regression (SVR) using the ∊-insensitive loss, which enforces the target classifier shares similar decision values on the unlabeled consumer videos with the selected source classifiers. Moreover, we develop an alternating optimization algorithm to iteratively solve the target decision function and a domain selection vector which indicates the most relevant source domains. Extensive experiments on three real-world datasets demonstrate the effectiveness of our proposed method DSM over the state-of-the-art by a performance gain up to 46.41%.
In contrast to other approaches, the weights in corresponding layers are related but not shared . We demonstrate that this both yields higher accuracy than state-of-the-art methods on several object recognition and detection tasks and consistently outperforms networks with shared weights in both supervised and unsupervised settings.", "We introduce a novel Gaussian process based Bayesian model for asymmetric transfer learning. We adopt a two-layer feed-forward deep Gaussian process as the task learner of source and target domains. The first layer projects the data onto a separate non-linear manifold for each task. We perform knowledge transfer by projecting the target data also onto the source domain and linearly combining its representations on the source and target domain manifolds. Our approach achieves the state-of-the-art in a benchmark real-world image categorization task, and improves on it in cross-tissue tumor detection from histopathology tissue slide images.", "-1We address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce a unified flexible model for both supervised and semi-supervised learning that allows us to learn transformations between domains. Additionally, we present two instantiations of the model, one for general feature adaptation alignment, and one specifically designed for classification. First, we show how to extend metric learning methods for domain adaptation, allowing for learning metrics independent of the domain shift and the final classifier used. Furthermore, we go beyond classical metric learning by extending the method to asymmetric, category independent transformations. Our framework can adapt features even when the target domain does not have any labeled examples for some categories, and when the target and source features have different dimensions. Finally, we develop a joint learning framework for adaptive classifiers, which outperforms competing methods in terms of multi-class accuracy and scalability. We demonstrate the ability of our approach to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types, and codebooks. The experiments show its strong performance compared to previous approaches and its applicability to large-scale scenarios.", "Machine learning and data mining techniques have been used in numerous real-world applications. An assumption of traditional machine learning methodologies is the training data and testing data are taken from the same domain, such that the input feature space and data distribution characteristics are the same. However, in some real-world machine learning scenarios, this assumption does not hold. There are cases where training data is expensive or difficult to collect. Therefore, there is a need to create high-performance learners trained with more easily obtained data from different domains. This methodology is referred to as transfer learning. This survey paper formally defines transfer learning, presents information on current solutions, and reviews applications applied to transfer learning. Lastly, there is information listed on software downloads for various transfer learning solutions and a discussion of possible future research work. The transfer learning solutions surveyed are independent of data size and can be applied to big data environments.", "" ] }
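The related-work entry above describes symmetric feature-based methods that map both domains into a common latent space where the feature distributions are close. One standard measure of how close those distributions are is the maximum mean discrepancy (MMD), discussed further in the next entry; below is a minimal numpy sketch of the biased empirical MMD estimate with an RBF kernel. The Gaussian toy data, sample sizes, and bandwidth gamma are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """k(x, y) = exp(-gamma * ||x - y||^2), computed for all pairs."""
    sq = (np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X and Y:
    E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)], i.e. the squared
    distance between the kernel mean embeddings in the RKHS."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(500, 2))
target_near = rng.normal(0.2, 1.0, size=(500, 2))   # small covariate shift
target_far = rng.normal(3.0, 1.0, size=(500, 2))    # large covariate shift
# The shifted-far target yields a much larger discrepancy estimate.
print(mmd2(source, target_near), mmd2(source, target_far))
```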
1707.01217
2770645414
Domain adaptation aims at generalizing a high-performance learner on a target domain via utilizing the knowledge distilled from a source domain which has a different but related data distribution. One solution to domain adaptation is to learn domain invariant feature representations while the learned representations should also be discriminative in prediction. To learn such representations, domain adaptation frameworks usually include a domain invariant representation learning approach to measure and reduce the domain discrepancy, as well as a discriminator for classification. Inspired by Wasserstein GAN, in this paper we propose a novel approach to learn domain invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL). WDGRL utilizes a neural network, denoted by the domain critic, to estimate empirical Wasserstein distance between the source and target samples and optimizes the feature extractor network to minimize the estimated Wasserstein distance in an adversarial manner. The theoretical advantages of Wasserstein distance for domain adaptation lie in its gradient property and promising generalization bound. Empirical studies on common sentiment and image classification adaptation datasets demonstrate that our proposed WDGRL outperforms the state-of-the-art domain invariant representation learning approaches.
Recently, deep learning has been regarded as a powerful way to learn feature representations for domain adaptation. Symmetric feature-based methods are the more widely studied, since they can be easily incorporated into deep neural networks @cite_33 @cite_27 @cite_4 @cite_21 @cite_15 @cite_19 . Among symmetric feature-based methods, minimizing the maximum mean discrepancy (MMD) @cite_29 is an effective way to reduce the divergence between two distributions. MMD is a nonparametric metric that measures the distribution divergence between the mean embeddings of two distributions in a reproducing kernel Hilbert space (RKHS). The deep domain confusion (DDC) method @cite_10 applied the MMD metric to the last fully connected layer, in addition to the regular classification loss, to learn representations that are both domain invariant and discriminative. The deep adaptation network (DAN) @cite_4 was proposed to enhance feature transferability by minimizing a multi-kernel MMD in several task-specific layers. On the other hand, the correlation alignment (CORAL) method @cite_7 aligns the second-order statistics of the source and target distributions with a linear transformation, and @cite_36 extended it to Deep CORAL, which learns a nonlinear transformation that aligns the correlations of layer activations in deep neural networks.
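To make the two statistics concrete, here is a minimal NumPy sketch of the (biased) empirical squared MMD with a Gaussian kernel and of a CORAL-style loss. The kernel bandwidth, array shapes, and function names are illustrative assumptions, not the exact formulations used in DDC, DAN, or Deep CORAL.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel matrix between the rows of a and b.
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2 * a @ b.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2(src, tgt, sigma=1.0):
    # Biased empirical estimate of squared MMD between two samples:
    # mean k(s,s') + mean k(t,t') - 2 * mean k(s,t).
    return (gaussian_kernel(src, src, sigma).mean()
            + gaussian_kernel(tgt, tgt, sigma).mean()
            - 2 * gaussian_kernel(src, tgt, sigma).mean())

def coral_loss(src, tgt):
    # Squared Frobenius distance between the two domain covariance
    # matrices, scaled by 1 / (4 d^2) as in the CORAL formulation.
    d = src.shape[1]
    cs = np.cov(src, rowvar=False)
    ct = np.cov(tgt, rowvar=False)
    return np.sum((cs - ct) ** 2) / (4 * d**2)
```

In a DDC/DAN-style setup, a term like mmd2 on features from one or more layers would be added to the classification loss; in Deep CORAL, the coral_loss term plays the same role.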
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_7", "@cite_36", "@cite_29", "@cite_21", "@cite_19", "@cite_27", "@cite_15", "@cite_10" ], "mid": [ "2951670162", "2949821452", "2173393671", "2467286621", "2212660284", "1731081199", "2607350342", "2261310161", "2953127297", "1565327149" ], "abstract": [ "Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.", "Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation. Recently, they have attained record accuracy on standard benchmark tasks of sentiment analysis across different text domains. SDAs learn robust data representations by reconstruction, recovering original features from data that are artificially corrupted with noise. In this paper, we propose marginalized SDA (mSDA) that addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features. In contrast to SDAs, our approach of mSDA marginalizes noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters ? in fact, they are computed in closed-form. Consequently, mSDA, which can be implemented in only 20 lines of MATLAB^ TM , significantly speeds up SDAs by two orders of magnitude. Furthermore, the representations learnt by mSDA are as effective as the traditional SDAs, attaining almost identical accuracies in benchmark tasks.", "Unlike human learning, machine learning often fails to handle changes between training (source) and test (target) input distributions. Such domain shifts, common in practical scenarios, severely damage the performance of conventional machine learning methods. Supervised domain adaptation methods have been proposed for the case when the target data have labels, including some that perform very well despite being \"frustratingly easy\" to implement. However, in practice, the target domain is often unlabeled, requiring unsupervised adaptation. We propose a simple, effective, and efficient method for unsupervised domain adaptation called CORrelation ALignment (CORAL). CORAL minimizes domain shift by aligning the second-order statistics of source and target distributions, without requiring any target labels. 
Even though it is extraordinarily simple--it can be implemented in four lines of Matlab code--CORAL performs remarkably well in extensive evaluations on standard benchmark datasets.", "Deep neural networks are able to learn powerful representations from large quantities of labeled input data, however they cannot always generalize well across changes in input distributions. Domain adaptation algorithms have been proposed to compensate for the degradation in performance due to domain shift. In this paper, we address the case when the target domain is unlabeled, requiring unsupervised adaptation. CORAL is a \"frustratingly easy\" unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation. Here, we extend CORAL to learn a nonlinear transformation that aligns correlations of layer activations in deep neural networks (Deep CORAL). Experiments on standard benchmark datasets show state-of-the-art performance.", "We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD).We present two distribution free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.", "We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. 
We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application.", "Domain adaptation is transfer learning which aims to generalize a learning model across training and testing data with different distributions. Most previous research tackle this problem in seeking a shared feature representation between source and target domains while reducing the mismatch of their data distributions. In this paper, we propose a close yet discriminative domain adaptation method, namely CDDA, which generates a latent feature representation with two interesting properties. First, the discrepancy between the source and target domain, measured in terms of both marginal and conditional probability distribution via Maximum Mean Discrepancy is minimized so as to attract two domains close to each other. More importantly, we also design a repulsive force term, which maximizes the distances between each label dependent sub-domain to all others so as to drag different class dependent sub-domains far away from each other and thereby increase the discriminative power of the adapted domain. Moreover, given the fact that the underlying data manifold could have complex geometric structure, we further propose the constraints of label smoothness and geometric structure consistency for label propagation. Extensive experiments are conducted on 36 cross-domain image classification tasks over four public datasets. The comprehensive results show that the proposed method consistently outperforms the state-of-the-art methods with significant margins.", "Transfer learning has attracted a lot of attention in the past decade. One crucial research issue in transfer learning is how to find a good representation for instances of different domains such that the divergence between domains can be reduced with the new representation. Recently, deep learning has been proposed to learn more robust or higherlevel features for transfer learning. However, to the best of our knowledge, most of the previous approaches neither minimize the difference between domains explicitly nor encode label information in learning the representation. In this paper, we propose a supervised representation learning method based on deep autoencoders for transfer learning. The proposed deep autoencoder consists of two encoding layers: an embedding layer and a label encoding layer. In the embedding layer, the distance in distributions of the embedded instances between the source and target domains is minimized in terms of KL-Divergence. In the label encoding layer, label information of the source domain is encoded using a softmax regression model. Extensive experiments conducted on three real-world image datasets demonstrate the effectiveness of our proposed method compared with several state-of-the-art baseline methods.", "The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. 
Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We suggest that explicitly modeling what is unique to each domain can improve a model's ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained not only to perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task." ] }
1707.01217
2770645414
Domain adaptation aims at generalizing a high-performance learner on a target domain via utilizing the knowledge distilled from a source domain which has a different but related data distribution. One solution to domain adaptation is to learn domain invariant feature representations while the learned representations should also be discriminative in prediction. To learn such representations, domain adaptation frameworks usually include a domain invariant representation learning approach to measure and reduce the domain discrepancy, as well as a discriminator for classification. Inspired by Wasserstein GAN, in this paper we propose a novel approach to learn domain invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL). WDGRL utilizes a neural network, denoted by the domain critic, to estimate empirical Wasserstein distance between the source and target samples and optimizes the feature extractor network to minimize the estimated Wasserstein distance in an adversarial manner. The theoretical advantages of Wasserstein distance for domain adaptation lie in its gradient property and promising generalization bound. Empirical studies on common sentiment and image classification adaptation datasets demonstrate that our proposed WDGRL outperforms the state-of-the-art domain invariant representation learning approaches.
Another class of symmetric feature-based methods uses an adversarial objective to reduce the domain discrepancy. Motivated by the theory in @cite_40 @cite_37 suggesting that a good cross-domain representation contains no discriminative information about the origin (i.e., domain) of the input, the domain adversarial neural network (DANN) @cite_30 @cite_21 was proposed to learn domain invariant features via a minimax game between the domain classifier and the feature extractor. In order to back-propagate the gradients computed from the domain classifier, DANN employs a gradient reversal layer (GRL). On the other hand, @cite_6 proposed a general framework for adversarial adaptation in which one chooses the type of adversarial loss for the domain classifier and the weight-sharing strategy. Our proposed WDGRL can also be viewed as an adversarial adaptation method, since it evaluates and minimizes the empirical Wasserstein distance in an adversarial manner. WDGRL differs from previous adversarial methods in two ways: (i) it adopts an iterative adversarial training strategy, and (ii) it adopts the Wasserstein distance as the adversarial loss, which has superior gradient properties.
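As a rough illustration of the two adversarial mechanisms just mentioned, the following PyTorch-style sketch shows a gradient reversal layer (the GRL of DANN) and the critic objective that a WDGRL-like method would optimize. The lambda coefficient, the critic network, and all names are assumptions for the example, not the papers' exact implementations.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    # Identity in the forward pass; flips (and scales) the gradient in the
    # backward pass, so the feature extractor learns to confuse the domain
    # classifier while that classifier itself is trained normally.
    @staticmethod
    def forward(ctx, x, lamb=1.0):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. x is reversed; lamb receives no gradient.
        return -ctx.lamb * grad_output, None

def critic_gap(critic, h_src, h_tgt):
    # Empirical Wasserstein-style objective: the domain critic is trained
    # to maximize this gap (under a Lipschitz constraint, e.g. a gradient
    # penalty), while the feature extractor is updated to minimize it.
    return critic(h_src).mean() - critic(h_tgt).mean()
```

In a DANN-style model, reversed = GradReverse.apply(features, 1.0) would feed the domain classifier; in the WDGRL-style alternative, the critic and the feature extractor are updated in alternating steps on critic_gap.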
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_21", "@cite_6", "@cite_40" ], "mid": [ "1577269164", "2104094955", "1731081199", "2949987290", "2131953535" ], "abstract": [ "We introduce a new neural network learning algorithm suited to the context of domain adaptation, in which data at training and test time come from similar but different distributions. Our algorithm is inspired by theo ry on domain adaptation suggesting that, for effective domain transfer to be achiev ed, predictions must be made based on a data representation that cannot discriminate between the training (source) and test (target) domains. We propose a training objective that implements this idea in the context of a neural network, whose hidden layer is trained to be predictive of the classification target, but uninformati ve as to the domain of the input. Our experiments on a sentiment analysis classificati on benchmark, where the target data available at the training time is unlabeled, show that our neural network for domain adaption algorithm has better performance than either a standard neural networks and a SVM, trained on input features extracted with the state-ofthe-art marginalized stacked denoising autoencoders of (2012).", "Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. Often, however, we have plentiful labeled training data from a source domain but wish to learn a classifier which performs well on a target domain with a different distribution and little or no labeled training data. In this work we investigate two questions. First, under what conditions can a classifier trained from source data be expected to perform well on target data? Second, given a small amount of labeled target data, how should we combine it during training with the large amount of labeled source data to achieve the lowest target error at test time? We address the first question by bounding a classifier's target error in terms of its source error and the divergence between the two domains. We give a classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the domains. Under the assumption that there exists some hypothesis that performs well in both domains, we show that this quantity together with the empirical source error characterize the target error of a source-trained classifier. We answer the second question by bounding the target error of a model which minimizes a convex combination of the empirical source and target errors. Previous theoretical work has considered minimizing just the source error, just the target error, or weighting instances from the two domains equally. We show how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class. The resulting bound generalizes the previously studied cases and is always at least as tight as a bound which considers minimizing only the target error or an equal weighting of source and target errors.", "We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. 
The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application.", "Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They also can improve recognition despite the presence of domain shift or dataset bias: several adversarial approaches to unsupervised domain adaptation have recently been introduced, which reduce the difference between the training and test domain distributions and thus improve generalization performance. Prior generative approaches show compelling visualizations, but are not optimal on discriminative tasks and can be limited to smaller shifts. Prior discriminative approaches could handle larger domain shifts, but imposed tied weights on the model and did not exploit a GAN-based loss. We first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and we use this generalized view to better relate the prior approaches. We propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard cross-domain digit classification tasks and a new more difficult cross-modality object classification task.", "Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. In many situations, though, we have labeled training data for a source domain, and we wish to learn a classifier which performs well on a target domain with a different distribution. Under what conditions can we adapt a classifier trained on the source domain for use in the target domain? Intuitively, a good feature representation is a crucial factor in the success of domain adaptation. We formalize this intuition theoretically with a generalization bound for domain adaption. Our theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model. 
It also points toward a promising new model for domain adaptation: one which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set." ] }
1707.00600
2963499153
Due to the importance of zero-shot learning, i.e., classifying images where there is a lack of labeled training data, the number of proposed approaches has recently increased steadily. We argue that it is time to take a step back and to analyze the status quo of the area. The purpose of this paper is three-fold. First, given the fact that there is no agreed upon zero-shot learning benchmark, we first define a new benchmark by unifying both the evaluation protocols and data splits of publicly available datasets used for this task. This is an important contribution as published results are often not comparable and sometimes even flawed due to, e.g., pre-training on zero-shot test classes. Moreover, we propose a new zero-shot learning dataset, the Animals with Attributes 2 (AWA2) dataset which we make publicly available both in terms of image features and the images themselves. Second, we compare and analyze a significant number of the state-of-the-art methods in depth, both in the classic zero-shot setting but also in the more realistic generalized zero-shot setting. Finally, we discuss in detail the limitations of the current status of the area which can be taken as a basis for advancing it.
Recent advances in zero-shot learning directly learn a mapping from an image feature space to a semantic space. Among those, SOC @cite_41 maps the image features into the semantic space and then searches for the nearest class embedding vector. ALE @cite_77 learns a bilinear compatibility function between the image and the attribute space using a ranking loss. DeViSE @cite_21 also learns a linear mapping between the image and semantic spaces using an efficient ranking loss formulation, and it is evaluated on the large-scale ImageNet dataset. SJE @cite_71 optimizes the structural SVM loss to learn the bilinear compatibility. On the other hand, ESZSL @cite_10 uses the square loss to learn the bilinear compatibility and explicitly regularizes the objective with respect to the Frobenius norm. The @math -based objective function of @cite_25 suppresses the noise in the semantic space. @cite_16 embeds visual features into the attribute space and then learns a metric to improve the consistency of the semantic embedding. Recently, SAE @cite_86 proposed a semantic autoencoder that regularizes the model by enforcing that the image feature projected into the semantic space can be reconstructed.
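Since several of these compatibility models reduce to a single bilinear map, the following NumPy sketch shows the simplest instance, an ESZSL-style closed-form solution of the regularized square loss. Matrix shapes, variable names, and the two regularization constants are illustrative assumptions.

```python
import numpy as np

def eszsl_fit(X, Y, S, gamma=1.0, lamb=1.0):
    # X: d x m image features, Y: m x z one-hot seen-class labels,
    # S: a x z attribute signatures of the z seen classes.
    # The regularized square loss admits the closed-form bilinear map
    #   V = (X X^T + gamma I)^-1  X Y S^T  (S S^T + lamb I)^-1.
    d, a = X.shape[0], S.shape[0]
    left = np.linalg.inv(X @ X.T + gamma * np.eye(d))
    right = np.linalg.inv(S @ S.T + lamb * np.eye(a))
    return left @ X @ Y @ S.T @ right          # V has shape d x a

def eszsl_predict(V, x, S_unseen):
    # Bilinear compatibility score x^T V s for every unseen-class
    # signature (columns of S_unseen); the highest-scoring class wins.
    return int(np.argmax(x @ V @ S_unseen))
```

The ranking-loss variants (ALE, DeViSE, SJE) optimize the same kind of bilinear score but replace the square loss with pairwise or structured ranking objectives trained by SGD.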
{ "cite_N": [ "@cite_41", "@cite_21", "@cite_77", "@cite_71", "@cite_86", "@cite_16", "@cite_10", "@cite_25" ], "mid": [ "2150295085", "2123024445", "", "2044913453", "2611632661", "2962830213", "652269744", "2963955958" ], "abstract": [ "We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words.", "Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.", "", "Image classification has advanced significantly in recent years with the availability of large-scale image sets. However, fine-grained classification remains a major challenge due to the annotation cost of large numbers of fine-grained categories. This project shows that compelling classification performance can be achieved on such categories even without labeled training data. Given image and class embeddings, we learn a compatibility function such that matching embeddings are assigned a higher score than mismatching ones; zero-shot classification of an image proceeds by finding the label yielding the highest joint compatibility score. We use state-of-the-art image features and focus on different supervised attributes and unsupervised output embeddings either derived from hierarchies or learned from unlabeled text corpora. We establish a substantially improved state-of-the-art on the Animals with Attributes and Caltech-UCSD Birds datasets. Most encouragingly, we demonstrate that purely unsupervised output embeddings (learned from Wikipedia and improved with finegrained text) achieve compelling results, even outperforming the previous supervised state-of-the-art. By combining different output embeddings, we further improve results.", "Existing zero-shot learning (ZSL) models typically learn a projection function from a feature space to a semantic embedding space (e.g. 
attribute space). However, such a projection function is only concerned with predicting the training seen class semantic representation (e.g. attribute prediction) or classification. When applied to test data, which in the context of ZSL contains different (unseen) classes without training data, a ZSL model typically suffers from the projection domain shift problem. In this work, we present a novel solution to ZSL based on learning a Semantic AutoEncoder (SAE). Taking the encoder-decoder paradigm, an encoder aims to project a visual feature vector into the semantic space as in the existing ZSL models. However, the decoder exerts an additional constraint, that is, the projection code must be able to reconstruct the original visual feature. We show that with this additional reconstruction constraint, the learned projection function from the seen classes is able to generalise better to the new unseen classes. Importantly, the encoder and decoder are linear and symmetric which enable us to develop an extremely efficient learning algorithm. Extensive experiments on six benchmark datasets demonstrate that the proposed SAE outperforms significantly the existing ZSL models with the additional benefit of lower computational cost. Furthermore, when the SAE is applied to supervised clustering problem, it also beats the state-of-the-art.", "This paper addresses the task of zero-shot image classification. The key contribution of the proposed approach is to control the semantic embedding of images – one of the main ingredients of zero-shot learning – by formulating it as a metric learning problem. The optimized empirical criterion associates two types of sub-task constraints: metric discriminating capacity and accurate attribute prediction. This results in a novel expression of zero-shot learning not requiring the notion of class in the training phase: only pairs of image attributes, augmented with a consistency indicator, are given as ground truth. At test time, the learned model can predict the consistency of a test image with a given set of attributes, allowing flexible ways to produce recognition inferences. Despite its simplicity, the proposed approach gives state-of-the-art results on four challenging datasets used for zero-shot recognition evaluation.", "Zero-shot learning consists in learning how to recognise new concepts by just having a description of them. Many sophisticated approaches have been proposed to address the challenges this problem comprises. In this paper we describe a zero-shot learning approach that can be implemented in just one line of code, yet it is able to outperform state of the art approaches on standard datasets. The approach is based on a more general framework which models the relationships between features, attributes, and classes as a two linear layers network, where the weights of the top layer are not learned but are given by the environment. We further provide a learning bound on the generalisation error of this kind of approaches, by casting them as domain adaptation methods. In experiments carried out on three standard real datasets, we found that our approach is able to perform significantly better than the state of art on all of them, obtaining a ratio of improvement up to 17%.", "Classifying a visual concept merely from its associated online textual source, such as a Wikipedia article, is an attractive research topic in zero-shot learning because it alleviates the burden of manually collecting semantic attributes. 
Recent work has pursued this approach by exploring various ways of connecting the visual and text domains. In this paper, we revisit this idea by going further to consider one important factor: the textual representation is usually too noisy for the zero-shot learning application. This observation motivates us to design a simple yet effective zero-shot learning method that is capable of suppressing noise in the text. Specifically, we propose an l2,1-norm based objective function which can simultaneously suppress the noisy signal in the text and learn a function to match the text document and visual features. We also develop an optimization algorithm to efficiently solve the resulting problem. By conducting experiments on two large datasets, we demonstrate that the proposed method significantly outperforms those competing methods which rely on online information sources but with no explicit noise suppression. Furthermore, we make an in-depth analysis of the proposed method and provide insight as to what kind of information in documents is useful for zero-shot learning." ] }
1707.00600
2963499153
Due to the importance of zero-shot learning, i.e., classifying images where there is a lack of labeled training data, the number of proposed approaches has recently increased steadily. We argue that it is time to take a step back and to analyze the status quo of the area. The purpose of this paper is three-fold. First, given the fact that there is no agreed upon zero-shot learning benchmark, we first define a new benchmark by unifying both the evaluation protocols and data splits of publicly available datasets used for this task. This is an important contribution as published results are often not comparable and sometimes even flawed due to, e.g., pre-training on zero-shot test classes. Moreover, we propose a new zero-shot learning dataset, the Animals with Attributes 2 (AWA2) dataset which we make publicly available both in terms of image features and the images themselves. Second, we compare and analyze a significant number of the state-of-the-art methods in depth, both in the classic zero-shot setting but also in the more realistic generalized zero-shot setting. Finally, we discuss in detail the limitations of the current status of the area which can be taken as a basis for advancing it.
Other zero-shot learning approaches learn non-linear multi-modal embeddings. LatEm @cite_68 extends the bilinear compatibility model of SJE @cite_71 to a piecewise linear one by learning multiple linear mappings, with the selection of which mapping to use being a latent variable. CMT @cite_20 uses a neural network with two hidden layers to learn a non-linear projection from the image feature space to the word2vec @cite_69 space. Unlike other works which build their embedding on top of fixed image features, @cite_58 trains a deep convolutional neural network while learning a visual semantic embedding. Similarly, @cite_9 argues that the visual feature space is more discriminative than the semantic space, and thus proposes an end-to-end deep embedding model which maps semantic features into the visual space. @cite_72 proposes a simple model that projects class semantic representations into the visual feature space and performs nearest neighbor classification among the projected representations. The projection is learned with a support vector regressor on visual exemplars of the seen classes, i.e., the class centroids in the feature space.
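A linear stand-in for the "map semantics into the visual space, then do nearest neighbor" idea of @cite_9 and @cite_72 might look as follows; the ridge regressor replaces the deep network or support vector regressor of the actual papers, and all names and shapes are assumptions.

```python
import numpy as np

def fit_semantic_to_visual(S_seen, C_seen, lamb=1.0):
    # Ridge regression from seen-class embeddings (rows of S_seen, z x a)
    # to visual exemplars (rows of C_seen, z x d, e.g. class centroids).
    a = S_seen.shape[1]
    return np.linalg.solve(S_seen.T @ S_seen + lamb * np.eye(a),
                           S_seen.T @ C_seen)   # W has shape a x d

def nearest_exemplar(x, S_unseen, W):
    # Project unseen-class embeddings into the visual feature space and
    # assign x to the class with the closest projected exemplar.
    exemplars = S_unseen @ W
    return int(np.argmin(np.linalg.norm(exemplars - x, axis=1)))
```

The claimed advantage of working in the visual space rather than the semantic space is that nearest neighbor search there suffers less from the hubness problem.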
{ "cite_N": [ "@cite_69", "@cite_9", "@cite_68", "@cite_72", "@cite_71", "@cite_58", "@cite_20" ], "mid": [ "2153579005", "2552383788", "2334493732", "2962762077", "2044913453", "2962714319", "2124033848" ], "abstract": [ "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "Zero-shot learning (ZSL) models rely on learning a joint embedding space where both textual semantic description of object classes and visual representation of object images can be projected to for nearest neighbour search. Despite the success of deep neural networks that learn an end-to-end model between text and images in other vision problems such as image captioning, very few deep ZSL model exists and they show little advantage over ZSL models that utilise deep feature representations but do not learn an end-to-end embedding. In this paper we argue that the key to make deep ZSL models succeed is to choose the right embedding space. Instead of embedding into a semantic space or an intermediate space, we propose to use the visual space as the embedding space. This is because that in this space, the subsequent nearest neighbour search would suffer much less from the hubness problem and thus become more effective. This model design also provides a natural mechanism for multiple semantic modalities (e.g., attributes and sentence descriptions) to be fused and optimised jointly in an end-to-end manner. Extensive experiments on four benchmarks show that our model significantly outperforms the existing models.", "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.", "Leveraging class semantic descriptions and examples of known objects, zero-shot learning makes it possible to train a recognition model for an object class whose examples are not available. 
In this paper, we propose a novel zero-shot learning model that takes advantage of clustering structures in the semantic embedding space. The key idea is to impose the structural constraint that semantic representations must be predictive of the locations of their corresponding visual exemplars. To this end, this reduces to training multiple kernel-based regressors from semantic representation-exemplar pairs from labeled data of the seen object categories. Despite its simplicity, our approach significantly outperforms existing zero-shot learning methods on standard benchmark datasets, including the ImageNet dataset with more than 20,000 unseen categories.", "Image classification has advanced significantly in recent years with the availability of large-scale image sets. However, fine-grained classification remains a major challenge due to the annotation cost of large numbers of fine-grained categories. This project shows that compelling classification performance can be achieved on such categories even without labeled training data. Given image and class embeddings, we learn a compatibility function such that matching embeddings are assigned a higher score than mismatching ones; zero-shot classification of an image proceeds by finding the label yielding the highest joint compatibility score. We use state-of-the-art image features and focus on different supervised attributes and unsupervised output embeddings either derived from hierarchies or learned from unlabeled text corpora. We establish a substantially improved state-of-the-art on the Animals with Attributes and Caltech-UCSD Birds datasets. Most encouragingly, we demonstrate that purely unsupervised output embeddings (learned from Wikipedia and improved with finegrained text) achieve compelling results, even outperforming the previous supervised state-of-the-art. By combining different output embeddings, we further improve results.", "One of the main challenges in Zero-Shot Learning of visual categories is gathering semantic attributes to accompany images. Recent work has shown that learning from textual descriptions, such as Wikipedia articles, avoids the problem of having to explicitly define these attributes. We present a new model that can classify unseen categories from their textual description. Specifically, we use text features to predict the output weights of both the convolutional and the fully connected layers in a deep convolutional neural network (CNN). We take advantage of the architecture of CNNs and learn features at different layers, rather than just learning an embedding space for both modalities, as is common with existing approaches. The proposed model also allows us to automatically generate a list of pseudo-attributes for each visual category consisting of words from Wikipedia articles. We train our models end-to-end using the Caltech-UCSD bird and flower datasets and evaluate both ROC and Precision-Recall curves. Our empirical results show that the proposed model significantly outperforms previous methods.", "This work introduces a model that can recognize objects in images even if no training data is available for the object class. The only necessary knowledge about unseen visual categories comes from unsupervised text corpora. 
Unlike previous zero-shot learning models, which can only differentiate between unseen classes, our model can operate on a mixture of seen and unseen classes, simultaneously obtaining state of the art performance on classes with thousands of training images and reasonable performance on unseen classes. This is achieved by seeing the distributions of words in texts as a semantic space for understanding what objects look like. Our deep learning model does not require any manually defined semantic or visual features for either words or images. Images are mapped to be close to semantic word vectors corresponding to their classes, and the resulting image embeddings can be used to distinguish whether an image is of a seen or unseen class. We then use novelty detection methods to differentiate unseen classes from seen classes. We demonstrate two novelty detection strategies; the first gives high accuracy on unseen classes, while the second is conservative in its prediction of novelty and keeps the seen classes' accuracy high." ] }
1707.00600
2963499153
Due to the importance of zero-shot learning, i.e., classifying images where there is a lack of labeled training data, the number of proposed approaches has recently increased steadily. We argue that it is time to take a step back and to analyze the status quo of the area. The purpose of this paper is three-fold. First, given the fact that there is no agreed upon zero-shot learning benchmark, we first define a new benchmark by unifying both the evaluation protocols and data splits of publicly available datasets used for this task. This is an important contribution as published results are often not comparable and sometimes even flawed due to, e.g., pre-training on zero-shot test classes. Moreover, we propose a new zero-shot learning dataset, the Animals with Attributes 2 (AWA2) dataset which we make publicly available both in terms of image features and the images themselves. Second, we compare and analyze a significant number of the state-of-the-art methods in depth, both in the classic zero-shot setting but also in the more realistic generalized zero-shot setting. Finally, we discuss in detail the limitations of the current status of the area which can be taken as a basis for advancing it.
Embedding both the image and semantic features into a common intermediate space is another direction that zero-shot learning approaches take. SSE @cite_63 uses the mixture of seen class proportions as the common space and argues that images belonging to the same class should have similar mixture patterns. JLSE @cite_18 maps visual features and semantic features into two separate latent spaces and measures their similarity by learning another bilinear compatibility function. Furthermore, hybrid models @cite_17 @cite_60 @cite_19 @cite_54 have been proposed; for instance, @cite_60 jointly embeds multiple text representations and multiple visual parts to ground attributes on different image regions. SYNC @cite_19 constructs the classifiers of unseen classes as linear combinations of base classifiers, which are trained in a discriminative learning framework.
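The classifier-synthesis idea behind SYNC can be sketched in a few lines: unseen-class classifiers are weighted combinations of base ("phantom") classifiers, with weights derived from semantic similarity. The Gaussian similarity, the bandwidth, and the names below are illustrative assumptions, not SYNC's exact training objective.

```python
import numpy as np

def combination_weights(b, B, sigma=1.0):
    # Normalized similarity of a class embedding b to the phantom-class
    # embeddings (rows of B), used as convex-combination coefficients.
    logits = -np.sum((B - b) ** 2, axis=1) / (2 * sigma ** 2)
    w = np.exp(logits - logits.max())
    return w / w.sum()

def synthesize_classifier(b, B, V, sigma=1.0):
    # Unseen-class linear classifier as a weighted combination of the
    # learned base classifiers V (one row per phantom class).
    return combination_weights(b, B, sigma) @ V
```

In SYNC itself the phantom embeddings B and base classifiers V are optimized jointly on the seen classes so that the synthesized seen-class classifiers are discriminative; at test time the same synthesis is applied to unseen-class embeddings.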
{ "cite_N": [ "@cite_18", "@cite_60", "@cite_54", "@cite_19", "@cite_63", "@cite_17" ], "mid": [ "2964086552", "2328434568", "", "2289084343", "", "1960364170" ], "abstract": [ "In this paper we consider a version of the zero-shot learning problem where seen class source and target domain data are provided. The goal during test-time is to accurately predict the class label of an unseen target domain instance based on revealed source domain side information (e.g. attributes) for unseen classes. Our method is based on viewing each source or target data as a mixture of seen class proportions and we postulate that the mixture patterns have to be similar if the two instances belong to the same unseen class. This perspective leads us to learning source target embedding functions that map an arbitrary source target domain data into a same semantic space where similarity can be readily measured. We develop a max-margin framework to learn these similarity functions and jointly optimize parameters by means of cross validation. Our test results are compelling, leading to significant improvement in terms of accuracy on most benchmark datasets for zero-shot recognition.", "Scaling up visual category recognition to large numbers of classes remains challenging. A promising research direction is zero-shot learning, which does not require any training data to recognize new classes, but rather relies on some form of auxiliary information describing the new classes. Ultimately, this may allow to use textbook knowledge that humans employ to learn about new classes by transferring knowledge from classes they know well. The most successful zero-shot learning approaches currently require a particular type of auxiliary information – namely attribute annotations performed by humans – that is not readily available for most classes. Our goal is to circumvent this bottleneck by substituting such annotations by extracting multiple pieces of information from multiple unstructured text sources readily available on the web. To compensate for the weaker form of auxiliary information, we incorporate stronger supervision in the form of semantic part annotations on the classes from which we transfer knowledge. We achieve our goal by a joint embedding framework that maps multiple text parts as well as multiple semantic parts into a common space. Our results consistently and significantly improve on the state-of-the-art in zero-short recognition and retrieval.", "", "Given semantic descriptions of object classes, zeroshot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them to the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. To this end, we introduce a set of \"phantom\" object classes whose coordinates live in both the semantic space and the model space. Serving as bases in a dictionary, they can be optimized from labeled data such that the synthesized real object classifiers achieve optimal discriminative performance. 
We demonstrate superior accuracy of our approach over the state of the art on four benchmark datasets for zero-shot learning, including the full ImageNet Fall 2011 dataset with more than 20,000 unseen classes.", "", "Object recognition by zero-shot learning (ZSL) aims to recognise objects without seeing any visual examples by learning knowledge transfer between seen and unseen object classes. This is typically achieved by exploring a semantic embedding space such as attribute space or semantic word vector space. In such a space, both seen and unseen class labels, as well as image features can be embedded (projected), and the similarity between them can thus be measured directly. Existing works differ in what embedding space is used and how to project the visual data into the semantic embedding space. Yet, they all measure the similarity in the space using a conventional distance metric (e.g. cosine) that does not consider the rich intrinsic structure, i.e. semantic manifold, of the semantic categories in the embedding space. In this paper we propose to model the semantic manifold in an embedding space using a semantic class label graph. The semantic manifold structure is used to redefine the distance metric in the semantic embedding space for more effective ZSL. The proposed semantic manifold distance is computed using a novel absorbing Markov chain process (AMP), which has a very efficient closed-form solution. The proposed new model improves upon and seamlessly unifies various existing ZSL algorithms. Extensive experiments on both the large scale ImageNet dataset and the widely used Animal with Attribute (AwA) dataset show that our model outperforms significantly the state-of-the-arts." ] }
1707.00600
2963499153
Due to the importance of zero-shot learning, i.e., classifying images where there is a lack of labeled training data, the number of proposed approaches has recently increased steadily. We argue that it is time to take a step back and to analyze the status quo of the area. The purpose of this paper is three-fold. First, given the fact that there is no agreed upon zero-shot learning benchmark, we first define a new benchmark by unifying both the evaluation protocols and data splits of publicly available datasets used for this task. This is an important contribution as published results are often not comparable and sometimes even flawed due to, e.g., pre-training on zero-shot test classes. Moreover, we propose a new zero-shot learning dataset, the Animals with Attributes 2 (AWA2) dataset which we make publicly available both in terms of image features and the images themselves. Second, we compare and analyze a significant number of the state-of-the-art methods in depth, both in the classic zero-shot setting but also in the more realistic generalized zero-shot setting. Finally, we discuss in detail the limitations of the current status of the area which can be taken as a basis for advancing it.
While most zero-shot learning methods learn the cross-modal mapping between the image and class embedding spaces with discriminative losses, there are a few generative models @cite_22 @cite_39 @cite_75 that represent each class as a probability distribution. GFZSL @cite_22 models each class-conditional distribution as a Gaussian and learns a regression function that maps a class embedding into the latent space. GLaP @cite_39 assumes that each class-conditional distribution follows a Gaussian and generates virtual instances of unseen classes from the learned distribution. @cite_75 learns a multimodal mapping where both the class and image embeddings of categories are represented by Gaussian distributions.
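A minimal version of this generative view might be: estimate per-class Gaussian parameters on the seen classes, regress them from the class embeddings, and label a test point by its likelihood under each predicted unseen-class Gaussian. The ridge regressors below stand in for GFZSL's kernel regression; the diagonal-covariance assumption and all names are illustrative.

```python
import numpy as np

def ridge(S, T, lamb=1.0):
    # Ridge regression from class embeddings S (z x a) to per-class
    # Gaussian parameters T (z x d): means, or log-variances.
    a = S.shape[1]
    return np.linalg.solve(S.T @ S + lamb * np.eye(a), S.T @ T)

def predict_by_likelihood(x, S_unseen, W_mu, W_logvar):
    # Predict each unseen class's diagonal Gaussian from its embedding
    # and assign x to the class with the highest log-likelihood.
    mus = S_unseen @ W_mu
    variances = np.exp(S_unseen @ W_logvar)
    ll = -0.5 * np.sum((x - mus) ** 2 / variances + np.log(variances), axis=1)
    return int(np.argmax(ll))
```

Here W_mu = ridge(S_seen, class_means) and W_logvar = ridge(S_seen, np.log(class_variances)) would be fit on seen-class statistics; a GLaP-style variant would instead sample virtual unseen-class instances from the predicted Gaussians and train a standard classifier on them.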
{ "cite_N": [ "@cite_75", "@cite_22", "@cite_39" ], "mid": [ "2561940122", "2964307109", "2618752949" ], "abstract": [ "", "We present a simple generative framework for learning to predict previously unseen classes, based on estimating class-attribute-gated class-conditional distributions. We model each class-conditional distribution as an exponential family distribution and the parameters of the distribution of each seen unseen class are defined as functions of the respective observed class attributes. These functions can be learned using only the seen class data and can be used to predict the parameters of the class-conditional distribution of each unseen class. Unlike most existing methods for zero-shot learning that represent classes as fixed embeddings in some vector space, our generative model naturally represents each class as a probability distribution. It is simple to implement and also allows leveraging additional unlabeled data from unseen classes to improve the estimates of their class-conditional distributions using transductive semi-supervised learning. Moreover, it extends seamlessly to few-shot learning by easily updating these distributions when provided with a small number of additional labelled examples from unseen classes. Through a comprehensive set of experiments on several benchmark data sets, we demonstrate the efficacy of our framework.", "Zero-shot learning, which studies the problem of object classification for categories for which we have no training examples, is gaining increasing attention from community. Most existing ZSL methods exploit deterministic transfer learning via an in-between semantic embedding space. In this paper, we try to attack this problem from a generative probabilistic modelling perspective. We assume for any category, the observed representation, e.g. images or texts, is developed from a unique prototype in a latent space, in which the semantic relationship among prototypes is encoded via linear reconstruction. Taking advantage of this assumption, virtual instances of unseen classes can be generated from the corresponding prototype, giving rise to a novel ZSL model which can alleviate the domain shift problem existing in the way of direct transfer learning. Extensive experiments on three benchmark datasets show our proposed model can achieve state-of-the-art results." ] }
1707.00600
2963499153
Due to the importance of zero-shot learning, i.e., classifying images for which labeled training data is lacking, the number of proposed approaches has increased steadily in recent years. We argue that it is time to take a step back and analyze the status quo of the area. The purpose of this paper is three-fold. First, since there is no agreed-upon zero-shot learning benchmark, we define a new benchmark by unifying both the evaluation protocols and the data splits of publicly available datasets used for this task. This is an important contribution, as published results are often not comparable and sometimes even flawed due to, e.g., pre-training on zero-shot test classes. Moreover, we propose a new zero-shot learning dataset, Animals with Attributes 2 (AWA2), which we make publicly available both as image features and as the raw images. Second, we compare and analyze a significant number of state-of-the-art methods in depth, both in the classic zero-shot setting and in the more realistic generalized zero-shot setting. Finally, we discuss in detail the limitations of the current state of the area, which can serve as a basis for advancing it.
In zero-shot learning, some form of side information is required to share information between classes so that the knowledge learned from seen classes is transferred to unseen classes. One popular form of side information is attributes, i.e., shared and nameable visual properties of objects. However, attributes usually require costly manual annotation; thus, a large group of studies @cite_56 @cite_59 @cite_6 @cite_81 @cite_31 @cite_84 @cite_42 @cite_51 @cite_24 @cite_3 exploits other auxiliary information that reduces this annotation effort. @cite_4 does not use side information; however, it requires a one-shot image of the novel class to perform nearest-neighbor search with the learned metric. SJE @cite_71 evaluates four different class embeddings: attributes, word2vec @cite_69, GloVe @cite_15, and the WordNet hierarchy @cite_26. On ImageNet, @cite_23 leverages the WordNet hierarchy. @cite_79 leverages the rich information of detailed visual descriptions obtained from novice users and improves on the performance of attributes obtained from experts. Recently, @cite_33 took a different approach and learned class embeddings from human gaze tracks, showing that human gaze is class-specific.
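For intuition, the compatibility-learning framework that SJE instantiates with different class embeddings can be sketched as a bilinear score F(x, y) = x^T W phi(y) trained with a structured ranking loss. This is a simplified stand-in with illustrative names, not the authors' exact procedure.

```python
import numpy as np

def sje_train(X, Y, Phi, epochs=10, lr=1e-3, seed=0):
    """X: (n, dx) image features; Y: (n,) labels; Phi: (C, dy) class embeddings."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1e-3, size=(X.shape[1], Phi.shape[1]))
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            scores = X[i] @ W @ Phi.T              # compatibility with every class
            margins = 1.0 + scores - scores[Y[i]]  # structured hinge margins
            margins[Y[i]] = -np.inf
            j = int(np.argmax(margins))            # most violating class
            if margins[j] > 0:                     # SGD step only on a violation
                W += lr * np.outer(X[i], Phi[Y[i]] - Phi[j])
    return W

def sje_predict(x, W, Phi_unseen):
    """Zero-shot prediction: highest-compatibility unseen class."""
    return int(np.argmax(x @ W @ Phi_unseen.T))
```

Swapping Phi between attributes, word2vec, GloVe, or hierarchy embeddings changes only the input to this function, which is what makes comparisons across class embeddings straightforward.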
{ "cite_N": [ "@cite_69", "@cite_26", "@cite_4", "@cite_33", "@cite_15", "@cite_42", "@cite_6", "@cite_56", "@cite_84", "@cite_24", "@cite_3", "@cite_81", "@cite_79", "@cite_59", "@cite_71", "@cite_23", "@cite_31", "@cite_51" ], "mid": [ "2153579005", "2081580037", "1499991161", "2407797316", "2250539671", "", "", "1999818274", "", "", "", "", "2398118205", "", "2044913453", "2077071968", "", "" ], "abstract": [ "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "Because meaningful sentences are composed of meaningful words, any system that hopes to process natural languages as people do must have information about words and their meanings. This information is traditionally provided through dictionaries, and machine-readable dictionaries are now widely available. But dictionary entries evolved for the convenience of human readers, not for machines. WordNet 1 provides a more effective combination of traditional lexicographic information and modern computing. WordNet is an online lexical database designed for use under program control. English nouns, verbs, adjectives, and adverbs are organized into sets of synonyms, each representing a lexicalized concept. Semantic relations link the synonym sets [4].", "We are interested in large-scale image classification and especially in the setting where images corresponding to new or existing classes are continuously added to the training set. Our goal is to devise classifiers which can incorporate such images and classes on-the-fly at (near) zero cost. We cast this problem into one of learning a metric which is shared across all classes and explore k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers. We learn metrics on the ImageNet 2010 challenge data set, which contains more than 1.2M training images of 1K classes. Surprisingly, the NCM classifier compares favorably to the more flexible k-NN classifier, and has comparable performance to linear SVMs. We also study the generalization performance, among others by using the learned metric on the ImageNet-10K dataset, and we obtain competitive performance. Finally, we explore zero-shot classification, and show how the zero-shot model can be combined very effectively with small training datasets.", "Zero-shot image classification using auxiliary information, such as attributes describing discriminative object properties, requires time-consuming annotation by domain experts. We instead propose a method that relies on human gaze as auxiliary information, exploiting that even non-expert users have a natural ability to judge class membership. 
We present a data collection paradigm that involves a discrimination task to increase the information content obtained from gaze data. Our method extracts discriminative descriptors from the data and learns a compatibility function between image and gaze using three novel gaze embeddings: Gaze Histograms (GH), Gaze Features with Grid (GFG) and Gaze Features with Sequence (GFS). We introduce two new gaze-annotated datasets for fine-grained image classification and show that human gaze data is indeed class discriminative, provides a competitive alternative to expert-annotated attributes, and outperforms other baselines for zero-shot image classification.", "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.", "", "", "In this paper we aim for zero-shot classification, that is visual recognition of an unseen class by using knowledge transfer from known classes. Our main contribution is COSTA, which exploits co-occurrences of visual concepts in images for knowledge transfer. These inter-dependencies arise naturally between concepts, and are easy to obtain from existing annotations or web-search hit counts. We estimate a classifier for a new label, as a weighted combination of related classes, using the co-occurrences to define the weight. We propose various metrics to leverage these co-occurrences, and a regression model for learning a weight for each related class. We also show that our zero-shot classifiers can serve as priors for few-shot learning. Experiments on three multi-labeled datasets reveal that our proposed zero-shot methods, are approaching and occasionally outperforming fully supervised SVMs. We conclude that co-occurrence statistics suffice for zero-shot classification.", "", "", "", "", "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manuallyencoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. 
By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.", "", "Image classification has advanced significantly in recent years with the availability of large-scale image sets. However, fine-grained classification remains a major challenge due to the annotation cost of large numbers of fine-grained categories. This project shows that compelling classification performance can be achieved on such categories even without labeled training data. Given image and class embeddings, we learn a compatibility function such that matching embeddings are assigned a higher score than mismatching ones; zero-shot classification of an image proceeds by finding the label yielding the highest joint compatibility score. We use state-of-the-art image features and focus on different supervised attributes and unsupervised output embeddings either derived from hierarchies or learned from unlabeled text corpora. We establish a substantially improved state-of-the-art on the Animals with Attributes and Caltech-UCSD Birds datasets. Most encouragingly, we demonstrate that purely unsupervised output embeddings (learned from Wikipedia and improved with finegrained text) achieve compelling results, even outperforming the previous supervised state-of-the-art. By combining different output embeddings, we further improve results.", "While knowledge transfer (KT) between object classes has been accepted as a promising route towards scalable recognition, most experimental KT studies are surprisingly limited in the number of object classes considered. To support claims of KT w.r.t. scalability we thus advocate to evaluate KT in a large-scale setting. To this end, we provide an extensive evaluation of three popular approaches to KT on a recently proposed large-scale data set, the ImageNet Large Scale Visual Recognition Competition 2010 data set. In a first setting they are directly compared to one-vs-all classification often neglected in KT papers and in a second setting we evaluate their ability to enable zero-shot learning. While none of the KT methods can improve over one-vs-all classification they prove valuable for zero-shot learning, especially hierarchical and direct similarity based KT. We also propose and describe several extensions of the evaluated approaches that are necessary for this large-scale study.", "", "" ] }
1707.00600
2963499153
Due to the importance of zero-shot learning, i.e., classifying images for which labeled training data is lacking, the number of proposed approaches has increased steadily in recent years. We argue that it is time to take a step back and analyze the status quo of the area. The purpose of this paper is three-fold. First, since there is no agreed-upon zero-shot learning benchmark, we define a new benchmark by unifying both the evaluation protocols and the data splits of publicly available datasets used for this task. This is an important contribution, as published results are often not comparable and sometimes even flawed due to, e.g., pre-training on zero-shot test classes. Moreover, we propose a new zero-shot learning dataset, Animals with Attributes 2 (AWA2), which we make publicly available both as image features and as the raw images. Second, we compare and analyze a significant number of state-of-the-art methods in depth, both in the classic zero-shot setting and in the more realistic generalized zero-shot setting. Finally, we discuss in detail the limitations of the current state of the area, which can serve as a basis for advancing it.
Zero-shot learning has been criticized as a restrictive setup, since it comes with the strong assumption that images seen at prediction time can only come from unseen classes. Therefore, the generalized zero-shot learning setting @cite_37 has been proposed, in which both seen and unseen classes appear at test time. @cite_55 argues that although performance on the ImageNet classification challenge has surpassed human performance, we do not observe similar behavior from methods competing in the detection challenge, which involves rejecting unknown objects while detecting the position and label of known objects. @cite_21 uses label embeddings to operate in the generalized zero-shot learning setting, whereas @cite_74 proposes to learn latent representations for images and classes through coupled linear regression of factorized joint embeddings. On the other hand, @cite_57 introduces a new layer to the deep network that estimates the probability of an input being from an unknown class, and @cite_20 proposes a novelty detection mechanism.
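A simple (hypothetical) decision rule summarizes the novelty-detection route to generalized zero-shot prediction: fall back to zero-shot inference over unseen classes whenever the seen-class classifier is not confident. The threshold and classifiers below are placeholders, not any cited method's exact mechanism.

```python
import numpy as np

def gzsl_predict(x, seen_probs, zsl_predict, tau=0.5):
    """seen_probs: function returning softmax probabilities over seen classes;
    zsl_predict: zero-shot classifier over unseen classes; tau: novelty threshold."""
    p = seen_probs(x)
    if p.max() >= tau:                 # confident -> predict a seen class
        return ("seen", int(np.argmax(p)))
    return ("unseen", zsl_predict(x))  # likely novel -> zero-shot inference
```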
{ "cite_N": [ "@cite_37", "@cite_55", "@cite_21", "@cite_57", "@cite_74", "@cite_20" ], "mid": [ "", "1032927584", "2123024445", "2963149653", "2475168403", "2124033848" ], "abstract": [ "", "The perceived success of recent visual recognition approaches has largely been derived from their performance on classification tasks, where all possible classes are known at training time. But what about open set problems, where unknown classes appear at test time? Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under an assumption of incomplete class knowledge. In this paper, we formulate the problem as one of modeling positive training data at the decision boundary, where we can invoke the statistical extreme value theory. A new algorithm called the P I -SVM is introduced for estimating the unnormalized posterior probability of class inclusion.", "Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.", "Deep networks have produced significant gains for various visual recognition problems, leading to high impact academic and commercial applications. Recent work in deep networks highlighted that it is easy to generate images that humans would never classify as a particular object class, yet networks classify such images high confidence as that given class – deep network are easily fooled with images humans do not consider meaningful. The closed set nature of deep networks forces them to choose from one of the known classes leading to such artifacts. Recognition in the real world is open set, i.e. the recognition system should reject unknown unseen classes at test time. We present a methodology to adapt deep networks for open set recognition, by introducing a new model layer, OpenMax, which estimates the probability of an input being from an unknown class. A key element of estimating the unknown probability is adapting Meta-Recognition concepts to the activation patterns in the penultimate layer of the network. Open-Max allows rejection of \"fooling\" and unrelated open set images presented to the system, OpenMax greatly reduces the number of obvious errors made by a deep network. We prove that the OpenMax concept provides bounded open space risk, thereby formally providing an open set recognition solution. 
We evaluate the resulting open set deep networks using pre-trained networks from the Caffe Model-zoo on ImageNet 2012 validation data, and thousands of fooling and open set images. The proposed OpenMax model significantly outperforms open set recognition accuracy of basic deep networks as well as deep networks with thresholding of SoftMax probabilities.", "We focus on learning open-vocabulary visual classifiers, which scale up to a large portion of natural language vocabulary (e.g., over tens of thousands of classes). In particular, the training data are large-scale weakly labeled Web images since it is difficult to acquire sufficient well-labeled data at this category scale. In this paper, we propose a novel online learning paradigm towards this challenging task. Different from traditional N-way independent classifiers that generally fail to handle the extremely sparse and inter-related labels, our classifiers learn from continuous label embeddings discovered by collaboratively decomposing the sparse image-label matrix. Leveraging on the structure of the proposed collaborative learning formulation, we develop an efficient online algorithm that can jointly learn the label embeddings and visual classifiers. The algorithm can learn over 30,000 classes of 1,000 training images within 1 second on a standard GPU. Extensively experimental results on four benchmarks demonstrate the effectiveness of our method.", "This work introduces a model that can recognize objects in images even if no training data is available for the object class. The only necessary knowledge about unseen visual categories comes from unsupervised text corpora. Unlike previous zero-shot learning models, which can only differentiate between unseen classes, our model can operate on a mixture of seen and unseen classes, simultaneously obtaining state of the art performance on classes with thousands of training images and reasonable performance on unseen classes. This is achieved by seeing the distributions of words in texts as a semantic space for understanding what objects look like. Our deep learning model does not require any manually defined semantic or visual features for either words or images. Images are mapped to be close to semantic word vectors corresponding to their classes, and the resulting image embeddings can be used to distinguish whether an image is of a seen or unseen class. We then use novelty detection methods to differentiate unseen classes from seen classes. We demonstrate two novelty detection strategies; the first gives high accuracy on unseen classes, while the second is conservative in its prediction of novelty and keeps the seen classes' accuracy high." ] }
1707.00600
2963499153
Due to the importance of zero-shot learning, i.e., classifying images for which labeled training data is lacking, the number of proposed approaches has increased steadily in recent years. We argue that it is time to take a step back and analyze the status quo of the area. The purpose of this paper is three-fold. First, since there is no agreed-upon zero-shot learning benchmark, we define a new benchmark by unifying both the evaluation protocols and the data splits of publicly available datasets used for this task. This is an important contribution, as published results are often not comparable and sometimes even flawed due to, e.g., pre-training on zero-shot test classes. Moreover, we propose a new zero-shot learning dataset, Animals with Attributes 2 (AWA2), which we make publicly available both as image features and as the raw images. Second, we compare and analyze a significant number of state-of-the-art methods in depth, both in the classic zero-shot setting and in the more realistic generalized zero-shot setting. Finally, we discuss in detail the limitations of the current state of the area, which can serve as a basis for advancing it.
Although evaluations of zero-shot versus generalized zero-shot learning exist in the literature @cite_23 @cite_30, our work stands out in multiple aspects. For instance, @cite_23 operates on ImageNet 1K, using 800 classes for training and 200 for testing. One of the most comprehensive works, @cite_32, provides a comparison between five methods evaluated on three datasets, including ImageNet with three standard splits, and proposes a metric to evaluate generalized zero-shot learning performance. In contrast, we evaluate ten zero-shot learning methods on five datasets with several splits, both in the zero-shot and in the generalized zero-shot learning settings, provide statistical significance and robustness tests, and present other valuable insights that emerge from our benchmark. In this sense, ours is the most extensive evaluation of zero-shot and generalized zero-shot learning tasks in the literature.
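The calibration idea behind the seen-unseen trade-off metric of @cite_32 can be sketched as follows: a constant is subtracted from the seen-class scores before taking the joint argmax, and sweeping that constant traces the curve whose area (AUSUC) summarizes a method. This is a schematic reconstruction from the cited abstract, not the authors' code.

```python
import numpy as np

def seen_unseen_curve(S, y, is_seen_sample, n_seen, gammas):
    """S: (n, C) scores with the first n_seen columns for seen classes;
    y: labels in [0, C); is_seen_sample: boolean mask over samples.
    Returns one (seen_acc, unseen_acc) point per calibration value gamma."""
    points = []
    for g in gammas:
        S_cal = S.copy()
        S_cal[:, :n_seen] -= g                      # calibrated stacking
        pred = S_cal.argmax(1)
        seen_acc = (pred[is_seen_sample] == y[is_seen_sample]).mean()
        unseen_acc = (pred[~is_seen_sample] == y[~is_seen_sample]).mean()
        points.append((seen_acc, unseen_acc))
    return points                                   # integrate for AUSUC
```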
{ "cite_N": [ "@cite_30", "@cite_32", "@cite_23" ], "mid": [ "", "2400717490", "2077071968" ], "abstract": [ "", "We investigate the problem of generalized zero-shot learning (GZSL). GZSL relaxes the unrealistic assumption in conventional zero-shot learning (ZSL) that test data belong only to unseen novel classes. In GZSL, test data might also come from seen classes and the labeling space is the union of both types of classes. We show empirically that a straightforward application of classifiers provided by existing ZSL approaches does not perform well in the setting of GZSL. Motivated by this, we propose a surprisingly simple but effective method to adapt ZSL approaches for GZSL. The main idea is to introduce a calibration factor to calibrate the classifiers for both seen and unseen classes so as to balance two conflicting forces: recognizing data from seen classes and those from unseen ones. We develop a new performance metric called the Area Under Seen-Unseen accuracy Curve to characterize this trade-off. We demonstrate the utility of this metric by analyzing existing ZSL approaches applied to the generalized setting. Extensive empirical studies reveal strengths and weaknesses of those approaches on three well-studied benchmark datasets, including the large-scale ImageNet with more than 20,000 unseen categories. We complement our comparative studies in learning methods by further establishing an upper bound on the performance limit of GZSL. In particular, our idea is to use class-representative visual features as the idealized semantic embeddings. We show that there is a large gap between the performance of existing approaches and the performance limit, suggesting that improving the quality of class semantic embeddings is vital to improving ZSL.", "While knowledge transfer (KT) between object classes has been accepted as a promising route towards scalable recognition, most experimental KT studies are surprisingly limited in the number of object classes considered. To support claims of KT w.r.t. scalability we thus advocate to evaluate KT in a large-scale setting. To this end, we provide an extensive evaluation of three popular approaches to KT on a recently proposed large-scale data set, the ImageNet Large Scale Visual Recognition Competition 2010 data set. In a first setting they are directly compared to one-vs-all classification often neglected in KT papers and in a second setting we evaluate their ability to enable zero-shot learning. While none of the KT methods can improve over one-vs-all classification they prove valuable for zero-shot learning, especially hierarchical and direct similarity based KT. We also propose and describe several extensions of the evaluated approaches that are necessary for this large-scale study." ] }
1707.00762
2727642071
We propose a generalization of neural network sequence models. Instead of predicting one symbol at a time, our multi-scale model makes predictions over multiple, potentially overlapping multi-symbol tokens. A variation of the byte-pair encoding (BPE) compression algorithm is used to learn the dictionary of tokens that the model is trained with. When applied to language modeling, our model has the flexibility of character-level models while maintaining many of the performance benefits of word-level models. Our experiments show that this model performs better than a regular LSTM on language modeling tasks, especially for smaller models.
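The dictionary-learning step mentioned above can be illustrated with the textbook byte-pair-encoding merge loop; the paper uses a variation of this procedure, so the sketch below shows only the standard algorithm.

```python
from collections import Counter

def learn_bpe(text, num_merges):
    """Repeatedly merge the most frequent adjacent symbol pair; returns the
    learned token vocabulary and the tokenized sequence."""
    seq = list(text)                        # start from single characters
    vocab = set(seq)
    for _ in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))  # adjacent-pair frequencies
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merged, i = [], 0
        while i < len(seq):                 # greedy left-to-right replacement
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        seq = merged
        vocab.add(a + b)
    return vocab, seq

vocab, tokens = learn_bpe("aaabdaaabac", 2)  # first merge: 'a'+'a' -> 'aa'
```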
Other approaches use multi-scale RNN architectures. The model in @cite_15 uses both a word-level and a character-level RNN, the latter conditioned on the former; this model, too, requires knowledge of word boundaries. The approach in @cite_12 does not require word boundaries and instead uses the straight-through estimator to learn the latent hierarchical structure directly. Their model does not learn separate embeddings for the segments, however, and can only output a single character at a time.
{ "cite_N": [ "@cite_15", "@cite_12" ], "mid": [ "2176796957", "2510842514" ], "abstract": [ "Recurrent neural networks are convenient and efficient models for language modeling. However, when applied on the level of characters instead of words, they suffer from several problems. In order to successfully model long-term dependencies, the hidden representation needs to be large. This in turn implies higher computational costs, which can become prohibitive in practice. We propose two alternative structural modifications to the classical RNN model. The first one consists on conditioning the character level representation on the previous word representation. The other one uses the character history to condition the output probability. We evaluate the performance of the two proposed modifications on challenging, multi-lingual real world data.", "Learning both hierarchical and temporal representation has been among the long-standing challenges of recurrent neural networks. Multiscale recurrent neural networks have been considered as a promising approach to resolve this issue, yet there has been a lack of empirical evidence showing that this type of models can actually capture the temporal dependencies by discovering the latent hierarchical structure of the sequence. In this paper, we propose a novel multiscale approach, called the hierarchical multiscale recurrent neural networks, which can capture the latent hierarchical structure in the sequence by encoding the temporal dependencies with different timescales using a novel update mechanism. We show some evidence that our proposed multiscale architecture can discover underlying hierarchical structure in the sequences without using explicit boundary information. We evaluate our proposed model on character-level language modelling and handwriting sequence modelling." ] }
1707.00762
2727642071
We propose a generalization of neural network sequence models. Instead of predicting one symbol at a time, our multi-scale model makes predictions over multiple, potentially overlapping multi-symbol tokens. A variation of the byte-pair encoding (BPE) compression algorithm is used to learn the dictionary of tokens that the model is trained with. When applied to language modeling, our model has the flexibility of character-level models while maintaining many of the performance benefits of word-level models. Our experiments show that this model performs better than a regular LSTM on language modeling tasks, especially for smaller models.
The latent sequence decomposition (LSD) model introduced in @cite_22 is related to our multiscale model and was shown to improve performance on a speech recognition task. Instead of using compression algorithms, the LSD model uses a dictionary of all possible @math -grams. Since the number of @math -grams grows exponentially, this limits the dictionary to very short tokens. The LSD model uses a regular RNN trained on a set of sampled segmentations, instead of averaging the hidden states using dynamic programming; this complicates training and makes the likelihood of the model intractable. The recent Gram-CTC model @cite_5 is also related and does use dynamic programming, but it still relies on a fixed dictionary of character n-grams.
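The dynamic-programming marginalization over segmentations can be made concrete with a context-free unigram token model: the probability of a string is the sum, over all segmentations into dictionary tokens, of the product of token probabilities, computed by a forward recursion. The actual models condition token probabilities on an RNN state, which this sketch omits.

```python
import math

def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def marginal_log_prob(s, log_p):
    """log_p: dict mapping token -> log-probability. Returns the log of the
    total probability of s, summed over all segmentations into tokens."""
    max_len = max(len(t) for t in log_p)
    alpha = [-math.inf] * (len(s) + 1)
    alpha[0] = 0.0                               # empty prefix has probability 1
    for i in range(1, len(s) + 1):
        for l in range(1, min(max_len, i) + 1):  # candidate tokens ending at i
            tok = s[i - l:i]
            if tok in log_p:
                alpha[i] = logaddexp(alpha[i], alpha[i - l] + log_p[tok])
    return alpha[-1]
```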
{ "cite_N": [ "@cite_5", "@cite_22" ], "mid": [ "2594856242", "2952288254" ], "abstract": [ "Most existing sequence labelling models rely on a fixed decomposition of a target sequence into a sequence of basic units. These methods suffer from two major drawbacks: 1) the set of basic units is fixed, such as the set of words, characters or phonemes in speech recognition, and 2) the decomposition of target sequences is fixed. These drawbacks usually result in sub-optimal performance of modeling sequences. In this pa- per, we extend the popular CTC loss criterion to alleviate these limitations, and propose a new loss function called Gram-CTC. While preserving the advantages of CTC, Gram-CTC automatically learns the best set of basic units (grams), as well as the most suitable decomposition of tar- get sequences. Unlike CTC, Gram-CTC allows the model to output variable number of characters at each time step, which enables the model to capture longer term dependency and improves the computational efficiency. We demonstrate that the proposed Gram-CTC improves CTC in terms of both performance and efficiency on the large vocabulary speech recognition task at multiple scales of data, and that with Gram-CTC we can outperform the state-of-the-art on a standard speech benchmark.", "We present the Latent Sequence Decompositions (LSD) framework. LSD decomposes sequences with variable lengthed output units as a function of both the input sequence and the output sequence. We present a training algorithm which samples valid extensions and an approximate decoding algorithm. We experiment with the Wall Street Journal speech recognition task. Our LSD model achieves 12.9 WER compared to a character baseline of 14.8 WER. When combined with a convolutional network on the encoder, we achieve 9.6 WER." ] }
1707.00762
2727642071
We propose a generalization of neural network sequence models. Instead of predicting one symbol at a time, our multi-scale model makes predictions over multiple, potentially overlapping multi-symbol tokens. A variation of the byte-pair encoding (BPE) compression algorithm is used to learn the dictionary of tokens that the model is trained with. When applied to language modeling, our model has the flexibility of character-level models while maintaining many of the performance benefits of word-level models. Our experiments show that this model performs better than a regular LSTM on language modeling tasks, especially for smaller models.
Although our model is competitive with recent methods such as MI-LSTM @cite_8 and td-LSTM @cite_17, which achieve 1.44 and 1.63 bits per character on the text8 dataset, respectively, other recent models such as HM-LSTM @cite_12 have achieved lower scores (1.29 bpc; lower is better). Since many of the LSTM variations in the literature can be extended to the multiscale model, we believe it is possible to further improve the performance of multiscale models in the future. Similarly, deeper multi-layer extensions to our model are feasible.
{ "cite_N": [ "@cite_17", "@cite_12", "@cite_8" ], "mid": [ "", "2510842514", "2469894155" ], "abstract": [ "", "Learning both hierarchical and temporal representation has been among the long-standing challenges of recurrent neural networks. Multiscale recurrent neural networks have been considered as a promising approach to resolve this issue, yet there has been a lack of empirical evidence showing that this type of models can actually capture the temporal dependencies by discovering the latent hierarchical structure of the sequence. In this paper, we propose a novel multiscale approach, called the hierarchical multiscale recurrent neural networks, which can capture the latent hierarchical structure in the sequence by encoding the temporal dependencies with different timescales using a novel update mechanism. We show some evidence that our proposed multiscale architecture can discover underlying hierarchical structure in the sequences without using explicit boundary information. We evaluate our proposed model on character-level language modelling and handwriting sequence modelling.", "We introduce a general and simple structural design called Multiplicative Integration (MI) to improve recurrent neural networks (RNNs). MI changes the way in which information from difference sources flows and is integrated in the computational building block of an RNN, while introducing almost no extra parameters. The new structure can be easily embedded into many popular RNN models, including LSTMs and GRUs. We empirically analyze its learning behaviour and conduct evaluations on several tasks using different RNN models. Our experimental results demonstrate that Multiplicative Integration can provide a substantial performance boost over many of the existing RNN models." ] }
1707.00607
2728749482
In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries. Instead of a computational domain bounded by four B-spline curves, planar domains with high genus and more complex boundary curves are considered. Firstly, some pre-processing operations, including Bezier extraction and subdivision, are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After the topology information of the quadrilateral decomposition is generated, the optimal placement of interior Bezier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a patch partition with high quality. Finally, after the imposition of C1/G1-continuity constraints on the interfaces of neighboring Bezier patches with respect to each quad in the quadrangulation, the high-quality Bezier patch parameterization is obtained by a local optimization method that achieves uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples, which are compared to results obtained by the skeleton-based parameterization approach.
@cite_32 proposed the concept of analysis-aware modeling, in which the parameters of CAD models are selected to facilitate isogeometric analysis. @cite_11 showed that the quality of the parameterization has a great impact on the analysis results and efficiency. Pilgerstorfer and Jüttler @cite_44 showed that the condition number of the stiffness matrix, which is a key factor for the stability of the linear system, depends strongly on the quality of the domain parameterization.
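A small sketch of the kind of parameterization-quality check this motivates: sample the Jacobian determinant of a planar tensor-product Bezier patch; sign changes or near-zero values indicate a non-injective or poorly conditioned parameterization. The Bernstein-derivative formulas are standard; the function names are illustrative.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def jacobian_det(P, u, v):
    """P: (n+1, m+1, 2) control net of a planar Bezier patch; det J at (u, v)."""
    n, m = P.shape[0] - 1, P.shape[1] - 1
    # Partial derivatives via control-point differences (standard formulas).
    Su = sum(n * bernstein(n - 1, i, u) * bernstein(m, j, v) * (P[i + 1, j] - P[i, j])
             for i in range(n) for j in range(m + 1))
    Sv = sum(m * bernstein(n, i, u) * bernstein(m - 1, j, v) * (P[i, j + 1] - P[i, j])
             for i in range(n + 1) for j in range(m))
    return Su[0] * Sv[1] - Su[1] * Sv[0]

def min_jacobian(P, samples=20):
    """Minimum sampled Jacobian; a non-positive value flags a bad patch."""
    ts = np.linspace(0.0, 1.0, samples)
    return min(jacobian_det(P, u, v) for u in ts for v in ts)
```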
{ "cite_N": [ "@cite_44", "@cite_32", "@cite_11" ], "mid": [ "1987671130", "2092680586", "2097230877" ], "abstract": [ "Abstract Isogeometric Analysis (IGA) was introduced by (2005) [1] as a new method to bridge the gap between the geometry description and numerical analysis. Similar to the finite element approach, the IGA concept to solve a partial differential equation leads to a (linear) system of equations. The condition number of the coefficient matrix is a crucial factor for the stability of the system. It depends strongly on the domain parameterization, which provides the isogeometric discretization. In this paper we derive a bound for the condition number of the stiffness matrix of the Poisson equation. In particular, we investigate the influence of the domain parameterization and the knot spacing on the stability of the numerical system. The factors appearing in our bound reflect the stability properties of a given domain parameterization.", "Isogeometric analysis has been proposed as a methodology for bridging the gap between computer aided design (CAD) and finite element analysis (FEA). Although both the traditional and isogeometric pipelines rely upon the same conceptualization to solid model steps, they drastically differ in how they bring the solid model both to and through the analysis process. The isogeometric analysis process circumvents many of the meshing pitfalls experienced by the traditional pipeline by working directly within the approximation spaces used by the model representation. In this paper, we demonstrate that in a similar way as how mesh quality is used in traditional FEA to help characterize the impact of the mesh on analysis, an analogous concept of model quality exists within isogeometric analysis. The consequence of these observations is the need for a new area within modeling – analysis-aware modeling – in which model properties and parameters are selected to facilitate isogeometric analysis.", "In the isogeometric analysis framework, a computational domain is exactly described using the same representation as the one employed in the CAD process. For a CAD object, various computational domains can be constructed with the same shape but with different parameterizations; however one basic requirement is that the resulting parameterization should have no self-intersections. Moreover we will show, with an example of a 3D thermal conduction problem, that different parameterizations of a computational domain have different impacts on the simulation results and efficiency in isogeometric analysis. In this paper, a linear and easy-to-check sufficient condition for the injectivity of a trivariate B-spline parameterization is proposed. For problems with exact solutions, we will describe a shape optimization method to obtain an optimal parameterization of a computational domain. The proposed injective condition is used to check the injectivity of the initial trivariate B-spline parameterization constructed by discrete Coons volume method, which is a generalization of the discrete Coons patch method. Several examples and comparisons are presented to show the effectiveness of the proposed method. During the refinement step, the optimal parameterization can achieve the same accuracy as the initial parameterization but with less degrees of freedom." ] }
1707.00607
2728749482
In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries. Instead of a computational domain bounded by four B-spline curves, planar domains with high genus and more complex boundary curves are considered. Firstly, some pre-processing operations, including Bezier extraction and subdivision, are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After the topology information of the quadrilateral decomposition is generated, the optimal placement of interior Bezier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a patch partition with high quality. Finally, after the imposition of C1/G1-continuity constraints on the interfaces of neighboring Bezier patches with respect to each quad in the quadrangulation, the high-quality Bezier patch parameterization is obtained by a local optimization method that achieves uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples, which are compared to results obtained by the skeleton-based parameterization approach.
Using volumetric harmonic functions, @cite_47 proposed a method for fitting triangular meshes with B-spline parametric volumes. In @cite_6, a method is proposed to construct trivariate T-spline volumetric parameterizations for genus-zero solids based on an adaptive tetrahedral meshing and mesh untangling technique. A robust and efficient approach to construct injective solid T-splines for genus-zero geometries from a boundary triangulation is proposed in @cite_36. A volumetric parameterization method with PHT-splines, starting from a level-set boundary representation, is proposed in @cite_45. For meshes with arbitrary topology, volumetric parameterization methods based on Morse theory @cite_39 and Boolean operations @cite_19 have been proposed.
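The discrete harmonic-map idea behind several of these constructions can be sketched with a Jacobi relaxation of the Laplace equation on a grid: boundary values are fixed and interior values are iterated toward the average of their neighbors. This illustrates the principle only; the cited methods operate on tetrahedral meshes and spline spaces.

```python
import numpy as np

def harmonic_fill(grid, interior_mask, iters=2000):
    """grid: 2D array with boundary values already set; entries where
    interior_mask is True are relaxed toward the discrete Laplace equation."""
    g = grid.astype(float).copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(g, 1, 0) + np.roll(g, -1, 0)
                      + np.roll(g, 1, 1) + np.roll(g, -1, 1))
        g[interior_mask] = avg[interior_mask]   # Jacobi update on the interior
    return g
```

Applying this componentwise to (u, v) boundary coordinates yields a discrete harmonic parameterization of the interior.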
{ "cite_N": [ "@cite_36", "@cite_6", "@cite_39", "@cite_19", "@cite_45", "@cite_47" ], "mid": [ "2137312224", "1975803044", "2013052821", "2054892990", "2520980191", "2094528257" ], "abstract": [ "Abstract This paper describes a novel method to construct solid rational T-splines for complex genus-zero geometry from boundary surface triangulations. We first build a parametric mapping between the triangulation and the boundary of the parametric domain, a unit cube. After that we adaptively subdivide the cube using an octree subdivision, project the boundary nodes onto the input triangle mesh, and at the same time relocate the interior nodes via mesh smoothing. This process continues until the surface approximation error is less than a pre-defined threshold. T-mesh is then obtained by pillowing the subdivision result one layer on the boundary and its quality is improved. Templates are implemented to handle extraordinary nodes and partial extraordinary nodes in order to get a gap-free T-mesh. The obtained solid T-spline is C 2 -continuous except for the local region around each extraordinary node and partial extraordinary node. The boundary surface of the solid T-spline is C 2 -continuous everywhere except for the local region around the eight nodes corresponding to the eight corners of the parametric cube. Finally, a Bezier extraction technique is used to facilitate T-spline based isogeometric analysis. The obtained Bezier mesh is analysis-suitable with no negative Jacobians. Several examples are presented in this paper to show the robustness of the algorithm.", "Abstract We present a new method to construct a trivariate T-spline representation of complex genus-zero solids for the application of isogeometric analysis. The proposed technique only demands a surface triangulation of the solid as input data. The key of this method lies in obtaining a volumetric parameterization between the solid and the parametric domain, the unitary cube. To do that, an adaptive tetrahedral mesh of the parametric domain is isomorphically transformed onto the solid by applying a mesh untangling and smoothing procedure. The control points of the trivariate T-spline are calculated by imposing the interpolation conditions on points sited both on the inner and on the surface of the solid. The distribution of the interpolating points is adapted to the singularities of the domain in order to preserve the features of the surface triangulation.", "A comprehensive scheme is described to construct rational trivariate solid T-splines from boundary triangulations with arbitrary topology. To extract the topology of the input geometry, we first compute a smooth harmonic scalar field defined over the mesh, and saddle points are extracted to determine the topology. By dealing with the saddle points, a polycube whose topology is equivalent to the input geometry is built, and it serves as the parametric domain for the trivariate T-spline. A polycube mapping is then used to build a one-to-one correspondence between the input triangulation and the polycube boundary. After that, we choose the deformed octree subdivision of the polycube as the initial T-mesh, and make it valid through pillowing, quality improvement and applying templates to handle extraordinary nodes and partial extraordinary nodes. The T-spline that is obtained is C^2-continuous everywhere over the boundary surface except for the local region surrounding polycube corner nodes. 
The efficiency and robustness of the presented technique are demonstrated with several applications in isogeometric analysis.", "In this paper, we present a novel algorithm for constructing a volumetric T-spline from B-reps inspired by constructive solid geometry Boolean operations. By solving a harmonic field with proper boundary conditions, the input surface is automatically decomposed into regions that are classified into two groups represented, topologically, by either a cube or a torus. We perform two Boolean operations (union and difference) with the primitives and convert them into polycubes through parametric mapping. With these polycubes, octree subdivision is carried out to obtain a volumetric T-mesh, and sharp features detected from the input model are also preserved. An optimization is then performed to improve the quality of the volumetric T-spline. The obtained T-spline surface is C 2 everywhere except the local region surrounding irregular nodes, where the surface continuity is elevated from C 0 to G 1. Finally, we extract trivariate Bezier elements from the volumetric T-spline and use them directly in isogeometric analysis.", "Abstract A challenge in isogeometric analysis is constructing analysis-suitable volumetric meshes which can accurately represent the geometry of a given physical domain. In this paper, we propose a method to derive a spline-based representation of a domain of interest from voxel-based data. We show an efficient way to obtain a boundary representation of the domain by a level-set function. Then, we use the geometric information from the boundary (the normal vectors and curvature) to construct a matching C 1 representation with hierarchical cubic splines. The approximation is done by a single template and linear transformations (scaling, translations and rotations) without the need for solving an optimization problem. We illustrate our method with several examples in two and three dimensions, and show good performance on some standard benchmark test problems.", "We present a methodology based on discrete volumetric harmonic functions to parameterize a volumetric model in a way that it can be used to fit a single trivariate B-spline to data so that simulation attributes can also be modeled. The resulting model representation is suitable for isogeometric analysis [Hughes, T.J., Cottrell, J.A., B., Y., 2005. Isogeometric analysis: Cad, finite elements, nurbs, exact geometry, and mesh refinement. Computer Methods in Applied Mechanics and Engineering 194, 4135-4195]. Input data consists of both a closed triangle mesh representing the exterior geometric shape of the object and interior triangle meshes that can represent material attributes or other interior features. The trivariate B-spline geometric and attribute representations are generated from the resulting parameterization, creating trivariate B-spline material property representations over the same parameterization in a way that is related to [Martin, W., Cohen, E., 2001. Representation and extraction of volumetric attributes using trivariate splines. In: Symposium on Solid and Physical Modeling, pp. 234-240] but is suitable for application to a much larger family of shapes and attributes. The technique constructs a B-spline representation with guaranteed quality of approximation to the original data. Then we focus attention on a model of simulation interest, a femur, consisting of hard outer cortical bone and inner trabecular bone. 
The femur is a reasonably complex object to model with a single trivariate B-spline since the shape overhangs make it impossible to model by sweeping planar slices. The representation is used in an elastostatic isogeometric analysis, demonstrating its ability to suitably represent objects for isogeometric analysis." ] }
1707.00607
2728749482
In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries. Instead of a computational domain bounded by four B-spline curves, planar domains with high genus and more complex boundary curves are considered. Firstly, some pre-processing operations, including Bezier extraction and subdivision, are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After the topology information of the quadrilateral decomposition is generated, the optimal placement of interior Bezier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a patch partition with high quality. Finally, after the imposition of C1/G1-continuity constraints on the interfaces of neighboring Bezier patches with respect to each quad in the quadrangulation, the high-quality Bezier patch parameterization is obtained by a local optimization method that achieves uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples, which are compared to results obtained by the skeleton-based parameterization approach.
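The patch-stitching step described above hinges on continuity constraints between neighboring Bezier patches. For equal parameter intervals, the C1 join condition along a shared boundary reduces to a linear relation on the control points adjacent to it; the one-dimensional (curve) version is sketched below with illustrative names.

```python
import numpy as np

def enforce_c1(P, Q):
    """P, Q: (n+1, d) Bezier control polygons of equal degree, meeting at
    P[-1] / Q[0]. Returns Q adjusted so the join is C1 for equal parameter
    intervals: the shared point and its two neighbors are collinear and
    equally spaced."""
    Q = Q.copy()
    Q[0] = P[-1]               # C0: shared endpoint
    Q[1] = 2 * P[-1] - P[-2]   # C1: Q'(0) = P'(1)  =>  Q1 = 2*Pn - P(n-1)
    return Q
```

For G1 (geometric) continuity, the same relation holds only up to a positive scalar, a freedom that G1 formulations can exploit during local optimization.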
A variational approach for constructing NURBS parameterizations of swept volumes is proposed in @cite_21. A constrained optimization framework to construct analysis-suitable volume parameterizations is proposed in @cite_46. Spline volume fairing is proposed by Pettersen and Skytt to obtain high-quality volume parameterizations @cite_4. The construction of conformal solid T-splines from boundary T-spline representations is studied using an octree structure and boundary offsets @cite_2. In @cite_29, a variational harmonic method is proposed to construct analysis-suitable parameterizations of computational domains from given CAD boundary information. Wang and Qian proposed an efficient method combining divide-and-conquer, constraint aggregation, and hierarchical optimization techniques to obtain valid trivariate B-spline solids from six boundary B-spline surfaces @cite_12. Analysis-suitable trivariate NURBS representations of composite panels are constructed with a new curve/surface offset algorithm @cite_15. A two-stage scheme to construct analysis-suitable NURBS volumetric parameterizations by a uniformity-improved boundary reparameterization method is proposed in @cite_34. Recently, given a template domain, B-spline-based consistent volumetric parameterization has been proposed for sets of models with similar semantic features @cite_22.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_15", "@cite_29", "@cite_21", "@cite_2", "@cite_46", "@cite_34", "@cite_12" ], "mid": [ "1418831932", "2963659898", "2084654821", "2109389549", "1491204944", "2055009036", "2144673432", "1980993650", "2079132777" ], "abstract": [ "The use of trivariate NURBS in isogeometric analysis has put the quality of parametrization of NURBS volumes on the agenda. Sometimes a NURBS volume needs a better parametrization to meet requirements regarding smoothness, approximation or periodicity. In this paper we generalize various smoothing methods that already exist for bivariate parametric spline surfaces to trivariate parametric spline volumes. We will also address how rational and polynomial spline volumes create different challenges and solutions in the algorithms.", "Volumetric spline parameterization and computational efficiency are two main challenges in isogeometric analysis (IGA). To tackle this problem, we propose a framework of computation reuse in IGA on a set of three-dimensional models with similar semantic features. Given a template domain, B-spline based consistent volumetric parameterization is first constructed for a set of models with similar semantic features. An efficient quadrature-free method is investigated in our framework to compute the entries of stiffness matrix by Bezier extraction and polynomial approximation. In our approach, evaluation on the stiffness matrix and imposition of the boundary conditions can be pre-computed and reused during IGA on a set of CAD models. Examples with complex geometry are presented to show the effectiveness of our methods, and efficiency similar to the computation in linear finite element analysis can be achieved for IGA taken on a set of models.", "Trivariate NURBS (non-uniform rational B-splines) representation of composite panels which is suitable for three-dimensional isogeometric analysis (IGA) is constructed with a new curve surface offset algorithm. The proposed offset algorithm, which is required by IGA, is non-existent in the CAD literature. Using the presented approach, finite element analysis of composite panels can be performed with the only input being the geometry representation of the composite surface. The method proposed provides a bi-directional system in which one can go forward from CAD to analysis and backwards from analysis to CAD. This is believed to facilitate the design of composite structures. Different parts (patches) can be parametrized independently of each other and glued together, in the finite element solver, by a discontinuous Galerkin method. A stress analysis of curved composite panel with stiffeners is provided to demonstrate the proposed framework.", "In isogeometric analysis, parameterization of computational domain has great effects as mesh generation in finite element analysis. In this paper, based on the concept of harmonic mapping from the computational domain to parametric domain, a variational harmonic approach is proposed to construct analysis-suitable parameterization of computational domain from CAD boundary for 2D and 3D isogeometric applications. Different from the previous elliptic mesh generation method in finite element analysis, the proposed method focuses on isogeometric version, and converts the elliptic PDE into a nonlinear optimization problem, in which a regular term is integrated into the optimization formulation to achieve more uniform and orthogonal iso-parametric structure near convex (concave) parts of the boundary. 
Several examples are presented to show the efficiency of the proposed method in 2D and 3D isogeometric analysis.", "Isogeometric Analysis uses NURBS representations of the domain for performing numerical simulations. The first part of this paper presents a variational framework for generating NURBS parameterizations of swept volumes. The class of these volumes covers a number of interesting free-form shapes, such as blades of turbines and propellers, ship hulls or wings of airplanes. The second part of the paper reports the results of isogeometric analysis which were obtained with the help of the generated NURBS volume parameterizations. In particular we discuss the influence of the chosen parameterization and the incorporation of boundary conditions.", "To achieve a tight integration of design and analysis, conformal solid T-spline construction with the input boundary spline representation preserved is desirable. However, to the best of our knowledge, this is still an open problem. In this paper, we provide its first solution. The input boundary T-spline surface has genus-zero topology and only contains eight extraordinary nodes, with an isoparametric line connecting each pair. One cube is adopted as the parametric domain for the solid T-spline. Starting from the cube with all the nodes on the input surface as T-junctions, we adaptively subdivide the domain based on the octree structure until each face or edge contains at most one face T-junction or one edge T-junction. Next, we insert two boundary layers between the input T-spline surface and the boundary of the subdivision result. Finally, knot intervals are calculated from the T-mesh and the solid T-spline is constructed. The obtained T-spline is conformal to the input T-spline surface with exactly the same boundary representation and continuity. For the interior region, the continuity is C 2 everywhere except for the local region surrounding irregular nodes. Several examples are presented to demonstrate the performance of the algorithm.", "Parameterization of the computational domain is a key step in isogeometric analysis just as mesh generation is in finite element analysis. In this paper, we study the volume parameterization problem of the multi-block computational domain in an isogeometric version, i.e., how to generate analysis-suitable parameterization of the multi-block computational domain bounded by B-spline surfaces. Firstly, we show how to find good volume parameterization of the single-block computational domain by solving a constraint optimization problem, in which the constraint condition is the injectivity sufficient conditions of B-spline volume parameterization, and the optimization term is the minimization of quadratic energy functions related to the first and second derivatives of B-spline volume parameterization. By using this method, the resulting volume parameterization has no self-intersections, and the isoparametric structure has good uniformity and orthogonality. Then we extend this method to the multi-block case, in which the continuity condition between the neighbor B-spline volumes should be added to the constraint term. The effectiveness of the proposed method is illustrated by several examples based on the three-dimensional heat conduction problem.", "High-quality volumetric parameterization of computational domain plays an important role in three-dimensional isogeometric analysis. Reparameterization technique can improve the distribution of isoparametric curves surfaces without changing the geometry. 
In this paper, using the reparameterization method, we investigate the high-quality construction of analysis-suitable NURBS volumetric parameterization. Firstly, we introduce the concept of volumetric reparameterization, and propose an optimal Möbius transformation to improve the quality of the isoparametric structure based on a new uniformity metric. Secondly, from given boundary NURBS surfaces, we present a two-stage scheme to construct the analysis-suitable volumetric parameterization: in the first step, uniformity-improved reparameterization is performed on the boundary surfaces to achieve high-quality isoparametric structure without changing the shape; in the second step, from a new variational harmonic metric and the reparameterized boundary surfaces, we construct the optimal inner control points and weights to achieve an analysis-suitable NURBS solid. Several examples with complicated geometry are presented to illustrate the effectiveness of proposed methods.", "In this paper, we present an approach that automatically constructs a trivariate tensor-product B-spline solid via a gradient-based optimization approach. Given six boundary B-spline surfaces for a solid, this approach finds the internal control points so that the resulting trivariate B-spline solid is valid in the sense the minimal Jacobian of the solid is positive. It further minimizes a volumetric functional to improve resulting parametrization quality. For a trivariate B-spline solid even with moderate shape complexity, direct optimization of the Jacobian of the B-spline solid is computationally prohibitive since it would involve thousands of design variables and hundreds of thousands of constraints. We developed several techniques to address this challenge. First, we develop initialization methods that can rapidly generate initial parametrization that are valid or near-valid. We then use a divide-and-conquer approach to partition the large optimization problem into a set of separable sub-problems. For each sub-problem, we group the B-spline coefficients of the Jacobian determinant into different blocks and make one constraint for each block of coefficients. This is achieved by taking an aggregate function, the Kreisselmeier-Steinhauser function value of the elements in each block. With block aggregation, it reduces the dimension of the problem dramatically. In order to further reduce the computing time at each iteration, a hierarchical optimization approach is used where the input boundary surfaces are coarsened to different levels. We optimize the distribution of internal control points for the coarse representation first, then use the result as initial parametrization for optimization at the next level. The resulting parametrization can then be further optimized to improve the mesh quality. Optimized trivariate parametrization from various boundary surfaces and the corresponding parametrization metric are given to illustrate the effectiveness of the approach." ] }
1707.00811
2952450804
Fine-Grained Visual Categorization (FGVC) has achieved significant progress recently. However, the number of fine-grained species could be huge and dynamically increasing in real scenarios, making it difficult to recognize unseen objects under the current FGVC framework. This raises an open issue to perform large-scale fine-grained identification without a complete training set. To address this issue, we propose a retrieval task named One-Shot Fine-Grained Instance Retrieval (OSFGIR). "One-Shot" denotes the ability to identify unseen objects through a fine-grained retrieval task assisted with an incomplete auxiliary training set. This paper first presents a detailed description of the OSFGIR task and our collected OSFGIR-378K dataset. Next, we propose the Convolutional and Normalization Networks (CN-Nets), learned on the auxiliary dataset, to generate a concise and discriminative representation. Finally, we present a coarse-to-fine retrieval framework consisting of three components, i.e., coarse retrieval, fine-grained retrieval, and query expansion, respectively. The framework progressively retrieves images with similar semantics and performs fine-grained identification. Experiments show our OSFGIR framework achieves significantly better accuracy and efficiency than existing FGVC and image retrieval methods, and thus could be a better solution for large-scale fine-grained object identification.
CNNs have exhibited promising performance on various vision tasks, and several works have applied them to image and instance retrieval @cite_42 @cite_38 @cite_17 @cite_26 @cite_36 @cite_37 @cite_30 . Neural Codes @cite_38 is an early work applying CNNs to image retrieval: Babenko et al. employ the output of a fully-connected layer as the image feature. Since Vector of Locally Aggregated Descriptors (VLAD) @cite_40 shows good retrieval performance by encoding SIFT descriptors, Ng et al. @cite_17 replace SIFT with CNN features and encode the convolutional feature maps into a global feature with VLAD. In @cite_30 , Tolias et al. demonstrate that simply applying spatial max-pooling over all locations of the convolutional feature maps produces an effective visual descriptor. Instance retrieval differs slightly from image retrieval because it focuses on the image regions containing the target object rather than the entire image. Given the object bounding boxes of query images, Tolias et al. @cite_30 propose approximate integral max-pooling to select the best matching bounding box from hundreds of candidates. Different from @cite_30 , Salvador et al. @cite_42 and Gordo et al. @cite_36 apply Faster R-CNN @cite_7 to reduce the number of candidate proposals.
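To make the max-pooled convolutional descriptor that @cite_30 shows to be effective more concrete, here is a minimal NumPy sketch. The feature-map shape (512 channels, 14x14 grid) and gallery size are illustrative assumptions, not details from the cited works: each conv feature map is max-pooled over spatial locations into one vector, L2-normalized, and the gallery is ranked by cosine similarity.

```python
import numpy as np

def mac_descriptor(feature_map: np.ndarray) -> np.ndarray:
    """Max-pool a conv feature map of shape (C, H, W) over all spatial
    locations into a single C-dim descriptor, then L2-normalize it."""
    desc = feature_map.reshape(feature_map.shape[0], -1).max(axis=1)
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

def rank_gallery(query_desc: np.ndarray, gallery_descs: np.ndarray) -> np.ndarray:
    """Return gallery indices sorted by descending cosine similarity;
    descriptors are L2-normalized, so a dot product is the cosine."""
    return np.argsort(-(gallery_descs @ query_desc))

# Toy usage with random stand-ins for conv feature maps.
rng = np.random.default_rng(0)
query = mac_descriptor(rng.random((512, 14, 14)))
gallery = np.stack([mac_descriptor(rng.random((512, 14, 14))) for _ in range(100)])
print(rank_gallery(query, gallery)[:5])
```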
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_37", "@cite_26", "@cite_7", "@cite_36", "@cite_42", "@cite_40", "@cite_17" ], "mid": [ "", "204268067", "2951136860", "1524680991", "2953106684", "2340690086", "2342601131", "1984309565", "2949266290" ], "abstract": [ "", "It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time.", "We propose a simple and straightforward way of creating powerful image representations via cross-dimensional weighting and aggregation of deep convolutional neural network layer outputs. We first present a generalized framework that encompasses a broad family of approaches and includes cross-dimensional pooling and weighting steps. We then propose specific non-parametric schemes for both spatial- and channel-wise weighting that boost the effect of highly active spatial responses and at the same time regulate burstiness effects. We experiment on different public datasets for image search and show that our approach outperforms the current state-of-the-art for approaches based on pre-trained networks. We also provide an easy-to-use, open source implementation that reproduces our results.", "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012 2013 classification and INRIA Holidays retrieval datasets.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. 
The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com Deep-Image-Retrieval.", "Image representations derived from pre-trained Convolutional Neural Networks (CNNs) have become the new state of the art in computer vision tasks such as instance retrieval. This work explores the suitability for instance retrieval of image- and region-wise representations pooled from an object detection CNN such as Faster R-CNN. We take advantage of the object proposals learned by a Region Proposal Network (RPN) and their associated CNN features to build an instance search pipeline composed of a first filtering stage followed by a spatial reranking. We further investigate the suitability of Faster R-CNN features when the network is fine-tuned for the same objects one wants to retrieve. We assess the performance of our proposed system with the Oxford Buildings 5k, Paris Buildings 6k and a subset of TRECVid Instance Search 2013, achieving competitive results.", "This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. 
Searching a 100 million image data set takes about 250 ms on one processor core.", "Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks." ] }
1707.00811
2952450804
Fine-Grained Visual Categorization (FGVC) has achieved significant progress recently. However, the number of fine-grained species could be huge and dynamically increasing in real scenarios, making it difficult to recognize unseen objects under the current FGVC framework. This raises an open issue to perform large-scale fine-grained identification without a complete training set. To address this issue, we propose a retrieval task named One-Shot Fine-Grained Instance Retrieval (OSFGIR). "One-Shot" denotes the ability to identify unseen objects through a fine-grained retrieval task assisted with an incomplete auxiliary training set. This paper first presents a detailed description of the OSFGIR task and our collected OSFGIR-378K dataset. Next, we propose the Convolutional and Normalization Networks (CN-Nets), learned on the auxiliary dataset, to generate a concise and discriminative representation. Finally, we present a coarse-to-fine retrieval framework consisting of three components, i.e., coarse retrieval, fine-grained retrieval, and query expansion, respectively. The framework progressively retrieves images with similar semantics and performs fine-grained identification. Experiments show our OSFGIR framework achieves significantly better accuracy and efficiency than existing FGVC and image retrieval methods, and thus could be a better solution for large-scale fine-grained object identification.
OSFGIR differs from FGVC because it is a retrieval task and is thus able to query and identify unseen objects. OSFGIR also differs from most visual retrieval tasks because it needs to further identify and capture the subtle differences among visually and semantically similar objects. Among recent visual retrieval methods, Gordo et al. @cite_36 have achieved promising performance. However, the method in @cite_36 is not suitable for OSFGIR because: 1) it targets partial-duplicate image retrieval and is evaluated on the widely-used Oxford5K @cite_21 and Holidays @cite_48 datasets, whereas OSFGIR aims to return images containing the identical fine-grained species as the query; these two problems are quite different. 2) The deep regional feature training in @cite_36 involves keypoint matching to generate the bounding boxes for each candidate object, and is thus better suited to partial-duplicate image search.
{ "cite_N": [ "@cite_36", "@cite_48", "@cite_21" ], "mid": [ "2340690086", "2433935537", "2141362318" ], "abstract": [ "We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com Deep-Image-Retrieval.", "This technical report presents and extends a recent paper we have proposed for large scale image search. State-of-the-art methods build on the bag-of- features image representation. We first analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within an inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy.", "In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. 
To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, \"web-scale\" image corpora." ] }
1707.00852
2731952176
Efficient flexibility and higher system scalability call for enhanced network performance, lower energy consumption, lower infrastructure cost, and effective resource utilization. To accomplish this, an architectural optimization and reconstruction of the existing cellular network is required. Network slicing is considered to be one of the key enablers and an architectural answer for communication systems of 2020 and beyond. Traditional mobile operators provide all types of services to various kinds of customers through a single network; however, with the deployment of network slicing, operators are now able to divide the entire network into different slices, each with its own configuration and specific Quality of Service (QoS) requirements. In a slice-based network, each slice is considered a separate logical network. In this way, infrastructure utilization and resource allocation become much more energy- and cost-efficient than in a traditional network. In this paper, we provide a comprehensive discussion of the concept and system architecture of network slicing, with particular focus on its business aspects and profit modeling. We thoroughly discuss two different dimensions of profit modeling, so-called Own-Slice Implementation and Resource Leasing for Outsourced Slices. We further address open research directions and existing challenges with the purpose of motivating new advances and adding realistic solutions to this emerging technology.
The concept of network slicing has been studied extensively in the literature. For instance, authors in @cite_7 explained a detailed end-to-end framework for network slicing implementation in 5G communication systems. The paper deals with the deployment of vertical and horizontal slicing over the air interface, Radio Access Network (RAN), and Core Network (CN). It further focuses on how to horizontally slice both computation and communication resources to form virtual computation platforms in order to improve scalability, enhance device capability, and improve the end-user experience.
{ "cite_N": [ "@cite_7" ], "mid": [ "2496781068" ], "abstract": [ "Wireless industry nowadays is facing two major challenges: 1) how to support the vertical industry applications so that to expand the wireless industry market and 2) how to further enhance device capability and user experience. In this paper, we propose a technology framework to address these challenges. The proposed technology framework is based on end-to-end vertical and horizontal slicing, where vertical slicing enables vertical industry and services and horizontal slicing improves system capacity and user experience. The technology development on vertical slicing has already started in late 4G and early 5G and is mostly focused on slicing the core network. We envision this trend to continue with the development of vertical slicing in the radio access network and the air interface. Moving beyond vertical slicing, we propose to horizontally slice the computation and communication resources to form virtual computation platforms for solving the network capacity scaling problem and enhancing device capability and user experience. In this paper, we explain the concept of vertical and horizontal slicing and illustrate the slicing techniques in the air interface, the radio access network, the core network and the computation platform. This paper aims to initiate the discussion on the long-range technology roadmap and spur development on the solutions for E2E network slicing in 5G and beyond." ] }
1707.00852
2731952176
Efficient flexibility and higher system scalability call for enhanced network performance, lower energy consumption, lower infrastructure cost, and effective resource utilization. To accomplish this, an architectural optimization and reconstruction of the existing cellular network is required. Network slicing is considered to be one of the key enablers and an architectural answer for communication systems of 2020 and beyond. Traditional mobile operators provide all types of services to various kinds of customers through a single network; however, with the deployment of network slicing, operators are now able to divide the entire network into different slices, each with its own configuration and specific Quality of Service (QoS) requirements. In a slice-based network, each slice is considered a separate logical network. In this way, infrastructure utilization and resource allocation become much more energy- and cost-efficient than in a traditional network. In this paper, we provide a comprehensive discussion of the concept and system architecture of network slicing, with particular focus on its business aspects and profit modeling. We thoroughly discuss two different dimensions of profit modeling, so-called Own-Slice Implementation and Resource Leasing for Outsourced Slices. We further address open research directions and existing challenges with the purpose of motivating new advances and adding realistic solutions to this emerging technology.
Moreover, @cite_1 , @cite_6 , and @cite_0 focused on the deployment of network slicing over the RAN architecture of mobile networks. Authors in @cite_1 analyzed network slicing in a multi-cell RAN in order to support radio resource splitting among various slices. The paper further proposed four types of RAN slicing approaches along with a detailed comparison. Meanwhile, authors in @cite_6 explained how network slicing may impact various aspects of the design and functions of the RAN architecture of 5G mobile networks, thoroughly covering the RAN requirements for network slicing implementation. In @cite_0 , authors provided a comprehensive discussion of the deployment of network slicing in heterogeneous Cloud-RAN (C-RAN) in order to improve throughput through computation and communication resource sharing.
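Since both @cite_1 and @cite_0 revolve around splitting radio resources among slices (the H-CRAN work in @cite_0 even formulates a weighted throughput maximization), a toy sketch may help fix ideas. The slice names, weights, and largest-remainder rounding below are illustrative assumptions of a static proportional split, not details from the cited papers.

```python
def split_resource_blocks(total_prbs: int, slice_weights: dict) -> dict:
    """Split a pool of physical resource blocks (PRBs) among slices in
    proportion to priority weights, using largest-remainder rounding."""
    total_weight = sum(slice_weights.values())
    exact = {s: total_prbs * w / total_weight for s, w in slice_weights.items()}
    alloc = {s: int(v) for s, v in exact.items()}
    leftover = total_prbs - sum(alloc.values())
    # Hand the remaining PRBs to the slices with the largest fractional parts.
    for s in sorted(exact, key=lambda s: exact[s] - alloc[s], reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc

# Illustrative slices: mobile broadband, massive IoT, and low-latency traffic.
print(split_resource_blocks(100, {"eMBB": 4, "mMTC": 1, "URLLC": 2}))
# {'eMBB': 57, 'mMTC': 14, 'URLLC': 29}
```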
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_6" ], "mid": [ "2601022114", "2605289347", "2513616575" ], "abstract": [ "Research on network slicing for multi-tenant heterogeneous cloud radio access networks (H-CRANs) is still in its infancy. In this paper, we redefine network slicing and propose a new network slicing framework for multi-tenant H-CRANs. In particular, the network slicing process is formulated as a weighted throughput maximization problem that involves sharing of computational resources, fronthaul capacity, physical remote radio heads and radio resources. The problem is then jointly solved using a sub-optimal greedy approach and a dual decomposition method. Simulation results demonstrate that the framework can flexibly scale the throughput performance of multiple tenants according to the user priority weights associated with the tenants.", "Network slicing is a fundamental capability for future 5G networks to properly support current and envisioned future application scenarios. Network slicing facilitates a cost-effective deployment and operation of multiple logical networks over a common physical network infrastructure such that each network is customized to best serve the needs of specific applications (e.g., mobile broadband, Internet of Things applications) and or communications service providers (e.g., special purpose service providers for different sectors such as public safety, utilities, smart city, and automobiles). Slicing a RAN becomes particularly challenging due to the inherently shared nature of the radio channel and the potential influence that any transmitter may have on any receiver. In this respect, this article analyzes the RAN slicing problem in a multi-cell network in relation to the RRM functionalities that can be used as a support for splitting the radio resources among the RAN slices. Four different RAN slicing approaches are presented and compared from different perspectives, such as the granularity in the assignment of radio resources and the degrees of isolation and customization.", "Network slicing addresses the deployment of multiple logical networks as independent business operations on a common physical infrastructure. The concept has initially been proposed for the 5 th Generation (5G) core network (CN) however, it has not been investigated yet what network slicing would represent to the design of the 5G radio access network (RAN). The paper explains how network slicing may impact several aspects of the 5G RAN design such as the protocol architecture, the design of network functions (NFs) and the management framework that needs to support both the management of the infrastructure to be shared among the slices and the slice operation." ] }
1707.00852
2731952176
Efficient flexibility and higher system scalability call for enhanced network performance, lower energy consumption, lower infrastructure cost, and effective resource utilization. To accomplish this, an architectural optimization and reconstruction of the existing cellular network is required. Network slicing is considered to be one of the key enablers and an architectural answer for communication systems of 2020 and beyond. Traditional mobile operators provide all types of services to various kinds of customers through a single network; however, with the deployment of network slicing, operators are now able to divide the entire network into different slices, each with its own configuration and specific Quality of Service (QoS) requirements. In a slice-based network, each slice is considered a separate logical network. In this way, infrastructure utilization and resource allocation become much more energy- and cost-efficient than in a traditional network. In this paper, we provide a comprehensive discussion of the concept and system architecture of network slicing, with particular focus on its business aspects and profit modeling. We thoroughly discuss two different dimensions of profit modeling, so-called Own-Slice Implementation and Resource Leasing for Outsourced Slices. We further address open research directions and existing challenges with the purpose of motivating new advances and adding realistic solutions to this emerging technology.
Further, authors in @cite_3 addressed network slicing related concepts, i.e., resource allocation, virtualization technologies, the orchestration process, and the isolation function. The paper provided a comprehensive discussion of Software Defined Networking (SDN) and Network Function Virtualization (NFV), along with a deployment use case that considers network slicing through the integration of NFV and SDN. The authors further outlined existing challenges and future research directions for network slicing implementation in 5G communication systems. Moreover, a comprehensive survey of the architecture and further research directions of network slicing is available in @cite_9 .
{ "cite_N": [ "@cite_9", "@cite_3" ], "mid": [ "2612074600", "2597003067" ], "abstract": [ "5G is envisioned to be a multi-service network supporting a wide range of verticals with a diverse set of performance and service requirements. Slicing a single physical network into multiple isolated logical networks has emerged as a key to realizing this vision. This article is meant to act as a survey, the first to the authors� knowledge, on this topic of prime interest. We begin by reviewing the state of the art in 5G network slicing and present a framework for bringing together and discussing existing work in a holistic manner. Using this framework, we evaluate the maturity of current proposals and identify a number of open research questions.", "The fifth generation of mobile communications is anticipated to open up innovation opportunities for new industries such as vertical markets. However, these verticals originate myriad use cases with diverging requirements that future 5G networks have to efficiently support. Network slicing may be a natural solution to simultaneously accommodate, over a common network infrastructure, the wide range of services that vertical- specific use cases will demand. In this article, we present the network slicing concept, with a particular focus on its application to 5G systems. We start by summarizing the key aspects that enable the realization of so-called network slices. Then we give a brief overview on the SDN architecture proposed by the ONF and show that it provides tools to support slicing. We argue that although such architecture paves the way for network slicing implementation, it lacks some essential capabilities that can be supplied by NFV. Hence, we analyze a proposal from ETSI to incorporate the capabilities of SDN into the NFV architecture. Additionally, we present an example scenario that combines SDN and NFV technologies to address the realization of network slices. Finally, we summarize the open research issues with the purpose of motivating new advances in this field." ] }
1707.00665
2733044207
We present an automatic method to describe clinically useful information about scanning, and to guide image interpretation in ultrasound (US) videos of the fetal heart. Our method is able to jointly predict the visibility, viewing plane, location and orientation of the fetal heart at the frame level. The contributions of the paper are three-fold: (i) a convolutional neural network architecture is developed for a multi-task prediction, which is computed by sliding a 3x3 window spatially through convolutional maps. (ii) an anchor mechanism and Intersection over Union (IoU) loss are applied for improving localization accuracy. (iii) a recurrent architecture is designed to recursively compute regional convolutional features temporally over sequential frames, allowing each prediction to be conditioned on the whole video. This results in a spatial-temporal model that precisely describes detailed heart parameters in challenging US videos. We report results on a real-world clinical dataset, where our method achieves performance on par with expert annotations.
Automatic methods for US video analysis have been developed. For example, Kwitt et al. @cite_2 applied kernel dynamic texture models for labelling individual frames in a US video. This concept was extended to handle multiple object classes in real-world clinical US videos in @cite_4 . Recently, CNNs have been applied to image-level classification of anatomical structures with transfer learning @cite_7 and recurrent models @cite_9 . These works all address frame-level classification of US videos. Our application differs from them by focusing on describing details of the fetal heart, which is a more complicated multi-task problem.
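As a rough illustration of the recurrent frame-level classification idea in @cite_9 (per-frame CNN features feeding a temporal model), here is a minimal PyTorch sketch. The architecture, layer sizes, input resolution, and class count are all illustrative assumptions, not the cited authors' actual models.

```python
import torch
import torch.nn as nn

class FrameLabeller(nn.Module):
    """Per-frame classifier for an ultrasound video: a small CNN encodes
    each frame, a GRU carries temporal context, a linear head labels frames."""
    def __init__(self, n_classes: int, feat_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.rnn = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, video):            # video: (B, T, 1, H, W)
        b, t = video.shape[:2]
        feats = self.encoder(video.flatten(0, 1)).view(b, t, -1)
        ctx, _ = self.rnn(feats)         # temporal context for each frame
        return self.head(ctx)            # per-frame logits: (B, T, n_classes)

logits = FrameLabeller(n_classes=4)(torch.randn(2, 8, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 8, 4])
```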
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_7", "@cite_2" ], "mid": [ "2294318823", "2569161580", "2430776200", "2063540402" ], "abstract": [ "Accurate acquisition of fet al ultrasound US standard planes is one of the most crucial steps in obstetric diagnosis. The conventional way of standard plane acquisition requires a thorough knowledge of fet al anatomy and intensive manual labors. Hence, automatic approaches are highly demanded in clinical practice. However, automatic detection of standard planes containing key anatomical structures from US videos remains a challenging problem due to the high intra-class variations of standard planes. Unlike previous studies that developed specific methods for different anatomical standard planes respectively, we present a general framework to detect standard planes from US videos automatically. Instead of utilizing hand-crafted visual features, our framework explores spatio-temporal feature learning with a novel knowledge transferred recurrent neural network T-RNN, which incorporates a deep hierarchical visual feature extractor and a temporal sequence learning model. In order to extract visual features effectively, we propose a joint learning framework with knowledge transfer across multi-tasks to address the insufficiency issue of limited training data. Extensive experiments on different US standard planes with hundreds of videos corroborate that our method can achieve promising results, which outperform state-of-the-art methods.", "Abstract Confirmation of pregnancy viability (presence of fet al cardiac activity) and diagnosis of fet al presentation (head or buttock in the maternal pelvis) are the first essential components of ultrasound assessment in obstetrics. The former is useful in assessing the presence of an on-going pregnancy and the latter is essential for labour management. We propose an automated framework for detection of fet al presentation and heartbeat from a predefined free-hand ultrasound sweep of the maternal abdomen. Our method exploits the presence of key anatomical sonographic image patterns in carefully designed scanning protocols to develop, for the first time, an automated framework allowing novice sonographers to detect fet al breech presentation and heartbeat from an ultrasound sweep. The framework consists of a classification regime for a frame by frame categorization of each 2D slice of the video. The classification scores are then regularized through a conditional random field model, taking into account the temporal relationship between the video frames. Subsequently, if consecutive frames of the fet al heart are detected, a kernelized linear dynamical model is used to identify whether a heartbeat can be detected in the sequence. In a dataset of 323 predefined free-hand videos, covering the mother’s abdomen in a straight sweep, the fet al skull, abdomen, and heart were detected with a mean classification accuracy of 83.4 . Furthermore, for the detection of the heartbeat an overall classification accuracy of 93.1 was achieved.", "We address the task of object recognition in obstetric ultrasound videos using deep Convolutional Neural Networks (CNNs). A transfer learning based design is presented to study the transferability of features learnt from natural images to ultrasound image object recognition which on the surface is a very different problem. 
Our results demonstrate that CNNs initialised with large-scale pre-trained networks outperform those directly learnt from small-scale ultrasound data (91.5% versus 87.9%), in terms of object identification.", "The problem of localizing specific anatomic structures using ultrasound (US) video is considered. This involves automatically determining when an US probe is acquiring images of a previously defined object of interest, during the course of an US examination. Localization using US is motivated by the increased availability of portable, low-cost US probes, which inspire applications where inexperienced personnel and even first-time users acquire US data that is then sent to experts for further assessment. This process is of particular interest for routine examinations in underserved populations as well as for patient triage after natural disasters and large-scale accidents, where experts may be in short supply. The proposed localization approach is motivated by research in the area of dynamic texture analysis and leverages several recent advances in the field of activity recognition. For evaluation, we introduce an annotated and publicly available database of US video, acquired on three phantoms. Several experiments reveal the challenges of applying video analysis approaches to US images and demonstrate that good localization performance is possible with the proposed solution." ] }
1707.00665
2733044207
We present an automatic method to describe clinically useful information about scanning, and to guide image interpretation in ultrasound (US) videos of the fetal heart. Our method is able to jointly predict the visibility, viewing plane, location and orientation of the fetal heart at the frame level. The contributions of the paper are three-fold: (i) a convolutional neural network architecture is developed for a multi-task prediction, which is computed by sliding a 3x3 window spatially through convolutional maps. (ii) an anchor mechanism and Intersection over Union (IoU) loss are applied for improving localization accuracy. (iii) a recurrent architecture is designed to recursively compute regional convolutional features temporally over sequential frames, allowing each prediction to be conditioned on the whole video. This results in a spatial-temporal model that precisely describes detailed heart parameters in challenging US videos. We report results on a real-world clinical dataset, where our method achieves performance on par with expert annotations.
The most closely related work to this paper is that of Bridge et al. @cite_8 , where a number of key parameters related to the fetal heart are estimated. Hand-crafted features were used with classification forests to distinguish different view planes of the fetal heart. Then a multi-variable prediction was formulated that used a CRF-filter. We address a similar task, but propose a significantly different approach that builds on recent advances in deep learning. Firstly, this allows it to leverage deep, learned feature representations that are shared by all tasks, rather than relying on hand-crafted features. Secondly, by including a recurrent part, it is possible to train our model end-to-end, whereas in @cite_8 the classification and regression parts are trained separately from the temporal filter.
{ "cite_N": [ "@cite_8" ], "mid": [ "2555486686" ], "abstract": [ "Abstract Interpretation of ultrasound videos of the fet al heart is crucial for the antenatal diagnosis of congenital heart disease (CHD). We believe that automated image analysis techniques could make an important contribution towards improving CHD detection rates. However, to our knowledge, no previous work has been done in this area. With this goal in mind, this paper presents a framework for tracking the key variables that describe the content of each frame of freehand 2D ultrasound scanning videos of the healthy fet al heart. This represents an important first step towards developing tools that can assist with CHD detection in abnormal cases. We argue that it is natural to approach this as a sequential Bayesian filtering problem, due to the strong prior model we have of the underlying anatomy, and the ambiguity of the appearance of structures in ultrasound images. We train classification and regression forests to predict the visibility, location and orientation of the fet al heart in the image, and the viewing plane label from each frame. We also develop a novel adaptation of regression forests for circular variables to deal with the prediction of cardiac phase. Using a particle-filtering-based method to combine predictions from multiple video frames, we demonstrate how to filter this information to give a temporally consistent output at real-time speeds. We present results on a challenging dataset gathered in a real-world clinical setting and compare to expert annotations, achieving similar levels of accuracy to the levels of inter- and intra-observer variation." ] }
1707.00102
2728013417
When devising a course of treatment for a patient, doctors often have little quantitative evidence on which to base their decisions, beyond their medical education and published clinical trials. Stanford Health Care alone has millions of electronic medical records (EMRs) that are only just recently being leveraged to inform better treatment recommendations. These data present a unique challenge because they are high-dimensional and observational. Our goal is to make personalized treatment recommendations based on the outcomes for past patients similar to a new patient. We propose and analyze three methods for estimating heterogeneous treatment effects using observational data. Our methods perform well in simulations using a wide variety of treatment effect functions, and we present results of applying the two most promising methods to data from The SPRINT Data Analysis Challenge, from a large randomized trial of a treatment for high blood pressure.
Identifying subgroups within the patient population is becoming especially challenging with high-dimensional data such as EMRs. In recent years, a great amount of work has applied methods from machine learning to let the data determine which subgroups are important in terms of treatment effect. @cite_7 proposed interaction trees for adaptively defining subgroups based on treatment effect. @cite_15 proposed causal trees, which are similar, and constructed valid confidence intervals.
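To make the splitting idea behind interaction and causal trees concrete, here is a deliberately simplified sketch: a single-covariate split chosen to maximize the squared difference between the children's difference-in-means treatment effects. The published criteria add variance penalties and honest estimation that are omitted here; the data, leaf size, and thresholds are toy assumptions.

```python
import numpy as np

def node_effect(y, w):
    """Difference-in-means estimate of the treatment effect in a node."""
    return y[w == 1].mean() - y[w == 0].mean()

def best_split(x, y, w, min_leaf=25):
    """Scan thresholds on one covariate and return the threshold that
    maximizes the squared difference in child treatment effects."""
    best_thr, best_score = None, -np.inf
    for thr in np.unique(x)[1:]:
        left = x < thr
        right = ~left
        # Skip splits that leave a child too small or missing an arm.
        if left.sum() < min_leaf or right.sum() < min_leaf:
            continue
        if len(np.unique(w[left])) < 2 or len(np.unique(w[right])) < 2:
            continue
        score = (node_effect(y[left], w[left]) - node_effect(y[right], w[right])) ** 2
        if score > best_score:
            best_thr, best_score = thr, score
    return best_thr

# Toy data: the effect is +2 when x > 0.5 and 0 otherwise; w is randomized.
rng = np.random.default_rng(1)
x = rng.random(2000)
w = rng.integers(0, 2, 2000)
y = 2.0 * (x > 0.5) * w + rng.normal(size=2000)
print(best_split(x, y, w))  # should land near 0.5
```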
{ "cite_N": [ "@cite_15", "@cite_7" ], "mid": [ "2150291618", "2145878099" ], "abstract": [ "Abstract : The results of observational studies are often disputed because of nonrandom treatment assignment. For example, patients at greater risk may be overrepresented in some treatment group. This paper discusses the central role of propensity scores and balancing scores in the analysis of observational studies. The propensity score is the (estimated) conditional probability of assignment to a particular treatment given a vector of observed covariates. Both large and small sample theory show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates. Applications include: matched sampling on the univariate propensity score which is equal percent bias reducing under more general conditions than required for discriminant matching, multivariate adjustment by subclassification on balancing scores where the same subclasses are used to estimate treatment effects for all outcome variables and in all subpopulations, and visual representation of multivariate adjustment by a two-dimensional plot. (Author)", "Subgroup analysis is an integral part of comparative analysis where assessing the treatment effect on a response is of central interest. Its goal is to determine the heterogeneity of the treatment effect across subpopulations. In this paper, we adapt the idea of recursive partitioning and introduce an interaction tree (IT) procedure to conduct subgroup analysis. The IT procedure automatically facilitates a number of objectively defined subgroups, in some of which the treatment effect is found prominent while in others the treatment has a negligible or even negative effect. The standard CART (, 1984) methodology is inherited to construct the tree structure. Also, in order to extract factors that contribute to the heterogeneity of the treatment effect, variable importance measure is made available via random forests of the interaction trees. Both simulated experiments and analysis of census wage data are presented for illustration." ] }
1707.00102
2728013417
When devising a course of treatment for a patient, doctors often have little quantitative evidence on which to base their decisions, beyond their medical education and published clinical trials. Stanford Health Care alone has millions of electronic medical records (EMRs) that are only just recently being leveraged to inform better treatment recommendations. These data present a unique challenge because they are high-dimensional and observational. Our goal is to make personalized treatment recommendations based on the outcomes for past patients similar to a new patient. We propose and analyze three methods for estimating heterogeneous treatment effects using observational data. Our methods perform well in simulations using a wide variety of treatment effect functions, and we present results of applying the two most promising methods to data from The SPRINT Data Analysis Challenge, from a large randomized trial of a treatment for high blood pressure.
@cite_2 improved on this line of work by growing random forests from causal trees. These tree-based methods all use shared-basis conditional mean regression. An example of a transformed-outcome estimator is the FindIt method of @cite_1 , which trains an adapted support vector machine on a transformed binary outcome. @cite_3 introduced a simple linear model based on transformed covariates and showed that it is equivalent to transformed-outcome regression in the Gaussian case. In a novel approach, @cite_12 used outcome-weighted learning to directly determine individualized treatment rules, skipping the step of estimating individualized treatment effects. The problem of estimating heterogeneous treatment effects has also received significant attention in the Bayesian literature. @cite_0 and @cite_4 approached the problem using Bayesian additive regression trees, and @cite_13 proposed a method based on Bayesian forests. @cite_16 developed a Bayesian method for finding qualitative interactions between treatment and covariates, and there are other Bayesian methods for flexible nonlinear modelling of interactive, non-additive relationships between covariates and response.
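A minimal sketch of the transformed-outcome idea mentioned above: with known propensity e, the transformed outcome Y* = Y(W - e)/(e(1 - e)) satisfies E[Y* | X] = tau(X), so any regressor fit to Y* estimates the heterogeneous effect. The simulated data, the propensity value, and the choice of a random-forest regressor are illustrative assumptions, not the cited methods themselves.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, p = 4000, 5
X = rng.normal(size=(n, p))
W = rng.integers(0, 2, size=n)            # randomized assignment, so e(x) = 0.5
tau = np.maximum(X[:, 0], 0.0)            # true heterogeneous treatment effect
Y = X[:, 1] + tau * W + rng.normal(size=n)

# Transformed outcome: Y* = Y * (W - e) / (e * (1 - e)) has E[Y* | X] = tau(X),
# so a regression of Y* on X estimates the conditional treatment effect.
e = 0.5
Y_star = Y * (W - e) / (e * (1 - e))

model = RandomForestRegressor(n_estimators=200, min_samples_leaf=50, random_state=0)
model.fit(X, Y_star)
print(np.corrcoef(model.predict(X), tau)[0, 1])  # clearly positive correlation
```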
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_3", "@cite_0", "@cite_2", "@cite_16", "@cite_13", "@cite_12" ], "mid": [ "", "2132917208", "2140435154", "2151832869", "2157395790", "2163681216", "", "1994857208" ], "abstract": [ "", "A discussion of matching, randomization, random sampling, and other methods of controlling extraneous variation is presented. The objective is to specify the benefits of randomization in estimating causal effects of treatments. The basic conclusion is that randomization should be employed whenever possible but that the use of carefully controlled nonrandomized data to estimate causal effects is a reasonable and necessary procedure in many cases. Recent psychological and educational literature has included extensive criticism of the use of nonrandomized studies to estimate causal effects of treatments (e.g., Campbell & Erlebacher, 1970). The implication in much of this literature is that only properly randomized experiments can lead to useful estimates of causal effects. If taken as applying to all fields of study, this position is untenable. Since the extensive use of randomized experiments is limited to the last half century,8 and in fact is not used in much scientific investigation today,4 one is led to the conclusion that most scientific \"truths\" have been established without using randomized experiments. In addition, most of us successfully determine the causal effects of many of our everyday actions, even interpersonal behaviors, without the benefit of randomization. Even if the position that causal effects of treatments can only be well established from randomized experiments is taken as applying only to the social sciences in which", "We consider a setting in which we have a treatment and a potentially large number of covariates for a set of observations, and wish to model their relationship with an outcome of interest. We propose a simple method for modeling interactions between the treatment and covariates. The idea is to modify the covariate in a simple way, and then fit a standard model using the modified covariates and no main effects. We show that coupled with an efficiency augmentation procedure, this method produces clinically meaningful estimators in a variety of settings. It can be useful for practicing personalized medicine: determining from a large set of biomarkers, the subset of patients that can potentially benefit from a treatment. We apply the method to both simulated datasets and real trial data. The modified covariates idea can be used for other purposes, for example, large scale hypothesis testing for determining which of a set of covariates interact with a treatment variable. Supplementary materials for this article are available online.", "Abstract In this article we put forward a Bayesian approach for finding classification and regression tree (CART) models. The two basic components of this approach consist of prior specification and stochastic search. The basic idea is to have the prior induce a posterior distribution that will guide the stochastic search toward more promising CART models. As the search proceeds, such models can then be selected with a variety of criteria, such as posterior probability, marginal likelihood, residual sum of squares or misclassification rates. Examples are used to illustrate the potential superiority of this approach over alternative methods.", "We introduce the C++ application and R package ranger. The software is a fast implementation of random forests for high dimensional data. 
Ensembles of classification, regression and survival trees are supported. We describe the implementation, provide examples, validate the package with a reference implementation, and compare runtime and memory usage with other implementations. The new software proves to scale best with the number of features, samples, trees, and features tried for splitting. Finally, we show that ranger is the fastest and most memory efficient implementation of random forests to analyze data on the scale of a genome-wide association study.", "Providing personalized treatments designed to maximize benefits and minimizing harms is of tremendous current medical interest. One problem in this area is the evaluation of the interaction between the treatment and other predictor variables. Treatment effects in subgroups having the same direction but different magnitudes are called quantitative interactions, whereas those having opposite directions in subgroups are called qualitative interactions (QIs). Identifying QIs is challenging because they are rare and usually unknown among many potential biomarkers. Meanwhile, subgroup analysis reduces the power of hypothesis testing and multiple subgroup analyses inflate the type I error rate. We propose a new Bayesian approach to search for QI in a multiple regression setting with adaptive decision rules. We consider various regression models for the outcome. We illustrate this method in two examples of phase III clinical trials. The algorithm is straightforward and easy to implement using existing software packages. We provide a sample code in Appendix A.", "", "There is increasing interest in discovering individualized treatment rules (ITRs) for patients who have heterogeneous responses to treatment. In particular, one aims to find an optimal ITR that is a deterministic function of patient-specific characteristics maximizing expected clinical outcome. In this article, we first show that estimating such an optimal treatment rule is equivalent to a classification problem where each subject is weighted proportional to his or her clinical outcome. We then propose an outcome weighted learning approach based on the support vector machine framework. We show that the resulting estimator of the treatment rule is consistent. We further obtain a finite sample bound for the difference between the expected outcome using the estimated ITR and that of the optimal treatment rule. The performance of the proposed approach is demonstrated via simulation studies and an analysis of chronic depression data." ] }
1707.00102
2728013417
When devising a course of treatment for a patient, doctors often have little quantitative evidence on which to base their decisions, beyond their medical education and published clinical trials. Stanford Health Care alone has millions of electronic medical records (EMRs) that are only just recently being leveraged to inform better treatment recommendations. These data present a unique challenge because they are high-dimensional and observational. Our goal is to make personalized treatment recommendations based on the outcomes for past patients similar to a new patient. We propose and analyze three methods for estimating heterogeneous treatment effects using observational data. Our methods perform well in simulations using a wide variety of treatment effect functions, and we present results of applying the two most promising methods to data from The SPRINT Data Analysis Challenge, from a large randomized trial of a treatment for high blood pressure.
What all of the above works (except @cite_0 ) have in common is that they assume randomized treatment assignment. @cite_15 discussed the possibility of adapting their method to observational data but went no further. @cite_2 proposed the propensity forest for when treatment is not randomized, but this method does not target heterogeneity in the treatment effect. Similarly, @cite_10 model the treatment effect as a function of the propensity score, missing how it depends on the covariates except through the treatment propensity. @cite_6 devised a nonparametric test for the null hypothesis that the treatment effect is constant across patients, but it is not suited to high-dimensional data. One promising approach that flexibly handles high-dimensional and observational data is the generalization of the causal forest by @cite_11 . Their gradient forest addresses the more general problem of parameter estimation using random forests, and in particular they developed a very fast implementation of the causal forest, against which we compare the performance of our methods.
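Since the observational setting is what distinguishes these works, a small sketch of the standard propensity-score correction they build on may help: estimate propensities with a logistic regression, then form an inverse-propensity-weighted (IPW) estimate of the average effect. The simulated confounding and the clipping threshold are illustrative assumptions; this is plain IPW, not the propensity forest of @cite_2 .

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
X = rng.normal(size=(n, 3))
# Confounded assignment: treatment is more likely when X[:, 0] is large.
p_true = 1 / (1 + np.exp(-X[:, 0]))
W = rng.binomial(1, p_true)
Y = X[:, 0] + 1.5 * W + rng.normal(size=n)   # true average effect = 1.5

# The naive difference in means is biased upward by confounding.
print("naive:", Y[W == 1].mean() - Y[W == 0].mean())

# Estimate propensities, then form the inverse-propensity-weighted estimate.
e_hat = LogisticRegression().fit(X, W).predict_proba(X)[:, 1]
e_hat = np.clip(e_hat, 0.05, 0.95)           # guard against extreme weights
ate_ipw = np.mean(W * Y / e_hat - (1 - W) * Y / (1 - e_hat))
print("IPW  :", ate_ipw)                     # close to 1.5
```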
{ "cite_N": [ "@cite_6", "@cite_0", "@cite_2", "@cite_15", "@cite_10", "@cite_11" ], "mid": [ "2140387177", "2151832869", "2157395790", "2150291618", "", "2964175492" ], "abstract": [ "A large part of the recent literature on program evaluation has focused on estimation of the average effect of the treatment under assumptions of unconfoundedness or ignorability following the seminal work by Rubin (1974) and Rosenbaum and Rubin (1983). In many cases however, researchers are interested in the effects of programs beyond estimates of the overall average or the average for the subpopulation of treated individuals. It may be of substantive interest to investigate whether there is any subpopulation for which a program or treatment has a nonzero average effect, or whether there is heterogeneity in the effect of the treatment. The hypothesis that the average effect of the treatment is zero for all subpopulations is also important for researchers interested in assessing assumptions concerning the selection mechanism. In this paper we develop two nonparametric tests. The first test is for the null hypothesis that the treatment has a zero average effect for any subpopulation defined by covariates. The second test is for the null hypothesis that the average effect conditional on the covariates is identical for all subpopulations, in other words, that there is no heterogeneity in average treatment effects by covariates. Sacrificing some generality by focusing on these two specific null hypotheses we derive tests that are straightforward to implement.", "Abstract In this article we put forward a Bayesian approach for finding classification and regression tree (CART) models. The two basic components of this approach consist of prior specification and stochastic search. The basic idea is to have the prior induce a posterior distribution that will guide the stochastic search toward more promising CART models. As the search proceeds, such models can then be selected with a variety of criteria, such as posterior probability, marginal likelihood, residual sum of squares or misclassification rates. Examples are used to illustrate the potential superiority of this approach over alternative methods.", "We introduce the C++ application and R package ranger. The software is a fast implementation of random forests for high dimensional data. Ensembles of classification, regression and survival trees are supported. We describe the implementation, provide examples, validate the package with a reference implementation, and compare runtime and memory usage with other implementations. The new software proves to scale best with the number of features, samples, trees, and features tried for splitting. Finally, we show that ranger is the fastest and most memory efficient implementation of random forests to analyze data on the scale of a genome-wide association study.", "Abstract : The results of observational studies are often disputed because of nonrandom treatment assignment. For example, patients at greater risk may be overrepresented in some treatment group. This paper discusses the central role of propensity scores and balancing scores in the analysis of observational studies. The propensity score is the (estimated) conditional probability of assignment to a particular treatment given a vector of observed covariates. Both large and small sample theory show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates. 
Applications include: matched sampling on the univariate propensity score which is equal percent bias reducing under more general conditions than required for discriminant matching, multivariate adjustment by subclassification on balancing scores where the same subclasses are used to estimate treatment effects for all outcome variables and in all subpopulations, and visual representation of multivariate adjustment by a two-dimensional plot. (Author)", "", "" ] }
1707.00102
2728013417
When devising a course of treatment for a patient, doctors often have little quantitative evidence on which to base their decisions, beyond their medical education and published clinical trials. Stanford Health Care alone has millions of electronic medical records (EMRs) that are only just recently being leveraged to inform better treatment recommendations. These data present a unique challenge because they are high-dimensional and observational. Our goal is to make personalized treatment recommendations based on the outcomes for past patients similar to a new patient. We propose and analyze three methods for estimating heterogeneous treatment effects using observational data. Our methods perform well in simulations using a wide variety of treatment effect functions, and we present results of applying the two most promising methods to data from The SPRINT Data Analysis Challenge, from a large randomized trial of a treatment for high blood pressure.
Much of causal inference is based on the propensity score , which is the estimated probability that a patient would receive treatment, conditioned on the patient's covariates. If the estimate of the propensity function is @math , then the propensity score for a patient with covariate vector @math is @math . Throughout the present work, we estimate the propensity function using the probability forests of @cite_8 . We are able to do so quickly using the fast implementation in the R package ranger .
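The paragraph above estimates the propensity function with the probability forests of @cite_8 via R's ranger. As a rough Python analogue (an assumption for illustration, not the authors' code), a random forest classifier's class probabilities serve the same role:

```python
# Propensity-score sketch: estimate P(treatment = 1 | covariates).
from sklearn.ensemble import RandomForestClassifier

def estimate_propensity(X, W, new_X):
    # A probability-forest analogue: class frequencies in the leaves
    # give per-patient treatment probabilities.
    forest = RandomForestClassifier(n_estimators=500, min_samples_leaf=10)
    forest.fit(X, W)  # W is the 0/1 treatment indicator
    # Column 1 is P(W = 1 | x), assuming classes_ == [0, 1].
    return forest.predict_proba(new_X)[:, 1]
```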
{ "cite_N": [ "@cite_8" ], "mid": [ "2051177122" ], "abstract": [ "Summary Background—Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives—The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods—Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results—Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions—Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications." ] }
1707.00383
2731063981
In this paper, we propose an alternative method to estimate room layouts of cluttered indoor scenes. This method enjoys the benefits of two novel techniques. The first one is semantic transfer (ST), which is: (1) a formulation to integrate the relationship between scene clutter and room layout into convolutional neural networks; (2) an architecture that can be end-to-end trained; (3) a practical strategy to initialize weights for very deep networks under unbalanced training data distribution. ST allows us to extract highly robust features under various circumstances, and in order to address the computational redundancy hidden in these features we develop a principled and efficient inference scheme named physics inspired optimization (PIO). PIO's basic idea is to formulate some phenomena observed in ST features into mechanics concepts. Evaluations on public datasets LSUN and Hedau show that the proposed method is more accurate than state-of-the-art methods.
The standard formulation of room layout estimation was first introduced by @cite_0 . It clusters edges into lines meeting at three vanishing points, following the famous Manhattan assumption @cite_17 . A large number of layout proposals are then generated by ray sampling, and hand-crafted features are used to learn a regressor for proposal ranking. Later on, many works tried to improve this framework. @cite_5 detects junctions instead of edges and modifies proposal generation and ranking accordingly. While ranking room layouts, @cite_4 simultaneously estimates a clutter mask. @cite_21 aims to improve the inference efficiency of methods like @cite_4 . Going beyond estimating a clutter mask, @cite_8 estimates objects' 3D bounding boxes and room layout jointly during inference. Beyond learnt clutter representations, @cite_26 incorporates a furniture shape prior. In @cite_19 's formulation, furniture is modeled with parts instead of a box. @cite_24 goes even further by modelling furniture relationships with scene grammars.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_8", "@cite_21", "@cite_0", "@cite_19", "@cite_24", "@cite_5", "@cite_17" ], "mid": [ "2126836862", "2139182919", "2113107168", "1965834447", "2534523274", "2156802865", "2068397385", "2168439642", "2102271310" ], "abstract": [ "We propose a method for understanding the 3D geometry of indoor environments (e.g. bedrooms, kitchens) while simultaneously identifying objects in the scene (e.g. beds, couches, doors). We focus on how modeling the geometry and location of specific objects is helpful for indoor scene understanding. For example, beds are shorter than they are wide, and are more likely to be in the center of the room than cabinets, which are tall and narrow. We use a generative statistical model that integrates a camera model, an enclosing room “box”, frames (windows, doors, pictures), and objects (beds, tables, couches, cabinets), each with their own prior on size, relative dimensions, and locations. We fit the parameters of this complex, multi-dimensional statistical model using an MCMC sampling approach that combines discrete changes (e.g, adding a bed), and continuous parameter changes (e.g., making the bed larger). We find that introducing object category leads to state-of-the-art performance on room layout estimation, while also enabling recognition based only on geometry.", "We address the problem of understanding an indoor scene from a single image in terms of recovering the layouts of the faces (floor, ceiling, walls) and furniture. A major challenge of this task arises from the fact that most indoor scenes are cluttered by furniture and decorations, whose appearances vary drastically across scenes, and can hardly be modeled (or even hand-labeled) consistently. In this paper we tackle this problem by introducing latent variables to account for clutters, so that the observed image is jointly explained by the face and clutter layouts. Model parameters are learned in the maximum margin formulation, which is constrained by extra prior energy terms that define the role of the latent variables. Our approach enables taking into account and inferring indoor clutter layouts without hand-labeling of the clutters in the training set. Yet it outperforms the state-of-the-art method of [4] that requires clutter labels.", "In this paper we propose an approach to jointly infer the room layout as well as the objects present in the scene. Towards this goal, we propose a branch and bound algorithm which is guaranteed to retrieve the global optimum of the joint problem. The main difficulty resides in taking into account occlusion in order to not over-count the evidence. We introduce a new decomposition method, which generalizes integral geometry to triangular shapes, and allows us to bound the different terms in constant time. We exploit both geometric cues and object detectors as image features and show large improvements in 2D and 3D object detection over state-of-the-art deformable part-based models.", "Existing approaches to indoor scene understanding formulate the problem as a structured prediction task focusing on estimating the 3D bounding box which best describes the scene layout. Unfortunately, these approaches utilize high order potentials which are computationally intractable and rely on ad-hoc approximations for both learning and inference. In this paper we show that the potentials commonly used in the literature can be decomposed into pair-wise potentials by extending the concept of integral images to geometry. 
As a consequence no heuristic reduction of the search space is required. In practice, this results in large improvements in performance over the state-of-the-art, while being orders of magnitude faster.", "In this paper, we consider the problem of recovering the spatial layout of indoor scenes from monocular images. The presence of clutter is a major problem for existing single-view 3D reconstruction algorithms, most of which rely on finding the ground-wall boundary. In most rooms, this boundary is partially or entirely occluded. We gain robustness to clutter by modeling the global room space with a parameteric 3D “box” and by iteratively localizing clutter and refitting the box. To fit the box, we introduce a structured learning algorithm that chooses the set of parameters to minimize error, based on global perspective cues. On a dataset of 308 images, we demonstrate the ability of our algorithm to recover spatial layout in cluttered rooms and show several examples of estimated free space.", "We develop a comprehensive Bayesian generative model for understanding indoor scenes. While it is common in this domain to approximate objects with 3D bounding boxes, we propose using strong representations with finer granularity. For example, we model a chair as a set of four legs, a seat and a backrest. We find that modeling detailed geometry improves recognition and reconstruction, and enables more refined use of appearance for scene understanding. We demonstrate this with a new likelihood function that rewards 3D object hypotheses whose 2D projection is more uniform in color distribution. Such a measure would be confused by background pixels if we used a bounding box to represent a concave object like a chair. Complex objects are modeled using a set or re-usable 3D parts, and we show that this representation captures much of the variation among object instances with relatively few parameters. We also designed specific data-driven inference mechanisms for each part that are shared by all objects containing that part, which helps make inference transparent to the modeler. Further, we show how to exploit contextual relationships to detect more objects, by, for example, proposing chairs around and underneath tables. We present results showing the benefits of each of these innovations. The performance of our approach often exceeds that of state-of-the-art methods on the two tasks of room layout estimation and object recognition, as evaluated on two bench mark data sets used in this domain.", "Indoor functional objects exhibit large view and appearance variations, thus are difficult to be recognized by the traditional appearance-based classification paradigm. In this paper, we present an algorithm to parse indoor images based on two observations: i) The functionality is the most essential property to define an indoor object, e.g. \"a chair to sit on\", ii) The geometry (3D shape) of an object is designed to serve its function. We formulate the nature of the object function into a stochastic grammar model. This model characterizes a joint distribution over the function-geometry-appearance (FGA) hierarchy. The hierarchical structure includes a scene category, functional groups, functional objects, functional parts and 3D geometric shapes. We use a simulated annealing MCMC algorithm to find the maximum a posteriori (MAP) solution, i.e. a parse tree. 
We design four data-driven steps to accelerate the search in the FGA space: i) group the line segments into 3D primitive shapes, ii) assign functional labels to these 3D primitive shapes, iii) fill in missing objects parts according to the functional labels, and iv) synthesize 2D segmentation maps and verify the current parse tree by the Metropolis-Hastings acceptance probability. The experimental results on several challenging indoor datasets demonstrate the proposed approach not only significantly widens the scope of indoor scene parsing algorithm from the segmentation and the 3D recovery to the functional object recognition, but also yields improved overall performance.", "Junctions are strong cues for understanding the geometry of a scene. In this paper, we consider the problem of detecting junctions and using them for recovering the spatial layout of an indoor scene. Junction detection has always been challenging due to missing and spurious lines. We work in a constrained Manhattan world setting where the junctions are formed by only line segments along the three principal orthogonal directions. Junctions can be classified into several categories based on the number and orientations of the incident line segments. We provide a simple and efficient voting scheme to detect and classify these junctions in real images. Indoor scenes are typically modeled as cuboids and we formulate the problem of the cuboid layout estimation as an inference problem in a conditional random field. Our formulation allows the incorporation of junction features and the training is done using structured prediction techniques. We outperform other single view geometry estimation methods on standard datasets.", "When designing computer vision systems for the blind and visually impaired it is important to determine the orientation of the user relative to the scene. We observe that most indoor and outdoor (city) scenes are designed on a Manhattan three-dimensional grid. This Manhattan grid structure puts strong constraints on the intensity gradients in the image. We demonstrate an algorithm for detecting the orientation of the user in such scenes based on Bayesian inference using statistics which we have learnt in this domain. Our algorithm requires a single input image and does not involve pre-processing stages such as edge detection and Hough grouping. We demonstrate strong experimental results on a range of indoor and outdoor images. We also show that estimating the grid structure makes it significantly easier to detect target objects which are not aligned with the grid." ] }
1707.00383
2731063981
In this paper, we propose an alternative method to estimate room layouts of cluttered indoor scenes. This method enjoys the benefits of two novel techniques. The first one is semantic transfer (ST), which is: (1) a formulation to integrate the relationship between scene clutter and room layout into convolutional neural networks; (2) an architecture that can be end-to-end trained; (3) a practical strategy to initialize weights for very deep networks under unbalanced training data distribution. ST allows us to extract highly robust features under various circumstances, and in order to address the computational redundancy hidden in these features we develop a principled and efficient inference scheme named physics inspired optimization (PIO). PIO's basic idea is to formulate some phenomena observed in ST features into mechanics concepts. Evaluations on public datasets LSUN and Hedau show that the proposed method is more accurate than state-of-the-art methods.
Recently, @cite_25 trains an FCN for pixel-wise edge labelling, with every pixel assigned one of four labels: background (bg), wall-floor edge (wf), wall-wall edge (ww), or wall-ceiling edge (wc).
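As a rough illustration of what per-pixel 4-class edge labelling amounts to (not the actual architecture of @cite_25), a PyTorch head can emit one logit map per class and be trained with pixel-wise cross-entropy; every size below is a placeholder assumption.

```python
# Minimal per-pixel 4-class labelling sketch (bg, wf, ww, wc).
import torch
import torch.nn as nn

NUM_CLASSES = 4  # background, wall-floor, wall-wall, wall-ceiling

head = nn.Conv2d(in_channels=256, out_channels=NUM_CLASSES, kernel_size=1)
criterion = nn.CrossEntropyLoss()  # applied independently at every pixel

features = torch.randn(1, 256, 64, 64)               # placeholder backbone output
labels = torch.randint(0, NUM_CLASSES, (1, 64, 64))  # per-pixel ground truth
logits = head(features)                              # shape (1, 4, 64, 64)
loss = criterion(logits, labels)
```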
{ "cite_N": [ "@cite_25" ], "mid": [ "2199215860" ], "abstract": [ "In this paper, we introduce new edge-based features for the task of recovering the 3D layout of an indoor scene from a single image. Indoor scenes have certain edges that are very informative about the spatial layout of the room, namely, the edges formed by the pairwise intersections of room faces (two walls, wall and ceiling, wall and floor). In contrast with previous approaches that rely on area-based features like geometric context and orientation maps, our method attempts to directly detect these informative edges. We learn to predict 'informative edge' probability maps using two recent methods that exploit local and global context, respectively: structured edge detection forests, and a fully convolutional network for pixelwise labeling. We show that the fully convolutional network is quite successful at predicting the informative edges even when they lack contrast or are occluded, and that the accuracy can be further improved by training the network to jointly predict the edges and the geometric context. Using features derived from the 'informative edge' maps, we learn a maximum margin structured classifier that achieves state-of-the-art performance on layout prediction." ] }
1707.00383
2731063981
In this paper, we propose an alternative method to estimate room layouts of cluttered indoor scenes. This method enjoys the benefits of two novel techniques. The first one is semantic transfer (ST), which is: (1) a formulation to integrate the relationship between scene clutter and room layout into convolutional neural networks; (2) an architecture that can be end-to-end trained; (3) a practical strategy to initialize weights for very deep networks under unbalanced training data distribution. ST allows us to extract highly robust features under various circumstances, and in order to address the computational redundancy hidden in these features we develop a principled and efficient inference scheme named physics inspired optimization (PIO). PIO's basic idea is to formulate some phenomena observed in ST features into mechanics concepts. Evaluations on public datasets LSUN and Hedau show that the proposed method is more accurate than state-of-the-art methods.
There are other scene understanding tasks substantially the same as, or similar to, room layout estimation. For example, @cite_23 tries to understand the layouts of natural scenes with a horizon, urban scenes, corridors and others, for which room layout estimation is only a special case. Another special case of @cite_23 is , such as @cite_3 @cite_2 . It is often regarded as a graphics application under the name of and evaluated with subjective user studies. @cite_10 tries to recover a more detailed room layout than a box and evaluates with wall-floor edge error. Since these works exploit techniques that @cite_0 is built upon, they could potentially benefit from the method proposed in this paper.
{ "cite_N": [ "@cite_3", "@cite_0", "@cite_23", "@cite_2", "@cite_10" ], "mid": [ "1482548926", "2534523274", "", "2125310925", "2116851763" ], "abstract": [ "We consider the problem of estimating 3-d structure from a single still image of an outdoor urban scene. Our goal is to efficiently create 3-d models which are visually pleasant. We chose an appropriate 3-d model structure and formulate the task of 3-d reconstruction as model fitting problem. Our 3-d models are composed of a number of vertical walls and a ground plane, where ground-vertical boundary is a continuous polyline. We achieve computational efficiency by special preprocessing together with stepwise search of 3-d model parameters dividing the problem into two smaller sub-problems on chain graphs. The use of Conditional Random Field models for both problems allows to various cues. We infer orientation of vertical walls of 3-d model vanishing points.", "In this paper, we consider the problem of recovering the spatial layout of indoor scenes from monocular images. The presence of clutter is a major problem for existing single-view 3D reconstruction algorithms, most of which rely on finding the ground-wall boundary. In most rooms, this boundary is partially or entirely occluded. We gain robustness to clutter by modeling the global room space with a parameteric 3D “box” and by iteratively localizing clutter and refitting the box. To fit the box, we introduce a structured learning algorithm that chooses the set of parameters to minimize error, based on global perspective cues. On a dataset of 308 images, we demonstrate the ability of our algorithm to recover spatial layout in cluttered rooms and show several examples of estimated free space.", "", "Humans have an amazing ability to instantly grasp the overall 3D structure of a scene--ground orientation, relative positions of major landmarks, etc.--even from a single image. This ability is completely missing in most popular recognition algorithms, which pretend that the world is flat and or view it through a patch-sized peephole. Yet it seems very likely that having a grasp of this \"surface layout\" of a scene should be of great assistance for many tasks, including recognition, navigation, and novel view synthesis. In this paper, we take the first step towards constructing the surface layout, a labeling of the image intogeometric classes. Our main insight is to learn appearance-based models of these geometric classes, which coarsely describe the 3D scene orientation of each image region. Our multiple segmentation framework provides robust spatial support, allowing a wide variety of cues (e.g., color, texture, and perspective) to contribute to the confidence in each geometric label. In experiments on a large set of outdoor images, we evaluate the impact of the individual cues and design choices in our algorithm. We further demonstrate the applicability of our method to indoor images, describe potential applications, and discuss extensions to a more complete notion of surface layout.", "We study the problem of generating plausible interpretations of a scene from a collection of line segments automatically extracted from a single indoor image. We show that we can recognize the three dimensional structure of the interior of a building, even in the presence of occluding objects. Several physically valid structure hypotheses are proposed by geometric reasoning and verified to find the best fitting model to line segments, which is then converted to a full 3D model. 
Our experiments demonstrate that our structure recovery from line segments is comparable with methods using full image appearance. Our approach shows how a set of rules describing geometric constraints between groups of segments can be used to prune scene interpretation hypotheses and to generate the most plausible interpretation." ] }
1707.00383
2731063981
In this paper, we propose an alternative method to estimate room layouts of cluttered indoor scenes. This method enjoys the benefits of two novel techniques. The first one is semantic transfer (ST), which is: (1) a formulation to integrate the relationship between scene clutter and room layout into convolutional neural networks; (2) an architecture that can be end-to-end trained; (3) a practical strategy to initialize weights for very deep networks under unbalanced training data distribution. ST allows us to extract highly robust features under various circumstances, and in order to address the computational redundancy hidden in these features we develop a principled and efficient inference scheme named physics inspired optimization (PIO). PIO's basic idea is to formulate some phenomena observed in ST features into mechanics concepts. Evaluations on public datasets LSUN and Hedau show that the proposed method is more accurate than state-of-the-art methods.
If we look at an even broader literature, concepts somewhat similar to ST and PIO have already been discussed. Under the name of label transfer, @cite_13 @cite_1 address semantic segmentation in a nonparametric manner. ST differs from them primarily as a unified deep architecture (and of course in its parametric nature). @cite_22 and its followers are famous for modelling human limbs as springs. PIO differs from them primarily as an efficient approximation inspired by mechanics concepts.
{ "cite_N": [ "@cite_1", "@cite_13", "@cite_22" ], "mid": [ "1543951486", "2125849446", "2030536784" ], "abstract": [ "In this paper, we propose a robust supervised label transfer method for the semantic segmentation of street scenes. Given an input image of street scene, we first find multiple image sets from the training database consisting of images with annotation, each of which can cover all semantic categories in the input image. Then, we establish dense correspondence between the input image and each found image sets with a proposed KNN-MRF matching scheme. It is followed by a matching correspondences classification that tries to reduce the number of semantically incorrect correspondences with trained matching correspondences classification models for different categories. With those matching correspondences classified as semantically correct correspondences, we infer the confidence values of each super pixel belonging to different semantic categories, and integrate them and spatial smoothness constraint in a markov random field to segment the input image. Experiments on three datasets show our method outperforms the traditional learning based methods and the previous nonparametric label transfer method, for the semantic segmentation of street scenes.", "In this paper we propose a novel nonparametric approach for object recognition and scene parsing using dense scene alignment. Given an input image, we retrieve its best matches from a large database with annotated images using our modified, coarse-to-fine SIFT flow algorithm that aligns the structures within two images. Based on the dense scene correspondence obtained from the SIFT flow, our system warps the existing annotations, and integrates multiple cues in a Markov random field framework to segment and recognize the query image. Promising experimental results have been achieved by our nonparametric scene parsing system on a challenging database. Compared to existing object recognition approaches that require training for each object category, our system is easy to implement, has few parameters, and embeds contextual information naturally in the retrieval alignment procedure.", "In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images." ] }
1707.00333
2588728634
Image matting is a longstanding problem in computational photography. Although it has been studied for more than two decades, developing an automatic matting algorithm that requires no human effort remains a challenge. Most state-of-the-art matting algorithms require human intervention in the form of a trimap or scribbles to generate the alpha matte from the input image. In this paper, we present a simple and efficient approach to automatically generate the trimap from the input image, making the whole matting process free of a human in the loop. We use a learning-based matting method to generate the matte from the automatically generated trimap. Experimental results demonstrate that our method produces a good-quality trimap, which results in accurate matte estimation. We validate our results by replacing the automatically generated trimap with a manually created trimap while using the same image matting algorithm.
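One common way to derive a trimap from a rough binary foreground mask (a plausible sketch of the general idea, not necessarily this paper's exact pipeline) is morphological erosion and dilation: eroded pixels are definite foreground, pixels outside the dilated mask are definite background, and the band in between is marked unknown. The band width below is an arbitrary assumption.

```python
# Trimap sketch: 255 = foreground, 0 = background, 128 = unknown band.
import cv2
import numpy as np

def trimap_from_mask(mask, band=10):
    """mask: uint8 binary foreground mask with values 0 or 255."""
    kernel = np.ones((band, band), np.uint8)
    sure_fg = cv2.erode(mask, kernel)    # shrink: definitely foreground
    maybe_fg = cv2.dilate(mask, kernel)  # grow: anything outside is background
    trimap = np.full(mask.shape, 128, np.uint8)  # start with unknown
    trimap[maybe_fg == 0] = 0
    trimap[sure_fg == 255] = 255
    return trimap
```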
In this section we review previous work relevant to ours. In particular, we discuss some recent state-of-the-art matting algorithms as well as existing methods for automatic trimap generation. Generally, matting algorithms are classified as @cite_7 @cite_13 @cite_23 and @cite_18 @cite_9 @cite_10 @cite_22 @cite_8 @cite_1 .
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_7", "@cite_8", "@cite_10", "@cite_9", "@cite_1", "@cite_23", "@cite_13" ], "mid": [ "", "", "2103334940", "2035773017", "2134839354", "1597727245", "", "", "2103917701" ], "abstract": [ "", "", "Many boundaries between objects in the world project onto curves in an image. However, boundaries involving natural objects (e.g., trees, hair, water, smoke) are often unworkable under this model because many pixels receive light from more than one object. We propose a technique for estimating alpha, the proportion in which two colors mix to produce a color at the boundary. The technique extends blue screen matting to backgrounds that have almost arbitrary color distributions, though coarse knowledge of the boundary's location is required. Results show a number of different objects moved from one image to another while maintaining naturalism.", "Interactive digital matting, the process of extracting a foreground object from an image based on limited user input, is an important task in image and video editing. From a computer vision perspective, this task is extremely challenging because it is massively ill-posed - at each pixel we must estimate the foreground and the background colors, as well as the foreground opacity (\"alpha matte\") from a single color measurement. Current approaches either restrict the estimation to a small part of the image, estimating foreground and background colors based on nearby pixels where they are known, or perform iterative nonlinear estimation by alternating foreground and background color estimation with alpha estimation. In this paper, we present a closed-form solution to natural image matting. We derive a cost function from local smoothness assumptions on foreground and background colors and show that in the resulting expression, it is possible to analytically eliminate the foreground and background colors to obtain a quadratic cost function in alpha. This allows us to find the globally optimal alpha matte by solving a sparse linear system of equations. Furthermore, the closed-form formula allows us to predict the properties of the solution by analyzing the eigenvectors of a sparse matrix, closely related to matrices used in spectral image segmentation algorithms. We show that high-quality mattes for natural images may be obtained from a small amount of user input.", "An interactive framework for soft segmentation and matting of natural images and videos is presented in this paper. The proposed technique is based on the optimal, linear time, computation of weighted geodesic distances to the user-provided scribbles, from which the whole data is automatically segmented. The weights are based on spatial and or temporal gradients, without explicit optical flow or any advanced and often computationally expensive feature detectors. These could be naturally added to the proposed framework as well if desired, in the form of weights in the geodesic distances. A localized refinement step follows this fast segmentation in order to accurately compute the corresponding matte function. Additional constraints into the distance definition permit to efficiently handle occlusions such as people or objects crossing each other in a video sequence. 
The presentation of the framework is complemented with numerous and diverse examples, including extraction of moving foreground from dynamic background, and comparisons with the recent literature.", "Interactive, efficient, methods of foreground extraction and alpha-matting are of increasing practical importance for digital image editing. Although several new approaches to this problem have recently been developed, many challenges remain. We propose a new technique based on random walks that has the following advantages: First, by leveraging a recent technique from manifold learning theory, we effectively use RGB values to set boundaries for the random walker, even in fuzzy or low-contrast images. Second, the algorithm is straightforward to implement, requires specification of only a single free parameter (set the same for all images), and performs the segmentation and alpha-matting in a single step. Third, the user may locally fine tune the results by interactively manipulating the foreground background maps. Finally, the algorithm has an inherit parallelism that leads to a particularly efficient implementation via the graphics processing unit (GPU). Our method processes a 1024× 1024 image at the interactive speed of 0.5 seconds and, most importantly, produces highquality results. We show that our algorithm can generate good segmentation and matting results at an interactive rate with minimal user interaction.", "", "", "This paper proposes a new Bayesian framework for solving the matting problem, i.e. extracting a foreground element from a background image by estimating an opacity for each pixel of the foreground element. Our approach models both the foreground and background color distributions with spatially-varying sets of Gaussians, and assumes a fractional blending of the foreground and background colors to produce the final output. It then uses a maximum-likelihood criterion to estimate the optimal opacity, foreground and background simultaneously. In addition to providing a principled approach to the matting problem, our algorithm effectively handles objects with intricate boundaries, such as hair strands and fur, and provides an improvement over existing techniques for these difficult cases." ] }
1707.00230
2733616354
Black-box transformations have been extensively studied in algorithmic mechanism design as a generic tool for converting algorithms into truthful mechanisms without degrading the approximation guarantees. While such transformations have been designed for a variety of settings, showed that no fully general black-box transformation exists for single-parameter environments. In this paper, we investigate the potentials and limits of black-box transformations in the prior-free (i.e., non-Bayesian) setting in single-parameter environments, a large and important class of environments in mechanism design. On the positive side, we show that such a transformation can preserve a constant fraction of the welfare at every input if the private valuations of the agents take on a constant number of values that are far apart, while on the negative side, we show that this task is not possible for general private valuations.
Besides the works already mentioned, black-box transformations have been obtained in a variety of other prior-free and Bayesian settings. In the prior-free setting, @cite_5 presented a reduction for symmetric single-parameter problems with a logarithmic loss in approximation, and later @cite_12 improved the reduction to obtain arbitrarily small loss. Dughmi and Roughgarden @cite_6 designed a reduction for the class of multi-parameter problems that admit an FPTAS and can be encoded as a packing problem, while @cite_11 considered reductions for single-valued combinatorial auction problems. Reductions that preserve the approximation guarantees have also been obtained in the single-parameter Bayesian setting by Hartline and Lucier @cite_7 , and their work was later extended to multi-parameter settings by Bei and Huang @cite_0 , @cite_8 , and @cite_2 .
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "2054039722", "2168069490", "2139692878", "2950524333", "2079025269", "1627548948", "", "2142270691" ], "abstract": [ "The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive compatible mechanisms, namely that of Vickrey, Clarke, and Groves; and the central approach for circumventing computational obstacles, that of approximation algorithms, are fundamentally incompatible: natural applications of the VCG approach to an approximation algorithm fails to yield an incentive compatible mechanism. We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truthtelling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics). For welfare maximization in single-parameter agent settings, we give a general black-box reduction that turns any approximation algorithm into a Bayesian incentive compatible mechanism with essentially the same approximation factor.", "We provide a computationally efficient black-box reduction from mechanism design to algorithm design in very general settings. Specifically, we give an approximation-preserving reduction from truthfully maximizing any objective under arbitrary feasibility constraints with arbitrary bidder types to (not necessarily truthfully) maximizing the same objective plus virtual welfare (under the same feasibility constraints). Our reduction is based on a fundamentally new approach: we describe a mechanism's behavior indirectly only in terms of the expected value it awards bidders for certain behavior, and never directly access the allocation rule at all. Applying our new approach to revenue, we exhibit settings where our reduction holds both ways. That is, we also provide an approximation-sensitive reduction from (non-truthfully) maximizing virtual welfare to (truthfully) maximizing revenue, and therefore the two problems are computationally equivalent. With this equivalence in hand, we show that both problems are NP-hard to approximate within any polynomial factor, even for a single monotone sub modular bidder. We further demonstrate the applicability of our reduction by providing a truthful mechanism maximizing fractional max-min fairness.", "We give the first black-box reduction from approximation algorithms to truthful approximation mechanisms for a non-trivial class of multi-parameter problems. Specifically, we prove that every welfare-maximization problem that admits a fully polynomial-time approximation scheme (FPTAS) and can be encoded as a packing problem also admits a truthful-in-expectation randomized mechanism that is an FPTAS. Our reduction makes novel use of smoothed analysis by employing small perturbations as a tool in algorithmic mechanism design. We develop a “duality” between linear perturbations of the objective function of an optimization problem and of its feasible set, and we use the “primal” and “dual” viewpoints to prove the running time bound and the truthfulness guarantee, respectively, for our mechanism.", "Very recently, Hartline and Lucier studied single-parameter mechanism design problems in the Bayesian setting. 
They proposed a black-box reduction that converted Bayesian approximation algorithms into Bayesian-Incentive-Compatible (BIC) mechanisms while preserving social welfare. It remains a major open question if one can find similar reduction in the more important multi-parameter setting. In this paper, we give positive answer to this question when the prior distribution has finite and small support. We propose a black-box reduction for designing BIC multi-parameter mechanisms. The reduction converts any algorithm into an eps-BIC mechanism with only marginal loss in social welfare. As a result, for combinatorial auctions with sub-additive agents we get an eps-BIC mechanism that achieves constant approximation.", "Optimally allocating cellphone spectrum, advertisements on the Internet, and landing slots at airports is computationally intractable. When the participants may strategize, not only must the optimizer deal with complex feasibility constraints but also with complex incentive constraints. We give a very simple method for constructing a Bayesian incentive compatible mechanism from any, potentially non-optimal, algorithm that maps agents' reports to an allocation. The expected welfare of the mechanism is, approximately, at least that of the algorithm on the agents' true preferences.", "We consider the problem of designing truthful auctions, when the bidders' valuations have a public and a private component. In particular, we consider combinatorial auctions where the valuation of an agent i for a set S of items can be expressed as vif(S), where vi is a private single parameter of the agent, and the function f is publicly known. Our motivation behind studying this problem is two-fold: (a) Such valuation functions arise naturally in the case of ad-slots in broadcast media such as Television and Radio. For an ad shown in a set S of ad-slots, f(S) is, say, the number of unique viewers reached by the ad, and vi is the valuation per-unique-viewer. (b) From a theoretical point of view, this factorization of the valuation function simplifies the bidding language, and renders the combinatorial auction more amenable to better approximation factors. We present a general technique, based on maximal-in-range mechanisms, that converts any α-approximation non-truthful algorithm (α ≥ 1) for this problem into Ω(α log n) and Ω(α)-approximate truthful mechanisms which run in polynomial time and quasi-polynomial time, respectively.", "", "In this article, we are interested in general techniques for designing mechanisms that approximate the social welfare in the presence of selfish rational behavior. We demonstrate our results in the setting of Combinatorial Auctions (CA). Our first result is a general deterministic technique to decouple the algorithmic allocation problem from the strategic aspects, by a procedure that converts any algorithm to a dominant-strategy ascending mechanism. This technique works for any single value domain, in which each agent has the same value for each desired outcome, and this value is the only private information. In particular, for “single-value CAs”, where each player desires any one of several different bundles but has the same value for each of them, our technique converts any approximation algorithm to a dominant strategy mechanism that almost preserves the original approximation ratio. Our second result provides the first computationally efficient deterministic mechanism for the case of single-value multi-minded bidders (with private value and private desired bundles). 
The mechanism achieves an approximation to the social welfare which is close to the best possible in polynomial time (unless P=NP). This mechanism is an algorithmic implementation in undominated strategies, a notion that we define and justify, and is of independent interest." ] }
1707.00018
2726575371
As the driving force of crowdsourcing is the interaction among participants, various incentive mechanisms have been proposed to attract sufficient participants. However, existing works assume that all providers always meet the deadline and that the task value accordingly remains constant. To bridge the gap created by this impractical assumption, we model the heterogeneous punctuality behavior of providers and the task value depreciation of requesters. Based on those models, we propose an Expected Social Welfare Maximizing (ESWM) mechanism that aims to maximize the expected social welfare in polynomial time. Simulation results show that our heuristic-based mechanism achieves higher expected social welfare and platform utility by attracting more participants.
@cite_2 presented two models of incentive mechanisms, a platform-centric model and a user-centric model, to motivate mobile users' participation. By rewarding participants proportionally to their contribution, @cite_0 proposed a quality-based incentive mechanism for crowdsensing. To maintain sufficient participants and encourage dropped users to participate again, Lee and Hoh @cite_4 proposed a mechanism, called RADP-VPC, to provide long-term incentives to participants.
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_2" ], "mid": [ "2039922069", "2058911993", "1970756365" ], "abstract": [ "In crowdsensing, appropriate rewards are always expected to compensate the participants for their consumptions of physical resources and involvements of manual efforts. While continuous low quality sensing data could do harm to the availability and preciseness of crowdsensing based services, few existing incentive mechanisms have ever addressed the issue of sensing data's quality. The design of quality based incentive mechanism is motivated by its potential to avoid inefficient sensing and unnecessary rewards. In this paper, we incorporate the consideration of data quality into the design of incentive mechanism for crowdsensing, and propose to pay the participants as how well they do, to motivate the rational participants to perform data sensing efficiently. This mechanism estimates the quality of sensing data, and offers each participant a reward based on her effective contribution. We also implement the mechanism and evaluate the improvements in terms of quality of service and profit of service provider. The evaluation results show that our mechanism achieves superior performance when compared to the uniform pricing scheme.", "User participation is one of the most important elements in participatory sensing application for providing adequate level of service quality. However, incentive mechanism and its economic model for user participation have been less addressed so far in this research domain. This paper studies the economic model of user participation incentive in participatory sensing applications. To stimulate user participation, we design and evaluate a novel reverse auction based dynamic pricing incentive mechanism where users can sell their sensing data to a service provider with users' claimed bid prices. The proposed incentive mechanism focuses on minimizing and stabilizing the incentive cost while maintaining adequate level of participants by preventing users from dropping out of participatory sensing applications. Compared with random selection based fixed pricing incentive mechanism, the proposed mechanism not only reduces the incentive cost for retaining the same number of participants but also improves the fairness of incentive distribution and social welfare. It also helps us to achieve the geographically balanced sensing measurements and, more importantly, can remove the burden of accurate price decision for user data that is the most difficult step in designing incentive mechanism.", "Mobile phone sensing is a new paradigm which takes advantage of the pervasive smartphones to collect and analyze data beyond the scale of what was previously possible. In a mobile phone sensing system, the platform recruits smartphone users to provide sensing service. Existing mobile phone sensing applications and systems lack good incentive mechanisms that can attract more user participation. To address this issue, we design incentive mechanisms for mobile phone sensing. We consider two system models: the platform-centric model where the platform provides a reward shared by participating users, and the user-centric model where users have more control over the payment they will receive. For the platform-centric model, we design an incentive mechanism using a Stackelberg game, where the platform is the leader while the users are the followers. 
We show how to compute the unique Stackelberg Equilibrium, at which the utility of the platform is maximized, and none of the users can improve its utility by unilaterally deviating from its current strategy. For the user-centric model, we design an auction-based incentive mechanism, which is computationally efficient, individually rational, profitable, and truthful. Through extensive simulations, we evaluate the performance and validate the theoretical properties of our incentive mechanisms." ] }
1707.00110
2724346673
The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.
Our contributions build on previous work in making seq2seq models more computationally efficient. introduce various attention mechanisms that are computationally simpler and perform as well as or better than the original one presented in . However, these typically still require @math computational complexity, or lack the flexibility to look at the full source sequence. Efficient location-based attention @cite_14 has also been explored in the image recognition domain.
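To contrast the two costs, the sketch below scores one decoder query against all T encoder states (standard attention) and against K precomputed context vectors (the fixed-size memory described in the abstract above), making each decoder step O(K) rather than O(T). The shapes and the way the contexts are formed are illustrative assumptions, not the paper's exact parameterisation.

```python
# Standard attention over T states vs. attention over K fixed contexts.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

T, K, d = 50, 4, 8                      # sequence length, contexts, dims
encoder_states = np.random.randn(T, d)

# Assumed: K attention contexts are predicted during encoding as convex
# combinations of encoder states; random scores stand in for learned ones.
scores = np.random.rand(K, T)
weights = scores / scores.sum(axis=1, keepdims=True)
contexts = weights @ encoder_states     # (K, d): fixed-size memory

query = np.random.randn(d)              # one decoder state
standard_ctx = softmax(encoder_states @ query) @ encoder_states  # O(T*d) per step
efficient_ctx = softmax(contexts @ query) @ contexts             # O(K*d) per step
```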
{ "cite_N": [ "@cite_14" ], "mid": [ "2521593519" ], "abstract": [ "The softmax content-based attention mechanism has proven to be very beneficial in many applications of recurrent neural networks. Nevertheless it suffers from two major computational limitations. First, its computations for an attention lookup scale linearly in the size of the attended sequence. Second, it does not encode the sequence into a fixed-size representation but instead requires to memorize all the hidden states. These two limitations restrict the use of the softmax attention mechanism to relatively small-scale applications with short sequences and few lookups per sequence. In this work we introduce a family of linear attention mechanisms designed to overcome the two limitations listed above. We show that removing the softmax non-linearity from the traditional attention formulation yields constant-time attention lookups and fixed-size representations of the attended sequences. These properties make these linear attention mechanisms particularly suitable for large-scale applications with extreme query loads, real-time requirements and memory constraints. Early experiments on a question answering task show that these linear mechanisms yield significantly better accuracy results than no attention, but obviously worse than their softmax alternative." ] }
1707.00130
2728821832
Deep reinforcement learning (RL) methods have significant potential for dialogue policy optimisation. However, they suffer from poor performance in the early stages of learning. This is especially problematic for on-line learning with real users. Two approaches are introduced to tackle this problem. Firstly, to speed up the learning process, two sample-efficient neural network algorithms are presented: trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER). For TRACER, the trust region helps to control the learning step size and avoid catastrophic model changes. For eNACER, the natural gradient identifies the steepest ascent direction in policy space to speed up convergence. Both models employ off-policy learning with experience replay to improve sample-efficiency. Secondly, to mitigate the cold start issue, a corpus of demonstration data is utilised to pre-train the models prior to on-line reinforcement learning. Combining these two approaches, we demonstrate a practical approach to learning deep RL-based dialogue policies and demonstrate their effectiveness in a task-oriented information seeking domain.
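Both TRACER and eNACER rely on off-policy learning from stored transitions; the following is a minimal, generic experience-replay buffer. It is standard RL plumbing rather than the paper's implementation, with all field names assumed.

```python
# Minimal experience replay buffer for off-policy RL (generic sketch).
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions fall off

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks temporal correlation between updates.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```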
RL-based approaches to dialogue management have been actively studied for some time @cite_24 @cite_42 @cite_25 . Initially, systems suffered from slow training, but recent advances in data-efficient methods such as Gaussian Processes (GP) have enabled systems to be trained from scratch through on-line interaction with real users @cite_38 . A GP provides an estimate of the uncertainty in the underlying function and a built-in noise model. This helps to achieve highly sample-efficient exploration and robustness to recognition and understanding errors.
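The sample-efficiency argument for GP methods rests on the predictive uncertainty they expose. A tiny generic scikit-learn sketch (unrelated to any specific dialogue system; all data here is placeholder) shows how the predictive mean and standard deviation are obtained and could drive exploration.

```python
# Tiny illustration of the predictive uncertainty a GP exposes.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.random.randn(20, 3)   # placeholder belief-state features
y = np.random.randn(20)      # placeholder observed returns
gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-2).fit(X, y)

mean, std = gp.predict(np.random.randn(5, 3), return_std=True)
# Large std flags under-explored regions; uncertainty-driven exploration
# prefers actions there, which is what makes GP policies sample-efficient.
```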
{ "cite_N": [ "@cite_24", "@cite_38", "@cite_42", "@cite_25" ], "mid": [ "2154740693", "2035934535", "2101445408", "2047335008" ], "abstract": [ "We introduce a stochastic model for dialogue systems based on Markov decision process. Within this framework we show that the problem of dialogue strategy design can be stated as an optimization problem, and solved by a variety of methods, including the reinforcement learning approach. The advantages of this new paradigm include objective evaluation of dialogue systems and their automatic design and adaptation. We show some preliminary results on learning a dialogue strategy for an air travel information system.", "Statistical dialogue models have required a large number of dialogues to optimise the dialogue policy, relying on the use of a simulated user. This results in a mismatch between training and live conditions, and significant development costs for the simulator thereby mitigating many of the claimed benefits of such models. Recent work on Gaussian process reinforcement learning, has shown that learning can be substantially accelerated. This paper reports on an experiment to learn a policy for a real-world task directly from human interaction using rewards provided by users. It shows that a usable policy can be learnt in just a few hundred dialogues without needing a user simulator and, using a learning strategy that reduces the risk of taking bad actions. The paper also investigates adaptation behaviour when the system continues learning for several thousand dialogues and highlights the need for robustness to noisy rewards.", "We report evaluation results for real users of a learnt dialogue management policy versus a hand-coded policy in the TALK project's \"Townlnfo\" tourist information system. The learnt policy, for filling and confirming information slots, was derived from COMMUNICATOR (flight-booking) data using reinforcement learning (RL) as described in [2], ported to the tourist information domain (using a general method that we propose here), and tested using 18 human users in 180 dialogues, who also used a state-of-the-art hand- coded dialogue policy embedded in an otherwise identical system. We found that users of the (ported) learned policy had an average gain in perceived task completion of 14.2 (from 67.6 to 81.8 at p < .03), that the hand-coded policy dialogues had on average 3.3 more system turns (p < .01), and that the user satisfaction results were comparable, even though the policy was learned for a different domain. Combining these in a dialogue reward score, we found a 14.4 increase for the learnt policy (a 23.8 relative increase, p < .03). These results are important because they show a) that results for real users are consistent with results for automatic evaluation [2] of learned policies using simulated users [3, 4], b) that a policy learned using linear function approximation over a very large policy space [2] is effective for real users, and c) that policies learned using data for one domain can be used successfully in other domains. We also present a qualitative discussion of the learnt policy.", "A partially observable Markov decision process (POMDP) has been proposed as a dialog model that enables automatic optimization of the dialog policy and provides robustness to speech understanding errors. Various approximations allow such a model to be used for building real-world dialog systems. 
However, they require a large number of dialogs to train the dialog policy and hence they typically rely on the availability of a user simulator. They also require significant designer effort to hand-craft the policy representation. We investigate the use of Gaussian processes (GPs) in policy modeling to overcome these problems. We show that GP policy optimization can be implemented for a real world POMDP dialog manager, and in particular: 1) we examine different formulations of a GP policy to minimize variability in the learning process; 2) we find that the use of GP increases the learning rate by an order of magnitude thereby allowing learning by direct interaction with human users; and 3) we demonstrate that designer effort can be substantially reduced by basing the policy directly on the full belief space thereby avoiding ad hoc feature space modeling. Overall, the GP approach represents an important step forward towards fully automatic dialog policy optimization in real world systems." ] }
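To make concrete why a GP's posterior variance enables the sample-efficient exploration discussed above, here is a minimal NumPy sketch of GP regression used as an uncertainty-aware value estimator: candidate belief points are scored optimistically by posterior mean plus a multiple of the posterior standard deviation. All data and names here are hypothetical toys, not the cited systems' actual implementation.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel matrix between row-stacked points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(X, y, Xs, noise=0.1):
    """Posterior mean and variance of a GP regressor at test points Xs."""
    K = rbf(X, X) + noise ** 2 * np.eye(len(X))   # built-in noise model
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, np.diag(Kss) - (v * v).sum(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                 # visited belief points
y = X[:, 0] + 0.1 * rng.normal(size=20)      # noisy observed returns
Xs = rng.normal(size=(5, 3))                 # candidate belief points
mu, var = gp_posterior(X, y, Xs)
print((mu + 2.0 * np.sqrt(var)).argmax())    # optimistic (UCB-style) choice
```

Points whose value is still uncertain get inflated scores, which is exactly the property that lets GP-based dialogue managers explore efficiently with few dialogues.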
1707.00130
2728821832
Deep reinforcement learning (RL) methods have significant potential for dialogue policy optimisation. However, they suffer from a poor performance in the early stages of learning. This is especially problematic for on-line learning with real users. Two approaches are introduced to tackle this problem. Firstly, to speed up the learning process, two sample-efficient neural network algorithms: trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER) are presented. For TRACER, the trust region helps to control the learning step size and avoid catastrophic model changes. For eNACER, the natural gradient identifies the steepest ascent direction in policy space to speed up the convergence. Both models employ off-policy learning with experience replay to improve sample-efficiency. Secondly, to mitigate the cold start issue, a corpus of demonstration data is utilised to pre-train the models prior to on-line reinforcement learning. Combining these two approaches, we demonstrate a practical approach to learn deep RL-based dialogue policies and demonstrate their effectiveness in a task-oriented information seeking domain.
Combining SL with RL for dialogue modelling is not new. An early hybrid SL/RL model ensured tractability in policy optimisation by performing exploration only on the states found in a dialogue corpus; the policy was then defined manually on the parts of the space not covered by the corpus. A method of initialising RL models using logistic regression was also described @cite_46 . For GPRL in dialogue, rather than using a linear kernel that imposes heuristic data-pair correlation, a pre-optimised Gaussian kernel learned using SL from a dialogue corpus has been proposed @cite_32 . The resulting kernel modelled the data correlation more accurately and achieved better performance; however, the SL corpus did not help to initialise a better policy. Better initialisation of GPRL has been studied in the context of domain adaptation, by specifying a GP prior or by re-using an existing model which is then pre-trained for the new domain @cite_16 .
{ "cite_N": [ "@cite_46", "@cite_16", "@cite_32" ], "mid": [ "2117398305", "", "2251221343" ], "abstract": [ "We investigate the use of logistic regression (LR) to initialise Reinforcement Learning (RL)-based dialogue systems with models of human dialogue strategies. LR produces accurate predictions and performs feature selection. We illustrate this technique in exploring human multimodal clarification strategies, observed in a Wizard-of-Oz experiment. We use it to initialise an RL-based system with features which significantly influence human behaviour. We show that the strategy applied by the human wizards is sensitive to different dialogue contexts. Furthermore we show that for predicting clarification behaviour the logistic models improve over the baseline on average twice as much as the supervised learning techniques used in previous work.", "", "Gaussian processes reinforcement learning provides an appealing framework for training the dialogue policy as it takes into account correlations of the objective function given different dialogue belief states, which can significantly speed up the learning. These correlations are modelled by the kernel function which may depend on hyper-parameters. So far, for real-world dialogue systems the hyperparameters have been hand-tuned, relying on the designer to adjust the correlations, or simple non-parametrised kernel functions have been used instead. Here, we examine different kernel structures and show that it is possible to optimise the hyperparameters from data yielding improved performance of the resulting dialogue policy. We confirm this in a real user trial." ] }
1707.00130
2728821832
Deep reinforcement learning (RL) methods have significant potential for dialogue policy optimisation. However, they suffer from a poor performance in the early stages of learning. This is especially problematic for on-line learning with real users. Two approaches are introduced to tackle this problem. Firstly, to speed up the learning process, two sample-efficient neural network algorithms: trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER) are presented. For TRACER, the trust region helps to control the learning step size and avoid catastrophic model changes. For eNACER, the natural gradient identifies the steepest ascent direction in policy space to speed up the convergence. Both models employ off-policy learning with experience replay to improve sample-efficiency. Secondly, to mitigate the cold start issue, a corpus of demonstration data is utilised to pre-train the models prior to on-line reinforcement learning. Combining these two approaches, we demonstrate a practical approach to learn deep RL-based dialogue policies and demonstrate their effectiveness in a task-oriented information seeking domain.
A number of authors have proposed training a standard neural-network policy in two stages @cite_39 @cite_36 @cite_10 . Off-policy RL methods have also been explored for dialogue policy learning. All of these studies were conducted in simulation, using error-free text-based input. A similar approach was also used in a conversational model @cite_15 . In contrast, our work introduces two new sample-efficient actor-critic methods, combines two-stage policy learning with off-policy RL, and tests the resulting policies at differing noise levels.
{ "cite_N": [ "@cite_36", "@cite_15", "@cite_10", "@cite_39" ], "mid": [ "2417401578", "2410983263", "2594726847", "2410985346" ], "abstract": [ "We describe a two-step approach for dialogue management in task-oriented spoken dialogue systems. A unified neural network framework is proposed to enable the system to first learn by supervision from a set of dialogue data and then continuously improve its behaviour via reinforcement learning, all using gradient-based algorithms on one single model. The experiments demonstrate the supervised model's effectiveness in the corpus-based evaluation, with user simulation, and with paid human subjects. The use of reinforcement learning further improves the model's performance in both interactive settings, especially under higher-noise conditions.", "Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.", "", "In this paper, we propose to use deep policy networks which are trained with an advantage actor-critic method for statistically optimised dialogue systems. First, we show that, on summary state and action spaces, deep Reinforcement Learning (RL) outperforms Gaussian Processes methods. Summary state and action spaces lead to good performance but require pre-engineering effort, RL knowledge, and domain expertise. In order to remove the need to define such summary spaces, we show that deep RL can also be trained efficiently on the original state and action spaces. Dialogue systems based on partially observable Markov decision processes are known to require many dialogues to train, which makes them unappealing for practical deployment. We show that a deep RL method based on an actor-critic architecture can exploit a small amount of data very efficiently. Indeed, with only a few hundred dialogues collected with a handcrafted policy, the actor-critic deep learner is considerably bootstrapped from a combination of supervised and batch RL. In addition, convergence to an optimal policy is significantly sped up compared to other deep RL methods initialized on the data with batch RL. All experiments are performed on a restaurant domain derived from the Dialogue State Tracking Challenge 2 (DSTC2) dataset." ] }
1707.00075
2725155646
How can we learn a classifier that is "fair" for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training affects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary's notion of fairness.
As fairness in machine learning has become a societal focus, researchers have tried to develop useful definitions of "fairness" in machine learning systems. Notably, @cite_10 and @cite_7 have both offered novel theoretical work explaining the trade-offs between demographic parity, previously the main focus of "fair" learning, and alternative formulations tied more closely to model accuracy. We will primarily work from the definitions offered in @cite_10 .
{ "cite_N": [ "@cite_10", "@cite_7" ], "mid": [ "2950538796", "2522104760" ], "abstract": [ "We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individualfeatures. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.", "Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them." ] }
1707.00075
2725155646
How can we learn a classifier that is "fair" for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training affects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary's notion of fairness.
Along with the theoretical underpinnings, @cite_10 offers a method for achieving equality of opportunity, but does so through a post-processing algorithm that takes as input the model's prediction and the sensitive attribute. @cite_7 likewise offers a calibration technique to achieve fairness. Both approaches are problematic in the many cases where the sensitive attribute is not observable at inference time.
{ "cite_N": [ "@cite_10", "@cite_7" ], "mid": [ "2950538796", "2522104760" ], "abstract": [ "We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individualfeatures. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.", "Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them." ] }
1707.00075
2725155646
How can we learn a classifier that is "fair" for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training affects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary's notion of fairness.
A growing body of literature is aimed at improving model performance for underserved parts of the data. For example, @cite_1 uses hyperparameter optimization to improve model performance for underserved regions of the data in collaborative filtering. More directly in the "fairness" literature, @cite_2 first attempted to learn "fair" latent representations by directly enforcing statistical parity during unsupervised learning.
{ "cite_N": [ "@cite_1", "@cite_2" ], "mid": [ "2604244242", "2162670686" ], "abstract": [ "When building a recommender system, how can we ensure that all items are modeled well? Classically, recommender systems are built, optimized, and tuned to improve a global prediction objective, such as root mean squared error. However, as we demonstrate, these recommender systems often leave many items badly-modeled and thus under-served. Further, we give both empirical and theoretical evidence that no single matrix factorization, under current state-of-the-art methods, gives optimal results for each item. As a result, we ask: how can we learn additional models to improve the recommendation quality for a specified subset of items? We offer a new technique called focused learning, based on hyperparameter optimization and a customized matrix factorization objective. Applying focused learning on top of weighted matrix factorization, factorization machines, and LLORMA, we demonstrate prediction accuracy improvements on multiple datasets. For instance, on MovieLens we achieve as much as a 17 improvement in prediction accuracy for niche movies, cold-start items, and even the most badly-modeled items in the original model.", "We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole), and individual fairness (similar individuals should be treated similarly). We formulate fairness as an optimization problem of finding a good representation of the data with two competing goals: to encode the data as well as possible, while simultaneously obfuscating any information about membership in the protected group. We show positive results of our algorithm relative to other known techniques, on three datasets. Moreover, we demonstrate several advantages to our approach. First, our intermediate representation can be used for other classification tasks (i.e., transfer learning is possible); secondly, we take a step toward learning a distance metric which can find important dimensions of the data for classification." ] }
1707.00075
2725155646
How can we learn a classifier that is "fair" for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training affects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary's notion of fairness.
Combining competing tasks has been found to be a useful tool in deep learning. In particular, researchers have included an adversary to help compensate for skewed data distributions in domain adaptation problems for robotics and simulations @cite_6 @cite_9 . Researchers have also applied similar techniques to make models fair by trying to prevent biased latent representations @cite_5 @cite_0 . This literature has generally been less precise about which definition of fairness is being optimized for and what data is used for the adversarial objective. When the definition is mentioned at all, the work often focuses on demographic parity, which, as @cite_10 explains, has many drawbacks. We explore the intersection of these research efforts.
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_0", "@cite_5", "@cite_10" ], "mid": [ "1731081199", "2953127297", "1956343362", "2247194987", "2950538796" ], "abstract": [ "We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application.", "The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We suggest that explicitly modeling what is unique to each domain can improve a model's ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained not only to perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. 
Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process.", "We investigate the problem of learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible. Our model is based on a variational autoencoding architecture with priors that encourage independence between sensitive and latent factors of variation. Any subsequent processing, such as classification, can then be performed on this purged latent representation. To remove any remaining dependencies we incorporate an additional penalty term based on the \"Maximum Mean Discrepancy\" (MMD) measure. We discuss how these architectures can be efficiently trained on data and show in experiments that this method is more effective than previous work in removing unwanted sources of variation while maintaining informative latent representations.", "In practice, there are often explicit constraints on what representations or decisions are acceptable in an application of machine learning. For example it may be a legal requirement that a decision must not favour a particular group. Alternatively it can be that that representation of data must not have identifying information. We address these two related issues by learning flexible representations that minimize the capability of an adversarial critic. This adversary is trying to predict the relevant sensitive variable from the representation, and so minimizing the performance of the adversary ensures there is little or no information in the representation about the sensitive variable. We demonstrate this adversarial approach on two problems: making decisions free from discrimination and removing private information from images. We formulate the adversarial model as a minimax problem, and optimize that minimax objective using a stochastic gradient alternate min-max optimizer. We demonstrate the ability to provide discriminant free representations for standard test problems, and compare with previous state of the art methods for fairness, showing statistically significant improvement across most cases. The flexibility of this method is shown via a novel problem: removing annotations from images, from unaligned training examples of annotated and unannotated images, and with no a priori knowledge of the form of annotation provided to the model.", "We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individualfeatures. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. 
We illustrate our notion using a case study of FICO credit scores." ] }
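The adversarial technique discussed above is typically implemented with a gradient reversal layer: the adversary head is trained to predict the sensitive attribute from the latent code, while reversed gradients push the encoder to strip that information out. A minimal PyTorch sketch, with hypothetical `encoder` and `adv_head` modules standing in for whatever networks are used:

```python
import torch
import torch.nn.functional as F
from torch.autograd import Function

class GradReverse(Function):
    """Identity on the forward pass; scaled, negated gradient backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The encoder receives the *negated* gradient, so minimizing the
        # adversary's loss downstream maximizes it through the encoder.
        return -ctx.lamb * grad_output, None

def adversary_loss(encoder, adv_head, x, sensitive, lamb=1.0):
    z = encoder(x)                       # shared latent representation
    z_rev = GradReverse.apply(z, lamb)   # reverse grads into the encoder
    logits = adv_head(z_rev)             # adversary predicts the attribute
    return F.cross_entropy(logits, sensitive)
```

Training then simply adds this loss to the main task loss; the single backward pass updates the adversary to predict the attribute and the encoder to hide it.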
1707.00040
2733640026
Large-scale distributed computing systems face two major bottlenecks that limit their scalability: straggler delay caused by the variability of computation times at different worker nodes and communication bottlenecks caused by shuffling data across many nodes in the network. Recently, it has been shown that codes can provide significant gains in overcoming these bottlenecks. In particular, optimal coding schemes for minimizing latency in distributed computation of linear functions and mitigating the effect of stragglers was proposed for a wired network, where the workers can simultaneously transmit messages to a master node without interference. In this paper, we focus on the problem of coded computation over a wireless master-worker setup with straggling workers, where only one worker can transmit the result of its local computation back to the master at a time. We consider 3 asymptotic regimes (determined by how the communication and computation times are scaled with the number of workers) and precisely characterize the total run-time of the distributed algorithm and optimum coding strategy in each regime. In particular, for the regime of practical interest where the computation and communication times of the distributed computing algorithm are comparable, we show that the total run-time approaches a simple lower bound that decouples computation and communication, and demonstrate that coded schemes are @math times faster than uncoded schemes.
The use of codes for minimizing latency in distributed computation of linear functions was introduced in @cite_10 . The key idea is to use erasure codes to inject redundancy, so that the minimum latency is achieved by trading off the number of stragglers the algorithm can tolerate against the redundancy factor in computation. In @cite_16 , the authors consider coded computation over heterogeneous clusters and propose an asymptotically optimal coded algorithm for distributed matrix-vector multiplication. The use of product codes and polynomial codes for high-dimensional matrix multiplication over homogeneous clusters is proposed in @cite_2 and @cite_6 , respectively. In a related work @cite_19 , the authors propose the use of redundant short dot products to speed up distributed computation of linear transforms. Coded distributed computation of the convolution of two long vectors in the presence of stragglers is proposed in @cite_13 . Coded computation of nonlinear functions over multicore setups is studied in @cite_17 . In @cite_20 , the authors propose coding schemes for mitigating stragglers in distributed batch gradient computation. The idea of coded computation is utilized in @cite_18 for solving linear inverse problems in a parallelized implementation affected by stragglers.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_6", "@cite_19", "@cite_2", "@cite_16", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "2614091483", "2620705481", "2617565736", "2556205507", "2745045892", "2582482048", "2963290814", "", "2742439621" ], "abstract": [ "We consider the problem of computing the convolution of two long vectors using parallel processing units in the presence of \"stragglers\". Stragglers refer to the small fraction of faulty or slow processors that delays the entire computation in time-critical distributed systems. We first show that splitting the vectors into smaller pieces and using a linear code to encode these pieces provides better resilience against stragglers than replication-based schemes under a simple, worst-case straggler analysis. We then demonstrate that under commonly used models of computation time, coding can dramatically improve the probability of finishing the computation within a target \"deadline\" time. As opposed to the more commonly used technique of expected computation time analysis, we quantify the exponents of the probability of failure in the limit of large deadlines. Our exponent metric captures the probability of failing to finish before a specified deadline time, i.e. , the behavior of the \"tail\". Moreover, our technique also allows for simple closed form expressions for more general models of computation time, e.g. shifted Weibull models instead of only shifted exponentials. Thus, through this problem of coded convolution, we establish the utility of a novel asymptotic failure exponent analysis for distributed systems.", "Computationally intensive distributed and parallel computing is often bottlenecked by a small set of slow workers known as stragglers. In this paper, we utilize the emerging idea of \"coded computation\" to design a novel error-correcting-code inspired technique for solving linear inverse problems under specific iterative methods in a parallelized implementation affected by stragglers. Example applications include inverse problems in machine learning on graphs, such as personalized PageRank and sampling on graphs. We provably show that our coded-computation technique can reduce the mean-squared error under a computational deadline constraint. In fact, the ratio of mean-squared error of replication-based and coded techniques diverges to infinity as the deadline increases. Our experiments for personalized PageRank performed on real systems and real social networks show that this ratio can be as large as @math . Further, unlike coded-computation techniques proposed thus far, our strategy combines outputs of all workers, including the stragglers, to produce more accurate estimates at the computational deadline. This also ensures that the accuracy degrades \"gracefully\" in the event that the number of stragglers is large.", "We consider a large-scale matrix multiplication problem where the computation is carried out using a distributed system with a master node and multiple worker nodes, where each worker can store parts of the input matrices. We propose a computation strategy that leverages ideas from coding theory to design intermediate computations at the worker nodes, in order to efficiently deal with straggling workers. The proposed strategy, named as , achieves the optimum recovery threshold, defined as the minimum number of workers that the master needs to wait for in order to compute the output. 
Furthermore, by leveraging the algebraic structure of polynomial codes, we can map the reconstruction problem of the final output to a polynomial interpolation problem, which can be solved efficiently. Polynomial codes provide order-wise improvement over the state of the art in terms of recovery threshold, and are also optimal in terms of several other metrics. Furthermore, we extend this code to distributed convolution and show its order-wise optimality.", "Faced with saturation of Moore's law and increasing size and dimension of data, system designers have increasingly resorted to parallel and distributed computing to reduce computation time of machine-learning algorithms. However, distributed computing is often bottle necked by a small fraction of slow processors called \"stragglers\" that reduce the speed of computation because the fusion node has to wait for all processors to complete their processing. To combat the effect of stragglers, recent literature proposes introducing redundancy in computations across processors, e.g., using repetition-based strategies or erasure codes. The fusion node can exploit this redundancy by completing the computation using outputs from only a subset of the processors, ignoring the stragglers. In this paper, we propose a novel technique - that we call \"Short-Dot\" - to introduce redundant computations in a coding theory inspired fashion, for computing linear transforms of long vectors. Instead of computing long dot products as required in the original linear transform, we construct a larger number of redundant and short dot products that can be computed more efficiently at individual processors. Further, only a subset of these short dot products are required at the fusion node to finish the computation successfully. We demonstrate through probabilistic analysis as well as experiments on computing clusters that Short-Dot offers significant speed-up compared to existing techniques. We also derive trade-offs between the length of the dot-products and the resilience to stragglers (number of processors required to finish), for any such strategy and compare it to that achieved by our strategy.", "Coded computation is a framework for providing redundancy in distributed computing systems to make them robust to slower nodes, or stragglers. In [1], the authors propose a coded computation scheme based on maximum distance separable (MDS) codes for computing the product ATB, and this scheme is suitable for the case where one of the matrices is small enough to fit into a single compute node. In this work, we study coded computation involving large matrix multiplication where both matrices are large, and propose a new coded computation scheme, which we call product-coded matrix multiplication. Our analysis reveals interesting insights into which schemes perform best in which regimes. When the number of backup nodes scales sub-linearly in the size of the product, the product-coded scheme achieves the best run-time performance. On the other hand, when the number of backup nodes scales linearly in the size of the product, the MDS-coded scheme achieves the fundamental limit on the run-time performance. 
Further, we propose a novel application of low-density-parity-check (LDPC) codes to achieve linear-time decoding complexity, thus allowing our proposed solutions to scale gracefully.", "In large-scale distributed computing clusters, such as Amazon EC2, there are several types of \"system noise\" that can result in major degradation of performance: system failures, bottlenecks due to limited communication bandwidth, latency due to straggler nodes, etc. On the other hand, these systems enjoy abundance of computing and storage redundancy. There have been recent results that demonstrate the impact of coding for efficient utilization of computation and storage redundancy to alleviate the effect of stragglers and communication bottlenecks in homogeneous clusters. In this paper, we focus on general heterogeneous distributed computing clusters consisting of a variety of computing machines with different capabilities. We propose a coding framework for speeding up distributed computing in heterogeneous clusters by trading redundancy for reducing the latency of computation. In particular, we propose Heterogeneous Coded Matrix Multiplication (HCMM) algorithm for performing distributed matrix multiplication over heterogeneous clusters that is provably asymptotically optimal. Moreover, we show that HCMM is unboundedly faster than uncoded schemes. We also provide numerical results demonstrating significant speedups of up to 90 and 35 for HCMM in comparison to the \"uncoded\" and \"coded homogeneous\" schemes, respectively. Furthermore, we carry out real experiments over Amazon EC2 clusters, where HCMM is found to be up to 17 faster than the uncoded scheme. In our worst case experiments with artificial stragglers, HCMM provides speedups of up to 12x over the uncoded scheme. Furthermore, we provide a generalization of the problem of optimal load allocation for heterogeneous clusters to scenarios with budget constraints. In the end, we discuss about the decoding complexity and describe how LDPC codes can be combined with HCMM in order to control the complexity of decoding.", "Distributed machine learning algorithms that are widely run on modern large-scale computing platforms face several types of randomness, uncertainty and system “noise.” These include stragglers1, system failures, maintenance outages, and communication bottlenecks. In this work, we view distributed machine learning algorithms through a coding-theoretic lens, and show how codes can equip them with robustness against this system noise. Motivated by their importance and universality, we focus on two of the most basic building blocks of distributed learning algorithms: data shuffling and matrix multiplication. In data shuffling, we use codes to reduce communication bottlenecks: when a constant fraction of the data can be cached at each worker node, and n is the number of workers, coded shuffling reduces the communication cost by up to a factor Θ(n) over uncoded shuffling. For matrix multiplication, we use codes to alleviate the effects of stragglers, also known as the straggler problem. 
We show that if the number of workers is n, and the runtime of each subtask has an exponential tail, the optimal coded matrix multiplication is Θ(log n) times faster than the uncoded matrix multiplication or the optimal task replication scheme.", "", "Consider a distributed computing setup consisting of a master node and n worker nodes, each equipped with p cores, and a function f (x) = g(f 1 (x), f 2 (x),…, fk(x)), where each f i can be computed independently of the rest. Assuming that the worker computational times have exponential tails, what is the minimum possible time for computing f? Can we use coding theory principles to speed up this distributed computation? In [1], it is shown that distributed computing of linear functions can be expedited by applying linear erasure codes. However, it is not clear if linear codes can speed up distributed computation of ‘nonlinear’ functions as well. To resolve this problem, we propose the use of sparse linear codes, exploiting the modern multicore processing architecture. We show that 1) our coding solution achieves the order optimal runtime, and 2) it is at least Θ(√log n) times faster than any uncoded schemes where the number of workers is n." ] }
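The erasure-coding idea running through these works can be shown in a few lines: encode the row blocks of A with an (n, k) linear code, let n workers compute coded partial products, and decode A x from the first k responses while ignoring the n - k stragglers. A toy NumPy sketch, using a random generator matrix rather than a structured MDS code (any k of its rows are invertible with probability one):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, k, n = 12, 6, 3, 5         # A is m x d, split into k blocks, n workers
A = rng.normal(size=(m, d))
x = rng.normal(size=d)
blocks = np.split(A, k)           # k row blocks of A, each (m // k) x d

# Encode: worker i stores a random linear combination of the blocks.
G = rng.normal(size=(n, k))       # generator matrix of the code
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Each worker computes its coded partial product (in parallel in reality).
results = np.stack([C @ x for C in coded])     # shape (n, m // k)

# Decode from the k fastest workers; the two stragglers are never needed.
fastest = [0, 2, 4]                            # indices that returned first
Y = np.linalg.solve(G[fastest], results[fastest])  # rows are blocks of A @ x
assert np.allclose(Y.reshape(-1), A @ x)
```

With an uncoded split (G = identity, n = k), the run-time is set by the slowest worker; with coding it is set by the k-th fastest, which is the source of the logarithmic speedups quoted above.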
1707.00040
2733640026
Large-scale distributed computing systems face two major bottlenecks that limit their scalability: straggler delay caused by the variability of computation times at different worker nodes and communication bottlenecks caused by shuffling data across many nodes in the network. Recently, it has been shown that codes can provide significant gains in overcoming these bottlenecks. In particular, optimal coding schemes for minimizing latency in distributed computation of linear functions and mitigating the effect of stragglers was proposed for a wired network, where the workers can simultaneously transmit messages to a master node without interference. In this paper, we focus on the problem of coded computation over a wireless master-worker setup with straggling workers, where only one worker can transmit the result of its local computation back to the master at a time. We consider 3 asymptotic regimes (determined by how the communication and computation times are scaled with the number of workers) and precisely characterize the total run-time of the distributed algorithm and optimum coding strategy in each regime. In particular, for the regime of practical interest where the computation and communication times of the distributed computing algorithm are comparable, we show that the total run-time approaches a simple lower bound that decouples computation and communication, and demonstrate that coded schemes are @math times faster than uncoded schemes.
The use of codes for minimizing bandwidth in distributed computation was introduced in @cite_14 @cite_4 , and for coded data shuffling in distributed machine learning algorithms in @cite_10 . In @cite_12 , the authors propose a scalable framework for minimizing the communication bandwidth in wireless distributed computing, without considering straggler delay. A unified coded framework for (wired) distributed computing with straggling servers is proposed in @cite_15 , by introducing a tradeoff between the latency of computation and the load of communication for some linear computation tasks. In related work on coded data shuffling, @cite_3 studies the information-theoretic limits of data shuffling in distributed learning. A pliable index coding approach is proposed in @cite_11 for data shuffling.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_3", "@cite_15", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "", "2412574059", "2522506459", "2509492081", "2963290814", "2962825369", "2580172391" ], "abstract": [ "", "How can we optimally trade extra computing power to reduce the communication load in distributed computing? We answer this question by characterizing a fundamental tradeoff between computation and communication in distributed computing, i.e., the two are inversely proportional to each other. More specifically, a general distributed computing framework, motivated by commonly used structures like MapReduce, is considered, where the overall computation is decomposed into computing a set of \"Map\" and \"Reduce\" functions distributedly across multiple computing nodes. A coded scheme, named \"Coded Distributed Computing\" (CDC), is proposed to demonstrate that increasing the computation load of the Map functions by a factor of @math (i.e., evaluating each function at @math carefully chosen nodes) can create novel coding opportunities that reduce the communication load by the same factor. An information-theoretic lower bound on the communication load is also provided, which matches the communication load achieved by the CDC scheme. As a result, the optimal computation-communication tradeoff in distributed computing is exactly characterized. Finally, the coding techniques of CDC is applied to the Hadoop TeraSort benchmark to develop a novel CodedTeraSort algorithm, which is empirically demonstrated to speed up the overall job execution by @math - @math , for typical settings of interest.", "Data shuffling is one of the fundamental building blocks for distributed learning algorithms, that increases the statistical gain for each step of the learning process. In each iteration, different shuffled data points are assigned by a central node to a distributed set of workers to perform local computations, which leads to communication bottlenecks. The focus of this paper is on formalizing and understanding the fundamental information-theoretic trade-off between storage (per worker) and the worst-case communication overhead for the data shuffling problem. We completely characterize the information theoretic trade-off for @math , and @math workers, for any value of storage capacity, and show that increasing the storage across workers can reduce the communication overhead by leveraging coding. We propose a novel and systematic data delivery and storage update strategy for each data shuffle iteration, which preserves the structural properties of the storage across the workers, and aids in minimizing the communication overhead in subsequent data shuffling iterations.", "We propose a unified coded framework for distributed computing with straggling servers, by introducing a tradeoff between \"latency of computation\" and \"load of communication\" for some linear computation tasks. We show that the coded scheme of [1]-[3] that repeats the intermediate computations to create coded multicasting opportunities to reduce communication load, and the coded scheme of [4], [5] that generates redundant intermediate computations to combat against straggling servers can be viewed as special instances of the proposed framework, by considering two extremes of this tradeoff: minimizing either the load of communication or the latency of computation individually. 
Furthermore, the latency-load tradeoff achieved by the proposed coded framework allows to systematically operate at any point on that tradeoff to perform distributed computing tasks. We also prove an information-theoretic lower bound on the latency-load tradeoff, which is shown to be within a constant multiplicative gap from the achieved tradeoff at the two end points.", "Distributed machine learning algorithms that are widely run on modern large-scale computing platforms face several types of randomness, uncertainty and system “noise.” These include stragglers1, system failures, maintenance outages, and communication bottlenecks. In this work, we view distributed machine learning algorithms through a coding-theoretic lens, and show how codes can equip them with robustness against this system noise. Motivated by their importance and universality, we focus on two of the most basic building blocks of distributed learning algorithms: data shuffling and matrix multiplication. In data shuffling, we use codes to reduce communication bottlenecks: when a constant fraction of the data can be cached at each worker node, and n is the number of workers, coded shuffling reduces the communication cost by up to a factor Θ(n) over uncoded shuffling. For matrix multiplication, we use codes to alleviate the effects of stragglers, also known as the straggler problem. We show that if the number of workers is n, and the runtime of each subtask has an exponential tail, the optimal coded matrix multiplication is Θ(log n) times faster than the uncoded matrix multiplication or the optimal task replication scheme.", "We consider a wireless distributed computing system, in which multiple mobile users, connected wirelessly through an access point, collaborate to perform a computation task. In particular, users communicate with each other via the access point to exchange their locally computed intermediate computation results, which is known as data shuffling . We propose a scalable framework for this system, in which the required communication bandwidth for data shuffling does not increase with the number of users in the network. The key idea is to utilize a particular repetitive pattern of placing the data set (thus a particular repetitive pattern of intermediate computations), in order to provide the coding opportunities at both the users and the access point, which reduce the required uplink communication bandwidth from users to the access point and the downlink communication bandwidth from access point to users by factors that grow linearly with the number of users. We also demonstrate that the proposed data set placement and coded shuffling schemes are optimal (i.e., achieve the minimum required shuffling load) for both a centralized setting and a decentralized setting, by developing tight information-theoretic lower bounds.", "A promising research area that has recently emerged, is on how to use index coding to improve the communication efficiency in distributed computing systems, especially for data shuffling in iterative computations. In this paper, we posit that pliable index coding can offer a more efficient framework for data shuffling, as it can better leverage the many possible shuffling choices to reduce the number of transmissions. We theoretically analyze pliable index coding under data shuffling constraints, and design a hierarchical data-shuffling scheme that uses pliable coding as a component. 
We find benefits up to @math over index coding, where @math is the average number of workers caching a message, and @math , @math , and @math are the numbers of messages, workers, and cache size, respectively." ] }
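The bandwidth saving behind coded shuffling comes from multicasting coded packets that several workers can each decode using their local caches. A minimal two-worker XOR example with toy data: one broadcast replaces two unicasts.

```python
import numpy as np

# Two workers, two equal-length data blocks; each worker caches the
# block the other one needs for the next shuffle.
b1 = np.frombuffer(b"block-one!", dtype=np.uint8)
b2 = np.frombuffer(b"block-two!", dtype=np.uint8)

# Uncoded shuffling: the master unicasts b1 and b2 (two transmissions).
# Coded shuffling: a single multicast of the XOR serves both workers.
coded = b1 ^ b2

# Worker 1 holds b2 in its cache and recovers b1; worker 2 symmetrically.
assert bytes(coded ^ b2) == b"block-one!"
assert bytes(coded ^ b1) == b"block-two!"
```

With n workers and suitable cache placement the same trick serves many workers per transmission, which is the source of the Θ(n) gains cited above.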
1706.09985
2727990140
A rising topic in computational journalism is how to enhance the diversity in news served to subscribers to foster exploration behavior in news reading. Despite the success of preference learning in personalized news recommendation, their over-exploitation causes filter bubble that isolates readers from opposing viewpoints and hurts long-term user experiences with lack of serendipity. Since news providers can recommend neither opposite nor diversified opinions if unpopularity of these articles is surely predicted, they can only bet on the articles whose forecasts of click-through rate involve high variability (risks) or high estimation errors (uncertainties). We propose a novel Bayesian model of uncertainty-aware scoring and ranking for news articles. The Bayesian binary classifier models probability of success (defined as a news click) as a Beta-distributed random variable conditional on a vector of the context (user features, article features, and other contextual features). The posterior of the contextual coefficients can be computed efficiently using a low-rank version of Laplace's method via thin Singular Value Decomposition. Efficiencies in personalized targeting of exceptional articles, which are chosen by each subscriber in test period, are evaluated on real-world news datasets. The proposed estimator slightly outperformed existing training and scoring algorithms, in terms of efficiency in identifying successful outliers.
Beta-binomial-logit models have been used for robust classification, although their over-dispersion has been supplied not by a regression formula but by a scalar hyperparameter (e.g., @cite_31 ). Contextual-risk models have been used in regression tasks, such as Gaussian Process (GP) regression with input-dependent variances @cite_6 , but have been uncommon in classification tasks. Our derivation of the custom Laplace approximation and marginal likelihood is based on the techniques in GP classification @cite_38 , for which Expectation Propagation (EP; @cite_16 ) is also applicable, although we avoid EP's rather complicated formulas. Nonparametric conditional density estimation (e.g., @cite_23 @cite_7 ) naturally introduces input-dependent noise, but simpler forms are preferable for our task. @math -logistic regression @cite_34 is another robust classifier, and Bayesian estimation of robust classifiers produces robust credible intervals (e.g., @cite_36 ), but their risks do not depend on inputs. Overall, to the best of our knowledge, we provide the most parsimonious Bayesian classifier suited to recommending outlying items based on input-dependent variabilities and uncertainties.
{ "cite_N": [ "@cite_38", "@cite_7", "@cite_36", "@cite_6", "@cite_34", "@cite_23", "@cite_31", "@cite_16" ], "mid": [ "", "2607444295", "1995219511", "2170078560", "", "2154036191", "2221087461", "1934021597" ], "abstract": [ "", "", "Summary In this paper we propose, survey and compare some classes of probability densities that may be used to represent partial prior information, to model either prior ignorance or Bayesian sensitivity analysis. We distinguish two types of models appropriate for two different situations: near ignorance models which are suitable in problems where there is little prior information, and neighbourhood models, which can be used to 'robustify' a strict Bayesian analysis in problems where there is substantial prior information about location. We argue that, especially for the first situation, a reasonable class of prior densities is not the same as a class of reasonable prior densities. We discuss various desiderata for a 'reasonable' class, including coherence and sensible dependence of inferences on sample size. The translation invariant models studied here are classes of conjugate priors, classes of double exponential densities and a neighbourhood of the uniform prior. Of the neighbourhood models we examine examples of E-contamination neighbourhoods (previously studied by Huber, Berger and Berliner) and intervals of measures (DeRobertis and Hartigan). We illustrate the models in the simple problem of constructing credible intervals for an unknown normal mean. Of the models studied in detail, a translation-invariant class of double exponential priors is favoured for modelling little prior information, and a type of interval of measures seems most suitable for robust Bayesian analysis.", "Gaussian processes provide natural non-parametric prior distributions over regression functions. In this paper we consider regression problems where there is noise on the output, and the variance of the noise depends on the inputs. If we assume that the noise is a smooth function of the inputs, then it is natural to model the noise variance using a second Gaussian process, in addition to the Gaussian process governing the noise-free output value. We show that prior uncertainty about the parameters controlling both processes can be handled and that the posterior distribution of the noise rate can be sampled from using Markov chain Monte Carlo methods. Our results on a synthetic data set give a posterior noise variance that well-approximates the true variance.", "", "We introduce a new nonlinear model for classification, in which we model the joint distribution of response variable, y, and covariates, x, non-parametrically using Dirichlet process mixtures. We keep the relationship between y and x linear within each component of the mixture. The overall relationship becomes nonlinear if the mixture contains more than one component, with different regression coefficients. We use simulated data to compare the performance of this new approach to alternative methods such as multinomial logit (MNL) models, decision trees, and support vector machines. We also evaluate our approach on two classification problems: identifying the folding class of protein sequences and detecting Parkinson's disease. Our model can sometimes improve predictive accuracy. 
Moreover, by grouping observations into sub-populations (i.e., mixture components), our model can sometimes provide insight into hidden structure in the data.", "A Beta-Binomial-Logit model is a Beta-Binomial model with covariate information incorporated via a logistic regression. Posterior propriety of a Bayesian Beta-Binomial-Logit model can be data-dependent for improper hyper-prior distributions. Various researchers in the literature have unknowingly used improper posterior distributions or have given incorrect statements about posterior propriety because checking posterior propriety can be challenging due to the complicated functional form of a Beta-Binomial-Logit model. We derive data-dependent necessary and sufficient conditions for posterior propriety within a class of hyper-prior distributions that encompass those used in previous studies.", "This paper presents a new deterministic approximation technique in Bayesian networks. This method, \"Expectation Propagation,\" unifies two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. Loopy belief propagation, because it propagates exact belief states, is useful for a limited class of belief networks, such as those which are purely discrete. Expectation Propagation approximates the belief states by only retaining expectations, such as mean and varitmce, and iterates until these expectations are consistent throughout the network. This makes it applicable to hybrid networks with discrete and continuous nodes. Experiments with Gaussian mixture models show Expectation Propagation to be donvincingly better than methods with similar computational cost: Laplace's method, variational Bayes, and Monte Carlo. Expectation Propagation also provides an efficient algorithm for training Bayes point machine classifiers." ] }
1706.09985
2727990140
A rising topic in computational journalism is how to enhance the diversity in news served to subscribers to foster exploration behavior in news reading. Despite the success of preference learning in personalized news recommendation, their over-exploitation causes filter bubble that isolates readers from opposing viewpoints and hurts long-term user experiences with lack of serendipity. Since news providers can recommend neither opposite nor diversified opinions if unpopularity of these articles is surely predicted, they can only bet on the articles whose forecasts of click-through rate involve high variability (risks) or high estimation errors (uncertainties). We propose a novel Bayesian model of uncertainty-aware scoring and ranking for news articles. The Bayesian binary classifier models probability of success (defined as a news click) as a Beta-distributed random variable conditional on a vector of the context (user features, article features, and other contextual features). The posterior of the contextual coefficients can be computed efficiently using a low-rank version of Laplace's method via thin Singular Value Decomposition. Efficiencies in personalized targeting of exceptional articles, which are chosen by each subscriber in test period, are evaluated on real-world news datasets. The proposed estimator slightly outperformed existing training and scoring algorithms, in terms of efficiency in identifying successful outliers.
Humans exhibit systematically predictable behaviors in the face of uncertainty. One reliable observation is that the desire to avoid monetary loss is a strong incentive for exploration (e.g., the win-stay lose-shift algorithm @cite_35 , prospect theory @cite_33 , loss aversion @cite_5 , and regulatory focus theory @cite_24 @cite_37 ), and monetary compensation works as an incentive @cite_4 . In online news services, however, users do not lose money when they read a narrow range of articles, and even at the mental level it is unclear whether reading only one-sided opinions causes users any pain. Another observation is that diminishing returns for the same type of stimulus naturally lead to exploration (e.g., @cite_11 , variety-seeking behavior in marketing @cite_18 @cite_41 ) so as to maintain the Optimum Stimulation Level (OSL) @cite_20 . Diminishing returns have already been exploited in diversified recommendation (e.g., linear submodular bandits @cite_2 ), but we already discussed the insufficient power of diversification for surely unpopular articles.
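The diminishing-return mechanism can be illustrated with a small generic sketch (not the algorithm of any cited work) of greedy selection under a probabilistic-coverage utility, the kind of submodular objective behind diversified recommendation:

import numpy as np

def greedy_diverse(topic_probs, k):
    # topic_probs[i, t]: probability that article i covers topic t.
    # The utility f(S) = sum_t (1 - prod_{i in S} (1 - p[i, t])) is
    # submodular: extra articles on an already-covered topic add less
    # and less, so greedy choices drift toward unseen topics.
    uncovered = np.ones(topic_probs.shape[1])
    chosen = []
    for _ in range(k):
        gains = topic_probs @ uncovered       # marginal gain per article
        gains[chosen] = -np.inf               # never pick an article twice
        best = int(np.argmax(gains))
        chosen.append(best)
        uncovered *= 1.0 - topic_probs[best]  # coverage saturates
    return chosen

The per-step gains are non-increasing as the selection grows, which is the formal sense of diminishing returns.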
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_18", "@cite_4", "@cite_33", "@cite_41", "@cite_24", "@cite_2", "@cite_5", "@cite_20", "@cite_11" ], "mid": [ "1998498767", "", "2047080367", "", "2133469585", "", "2109844556", "", "2157156222", "2097609137", "1984953787" ], "abstract": [ "Until recently, statistical theory has been restricted to the design and analysis of sampling experiments in which the size and composition of the samples are completely determined before the experimentation begins. The reasons for this are partly historical, dating back to the time when the statistician was consulted, if at all, only after the experiment was over, and partly intrinsic in the mathematical difficulty of working with anything but a fixed number of independent random variables. A major advance now appears to be in the making with the creation of a theory of the sequential design of experiments, in which the size and composition of the samples are not fixed in advance but are functions of the observations themselves.", "", "The way in which consumers seek variety has been shown to follow certain patterns. These patterns are a function of how much variety individuals look for and how consistent they are in this search for variety. If such patterns do indeed, represent consumption characteristics, can they be used to segment individuals into clusters? Moreover, can each segment be then characterised by a demographic profile? These two issues are critically tied together. The formation of homogenous segments is of significant concern to a manager looking for effective ways to target consumers. Nevertheless, unless this segment is identifiable and accessible by a manager in a practical and implementable way, the strategy is of little practical use. These, then are the issues we address in this paper. We show that consumers can indeed be segmented broadly into general patterns based on variety-seeking behaviour, and then identified by certain characteristics allowing for a tailored targeting strategy.", "", "This paper presents a critique of expected utility theory as a descriptive model of decision making under risk, and develops an alternative model, called prospect theory. Choices among risky prospects exhibit several pervasive effects that are inconsistent with the basic tenets of utility theory. In particular, people underweight outcomes that are merely probable in comparison with outcomes that are obtained with certainty. This tendency, called the certainty effect, contributes to risk aversion in choices involving sure gains and to risk seeking in choices involving sure losses. In addition, people generally discard components that are shared by all prospects under consideration. This tendency, called the isolation effect, leads to inconsistent preferences when the same choice is presented in different forms. An alternative theory of choice is developed, in which value is assigned to gains and losses rather than to final assets and in which probabilities are replaced by decision weights. The value function is normally concave for gains, commonly convex for losses, and is generally steeper for losses than for gains. Decision weights are generally lower than the corresponding probabilities, except in the range of low prob- abilities. Overweighting of low probabilities may contribute to the attractiveness of both insurance and gambling. EXPECTED UTILITY THEORY has dominated the analysis of decision making under risk. 
It has been generally accepted as a normative model of rational choice (24), and widely applied as a descriptive model of economic behavior, e.g. (15, 4). Thus, it is assumed that all reasonable people would wish to obey the axioms of the theory (47, 36), and that most people actually do, most of the time. The present paper describes several classes of choice problems in which preferences systematically violate the axioms of expected utility theory. In the light of these observations we argue that utility theory, as it is commonly interpreted and applied, is not an adequate descriptive model and we propose an alternative account of choice under risk. 2. CRITIQUE", "", "The classic answer to what makes a decision good concerns outcomes. A good decision has high outcome benefits (it is worthwhile) and low outcome costs (it is worth it). I propose that, independent of outcomes or value from worth, people experience a regulatory fit when they use goal pursuit means th...", "", "The roles of loss aversion and inhibition among alternatives are examined in models of the similarity, compromise, and attraction effects that arise in choices among 3 alternatives differing on 2 attributes. R. M. Roe, J. R. Busemeyer, and J. T. Townsend (2001) have proposed a linear model in which effects previously attributed to loss aversion (A. Tversky & D. Kahneman, 1991) arise from attention switching between attributes and similarity-dependent inhibitory interactions among alternatives. However, there are several reasons to maintain loss aversion in a theory of choice. In view of this, an alternative theory is proposed, integrating loss aversion and attention switching into a nonlinear model (M. Usher & J. L. McClelland, 2001) that relies on inhibition independent of similarity among alternatives. The model accounts for the 3 effects and makes testable predictions contrasting with those of the (2001) model.", "Two studies are reported that examine the relationships between optimum stimulation level (OSL), selected personality traits, demographic variables, and exploratory behavior in the consumer context. The results show several significant correlations between OSL and the other variables examined. Research and managerial implications of the results are outlined.", "Abstract Studies have concluded that cost of search and prior knowledge are two major influences on search. What is not known is whether the effect of search cost is the same for consumers of differing knowledge levels, particularly when consumers must wait to retrieve information. This paper studies the impact on search of different types of search cost: cognitive search cost, operationalized using prior category knowledge; and external search cost, operationalized using waiting times to obtain information. We focus on the prior knowledge × waiting time interaction effect on search in a computer search environment. We find that knowledge facilitates search, but only in low waiting time conditions. High knowledge consumers augment their search with more complex and cognitively demanding sources and patterns of information acquisition. But the search of low knowledge consumers remains largely unaffected. Implications of the study's findings are discussed. PsycINFO classification : 3900; 3920; 3940; 2320;" ] }
1706.10172
2731383018
In this paper we investigate the behavioural differences between mobile phone customers with prepaid and postpaid subscriptions. Our study reveals that (a) postpaid customers are more active in terms of service usage and (b) there are strong structural correlations in the mobile phone call network as connections between customers of the same subscription type are much more frequent than those between customers of different subscription types. Based on these observations we provide methods to detect the subscription type of customers by using information about their personal call statistics, and also their egocentric networks simultaneously. The key of our first approach is to cast this classification problem as a problem of graph labelling, which can be solved by max-flow min-cut algorithms. Our experiments show that, by using both user attributes and relationships, the proposed graph labelling approach is able to achieve a classification accuracy of @math , which outperforms by @math supervised learning methods using only user attributes. In our second problem we aim to infer the subscription type of customers of external operators. We propose via approximate methods to solve this problem by using node attributes, and a two-ways indirect inference method based on observed homophilic structural correlations. Our results have straightforward applications in behavioural prediction and personal marketing.
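The graph-labelling idea in the abstract above reduces to a standard s-t minimum-cut construction; the sketch below is a generic textbook version (using networkx, with illustrative inputs node_cost and penalty), not the paper's implementation:

import networkx as nx

def min_cut_label(node_cost, social_edges, penalty):
    # node_cost[v] = (c0, c1): cost of giving node v label 0 or 1, e.g.
    # negative log-probabilities from a per-user classifier; penalty is
    # the price of separating two linked users (homophily smoothing).
    # Assumes user ids do not collide with the terminals "s" and "t".
    g = nx.DiGraph()
    for v, (c0, c1) in node_cost.items():
        g.add_edge("s", v, capacity=c1)  # cut iff v lands on the label-1 side
        g.add_edge(v, "t", capacity=c0)  # cut iff v lands on the label-0 side
    for u, v in social_edges:
        g.add_edge(u, v, capacity=penalty)
        g.add_edge(v, u, capacity=penalty)
    _, (side0, side1) = nx.minimum_cut(g, "s", "t")
    labels = {v: 0 for v in side0 if v != "s"}
    labels.update({v: 1 for v in side1 if v != "t"})
    return labels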
Our work is motivated by the work of @cite_7 in which network communities are identified by using both user attributes and the structure of the social network. However, one important distinction of the present work is that in @cite_7 the authors aimed to cluster nodes with similar attributes into an a priori unknown number of communities, whereas we aim to find discriminating attributes of nodes and assign them to two pre-defined communities. In addition, our approach is based on graph labelling and solved by simple graph algorithms, whereas @cite_7 is based on a generative model optimised by block-coordinate descent. Other related works on community detection with user attributes include @cite_10 @cite_1 @cite_17 . However, all of them focus on clustering nodes in a network into a priori unknown partitions or communities, while none of them address the classification of nodes into pre-defined classes as we do here. A related classification problem which has been well studied in the literature is the inference of user demographics such as nationality, gender and age from social and mobile call networks @cite_8 @cite_14 @cite_6 @cite_12 @cite_13 . While interesting, most of the previous work exploited only node attributes, some of which are computed from local connections.
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_7", "@cite_8", "@cite_1", "@cite_6", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "", "2394682549", "2012921801", "2065130322", "2953197232", "2292672935", "1987498565", "", "2111315907" ], "abstract": [ "", "Social media platforms have become a major gateway to receive and analyze public opinions. Understanding users can provide invaluable context information of their social media posts and significantly improve traditional opinion analysis models. Demographic attributes, such as ethnicity, gender, age, among others, have been extensively applied to characterize social media users. While studies have shown that user groups formed by demographic attributes can have coherent opinions towards political issues, these attributes are often not explicitly coded by users through their profiles. Previous work has demonstrated the effectiveness of different user signals such as users’ posts and names in determining demographic attributes. Yet, these efforts mostly evaluate linguistic signals from users’ posts and train models from artificially balanced datasets. In this paper, we propose a comprehensive list of user signals: self-descriptions and posts aggregated from users’ friends and followers, users’ profile images, and users’ names. We provide a comparative study of these signals side-by-side in the tasks on inferring three major demographic attributes, namely ethnicity, gender, and age. We utilize a realistic unbalanced datasets that share similar demographic makeups in Twitter for training models and evaluation experiments. Our experiments indicate that self-descriptions provide the strongest signal for ethnicity and age inference and clearly improve the overall performance when combined with tweets. Profile images for gender inference have the highest precision score with overall score close to the best result in our setting. This suggests that signals in self-descriptions and profile images have potentials to facilitate demographic attribute inferences in Twitter, and are promising for future investigation.", "Community detection algorithms are fundamental tools that allow us to uncover organizational principles in networks. When detecting communities, there are two possible sources of information one can use: the network structure, and the features and attributes of nodes. Even though communities form around nodes that have common edges and common attributes, typically, algorithms have only focused on one of these two data modalities: community detection algorithms traditionally focus only on the network structure, while clustering algorithms mostly consider only node attributes. In this paper, we develop Communities from Edge Structure and Node Attributes (CESNA), an accurate and scalable algorithm for detecting overlapping communities in networks with node attributes. CESNA statistically models the interaction between the network structure and the node attributes, which leads to more accurate community detection as well as improved robustness in the presence of noise in the network structure. CESNA has a linear runtime in the network size and is able to process networks an order of magnitude larger than comparable approaches. Last, CESNA also helps with the interpretation of detected communities by finding relevant node attributes for each community.", "Demographics are widely used in marketing to characterize different types of customers. 
However, in practice, demographic information such as age, gender, and location is usually unavailable due to privacy and other reasons. In this paper, we aim to harness the power of big data to automatically infer users' demographics based on their daily mobile communication patterns. Our study is based on a real-world large mobile network of more than 7,000,000 users and over 1,000,000,000 communication records (CALL and SMS). We discover several interesting social strategies that mobile users frequently use to maintain their social connections. First, young people are very active in broadening their social circles, while seniors tend to keep close but more stable connections. Second, female users put more attention on cross-generation interactions than male users, though interactions between male and female users are frequent. Third, a persistent same-gender triadic pattern over one's lifetime is discovered for the first time, while more complex opposite-gender triadic patterns are only exhibited among young people. We further study to what extent users' demographics can be inferred from their mobile communications. As a special case, we formalize a problem of double dependent-variable prediction-inferring user gender and age simultaneously. We propose the WhoAmI method, a Double Dependent-Variable Factor Graph Model, to address this problem by considering not only the effects of features on gender age, but also the interrelation between gender and age. Our experiments show that the proposed WhoAmI method significantly improves the prediction accuracy by up to 10 compared with several alternative methods.", "With the rapid development of online social media, online shopping sites and cyber-physical systems, heterogeneous information networks have become increasingly popular and content-rich over time. In many cases, such networks contain multiple types of objects and links, as well as different kinds of attributes. The clustering of these objects can provide useful insights in many applications. However, the clustering of such networks can be challenging since (a) the attribute values of objects are often incomplete, which implies that an object may carry only partial attributes or even no attributes to correctly label itself; and (b) the links of different types may carry different kinds of semantic meanings, and it is a difficult task to determine the nature of their relative importance in helping the clustering for a given purpose. In this paper, we address these challenges by proposing a model-based clustering algorithm. We design a probabilistic model which clusters the objects of different types into a common hidden space, by using a user-specified set of attributes, as well as the links from different relations. The strengths of different types of links are automatically learned, and are determined by the given purpose of clustering. An iterative algorithm is designed for solving the clustering problem, in which the strengths of different types of links and the quality of clustering results mutually enhance each other. Our experimental results on real and synthetic data sets demonstrate the effectiveness and efficiency of the algorithm.", "Understanding the demographics of app users is crucial, for example, for app developers, who wish to target their advertisements more effectively. Our work addresses this need by studying the predictability of user demographics based on the list of a user's apps which is readily available to many app developers. 
We extend previous work on the problem on three frontiers: (1) We predict new demographics (age, race, and income) and analyze the most informative apps for four demographic attributes included in our analysis. The most predictable attribute is gender (82.3 accuracy), whereas the hardest to predict is income (60.3 accuracy). (2) We compare several dimensionality reduction methods for high-dimensional app data, finding out that an unsupervised method yields superior results compared to aggregating the apps at the app category level, but the best results are obtained simply by the raw list of apps. (3) We look into the effect of the training set size and the number of apps on the predictability and show that both of these factors have a large impact on the prediction accuracy. The predictability increases, or in other words, a user's privacy decreases, the more apps the user has used, but somewhat surprisingly, after 100 apps, the prediction accuracy starts to decrease.", "We examine the problem of identifying social circles, or sets of cohesive and mutually aware nodes surrounding an initial query set, in directed graphs where the complete graph is not known beforehand. This problem differs from local community mining, in that the query set defines the circle of interest. We explicitly handle edge direction, as in many cases relationships are not symmetric, and focus on the local context because many real-world graphs cannot be feasibly known. We outline several issues that are unique to this context, introduce a quality function to measure the value of including a particular node in an emerging social circle, and describe a greedy social circle discovery algorithm. We demonstrate the effectiveness of this approach on artificial benchmarks, large networks with topical community labels, and several real-world case studies.", "", "Graph clustering, also known as community detection, is a long-standing problem in data mining. However, with the proliferation of rich attribute information available for objects in real-world graphs, how to leverage structural and attribute information for clustering attributed graphs becomes a new challenge. Most existing works take a distance-based approach. They proposed various distance measures to combine structural and attribute information. In this paper, we consider an alternative view and propose a model-based approach to attributed graph clustering. We develop a Bayesian probabilistic model for attributed graphs. The model provides a principled and natural framework for capturing both structural and attribute aspects of a graph, while avoiding the artificial design of a distance measure. Clustering with the proposed model can be transformed into a probabilistic inference problem, for which we devise an efficient variational algorithm. Experimental results on large real-world datasets demonstrate that our method significantly outperforms the state-of-art distance-based attributed graph clustering method." ] }
1706.10172
2731383018
In this paper we investigate the behavioural differences between mobile phone customers with prepaid and postpaid subscriptions. Our study reveals that (a) postpaid customers are more active in terms of service usage and (b) there are strong structural correlations in the mobile phone call network as connections between customers of the same subscription type are much more frequent than those between customers of different subscription types. Based on these observations we provide methods to detect the subscription type of customers by using information about their personal call statistics, and also their egocentric networks simultaneously. The key of our first approach is to cast this classification problem as a problem of graph labelling, which can be solved by max-flow min-cut algorithms. Our experiments show that, by using both user attributes and relationships, the proposed graph labelling approach is able to achieve a classification accuracy of @math , which outperforms by @math supervised learning methods using only user attributes. In our second problem we aim to infer the subscription type of customers of external operators. We propose via approximate methods to solve this problem by using node attributes, and a two-ways indirect inference method based on observed homophilic structural correlations. Our results have straightforward applications in behavioural prediction and personal marketing.
Knowledge transfer from one network to another is also a hot topic in the fields of machine learning and social network analysis. Most of these works focused on the inference of the structure, i.e. link prediction, using information from multiple networks. For example, the authors of @cite_4 inferred the friendship network from mobile call data, while @cite_5 and @cite_2 predicted links across heterogeneous networks, which may only be partially visible. In addition, @cite_3 studied the problem of relationship classification across networks, and @cite_0 inferred the anchor links between users in different social networks, i.e. the different accounts of the same user. In contrast to most previous work, our study on cross-network inference focuses on the classification of nodes in the targeted network using only the connections between the two networks.
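In @cite_0 , for instance, anchor-link inference is cast as a one-to-one stable matching between the account sets of two networks; a generic Gale-Shapley sketch (with hypothetical preference inputs, not the MNA model itself) looks as follows:

def stable_match(pref_a, pref_b):
    # pref_a[u]: accounts of network B ranked by u's matching score
    # (best first); pref_b[v] likewise ranks accounts of network A.
    # Assumes complete preference lists for brevity.
    rank = {v: {u: i for i, u in enumerate(p)} for v, p in pref_b.items()}
    nxt = {u: 0 for u in pref_a}   # next candidate index per proposer
    match = {}                     # v -> currently matched u
    free = list(pref_a)
    while free:
        u = free.pop()
        if nxt[u] == len(pref_a[u]):
            continue               # u ran out of candidates, stays unmatched
        v = pref_a[u][nxt[u]]
        nxt[u] += 1
        cur = match.get(v)
        if cur is None:
            match[v] = u           # v was free: tentatively accept u
        elif rank[v][u] < rank[v][cur]:
            match[v] = u           # v prefers u: swap and free the loser
            free.append(cur)
        else:
            free.append(u)         # v rejects u: u proposes again later
    return {u: v for v, u in match.items()}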
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_0", "@cite_2", "@cite_5" ], "mid": [ "", "2022867359", "2047532797", "2123235204", "2167467982" ], "abstract": [ "", "It is well known that different types of social ties have essentially different influence on people. However, users in online social networks rarely categorize their contacts into \"family\", \"colleagues\", or \"classmates\". While a bulk of research has focused on inferring particular types of relationships in a specific social network, few publications systematically study the generalization of the problem of inferring social ties over multiple heterogeneous networks. In this work, we develop a framework for classifying the type of social relationships by learning across heterogeneous networks. The framework incorporates social theories into a factor graph model, which effectively improves the accuracy of inferring the type of social relationships in a target network by borrowing knowledge from a different source network. Our empirical study on five different genres of networks validates the effectiveness of the proposed framework. For example, by leveraging information from a coauthor network with labeled advisor-advisee relationships, the proposed framework is able to obtain an F1-score of 90 (8-28 improvements over alternative methods) for inferring manager-subordinate relationships in an enterprise email network.", "Online social networks can often be represented as heterogeneous information networks containing abundant information about: who, where, when and what. Nowadays, people are usually involved in multiple social networks simultaneously. The multiple accounts of the same user in different networks are mostly isolated from each other without any connection between them. Discovering the correspondence of these accounts across multiple social networks is a crucial prerequisite for many interesting inter-network applications, such as link recommendation and community analysis using information from multiple networks. In this paper, we study the problem of anchor link prediction across multiple heterogeneous social networks, i.e., discovering the correspondence among different accounts of the same user. Unlike most prior work on link prediction and network alignment, we assume that the anchor links are one-to-one relationships (i.e., no two edges share a common endpoint) between the accounts in two social networks, and a small number of anchor links are known beforehand. We propose to extract heterogeneous features from multiple heterogeneous networks for anchor link prediction, including user's social, spatial, temporal and text information. Then we formulate the inference problem for anchor links as a stable matching problem between the two sets of user accounts in two different networks. An effective solution, MNA (Multi-Network Anchoring), is derived to infer anchor links w.r.t. the one-to-one constraint. Extensive experiments on two real-world heterogeneous social networks show that our MNA model consistently outperform other commonly-used baselines on anchor link prediction.", "The problem of link prediction has been studied extensively in literature. There are various versions of the link prediction problem link existence problem, link removal problem, predicting edge weights over time etc. In this paper we describe a new type of link prediction problem called the Inter-network link-prediction problem where the task is to predict links different networks. 
Thus given a set of nodes which participate in multiple networks the task is to determine if one can predict the edges that occur in one network by only using node attribute and edge information from other networks. We use insights from theories of evolution of social communication networks and the MTML framework to derive models which can be used to make link predictions across networks. For the experiments data from different of social networks from a Massively Multiplayer Online Role Playing Game (MMORPG) is used.", "Link prediction and recommendation is a fundamental problem in social network analysis. The key challenge of link prediction comes from the sparsity of networks due to the strong disproportion of links that they have potential to form to links that do form. Most previous work tries to solve the problem in single network, few research focus on capturing the general principles of link formation across heterogeneous networks. In this work, we give a formal definition of link recommendation across heterogeneous networks. Then we propose a ranking factor graph model (RFG) for predicting links in social networks, which effectively improves the predictive performance. Motivated by the intuition that people make friends in different networks with similar principles, we find several social patterns that are general across heterogeneous networks. With the general social patterns, we develop a transfer-based RFG model that combines them with network structure information. This model provides us insight into fundamental principles that drive the link formation and network evolution. Finally, we verify the predictive performance of the presented transfer model on 12 pairs of transfer cases. Our experimental results demonstrate that the transfer of general social patterns indeed help the prediction of links." ] }
1706.10188
2729460574
The influence maximization is the problem of finding a set of social network users, called influencers, that can trigger a large cascade of propagation. Influencers are very beneficial to make a marketing campaign goes viral through social networks for example. In this paper, we propose an influence measure that combines many influence indicators. Besides, we consider the reliability of each influence indicator and we present a distance-based process that allows to estimate the reliability of each indicator. The proposed measure is defined under the framework of the theory of belief functions. Furthermore, the reliability-based influence measure is used with an influence maximization model to select a set of users that are able to maximize the influence in the network. Finally, we present a set of experiments on a dataset collected from Twitter. These experiments show the performance of the proposed solution in detecting social influencers with good quality.
Influence maximization is a relatively new research problem. Its main purpose is to find a set of @math social users able to trigger a large cascade of propagation through the word-of-mouth effect. Since its introduction, many researchers have turned to this problem and several solutions have been proposed in the literature @cite_18 @cite_0 @cite_8 @cite_9 @cite_22 . In this section, we present some of these works.
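The canonical baseline among these solutions is the greedy hill-climbing strategy of @cite_18 , which repeatedly adds the node with the largest estimated marginal spread; a compact sketch under the independent cascade model (the propagation probability p and the Monte Carlo budget are illustrative parameters) is:

import random

def simulate_ic(graph, seeds, p):
    # One independent-cascade run; graph maps node -> list of neighbours.
    active, frontier = set(seeds), list(seeds)
    while frontier:
        new = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    new.append(v)
        frontier = new
    return len(active)

def greedy_influence_max(graph, k, p=0.1, runs=200):
    # Greedily add the node with the best Monte Carlo spread estimate
    # (total spread over runs; dividing by runs would not change the argmax).
    seeds = set()
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: sum(simulate_ic(graph, seeds | {v}, p)
                                     for _ in range(runs)))
        seeds.add(best)
    return seeds

Submodularity of the expected spread is what gives this greedy scheme its (1 - 1/e) approximation guarantee @cite_18 .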
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_8", "@cite_9", "@cite_0" ], "mid": [ "2061820396", "2478845108", "1991635064", "1992250165", "1654194294" ], "abstract": [ "Models for the processes by which ideas and influence propagate through a social network have been studied in a number of domains, including the diffusion of medical and technological innovations, the sudden and widespread adoption of various strategies in game-theoretic settings, and the effects of \"word of mouth\" in the promotion of new products. Recently, motivated by the design of viral marketing strategies, Domingos and Richardson posed a fundamental algorithmic problem for such social network processes: if we can try to convince a subset of individuals to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target?We consider this problem in several of the most widely studied models in social network analysis. The optimization problem of selecting the most influential nodes is NP-hard here, and we provide the first provable approximation guarantees for efficient algorithms. Using an analysis framework based on submodular functions, we show that a natural greedy strategy obtains a solution that is provably within 63 of optimal for several classes of models; our framework suggests a general approach for reasoning about the performance guarantees of algorithms for these types of influence problems in social networks.We also provide computational experiments on large collaboration networks, showing that in addition to their provable guarantees, our approximation algorithms significantly out-perform node-selection heuristics based on the well-studied notions of degree centrality and distance centrality from the field of social networks.", "In this paper, we propose a new data based model for influence maximization in online social networks. We use the theory of belief functions to overcome the data imperfection problem. Besides, the proposed model searches to detect influencer users that adopt a positive opinion about the product, the idea, etc, to be propagated. Moreover, we present some experiments to show the performance of our model.", "Influence maximization is the problem of finding a set of users in a social network, such that by targeting this set, one maximizes the expected spread of influence in the network. Most of the literature on this topic has focused exclusively on the social graph, overlooking historical data, i.e., traces of past action propagations. In this paper, we study influence maximization from a novel data-based perspective. In particular, we introduce a new model, which we call credit distribution, that directly leverages available propagation traces to learn how influence flows in the network and uses this to estimate expected influence spread. Our approach also learns the different levels of influence-ability of users, and it is time-aware in the sense that it takes the temporal nature of influence into account. We show that influence maximization under the credit distribution model is NP-hard and that the function that defines expected spread under our model is submodular. 
Based on these, we develop an approximation algorithm for solving the influence maximization problem that at once enjoys high accuracy compared to the standard approach, while being several orders of magnitude faster and more scalable.", "Identifying influential nodes that lead to faster and wider spreading in complex networks is of theoretical and practical significance. The degree centrality method is very simple but of little relevance. Global metrics such as betweenness centrality and closeness centrality can better identify influential nodes, but are incapable to be applied in large-scale networks due to the computational complexity. In order to design an effective ranking method, we proposed a semi-local centrality measure as a tradeoff between the low-relevant degree centrality and other time-consuming measures. We use the Susceptible–Infected–Recovered (SIR) model to evaluate the performance by using the spreading rate and the number of infected nodes. Simulations on four real networks show that our method can well identify influential nodes.", "We study the problem of maximizing the expected spread of an innovation or behavior within a social network, in the presence of “word-of-mouth” referral. Our work builds on the observation that individuals’ decisions to purchase a product or adopt an innovation are strongly influenced by recommendations from their friends and acquaintances. Understanding and leveraging this influence may thus lead to a much larger spread of the innovation than the traditional view of marketing to individuals in isolation. In this paper, we define a natural and general model of influence propagation that we term the decreasing cascade model, generalizing models used in the sociology and economics communities. In this model, as in related ones, a behavior spreads in a cascading fashion according to a probabilistic rule, beginning with a set of initially “active” nodes. We study the target set selection problem: we wish to choose a set of individuals to target for initial activation, such that the cascade beginning with this active set is as large as possible in expectation. We show that in the decreasing cascade model, a natural greedy algorithm is a 1-1 e-e approximation for selecting a target set of size k." ] }
1706.10200
2727550704
Network creation games investigate complex networks from a game-theoretic point of view. Based on the original model by [PODC'03] many variants have been introduced. However, almost all versions have the drawback that edges are treated uniformly, i.e. every edge has the same cost and that this common parameter heavily influences the outcomes and the analysis of these games. We propose and analyze simple and natural parameter-free network creation games with non-uniform edge cost. Our models are inspired by social networks where the cost of forming a link is proportional to the popularity of the targeted node. Besides results on the complexity of computing a best response and on various properties of the sequential versions, we show that the most general version of our model has constant Price of Anarchy. To the best of our knowledge, this is the first proof of a constant Price of Anarchy for any network creation game.
Removing the parameter @math by restricting the agents to edge swaps was proposed and analyzed in @cite_6 @cite_20 . The obtained results are similar: e.g., the best known upper bound on the PoA is @math , there cannot exist a potential function @cite_17 , and computing a best response is NP-hard. However, allowing only swaps leads to the unnatural effects that the number of edges cannot change and that the sequential version heavily depends on the initial network.
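For illustration, the best-response check in such swap games can be sketched as follows (a brute-force version over an undirected networkx graph that ignores edge ownership; in the actual games only edges owned by the agent may be swapped):

import networkx as nx

def improving_swap(g, v):
    # Return an edge swap (drop (v, w), add (v, w2)) that strictly
    # decreases v's sum of distances to all other nodes, if one exists.
    base = sum(nx.single_source_shortest_path_length(g, v).values())
    for w in list(g.neighbors(v)):
        for w2 in g.nodes:
            if w2 in (v, w) or g.has_edge(v, w2):
                continue
            h = g.copy()
            h.remove_edge(v, w)
            h.add_edge(v, w2)
            dist = nx.single_source_shortest_path_length(h, v)
            if len(dist) == h.number_of_nodes() and sum(dist.values()) < base:
                return (w, w2)   # stays connected and strictly improves
    return None

A network is in swap equilibrium when this check fails for every agent.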
{ "cite_N": [ "@cite_20", "@cite_6", "@cite_17" ], "mid": [ "2080089319", "161648061", "2161199320" ], "abstract": [ "We study a natural network creation game, in which each node locally tries to minimize its local diameter or its local average distance to other nodes by swapping one incident edge at a time. The central question is what structure the resulting equilibrium graphs have, in particular, how well they globally minimize diameter. For the local-average-distance version, we prove an upper bound of @math , a lower bound of 3, and a tight bound of exactly 2 for trees, and give evidence of a general polylogarithmic upper bound. For the local-diameter version, we prove a lower bound of @math and a tight upper bound of 3 for trees. The same bounds apply, up to constant factors, to the price of anarchy. Our network creation games are closely related to the previously studied unilateral network creation game. The main difference is that our model has no parameter @math for the link creation cost, so our results effectively apply for all values of @math without additional effort; furthe...", "We introduce and study the concept of an asymmetric swap-equilibrium for network creation games. A graph where every edge is owned by one of its endpoints is called to be in asymmetric swap-equilibrium, if no vertex v can delete its own edge v,w and add a new edge v,w′ and thereby decrease the sum of distances from v to all other vertices. This equilibrium concept generalizes and unifies some of the previous equilibrium concepts for network creation games. While the structure and the quality of equilibrium networks is still not fully understood, we provide further (partial) insights for this open problem. As the two main results, we show that (1) every asymmetric swap-equilibrium has at most one (non-trivial) 2-edge-connected component, and (2) we show a logarithmic upper bound on the diameter of an asymmetric swap-equilibrium for the case that the minimum degree of the unique 2-edge-connected component is at least ne, for @math . Due to the generalizing property of asymmetric swap equilibria, these results hold for several equilibrium concepts that were previously studied. Along the way, we introduce a node-weighted version of the network creation games, which is of independent interest for further studies of network creation games.", "We initiate the study of game dynamics in the Sum Basic Network Creation Game, which was recently introduced by [SPAA'10]. In this game players are associated to vertices in a graph and are allowed to \"swap\" edges, that is to remove an incident edge and insert a new incident edge. By performing such moves, every player tries to minimize her connection cost, which is the sum of distances to all other vertices. When played on a tree, we prove that this game admits an ordinal potential function, which implies guaranteed convergence to a pure Nash Equilibrium. We show a cubic upper bound on the number of steps needed for any improving response dynamic to converge to a stable tree and propose and analyse a best response dynamic, where the players having the highest cost are allowed to move. For this dynamic we show an almost tight linear upper bound for the convergence speed. Furthermore, we contrast these positive results by showing that, when played on general graphs, this game allows best response cycles. This implies that there cannot exist an ordinal potential function and that fundamentally different techniques are required for analysing this case. 
For computing a best response we show a similar contrast: On the one hand we give a linear-time algorithm for computing a best response on trees even if players are allowed to swap multiple edges at a time. On the other hand we prove that this task is NP-hard even on simple general graphs, if more than one edge can be swapped at a time. The latter addresses a proposal by ." ] }
1706.10200
2727550704
Network creation games investigate complex networks from a game-theoretic point of view. Based on the original model by [PODC'03] many variants have been introduced. However, almost all versions have the drawback that edges are treated uniformly, i.e. every edge has the same cost and that this common parameter heavily influences the outcomes and the analysis of these games. We propose and analyze simple and natural parameter-free network creation games with non-uniform edge cost. Our models are inspired by social networks where the cost of forming a link is proportional to the popularity of the targeted node. Besides results on the complexity of computing a best response and on various properties of the sequential versions, we show that the most general version of our model has constant Price of Anarchy. To the best of our knowledge, this is the first proof of a constant Price of Anarchy for any network creation game.
Several ways of augmenting the NCG with locality have been proposed and analyzed recently. It was shown that the PoA may deteriorate heavily if agents know only their local neighborhood or only a shortest-path tree of the network @cite_8 @cite_24 . In contrast, a global view combined with a restriction to local edge purchases yields only a moderate increase of the PoA @cite_12 .
{ "cite_N": [ "@cite_24", "@cite_12", "@cite_8" ], "mid": [ "79855918", "2949981324", "" ], "abstract": [ "Network creation games model the autonomous formation of an interconnected system of selfish users. In particular, when the network will serve as a digital communication infrastructure, each user is identified by a node of the network, and contributes to the build-up process by strategically balancing between her building cost (i.e., the number of links she personally activates in the network) and her usage cost (i.e., some function of the distance in the sought network to the other players). When the corresponding game is analyzed, the generally adopted assumption is that players have a common and complete information about the evolving network topology, which is quite unrealistic though, due to the massive size this may have in practice. In this paper, we thus relax this assumption, by instead letting the players have only a partial knowledge of the network. To this respect, we make use of three popular traceroute-based knowledge models used in network discovering (i.e., the activity of reconstructing the topology of an unknown network through queries at its nodes), namely: (i) distance vector, (ii) shortest-path tree view, and (iii) layered view. For all these models, we provide exhaustive answers to the canonical algorithmic game theoretic questions: convergence, computational complexity for a player of selecting a best response, and tight bounds to the price of anarchy, all of them computed w.r.t. a suitable (and unifying) equilibrium concept.", "We investigate a non-cooperative game-theoretic model for the formation of communication networks by selfish agents. Each agent aims for a central position at minimum cost for creating edges. In particular, the general model (, PODC'03) became popular for studying the structure of the Internet or social networks. Despite its significance, locality in this game was first studied only recently (Bil , SPAA'14), where a worst case locality model was presented, which came with a high efficiency loss in terms of quality of equilibria. Our main contribution is a new and more optimistic view on locality: agents are limited in their knowledge and actions to their local view ranges, but can probe different strategies and finally choose the best. We study the influence of our locality notion on the hardness of computing best responses, convergence to equilibria, and quality of equilibria. Moreover, we compare the strength of local versus non-local strategy-changes. Our results address the gap between the original model and the worst case locality variant. On the bright side, our efficiency results are in line with observations from the original model, yet we have a non-constant lower bound on the price of anarchy.", "" ] }
1706.09628
2727314799
Autonomous vehicles are slowly becoming reality thanks to the efforts of many academic and industrial organizations. Due to the complexity of the software powering these systems and the dynamicity of the development processes, an architectural solution capable of supporting long-term evolution and maintenance is required. Continuous Experimentation (CE) is an already increasingly adopted practice in software-intensive web-based software systems to steadily improve them over time. CE allows organizations to steer the development efforts by basing decisions on data collected about the system in its field of application. Despite the advantages of Continuous Experimentation, this practice is only rarely adopted in cyber-physical systems and in the automotive domain. Reasons for this include the strict safety constraints and the computational capabilities needed from the target systems. In this work, a concept for using Continuous Experimentation for resource-constrained platforms like a self-driving vehicle is outlined.
Several works in the literature focus on Continuous Experimentation. One of these is @cite_1 , which describes a CE model that takes into account the roles, tasks, infrastructure, and information artifacts involved in this practice. In that work, the authors developed and extended their model, validating it against the results of two empirical case studies conducted in startup companies.
{ "cite_N": [ "@cite_1" ], "mid": [ "2308158103" ], "abstract": [ "Abstract Context: Development of software-intensive products and services increasingly occurs by continuously deploying product or service increments, such as new features and enhancements, to customers. Product and service developers must continuously find out what customers want by direct customer feedback and usage behaviour observation. Objective: This paper examines the preconditions for setting up an experimentation system for continuous customer experiments. It describes the RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing), illustrating the building blocks required for such a system. Method: An initial model for continuous experimentation is analytically derived from prior work. The model is matched against empirical case study findings from two startup companies and further developed. Results: Building blocks for a continuous experimentation system and infrastructure are presented. Conclusions: A suitable experimentation system requires at least the ability to release minimum viable products or features with suitable instrumentation, design and manage experiment plans, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and the integration of experiment results in both the product development cycle and the software development process." ] }
1706.09628
2727314799
Autonomous vehicles are slowly becoming reality thanks to the efforts of many academic and industrial organizations. Due to the complexity of the software powering these systems and the dynamicity of the development processes, an architectural solution capable of supporting long-term evolution and maintenance is required. Continuous Experimentation (CE) is an already increasingly adopted practice in software-intensive web-based software systems to steadily improve them over time. CE allows organizations to steer the development efforts by basing decisions on data collected about the system in its field of application. Despite the advantages of Continuous Experimentation, this practice is only rarely adopted in cyber-physical systems and in the automotive domain. Reasons for this include the strict safety constraints and the computational capabilities needed from the target systems. In this work, a concept for using Continuous Experimentation for resource-constrained platforms like a self-driving vehicle is outlined.
Several articles related to CE report the advancements and characteristics of experimentation processes and platforms in industrial settings. An example of these works is @cite_8 , which describes the experimentation setting at Google Inc., where experiments involving independent factors are overlapped in order to improve the experimentation process and its execution. Further examples are @cite_3 , which describes Microsoft Bing's own solution for running over 200 experiments concurrently, and Amatriain @cite_10 , who outlines Netflix's approach to experimentation.
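The core mechanism behind such overlapping infrastructures is usually a layered, hash-based traffic split, so that each layer divides the same users independently of every other layer. The sketch below is a generic illustration in that spirit (not Google's or Bing's actual code; the layer names and traffic shares are made up):

import hashlib

def assign(user_id, layer, arms):
    # arms maps arm name -> traffic share within this layer (shares sum to 1).
    # Hashing (user_id, layer) makes the split deterministic per layer and
    # statistically independent across layers, so experiments in different
    # layers can overlap on the same traffic without being correlated.
    digest = hashlib.sha256(f"{user_id}:{layer}".encode()).hexdigest()
    point = int(digest, 16) / float(16 ** len(digest))  # uniform in [0, 1)
    threshold = 0.0
    for arm, share in arms.items():
        threshold += share
        if point < threshold:
            return arm
    return arm  # guard against floating-point slack on the last arm

# e.g. assign("user-42", "ranking-layer", {"control": 0.5, "treatment": 0.5})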
{ "cite_N": [ "@cite_10", "@cite_3", "@cite_8" ], "mid": [ "2051199128", "2112508839", "1975566260" ], "abstract": [ "Since the Netflix $1 million Prize, announced in 2006, Netflix has been known for having personalization at the core of our product. Our current product offering is nowadays focused around instant video streaming, and our data is now many orders of magnitude larger. Not only do we have many more users in many more countries, but we also receive many more streams of data. Besides the ratings, we now also use information such as what our members play, browse, or search. In this invited talk I will discuss the different approaches we follow to deal with these large streams of user data in order to extract information for personalizing our service. I will describe some of the machine learning models used, and their application in the service. I will also describe our data-driven approach to innovation that combines rapid offline explorations as well as online A B testing. This approach enables us to convert user information into real and measurable business value.", "Web-facing companies, including Amazon, eBay, Etsy, Facebook, Google, Groupon, Intuit, LinkedIn, Microsoft, Netflix, Shop Direct, StumbleUpon, Yahoo, and Zynga use online controlled experiments to guide product development and accelerate innovation. At Microsoft's Bing, the use of controlled experiments has grown exponentially over time, with over 200 concurrent experiments now running on any given day. Running experiments at large scale requires addressing multiple challenges in three areas: cultural organizational, engineering, and trustworthiness. On the cultural and organizational front, the larger organization needs to learn the reasons for running controlled experiments and the tradeoffs between controlled experiments and other methods of evaluating ideas. We discuss why negative experiments, which degrade the user experience short term, should be run, given the learning value and long-term benefits. On the engineering side, we architected a highly scalable system, able to handle data at massive scale: hundreds of concurrent experiments, each containing millions of users. Classical testing and debugging techniques no longer apply when there are billions of live variants of the site, so alerts are used to identify issues rather than relying on heavy up-front testing. On the trustworthiness front, we have a high occurrence of false positives that we address, and we alert experimenters to statistical interactions between experiments. The Bing Experimentation System is credited with having accelerated innovation and increased annual revenues by hundreds of millions of dollars, by allowing us to find and focus on key ideas evaluated through thousands of controlled experiments. A 1 improvement to revenue equals more than $10M annually in the US, yet many ideas impact key metrics by 1 and are not well estimated a-priori. The system has also identified many negative features that we avoided deploying, despite key stakeholders' early excitement, saving us similar large amounts.", "At Google, experimentation is practically a mantra; we evaluate almost every change that potentially affects what our users experience. Such changes include not only obvious user-visible changes such as modifications to a user interface, but also more subtle changes such as different machine learning algorithms that might affect ranking or content selection. 
Our insatiable appetite for experimentation has led us to tackle the problems of how to run more experiments, how to run experiments that produce better decisions, and how to run them faster. In this paper, we describe Google's overlapping experiment infrastructure that is a key component to solving these problems. In addition, because an experiment infrastructure alone is insufficient, we also discuss the associated tools and educational processes required to use it effectively. We conclude by describing trends that show the success of this overall experimental environment. While the paper specifically describes the experiment system and experimental processes we have in place at Google, we believe they can be generalized and applied by any entity interested in using experimentation to improve search engines and other web applications." ] }
1706.09799
2729046720
Automated metrics such as BLEU are widely used in the machine translation literature. They have also been used recently in the dialogue community for evaluating dialogue response generation. However, previous work in dialogue response generation has shown that these metrics do not correlate strongly with human judgment in the non task-oriented dialogue setting. Task-oriented dialogue responses are expressed on narrower domains and exhibit lower diversity. It is thus reasonable to think that these automated metrics would correlate well with human judgment in the task-oriented setting where the generation task consists of translating dialogue acts into a sentence. We conduct an empirical study to confirm whether this is the case. Our findings indicate that these automated metrics have stronger correlation with human judgments in the task-oriented setting compared to what has been observed in the non task-oriented setting. We also observe that these metrics correlate even better for datasets which provide multiple ground truth reference sentences. In addition, we show that some of the currently available corpora for task-oriented language generation can be solved with simple models and advocate for more challenging datasets.
Most of the work described so far has been done in the non task-oriented dialogue setting, since prior work indicated that automated metrics do not correlate well with human judgment in that setting. There has not yet been any empirical validation that these conclusions also apply to the task-oriented setting. Research in the task-oriented setting has mostly relied on automated metrics such as BLEU together with human evaluation @cite_3 @cite_21 @cite_12 .
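To make the kind of correlation study discussed here concrete, the following is a minimal sketch using NLTK's sentence-level BLEU and SciPy's correlation statistics; the responses, references, and human scores are hypothetical stand-ins, not data from any of the cited papers.

```python
# Minimal sketch of a BLEU-vs-human correlation study.
# All responses, references, and scores below are hypothetical.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from scipy.stats import pearsonr, spearmanr

generated = ["there are three italian restaurants nearby",
             "i found no matching restaurants",
             "the cheap hotel is in the north"]
# Multiple tokenized ground-truth references per response, as the
# multiple-reference setting described above provides.
references = [
    [["there", "are", "three", "italian", "places", "nearby"],
     ["i", "found", "three", "italian", "restaurants", "close", "by"]],
    [["no", "matching", "restaurants", "were", "found"]],
    [["the", "inexpensive", "hotel", "is", "in", "the", "north"]],
]
human = [4.5, 2.0, 4.0]  # e.g., mean adequacy ratings on a 1-5 scale

smooth = SmoothingFunction().method1  # avoids zero scores on short outputs
bleu = [sentence_bleu(refs, hyp.split(), smoothing_function=smooth)
        for hyp, refs in zip(generated, references)]

print("Pearson: ", pearsonr(bleu, human))
print("Spearman:", spearmanr(bleu, human))
```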
{ "cite_N": [ "@cite_21", "@cite_12", "@cite_3" ], "mid": [ "2427764808", "2410983263", "" ], "abstract": [ "Natural language generation plays a critical role in spoken dialogue systems. We present a new approach to natural language generation for task-oriented dialogue using recurrent neural networks in an encoder-decoder framework. In contrast to previous work, our model uses both lexicalized and delexicalized components i.e. slot-value pairs for dialogue acts, with slots and corresponding values aligned together. This allows our model to learn from all available data including the slot-value pairing, rather than being restricted to delexicalized slots. We show that this helps our model generate more natural sentences with better grammar. We further improve our model's performance by transferring weights learnt from a pretrained sentence auto-encoder. Human evaluation of our best-performing model indicates that it generates sentences which users find more appealing.", "Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.", "" ] }
1706.09597
2732671178
In this paper, we introduce Path Integral Networks (PI-Net), a recurrent network representation of the Path Integral optimal control algorithm. The network includes both system dynamics and cost models, used for optimal control based planning. PI-Net is fully differentiable, learning both dynamics and cost models end-to-end by back-propagation and stochastic gradient descent. Because of this, PI-Net can learn to plan. PI-Net has several advantages: it can generalize to unseen states thanks to planning, it can be applied to continuous control tasks, and it allows for a wide variety learning schemes, including imitation and reinforcement learning. Preliminary experiment results show that PI-Net, trained by imitation learning, can mimic control demonstrations for two simulated problems; a linear system and a pendulum swing-up problem. We also show that PI-Net is able to learn dynamics and cost models latent in the demonstrations.
The widely used optimal controller, the linear quadratic regulator (LQR), is also differentiable, and Ref. @cite_5 exploits this insight to reshape the original cost models and improve short-term MPC performance. The main advantage of path integral control over (iterative) LQR is that it does not require linear and quadratic approximations of the non-linear dynamics and cost model. To differentiate iLQR with non-linear models by back-propagation, one must iteratively differentiate these functions during the preceding forward pass, which makes the backward pass very complicated.
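To illustrate why no linearization is needed, here is a minimal sketch of one path-integral (MPPI-style) control update in NumPy; the dynamics, cost, and all hyperparameters are hypothetical placeholders rather than the paper's actual models.

```python
import numpy as np

def path_integral_step(x0, u_init, dynamics, cost, n_samples=64,
                       noise_std=0.5, temperature=1.0):
    """One path-integral (MPPI-style) update of a control sequence.

    Only forward rollouts of `dynamics` and evaluations of `cost` are
    needed; no linear/quadratic approximation as in (i)LQR.
    """
    horizon = len(u_init)
    eps = np.random.randn(n_samples, horizon) * noise_std  # control noise
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = x0
        for t in range(horizon):
            u = u_init[t] + eps[k, t]
            costs[k] += cost(x, u)
            x = dynamics(x, u)  # forward pass only
    w = np.exp(-(costs - costs.min()) / temperature)
    w /= w.sum()
    return u_init + w @ eps  # noise-weighted update of the controls

# Hypothetical pendulum-like system, for illustration only.
dyn = lambda x, u: x + 0.05 * np.array([x[1], u - np.sin(x[0])])
cst = lambda x, u: x[0] ** 2 + 0.1 * u ** 2
u_new = path_integral_step(np.array([np.pi, 0.0]), np.zeros(20), dyn, cst)
```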
{ "cite_N": [ "@cite_5" ], "mid": [ "2963165111" ], "abstract": [ "Model predictive control (MPC) is a popular control method that has proved effective for robotics, among other fields. MPC performs re-planning at every time step. Re-planning is done with a limited horizon per computational and real-time constraints and often also for robustness to potential model errors. However, the limited horizon leads to suboptimal performance. In this work, we consider the iterative learning setting, where the same task can be repeated several times, and propose a policy improvement scheme for MPC. The main idea is that between executions we can, offline, run MPC with a longer horizon, resulting in a hindsight plan. To bring the next real-world execution closer to the hindsight plan, our approach learns to re-shape the original cost function with the goal of satisfying the following property: short horizon planning (as realistic during real executions) with respect to the shaped cost should result in mimicking the hindsight plan. This effectively consolidates long-term reasoning into the short-horizon planning. We empirically evaluate our approach in contact-rich manipulation tasks both in simulated and real environments, such as peg insertion by a real PR2 robot." ] }
1706.09597
2732671178
In this paper, we introduce Path Integral Networks (PI-Net), a recurrent network representation of the Path Integral optimal control algorithm. The network includes both system dynamics and cost models, used for optimal control based planning. PI-Net is fully differentiable, learning both dynamics and cost models end-to-end by back-propagation and stochastic gradient descent. Because of this, PI-Net can learn to plan. PI-Net has several advantages: it can generalize to unseen states thanks to planning, it can be applied to continuous control tasks, and it allows for a wide variety learning schemes, including imitation and reinforcement learning. Preliminary experiment results show that PI-Net, trained by imitation learning, can mimic control demonstrations for two simulated problems; a linear system and a pendulum swing-up problem. We also show that PI-Net is able to learn dynamics and cost models latent in the demonstrations.
Policy Improvement with Path Integrals @cite_14 and Path Integral Inverse Reinforcement Learning @cite_6 are policy search approaches based on the path integral control framework, which train a parameterized control policy via reinforcement learning and imitation learning, respectively. These methods have succeeded in training policies for complex robotics tasks; however, they assume a trajectory-centric policy representation such as dynamic movement primitives @cite_25 , and such a policy generalizes poorly to new settings (e.g., different initial states).
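For reference, a dynamic movement primitive of the kind these trajectory-centric policies use can be rolled out as follows; this is one common textbook formulation with illustrative gains, not the exact parameterization of the cited works.

```python
import numpy as np

def dmp_rollout(goal, w, y0=0.0, tau=1.0, alpha=25.0, beta=6.25,
                alpha_x=8.0, dt=0.001, steps=1000):
    """Roll out a one-DoF dynamic movement primitive (DMP).

    The weights `w` shape the forcing term through Gaussian basis
    functions over a decaying phase variable x; the trajectory
    converges to `goal` as the phase decays. Gains are common defaults.
    """
    n = len(w)
    c = np.exp(-alpha_x * np.linspace(0, 1, n))  # basis centers over phase
    h = n / c                                    # basis widths
    y, dy, x, traj = y0, 0.0, 1.0, []
    for _ in range(steps):
        psi = np.exp(-h * (x - c) ** 2)
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (goal - y0)  # forcing term
        ddy = (alpha * (beta * (goal - y) - tau * dy) + f) / tau ** 2
        dy += ddy * dt
        y += dy * dt
        x += -alpha_x * x / tau * dt             # canonical (phase) system
        traj.append(y)
    return np.array(traj)

path = dmp_rollout(goal=1.0, w=np.random.randn(10))  # hypothetical weights
```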
{ "cite_N": [ "@cite_14", "@cite_25", "@cite_6" ], "mid": [ "1925816294", "2110304639", "1994648061" ], "abstract": [ "With the goal to generate more scalable algorithms with higher efficiency and fewer open parameters, reinforcement learning (RL) has recently moved towards combining classical techniques from optimal control and dynamic programming with modern learning techniques from statistical estimation theory. In this vein, this paper suggests to use the framework of stochastic optimal control with path integrals to derive a novel approach to RL with parameterized policies. While solidly grounded in value function estimation and optimal control based on the stochastic Hamilton-Jacobi-Bellman (HJB) equations, policy improvements can be transformed into an approximation problem of a path integral which has no open algorithmic parameters other than the exploration noise. The resulting algorithm can be conceived of as model-based, semi-model-based, or even model free, depending on how the learning problem is structured. The update equations have no danger of numerical instabilities as neither matrix inversions nor gradient learning rates are required. Our new algorithm demonstrates interesting similarities with previous RL research in the framework of probability matching and provides intuition why the slightly heuristically motivated probability matching approach can actually perform well. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. Finally, a learning experiment on a simulated 12 degree-of-freedom robot dog illustrates the functionality of our algorithm in a complex robot learning scenario. We believe that Policy Improvement with Path Integrals (PI2) offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL based on trajectory roll-outs.", "Presents an approach to movement planning, on-line trajectory modification, and imitation learning by representing movement plans based on a set of nonlinear differential equations with well-defined attractor dynamics. The resultant movement plan remains an autonomous set of nonlinear differential equations that forms a control policy (CP) which is robust to strong external perturbations and that can be modified on-line by additional perceptual variables. We evaluate the system with a humanoid robot simulation and an actual humanoid robot. Experiments are presented for the imitation of three types of movements: reaching movements with one arm, drawing movements of 2-D patterns, and tennis swings. Our results demonstrate (a) that multi-joint human movements can be encoded successfully by the CPs, (b) that a learned movement policy can readily be reused to produce robust trajectories towards different targets, (c) that a policy fitted for one particular target provides a good predictor of human reaching movements towards neighboring targets, and (d) that the parameter space which encodes a policy is suitable for measuring to which extent two trajectories are qualitatively similar.", "We present an approach to learning objective functions for robotic manipulation based on inverse reinforcement learning. Our path integral inverse reinforcement learning algorithm can deal with high-dimensional continuous state-action spaces, and only requires local optimality of demonstrated trajectories. 
We use L1 regularization in order to achieve feature selection, and propose an efficient algorithm to minimize the resulting convex objective function. We demonstrate our approach by applying it to two core problems in robotic manipulation. First, we learn a cost function for redundancy resolution in inverse kinematics. Second, we use our method to learn a cost function over trajectories, which is then used in optimization-based motion planning for grasping and manipulation tasks. Experimental results show that our method outperforms previous algorithms in high-dimensional settings." ] }
1706.09597
2732671178
In this paper, we introduce Path Integral Networks (PI-Net), a recurrent network representation of the Path Integral optimal control algorithm. The network includes both system dynamics and cost models, used for optimal control based planning. PI-Net is fully differentiable, learning both dynamics and cost models end-to-end by back-propagation and stochastic gradient descent. Because of this, PI-Net can learn to plan. PI-Net has several advantages: it can generalize to unseen states thanks to planning, it can be applied to continuous control tasks, and it allows for a wide variety learning schemes, including imitation and reinforcement learning. Preliminary experiment results show that PI-Net, trained by imitation learning, can mimic control demonstrations for two simulated problems; a linear system and a pendulum swing-up problem. We also show that PI-Net is able to learn dynamics and cost models latent in the demonstrations.
Related deep reinforcement learning and policy search methods include Deep Deterministic Policy Gradient @cite_12 , A3C @cite_21 , Trust Region Policy Optimization @cite_28 , Guided Policy Search @cite_24 , and Path Integral Guided Policy Search @cite_7 .
{ "cite_N": [ "@cite_7", "@cite_28", "@cite_21", "@cite_24", "@cite_12" ], "mid": [ "2951456741", "2949608212", "2260756217", "2964161785", "2173248099" ], "abstract": [ "We present a policy search method for learning complex feedback control policies that map from high-dimensional sensory inputs to motor torques, for manipulation tasks with discontinuous contact dynamics. We build on a prior technique called guided policy search (GPS), which iteratively optimizes a set of local policies for specific instances of a task, and uses these to train a complex, high-dimensional global policy that generalizes across task instances. We extend GPS in the following ways: (1) we propose the use of a model-free local optimizer based on path integral stochastic optimal control (PI2), which enables us to learn local policies for tasks with highly discontinuous contact dynamics; and (2) we enable GPS to train on a new set of task instances in every iteration by using on-policy sampling: this increases the diversity of the instances that the policy is trained on, and is crucial for achieving good generalization. We show that these contributions enable us to learn deep neural network policies that can directly perform torque control from visual input. We validate the method on a challenging door opening task and a pick-and-place task, and we demonstrate that our approach substantially outperforms the prior LQR-based local policy optimizer on these tasks. Furthermore, we show that on-policy sampling significantly increases the generalization ability of these policies.", "We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.", "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.", "Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. 
In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.", "We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs." ] }
1706.09876
2732082028
Convolutional neural network (CNN) based face detectors are inefficient in handling faces of diverse scales. They rely on either fitting a large single model to faces across a large scale range or multi-scale testing. Both are computationally expensive. We propose Scale-aware Face Detector (SAFD) to handle scale explicitly using CNN, and achieve better performance with less computation cost. Prior to detection, an efficient CNN predicts the scale distribution histogram of the faces. Then the scale histogram guides the zoom-in and zoom-out of the image. Since the faces will be approximately in uniform scale after zoom, they can be detected accurately even with much smaller CNN. Actually, more than 99% of the faces in AFW can be covered with less than two zooms per image. Extensive experiments on FDDB, MALF and AFW show advantages of SAFD.
CNN based face detection approaches emerged in the 1990s @cite_24 . Some of their modules are still widely used, such as the sliding window, multi-scale testing, and a CNN based classifier to distinguish faces from background. @cite_7 shows that a CNN achieves good performance for frontal face detection, and @cite_42 further extends it to rotation-invariant face detection by training on faces of different poses. Despite their good performance, these detectors were too slow for the hardware of their time.
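The sliding-window, multi-scale pipeline these early detectors share can be sketched as follows; `classify` stands for a hypothetical patch scorer (e.g., a small CNN), and the window, stride, and scale settings are illustrative.

```python
import numpy as np
from skimage.transform import rescale

def multiscale_sliding_window(image, classify, win=24, stride=8,
                              scale_step=1.25, threshold=0.5):
    """Image pyramid plus sliding window, as in the early CNN detectors.

    `image` is a grayscale float array; `classify` maps a win x win
    patch to a face score. Boxes are reported in original coordinates.
    """
    detections, factor = [], 1.0
    while min(image.shape) >= win:
        h, w = image.shape
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                score = classify(image[y:y + win, x:x + win])
                if score > threshold:
                    detections.append((x * factor, y * factor,
                                       win * factor, score))
        image = rescale(image, 1.0 / scale_step, anti_aliasing=True)
        factor *= scale_step
    return detections
```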
{ "cite_N": [ "@cite_24", "@cite_42", "@cite_7" ], "mid": [ "2673062467", "2154376992", "2217896605" ], "abstract": [ "", "In this paper, we present a neural network-based face detection system. Unlike similar systems which are limited to detecting upright, frontal faces, this system detects faces at any degree of rotation in the image plane. The system employs multiple networks; a \"router\" network first processes each input window to determine its orientation and then uses this information to prepare the window for one or more \"detector\" networks. We present the training methods for both types of networks. We also perform sensitivity analysis on the networks, and present empirical results on a large test set. Finally, we present preliminary results for detecting faces rotated out of the image plane, such as profiles and semi-profiles.", "We present a neural network-based upright frontal face detection system. A retinally connected neural network examines small windows of an image and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We present a straightforward procedure for aligning positive face examples for training. To collect negative examples, we use a bootstrap algorithm, which adds false detections into the training set as training progresses. This eliminates the difficult task of manually selecting nonface training examples, which must be chosen to span the entire space of nonface images. Simple heuristics, such as using the fact that faces rarely overlap in images, can further improve the accuracy. Comparisons with several other state-of-the-art face detection systems are presented, showing that our system has comparable performance in terms of detection and false-positive rates." ] }
1706.09876
2732082028
Convolutional neural network (CNN) based face detectors are inefficient in handling faces of diverse scales. They rely on either fitting a large single model to faces across a large scale range or multi-scale testing. Both are computationally expensive. We propose Scale-aware Face Detector (SAFD) to handle scale explicitly using CNN, and achieve better performance with less computation cost. Prior to detection, an efficient CNN predicts the scale distribution histogram of the faces. Then the scale histogram guides the zoom-in and zoom-out of the image. Since the faces will be approximately in uniform scale after zoom, they can be detected accurately even with much smaller CNN. Actually, more than 99% of the faces in AFW can be covered with less than two zooms per image. Extensive experiments on FDDB, MALF and AFW show advantages of SAFD.
HOG based methods were first used in pedestrian and general object detection, notable examples being the original HOG detector @cite_33 and the deformable part model @cite_11 . These methods achieve better performance than Viola-Jones based methods on standard benchmarks such as AFW @cite_41 and FDDB @cite_29 , and have progressively become more efficient @cite_41 @cite_37 @cite_9 @cite_25 .
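As a quick illustration of the feature side of these detectors, the standard HOG descriptor of @cite_33 can be computed with scikit-image; the window size and parameters below are the usual pedestrian-detection defaults, shown purely for illustration.

```python
import numpy as np
from skimage.feature import hog

window = np.random.rand(128, 64)  # hypothetical grayscale detection window

# Dalal-Triggs-style HOG: 9 orientation bins, 8x8-pixel cells,
# 2x2-cell blocks with L2-Hys block normalization.
descriptor = hog(window, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2), block_norm='L2-Hys')
print(descriptor.shape)  # flattened vector, typically fed to a linear SVM
```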
{ "cite_N": [ "@cite_37", "@cite_33", "@cite_41", "@cite_29", "@cite_9", "@cite_25", "@cite_11" ], "mid": [ "", "2161969291", "2047508432", "182571476", "2056025798", "799206314", "2168356304" ], "abstract": [ "", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "Despite the maturity of face detection research, it remains difficult to compare different algorithms for face detection. This is partly due to the lack of common evaluation schemes. Also, existing data sets for evaluating face detection algorithms do not capture some aspects of face appearances that are manifested in real-world scenarios. In this work, we address both of these issues. We present a new data set of face images with more faces and more accurate annotations for face regions than in previous data sets. We also propose two rigorous and precise methods for evaluating the performance of face detection algorithms. We report results of several standard algorithms on the new benchmark.", "This paper solves the speed bottleneck of deformable part model (DPM), while maintaining the accuracy in detection on challenging datasets. Three prohibitive steps in cascade version of DPM are accelerated, including 2D correlation between root filter and feature map, cascade part pruning and HOG feature extraction. For 2D correlation, the root filter is constrained to be low rank, so that 2D correlation can be calculated by more efficient linear combination of 1D correlations. A proximal gradient algorithm is adopted to progressively learn the low rank filter in a discriminative manner. For cascade part pruning, neighborhood aware cascade is proposed to capture the dependence in neighborhood regions for aggressive pruning. Instead of explicit computation of part scores, hypotheses can be pruned by scores of neighborhoods under the first order approximation. 
For HOG feature extraction, look-up tables are constructed to replace expensive calculations of orientation partition and magnitude with simpler matrix index operations. Extensive experiments show that (a) the proposed method is 4 times faster than the current fastest DPM method with similar accuracy on Pascal VOC, (b) the proposed method achieves state-of-the-art accuracy on pedestrian and face detection task with frame-rate speed.", "The presence of occluders significantly impacts object recognition accuracy. However, occlusion is typically treated as an unstructured source of noise and explicit models for occluders have lagged behind those for object appearance and shape. In this paper we describe a hierarchical deformable part model for face detection and landmark localization that explicitly models part occlusion. The proposed model structure makes it possible to augment positive training data with large numbers of synthetically occluded instances. This allows us to easily incorporate the statistics of occlusion patterns in a discriminatively trained model. We test the model on several benchmarks for landmark localization and detection including challenging new data sets featuring significant occlusion. We find that the addition of an explicit occlusion model yields a detection system that outperforms existing approaches for occluded instances while maintaining competitive accuracy in detection and landmark localization for unoccluded instances.", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function." ] }
1706.09650
2733068683
This paper presents a co-salient object detection method to find common salient regions in a set of images. We utilize deep saliency networks to transfer co-saliency prior knowledge and better capture high-level semantic information. The resulting initial co-saliency maps are enhanced by seed propagation steps over an integrated graph. The deep saliency networks are trained in a supervised manner to avoid weakly supervised online learning and exploit them not only to extract high-level features but also to produce both intra- and inter-image saliency maps. Through a refinement step, the initial co-saliency maps can uniformly highlight co-salient regions and locate accurate object boundaries. To handle input image groups inconsistent in size, we propose to pool multi-regional descriptors including both within-segment and within-group information. In addition, the integrated multilayer graph is constructed to find the regions that the previous steps may not detect by seed propagation with low-level descriptors. In this paper, we utilize the useful complementary components of high- and low-level information and several learning-based steps. Our experiments have demonstrated that the proposed approach outperforms comparable co-saliency detection methods on widely used public databases and can also be directly applied to co-segmentation tasks.
Co-salient object detection began with analyzing multi-image information and finding common objects within image pairs @cite_24 @cite_34 @cite_18 @cite_9 . For example, @cite_24 performed a pyramid decomposition of the images and then extracted color and texture features from each region to compute the maximum SimRank scores of region pairs, which are defined as multi-image saliency values. To obtain the final co-saliency maps, they linearly combined the single- and multi-image saliency maps. @cite_34 proposed to calculate the affinities of superpixel pairs from color and position similarities, and then perform bipartite graph matching to discover the most relevant pairs for affinity propagation. The resulting superpixel affinities between two images are converted into foreground cohesiveness and locality compactness measures to obtain the final co-saliency maps.
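The final fusion step that @cite_24 uses, a linear combination of the single-image and multi-image saliency maps, amounts to the following sketch; the mixing weight is illustrative.

```python
import numpy as np

def cosaliency_map(sism, mism, alpha=0.5):
    """Fuse a single-image saliency map (SISM) with a multi-image
    saliency map (MISM) by linear combination, then normalize to [0, 1].
    `alpha` is an illustrative mixing weight, not a value from the paper."""
    combined = alpha * sism + (1.0 - alpha) * mism
    rng = combined.max() - combined.min()
    return (combined - combined.min()) / rng if rng > 0 else combined

final = cosaliency_map(np.random.rand(64, 64), np.random.rand(64, 64))
```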
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_18", "@cite_34" ], "mid": [ "2112553126", "2069241582", "2007685793", "1969733239" ], "abstract": [ "In this paper, we introduce a method to detect co-saliency from an image pair that may have some objects in common. The co-saliency is modeled as a linear combination of the single-image saliency map (SISM) and the multi-image saliency map (MISM). The first term is designed to describe the local attention, which is computed by using three saliency detection techniques available in literature. To compute the MISM, a co-multilayer graph is constructed by dividing the image pair into a spatial pyramid representation. Each node in the graph is described by two types of visual descriptors, which are extracted from a representation of some aspects of local appearance, e.g., color and texture properties. In order to evaluate the similarity between two nodes, we employ a normalized single-pair SimRank algorithm to compute the similarity score. Experimental evaluation on a number of image pairs demonstrates the good performance of the proposed method on the co-saliency detection task.", "This paper presents a new algorithm to solve the problem of co-saliency detection, i.e., to find the common salient objects that are present in both of a pair of input images. Unlike most previous approaches, which require correspondence matching, we seek to solve the problem of co-saliency detection under a preattentive scheme. Our algorithm does not need to perform the correspondence matching between the two input images, and is able to achieve co-saliency detection before the focused attention occurs. The joint information provided by the image pair enables our algorithm to inhibit the responses of other salient objects that appear in just one of the images. Through experiments we show that our algorithm is effective in localizing the co-salient objects inside input image pairs.", "Image triage is a common task in digital photography. Determining which photos are worth processing for sharing with friends and family and which should be deleted to make room for new ones can be a challenge, especially on a device with a small screen like a mobile phone or camera. In this work we explore the importance of local structure changes?e.g. human pose, appearance changes, object orientation, etc.?to the photographic triage task. We perform a user study in which subjects are asked to mark regions of image pairs most useful in making triage decisions. From this data, we train a model for image saliency in the context of other images that we call cosaliency. This allows us to create collection-aware crops that can augment the information provided by existing thumbnailing techniques for the image triage task.", "Image co-saliency detection is a valuable technique to highlight perceptually salient regions in image pairs. In this paper, we propose a self-contained co-saliency detection algorithm based on superpixel affinity matrix. We first compute both intra and inter similarities of superpixels of image pairs. Bipartite graph matching is applied to determine most reliable inter similarities. To update the similarity score between every two superpixels, we next employ a GPU-based all-pair SimRank algorithm to do propagation on the affinity matrix. Based on the inter superpixel affinities we derive a co-saliency measure that evaluates the foreground cohesiveness and locality compactness of superpixels within one image. 
The effectiveness of our method is demonstrated in experimental evaluation." ] }
1706.09364
2724418412
We tackle the task of semi-supervised video object segmentation, i.e. segmenting the pixels belonging to an object in the video using the ground truth pixel mask for the first frame. We build on the recently introduced one-shot video object segmentation (OSVOS) approach which uses a pretrained network and fine-tunes it on the first frame. While achieving impressive performance, at test time OSVOS uses the fine-tuned network in unchanged form and is not able to adapt to large changes in object appearance. To overcome this limitation, we propose Online Adaptive Video Object Segmentation (OnAVOS) which updates the network online using training examples selected based on the confidence of the network and the spatial configuration. Additionally, we add a pretraining step based on objectness, which is learned on PASCAL. Our experiments show that both extensions are highly effective and improve the state of the art on DAVIS to an intersection-over-union score of 85.7%.
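The online selection step described in this abstract, picking confident pixels as positives and spatially distant pixels as negatives, can be sketched as follows; the thresholds are illustrative values, not the paper's tuned ones.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def select_online_examples(prob_fg, last_mask, pos_thresh=0.97,
                           dist_thresh=50.0):
    """Confidence- and distance-based selection of online training pixels.

    Pixels the network is very confident are foreground become positive
    examples; pixels far from the last predicted object become negatives.
    `prob_fg` is a per-pixel foreground probability map and `last_mask`
    the previous binary prediction; thresholds here are illustrative.
    """
    positives = prob_fg > pos_thresh
    # Distance (in pixels) of every background pixel to the last mask.
    dist = distance_transform_edt(~last_mask.astype(bool))
    negatives = dist > dist_thresh
    return positives, negatives
```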
The current best result on DAVIS is obtained by LucidTracker from Khoreva et al. @cite_49 , which extends MaskTrack with an elaborate data augmentation method that creates a large number of training examples from the first annotated frames and reduces the dependence on large datasets for pretraining. Our experiments show that our approach achieves better performance using only conventional data augmentation methods.
{ "cite_N": [ "@cite_49" ], "mid": [ "1522301498" ], "abstract": [ "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm." ] }
1706.09364
2724418412
We tackle the task of semi-supervised video object segmentation, i.e. segmenting the pixels belonging to an object in the video using the ground truth pixel mask for the first frame. We build on the recently introduced one-shot video object segmentation (OSVOS) approach which uses a pretrained network and fine-tunes it on the first frame. While achieving impressive performance, at test time OSVOS uses the fine-tuned network in unchanged form and is not able to adapt to large changes in object appearance. To overcome this limitation, we propose Online Adaptive Video Object Segmentation (OnAVOS) which updates the network online using training examples selected based on the confidence of the network and the spatial configuration. Additionally, we add a pretraining step based on objectness, which is learned on PASCAL. Our experiments show that both extensions are highly effective and improve the state of the art on DAVIS to an intersection-over-union score of 85.7%.
Recently, Wu et al. @cite_30 introduced a ResNet variant with fewer but wider layers than the original ResNet architectures @cite_41 , together with a simple approach for segmentation that avoids some of the subsampling steps by replacing them with dilated convolutions @cite_1 and does not use any skip connections. Despite the simplicity of their segmentation architecture, they obtained outstanding results across multiple classification and semantic segmentation datasets, which motivates us to adopt their architecture.
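The substitution of subsampling by dilated convolutions can be made concrete with a small PyTorch comparison; the layer sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)  # hypothetical feature map

# Subsampling variant: stride 2 halves the spatial resolution.
strided = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1)

# Dilated variant: stride 1 with dilation 2 keeps full resolution
# while enlarging the receptive field by the same factor.
dilated = nn.Conv2d(64, 128, kernel_size=3, stride=1,
                    padding=2, dilation=2)

print(strided(x).shape)  # torch.Size([1, 128, 28, 28])
print(dilated(x).shape)  # torch.Size([1, 128, 56, 56])
```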
{ "cite_N": [ "@cite_30", "@cite_41", "@cite_1" ], "mid": [ "2952147788", "2949650786", "2286929393" ], "abstract": [ "The trend towards increasingly deep neural networks has been driven by a general observation that increasing depth increases the performance of a network. Recently, however, evidence has been amassing that simply increasing depth may not be the best way to increase performance, particularly given other limitations. Investigations into deep residual networks have also suggested that they may not in fact be operating as a single deep network, but rather as an ensemble of many relatively shallow networks. We examine these issues, and in doing so arrive at a new interpretation of the unravelled view of deep residual networks which explains some of the behaviours that have been observed experimentally. As a result, we are able to derive a new, shallower, architecture of residual networks which significantly outperforms much deeper models such as ResNet-200 on the ImageNet classification dataset. We also show that this performance is transferable to other problem domains by developing a semantic segmentation approach which outperforms the state-of-the-art by a remarkable margin on datasets including PASCAL VOC, PASCAL Context, and Cityscapes. The architecture that we propose thus outperforms its comparators, including very deep ResNets, and yet is more efficient in memory use and sometimes also in training time. The code and models are available at this https URL", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. 
In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy." ] }
1706.09549
2733819076
We propose a framework for adversarial training that relies on a sample rather than a single sample point as the fundamental unit of discrimination. Inspired by discrepancy measures and two-sample tests between probability distributions, we propose two such distributional adversaries that operate and predict on samples, and show how they can be easily implemented on top of existing models. Various experimental results show that generators trained with our distributional adversaries are much more stable and are remarkably less prone to mode collapse than traditional models trained with pointwise prediction discriminators. The application of our framework to domain adaptation also results in considerable improvement over recent state-of-the-art.
The distributional adversary bears a close resemblance to two-sample tests @cite_25 , where the model takes as input two samples drawn from potentially distinct distributions and produces a discrepancy value quantifying how different the two distributions are. A popular kernel-based variant is the maximum mean discrepancy (MMD) @cite_0 @cite_15 @cite_29 : @math , where @math is some feature mapping and @math is the corresponding kernel function. An identity function for @math corresponds to computing the distance between the sample means. More complex kernels yield distances between higher-order statistics of the two samples.
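The MMD expression elided above is, in its standard textbook form (reconstructed here rather than copied from the paper), the squared distance between the mean feature embeddings of the two samples, which expands into pairwise kernel evaluations:

```latex
\mathrm{MMD}^2(X, Y)
  = \Big\| \tfrac{1}{n}\textstyle\sum_{i=1}^{n} \phi(x_i)
         - \tfrac{1}{m}\textstyle\sum_{j=1}^{m} \phi(y_j) \Big\|^2
  = \tfrac{1}{n^2}\textstyle\sum_{i,i'} k(x_i, x_{i'})
    - \tfrac{2}{nm}\textstyle\sum_{i,j} k(x_i, y_j)
    + \tfrac{1}{m^2}\textstyle\sum_{j,j'} k(y_j, y_{j'}),
\qquad k(x, y) = \langle \phi(x), \phi(y) \rangle .
```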
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_29", "@cite_25" ], "mid": [ "1638081485", "", "1946137962", "2045638068" ], "abstract": [ "We propose an independence criterion based on the eigenspectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator (we term this a Hilbert-Schmidt Independence Criterion, or HSIC). This approach has several advantages, compared with previous kernel-based independence criteria. First, the empirical estimate is simpler than any other kernel dependence test, and requires no user-defined regularisation. Second, there is a clearly defined population quantity which the empirical estimate approaches in the large sample limit, with exponential convergence guaranteed between the two: this ensures that independence tests based on HSIC do not suffer from slow learning rates. Finally, we show in the context of independent component analysis (ICA) that the performance of HSIC is competitive with that of previously published kernel-based criteria, and of other recently published ICA methods.", "", "We describe a technique for comparing distributions without the need for density estimation as an intermediate step. Our approach relies on mapping the distributions into a reproducing kernel Hilbert space. Applications of this technique can be found in two-sample tests, which are used for determining whether two sets of observations arise from the same distribution, covariate shift correction, local learning, measures of independence, and density estimation.", "The General Decision Problem.- The Probability Background.- Uniformly Most Powerful Tests.- Unbiasedness: Theory and First Applications.- Unbiasedness: Applications to Normal Distributions.- Invariance.- Linear Hypotheses.- The Minimax Principle.- Multiple Testing and Simultaneous Inference.- Conditional Inference.- Basic Large Sample Theory.- Quadratic Mean Differentiable Families.- Large Sample Optimality.- Testing Goodness of Fit.- General Large Sample Methods." ] }
1706.09549
2733819076
We propose a framework for adversarial training that relies on a sample rather than a single sample point as the fundamental unit of discrimination. Inspired by discrepancy measures and two-sample tests between probability distributions, we propose two such distributional adversaries that operate and predict on samples, and show how they can be easily implemented on top of existing models. Various experimental results show that generators trained with our distributional adversaries are much more stable and are remarkably less prone to mode collapse than traditional models trained with pointwise prediction discriminators. The application of our framework to domain adaptation also results in considerable improvement over recent state-of-the-art.
Another relevant line of work involves training generative models by looking at statistics at the minibatch level, known as minibatch discrimination, initially developed in @cite_28 to stabilize GAN training. Batch normalization @cite_27 can be seen as performing some form of minibatch discrimination, and it has been shown helpful for GAN training @cite_22 . Later work proposed a repelling regularizer that operates on a minibatch and orthogonalizes the pairwise sample representations, keeping the model from concentrating on only a few modes.
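The repelling regularizer (also known as a pulling-away term) can be sketched in PyTorch as follows; it penalizes the squared cosine similarity of every distinct pair in the minibatch.

```python
import torch

def pulling_away_term(h):
    """Repelling regularizer over a minibatch of latent codes h (N x d):
    the mean squared cosine similarity over all distinct pairs, which
    pushes representations apart and discourages mode collapse."""
    n = h.size(0)
    h = h / h.norm(dim=1, keepdim=True).clamp_min(1e-12)
    sim = h @ h.t()                  # pairwise cosine similarities
    off_diag = sim.pow(2).sum() - n  # drop the diagonal of ones
    return off_diag / (n * (n - 1))

loss_pt = pulling_away_term(torch.randn(16, 128))  # illustrative batch
```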
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_22" ], "mid": [ "2432004435", "2949117887", "2173520492" ], "abstract": [ "We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3 . We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.", "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations." ] }
1706.09549
2733819076
We propose a framework for adversarial training that relies on a sample rather than a single sample point as the fundamental unit of discrimination. Inspired by discrepancy measures and two-sample tests between probability distributions, we propose two such distributional adversaries that operate and predict on samples, and show how they can be easily implemented on top of existing models. Various experimental results show that generators trained with our distributional adversaries are much more stable and are remarkably less prone to mode collapse than traditional models trained with pointwise prediction discriminators. The application of our framework to domain adaptation also results in considerable improvement over recent state-of-the-art.
The problem of dealing with set inputs has only recently begun to be studied. Early work handles unordered variable-length inputs with a content-based attention mechanism. In the later work of @cite_11 @cite_12 , the authors first embed all samples into a fixed-dimensional latent space and then sum the embeddings to form a vector of the same dimension, which is fed into another neural network. In @cite_21 , the authors proposed a similar network for embedding a set of images into a latent space; they use a weighted summation over the latent vectors, where the weights are learned by another network. The structure of our deep mean encoder resembles these networks in that it is permutation-invariant, but it differs in its motivation---mean discrepancy measures---as well as in its use within discriminators in adversarial training settings.
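A permutation-invariant encoder of the kind described, a shared per-sample embedding followed by mean pooling, can be sketched in PyTorch; the layer sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DeepMeanEncoder(nn.Module):
    """Embed each point with a shared network, average over the set,
    then process the pooled vector; the output is invariant to the
    order of the samples in the set."""
    def __init__(self, in_dim=2, hidden=128, out_dim=1):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.ReLU(), nn.Linear(hidden, out_dim))

    def forward(self, x):  # x: (batch, set_size, in_dim)
        return self.rho(self.phi(x).mean(dim=1))

enc = DeepMeanEncoder()
print(enc(torch.randn(4, 100, 2)).shape)  # torch.Size([4, 1])
```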
{ "cite_N": [ "@cite_21", "@cite_12", "@cite_11" ], "mid": [ "2606783869", "", "2578436806" ], "abstract": [ "This paper targets on the problem of set to set recognition, which learns the metric between two image sets. Images in each set belong to the same identity. Since images in a set can be complementary, they hopefully lead to higher accuracy in practical applications. However, the quality of each sample cannot be guaranteed, and samples with poor quality will hurt the metric. In this paper, the quality aware network (QAN) is proposed to confront this problem, where the quality of each sample can be automatically learned although such information is not explicitly provided in the training stage. The network has two branches, where the first branch extracts appearance feature embedding for each sample and the other branch predicts quality score for each sample. Features and quality scores of all samples in a set are then aggregated to generate the final feature embedding. We show that the two branches can be trained in an end-to-end manner given only the set-level identity annotation. Analysis on gradient spread of this mechanism indicates that the quality learned by the network is beneficial to set-to-set recognition and simplifies the distribution that the network needs to fit. Experiments on both face verification and person re-identification show advantages of the proposed QAN. The source code and network structure can be downloaded at this https URL", "", "We introduce a simple permutation equivariant layer for deep learning with set structure. This type of layer, obtained by parameter-sharing, has a simple implementation and linear-time complexity in the size of each set. We use deep permutation-invariant networks to perform point-could classification and MNIST digit summation, where in both cases the output is invariant to permutations of the input. In a semi-supervised setting, where the goal is make predictions for each instance within a set, we demonstrate the usefulness of this type of layer in set-outlier detection as well as semi-supervised learning with clustering side-information." ] }
1706.09549
2733819076
We propose a framework for adversarial training that relies on a sample rather than a single sample point as the fundamental unit of discrimination. Inspired by discrepancy measures and two-sample tests between probability distributions, we propose two such distributional adversaries that operate and predict on samples, and show how they can be easily implemented on top of existing models. Various experimental results show that generators trained with our distributional adversaries are much more stable and are remarkably less prone to mode collapse than traditional models trained with pointwise prediction discriminators. The application of our framework to domain adaptation also results in considerable improvement over recent state-of-the-art.
Extensive work has been devoted to resolving the instability and mode collapse problems in GANs. One common approach is to train with more complex network architectures or better-behaved objectives @cite_22 @cite_18 @cite_16 @cite_7 @cite_1 @cite_24 . Another common approach is to add more discriminators or generators @cite_26 @cite_9 in the hope that training signals from multiple sources will lead to more stable training and better coverage of modes.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_22", "@cite_7", "@cite_9", "@cite_1", "@cite_24", "@cite_16" ], "mid": [ "2952010110", "", "2173520492", "2963865839", "2952533959", "2554506842", "", "2564591810" ], "abstract": [ "In this paper, we propose a novel generative model named Stacked Generative Adversarial Networks (SGAN), which is trained to invert the hierarchical representations of a bottom-up discriminative network. Our model consists of a top-down stack of GANs, each learned to generate lower-level representations conditioned on higher-level representations. A representation discriminator is introduced at each feature hierarchy to encourage the representation manifold of the generator to align with that of the bottom-up discriminative network, leveraging the powerful discriminative representations to guide the generative model. In addition, we introduce a conditional loss that encourages the use of conditional information from the layer above, and a novel entropy loss that maximizes a variational lower bound on the conditional entropy of generator outputs. We first train each stack independently, and then train the whole model end-to-end. Unlike the original GAN that uses a single noise vector to represent all the variations, our SGAN decomposes variations into multiple levels and gradually resolves uncertainties in the top-down generative process. Based on visual inspection, Inception scores and visual Turing test, we demonstrate that SGAN is able to generate images of much higher quality than GANs without stacking.", "", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution during the early phases of training, thus providing a unified solution to the missing modes problem.", "Generative Adversarial Networks (GAN) (, 2014) are an effective method for training generative models of complex data such as natural images. 
However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes.", "Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.", "", "Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256x256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions." ] }
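To make the multi-source training signal concrete, here is a minimal sketch of a GAN with several discriminators whose losses are averaged to train the generator, loosely in the spirit of GMAN @cite_9. It is a toy on 1-D Gaussian data, not the cited architecture; all network sizes, learning rates, and the averaging rule are illustrative assumptions.

```python
# A minimal multi-discriminator GAN on toy 1-D data. All sizes, learning
# rates, and the loss-averaging rule are illustrative assumptions.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

G = mlp(8, 1)                          # generator: noise -> sample
Ds = [mlp(1, 1) for _ in range(3)]     # several independent discriminators
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_ds = [torch.optim.Adam(D.parameters(), lr=1e-3) for D in Ds]
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 1) * 0.5 + 2.0            # "data" distribution
    fake = G(torch.randn(64, 8))
    # every discriminator is trained on the same real/fake batch
    for D, opt_d in zip(Ds, opt_ds):
        loss_d = (bce(D(real), torch.ones(64, 1)) +
                  bce(D(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # the generator is trained on the averaged signal from all discriminators
    fake = G(torch.randn(64, 8))
    loss_g = torch.stack([bce(D(fake), torch.ones(64, 1)) for D in Ds]).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Averaging is only the simplest aggregation; the cited work also explores a spectrum of discriminator roles, from a hard max over discriminators to softer combinations.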
1706.09367
2725660645
Machine Learning (ML) has been successfully applied to a wide range of domains and applications. One of the techniques behind most of these successful applications is Ensemble Learning (EL), the field of ML that gave birth to methods such as Random Forests or Boosting. The complexity of applying these techniques, together with the market scarcity of ML experts, has created the need for systems that enable a fast and easy drop-in replacement for ML libraries. Automated machine learning (autoML) is the field of ML that attempts to answer these needs. Typically, these systems rely on optimization techniques such as Bayesian optimization to lead the search for the best model. Our approach differs from these systems by making use of the most recent advances in metalearning and a learning-to-rank approach to learn from metadata. We propose autoBagging, an autoML system that automatically ranks 63 bagging workflows by exploiting past performance and dataset characterization. Results on 140 classification datasets from the OpenML platform show that autoBagging can yield better performance than the Average Rank method and achieve results that are not statistically different from an ideal model that systematically selects the best workflow for each dataset. For the purpose of reproducibility and generalizability, autoBagging is publicly available as an R package on CRAN.
This line of research was later superseded by the characterization of datasets through landmarkers, such as learning curves or pairwise meta-rules @cite_0 (a minimal landmarker sketch follows the reference entry below).
{ "cite_N": [ "@cite_0" ], "mid": [ "2145955992" ], "abstract": [ "In this paper, we present a novel meta-feature generation method in the context of meta-learning, which is based on rules that compare the performance of individual base learners in a one-against-one manner. In addition to these new meta-features, we also introduce a new meta-learner called Approximate Ranking Tree Forests (ART Forests) that performs very competitively when compared with several state-of-the-art meta-learners. Our experimental results are based on a large collection of datasets and show that the proposed new techniques can improve the overall performance of meta-learning for algorithm ranking significantly. A key point in our approach is that each performance figure of any base learner for any specific dataset is generated by optimising the parameters of the base learner separately for each dataset." ] }
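As a concrete illustration of landmarking, the sketch below characterises a dataset by the cross-validated accuracies of a few cheap base learners. The choice of landmarkers and of iris as the example dataset are assumptions for illustration, not the meta-features used by autoBagging or @cite_0.

```python
# Minimal "landmarker" meta-features: the cross-validated scores of a few
# cheap base learners are used to characterise a dataset. Illustrative only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
landmarkers = {
    "decision_stump": DecisionTreeClassifier(max_depth=1),
    "1nn": KNeighborsClassifier(n_neighbors=1),
    "naive_bayes": GaussianNB(),
}
# One meta-feature per landmarker: mean 5-fold accuracy of the cheap learner.
meta_features = {name: cross_val_score(clf, X, y, cv=5).mean()
                 for name, clf in landmarkers.items()}
print(meta_features)
```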
1706.09516
2798476254
This paper presents the key algorithmic techniques behind CatBoost, a new gradient boosting toolkit. Their combination leads to CatBoost outperforming other publicly available boosting implementations in terms of quality on a variety of datasets. Two critical algorithmic advances introduced in CatBoost are the implementation of ordered boosting, a permutation-driven alternative to the classic algorithm, and an innovative algorithm for processing categorical features. Both techniques were created to fight a prediction shift caused by a special kind of target leakage present in all currently existing implementations of gradient boosting algorithms. In this paper, we provide a detailed analysis of this problem and demonstrate that proposed algorithms solve it effectively, leading to excellent empirical results.
Thus, using target statistics (TS) as new numerical features seems to be the most efficient method of handling categorical features with minimum information loss. TS are widely used, e.g., in the click prediction task (click-through rates) @cite_19 @cite_10 @cite_23 @cite_7, where categorical features such as user, region, ad, and publisher play a crucial role. We further focus on ways to calculate TS and leave one-hot encoding and gradient statistics out of the scope of the current paper. At the same time, we believe that the ordering principle proposed in this paper is also effective for gradient statistics (an ordered-TS sketch follows the reference block below).
{ "cite_N": [ "@cite_19", "@cite_10", "@cite_7", "@cite_23" ], "mid": [ "2102486516", "2076618162", "2610314927", "2616657226" ], "abstract": [ "We consider situations where training data is abundant and computing resources are comparatively scarce. We argue that suitably designed online learning algorithms asymptotically outperform any batch learning algorithm. Both theoretical and experimental evidences are presented.", "Online advertising allows advertisers to only bid and pay for measurable user responses, such as clicks on ads. As a consequence, click prediction systems are central to most online advertising systems. With over 750 million daily active users and over 1 million active advertisers, predicting clicks on Facebook ads is a challenging machine learning task. In this paper we introduce a model which combines decision trees with logistic regression, outperforming either of these methods on its own by over 3 , an improvement with significant impact to the overall system performance. We then explore how a number of fundamental parameters impact the final prediction performance of our system. Not surprisingly, the most important thing is to have the right features: those capturing historical information about the user or ad dominate other types of features. Once we have the right features and the right model (decisions trees plus logistic regression), other factors play small roles (though even small improvements are important at scale). Picking the optimal handling for data freshness, learning rate schema and data sampling improve the model slightly, though much less than adding a high-value feature, or picking the right model to begin with.", "Accurate estimation of the click-through rate (CTR) in sponsored ads significantly impacts the user search experience and businesses' revenue, even 0.1 of accuracy improvement would yield greater earnings in the hundreds of millions of dollars. CTR prediction is generally formulated as a supervised classification problem. In this paper, we share our experience and learning on model ensemble design and our innovation. Specifically, we present 8 ensemble methods and evaluate them on our production data. Boosting neural networks with gradient boosting decision trees turns out to be the best. With larger training data, there is a nearly 0.9 AUC improvement in offline testing and significant click yield gains in online traffic. In addition, we share our experience and learning on improving the quality of training.", "We propose a general method called truncated gradient to induce sparsity in the weights of online-learning algorithms with convex loss functions. This method has several essential properties: (1) The degree of sparsity is continuous---a parameter controls the rate of sparsification from no sparsification to total sparsification. (2) The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular L1-regularization method in the batch setting. We prove that small rates of sparsification result in only small additional regret with respect to typical online-learning guarantees. (3) The approach works well empirically. We apply the approach to several data sets and find for data sets with large numbers of features, substantial sparsity is discoverable." ] }
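The sketch below illustrates one way to compute an ordered target statistic for a single categorical feature: each row's TS is estimated only from rows that precede it in a random permutation, so a row's own label never leaks into its own feature. This follows the ordering principle in spirit only; the smoothing constant `a` and the single-permutation setup are illustrative assumptions, not CatBoost's implementation.

```python
# Sketch of an "ordered" target statistic (TS) for one categorical feature:
# each row's TS uses only rows that precede it in a random permutation,
# avoiding conditioning on the row's own label. The prior/smoothing constant
# `a` is illustrative.
import numpy as np

def ordered_target_statistic(cats, y, a=1.0, seed=0):
    rng = np.random.default_rng(seed)
    prior = y.mean()
    perm = rng.permutation(len(y))
    sums, counts = {}, {}
    ts = np.empty(len(y))
    for i in perm:                        # visit rows in the permuted "history" order
        c = cats[i]
        s, n = sums.get(c, 0.0), counts.get(c, 0)
        ts[i] = (s + a * prior) / (n + a)  # TS from past rows of this category only
        sums[c] = s + y[i]
        counts[c] = n + 1
    return ts

cats = np.array(["a", "b", "a", "a", "b", "c"])
y = np.array([1, 0, 1, 0, 1, 1], dtype=float)
print(ordered_target_statistic(cats, y))
```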
1706.09229
2727768258
The theory of quantifier-free bit-vectors (QF_BV) is of paramount importance in software verification. The standard approach for satisfiability checking reduces the bit-vector problem to a Boolean problem, leveraging the powerful SAT solving techniques and their conflict-driven clause learning (CDCL) mechanisms. Yet, this bit-level approach loses the structure of the initial bit-vector problem. We propose a conflict-driven, word-level, combinable constraints learning for the theory of quantifier-free bit-vectors. This work paves the way to truly word-level decision procedures for bit-vectors, taking full advantage of word-level propagations recently designed in CP and SMT communities.
There are two closely related papers, both mentioned in the introduction. First, Wang et al. @cite_4 describe a solver that uses the same word-level propagations as we do but delegates bit-level learning to a SAT solver. Focusing on the learning scheme, which is the subject of our paper, the comparison between our work and theirs is therefore the same as with any bit-blasting learning approach (a toy word-level propagation sketch follows the reference entry below).
{ "cite_N": [ "@cite_4" ], "mid": [ "2465314249" ], "abstract": [ "Reasoning with bit-vectors arises in a variety of applications in verification and cryptography. Michel and Van Hentenryck have proposed an interesting approach to bit-vector constraint propagation on the word level. Each of the operations except comparison are constant time, assuming the bit-vector fits in a machine word. In contrast, bit-vector SMT solvers usually solve bit-vector problems by bit-blasting, that is, mapping the resulting operations to conjunctive normal form clauses, and using SAT technology to solve them. This also means generating intermediate variables which can be an advantage, as these can be searched on and learnt about. Since each approach has advantages it is important to see whether we can benefit from these advantages by using a word-level propagation approach with learning. In this paper we describe an approach to bit-vector solving using word-level propagation with learning. We provide alternative word-level propagators to Michel and Van Hentenryck, and give the first empirical evaluation of their approach that we are aware of. We show that, with careful engineering, a word-level propagation based approach can compete with (or complement) bit-blasting." ] }
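To make word-level propagation concrete, the sketch below implements a toy "known bits" domain and a propagator for the constraint z = x & y, in the spirit of the word-level propagations discussed above. The two-mask representation, the 8-bit width, and the propagation rules are simplified illustrative assumptions, not the cited solver's actual domains.

```python
# A toy "known bits" domain for word-level bit-vector propagation. Each
# variable keeps two masks: bits known to be 1 and bits known to be 0;
# one propagation pass tightens both sides of the constraint z = x & y.
WIDTH = 8

class BV:
    def __init__(self, ones=0, zeros=0):
        self.ones, self.zeros = ones, zeros   # known-1 and known-0 masks
    def consistent(self):
        return self.ones & self.zeros == 0    # no bit both 1 and 0
    def __repr__(self):
        return "".join("1" if self.ones >> i & 1 else
                       "0" if self.zeros >> i & 1 else "?"
                       for i in reversed(range(WIDTH)))

def propagate_and(x, y, z):
    """One pass over the constraint z = x & y; returns False on conflict."""
    z.ones |= x.ones & y.ones            # both inputs 1  => output 1
    z.zeros |= x.zeros | y.zeros         # any input 0    => output 0
    x.ones |= z.ones                     # output 1       => both inputs 1
    y.ones |= z.ones
    x.zeros |= z.zeros & y.ones          # output 0 and y=1 => x must be 0
    y.zeros |= z.zeros & x.ones
    return all(v.consistent() for v in (x, y, z))

x, y, z = BV(ones=0b0011), BV(), BV(zeros=0b0001)
print(propagate_and(x, y, z), x, y, z)   # deduces that y's lowest bit is 0
```

A real solver would run such rules to a fixpoint over all constraints; conflict-driven learning additionally records why each tightening happened, so that word-level conflicts can be explained and learned.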
1706.09562
2724695717
We study how different frame annotations complement one another when learning continuous lexical semantics. We learn the representations from a tensorized skip-gram model that consistently encodes syntactic-semantic content better, with multiple 10% gains over baselines.
Related work similarly explored the relationship between semantic frames and thematic proto-roles, proposing a Conditional Random Field @cite_19 to jointly and conditionally model SPR and SRL, and demonstrating slight improvements in jointly and conditionally predicting PropBank @cite_18 semantic role labels and proto-role labels (a toy joint-labelling sketch follows the reference block below).
{ "cite_N": [ "@cite_19", "@cite_18" ], "mid": [ "2147880316", "2251161546" ], "abstract": [ "We present conditional random fields , a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.", "This research describes SemLink, a comprehensive resource for Natural Language Processing that maps and unifies several highquality lexical resources: PropBank, VerbNet, FrameNet, and the recently added OntoNotes sense groupings. Each of these resources was created for slightly different purposes, and therefore each carries unique strengths and limitations. SemLink allows users to leverage the strengths of each resource and provides the groundwork for incorporating these lexical resources effectively into linked data resources. SemLink and the resources included therein are discussed with a focus on the value of using lexical resources in a complementary fashion. Recent improvements to SemLink, including the addition of a new resource, the OntoNotes sense groupings, are described. Work to address future goals, including further expansion of SemLink, is also discussed." ] }
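One simple way to realise such joint modelling is to label each token with a pair (SRL role, SPR property) drawn from the cross-product label space and train a standard linear-chain CRF over it. The sketch below does this with sklearn_crfsuite as one possible toolkit; the sentence, features, and toy proto-role tags are illustrative assumptions, not the cited model.

```python
# Sketch of the "joint label" idea behind conditionally modelling SRL and SPR
# together: each token's label is a pair (SRL role, SPR property), and a
# standard sequence labeller is trained over the cross-product label space.
import sklearn_crfsuite

def features(sent, i):
    return {"word": sent[i], "is_verb": sent[i].endswith("ed")}

sents = [["John", "kicked", "the", "ball"]]
srl =   [["ARG0", "V", "O", "ARG1"]]
spr =   [["volitional", "-", "-", "nonvolitional"]]   # toy proto-role tags

X = [[features(s, i) for i in range(len(s))] for s in sents]
y = [[f"{r}|{p}" for r, p in zip(rs, ps)] for rs, ps in zip(srl, spr)]  # joint labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```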
1706.09529
2726717203
We propose a novel and flexible approach to meta-learning for learning-to-learn from only a few examples. Our framework is motivated by actor-critic reinforcement learning, but can be applied to both reinforcement and supervised learning. The key idea is to learn a meta-critic: an action-value function neural network that learns to criticise any actor trying to solve any specified task. For supervised learning, this corresponds to the novel idea of a trainable task-parametrised loss generator. This meta-critic approach provides a route to knowledge transfer that can flexibly deal with few-shot and semi-supervised conditions for both reinforcement and supervised learning. Promising results are shown on both reinforcement and supervised learning problems.
There is extensive work on meta-learning to enable few-shot supervised @cite_30 @cite_38 @cite_5 @cite_4 @cite_8 and reinforcement @cite_6 @cite_14 @cite_10 @cite_19 learning. Only a few studies provide frameworks that do not require specific model architectures and that address both settings. The most related to ours in terms of architecture-agnostic few-shot learning in both SL and RL is @cite_16 . However, the methodologies are completely different: @cite_16 aims to achieve few-shot learning by learning a shared initialisation from which any individual task can easily be fine-tuned (a first-order sketch of this idea follows the reference block below). As we will show later, this approach is vulnerable to diversity among the tasks. When learning neural networks with SGD, such a single shared initialisation roughly corresponds to a shared prior representing the 'average' source task. The weights representing the average of previous tasks can be particularly bad when the distribution of tasks is multi-modal. Our meta-critic approach is more robust to such task diversity.
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_14", "@cite_4", "@cite_8", "@cite_6", "@cite_19", "@cite_5", "@cite_16", "@cite_10" ], "mid": [ "2432717477", "2753160622", "2097381042", "2194321275", "2435450765", "2173248099", "1486056878", "", "2951775809", "2741134157" ], "abstract": [ "Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6 to 93.2 and from 88.0 to 93.8 on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.", "Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.", "The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.", "People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms—for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world’s alphabets. 
The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several “visual Turing tests” probing the model’s creative generalization abilities, which in many cases are indistinguishable from human behavior.", "One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning to learn formulation. In order to make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark.", "We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.", "We study task sequences that allow for speeding up the learner‘s average reward intake through appropriate shifts of inductive bias (changes of the learner‘s policy). To evaluate long-term effects of bias shifts setting the stage for later bias shifts we use the “success-story algorithm” (SSA). SSA is occasionally called at times that may depend on the policy itself. It uses backtracking to undo those bias shifts that have not been empirically observed to trigger long-term reward accelerations (measured up until the current SSA call). Bias shifts that survive SSA represent a lifelong success history. Until the next SSA call, they are considered useful and build the basis for additional bias shifts. SSA allows for plugging in a wide variety of learning algorithms. We plug in (1) a novel, adaptive extension of Levin search and (2) a method for embedding the learner‘s policy modification strategy within the policy itself (incremental self-improvement). 
Our inductive transfer case studies involve complex, partially observable environments where traditional reinforcement learning fails.", "", "We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.", "" ] }
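The sketch below is a minimal first-order rendering of the shared-initialisation idea of @cite_16 on toy 1-D linear-regression tasks: an inner gradient step adapts to a sampled task, and an outer step moves the shared initialisation. The first-order approximation, the task family, and all step sizes are illustrative assumptions.

```python
# First-order sketch of shared-initialisation meta-learning (MAML-style):
# learn a single initial parameter from which each task is quickly fine-tuned.
# Tasks are toy 1-D linear regressions y = a * x with different slopes a.
import numpy as np

rng = np.random.default_rng(0)

def task_batch(a, n=20):
    x = rng.uniform(-1, 1, n)
    return x, a * x

def grad(w, x, y):                      # d/dw of mean((w*x - y)^2)
    return 2 * np.mean(x * (w * x - y))

w, alpha, beta = 0.0, 0.1, 0.01
for step in range(1000):
    a = rng.uniform(0.5, 2.0)           # sample a task (its slope)
    x_tr, y_tr = task_batch(a)
    w_task = w - alpha * grad(w, x_tr, y_tr)        # inner adaptation step
    x_te, y_te = task_batch(a)
    w = w - beta * grad(w_task, x_te, y_te)         # first-order outer update
print("shared initialisation:", w)      # near the middle of the task family
```

Because the learned w settles near the centre of the task family, a strongly multi-modal family (e.g. slopes clustered around -2 and +2) would pull it toward an initialisation good for neither mode, which is exactly the weakness discussed above.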
1706.09529
2726717203
We propose a novel and flexible approach to meta-learning for learning-to-learn from only a few examples. Our framework is motivated by actor-critic reinforcement learning, but can be applied to both reinforcement and supervised learning. The key idea is to learn a meta-critic: an action-value function neural network that learns to criticise any actor trying to solve any specified task. For supervised learning, this corresponds to the novel idea of a trainable task-parametrised loss generator. This meta-critic approach provides a route to knowledge transfer that can flexibly deal with few-shot and semi-supervised conditions for both reinforcement and supervised learning. Promising results are shown on both reinforcement and supervised learning problems.
Our approach is somewhat related to knowledge distillation @cite_7 @cite_37 -- the supervision of a student network by a teacher network. The meta-critic shares distillation's favourable ability to use unlabelled data to train the student actor. In contrast to regular distillation, where teacher and student are both single-task actors, our teacher is a meta-critic shared across all tasks. Related applications of distillation include actor-mimic @cite_17 , where multiple expert actor networks teach a single student actor how to solve multiple tasks. This is the opposite of our setup, in which a single teacher critic teaches multiple actor networks to solve different tasks (and thus provides knowledge transfer); a minimal distillation-loss sketch follows the reference block below.
{ "cite_N": [ "@cite_37", "@cite_7", "@cite_17" ], "mid": [ "1821462560", "", "2174786457" ], "abstract": [ "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "", "The ability to act in multiple environments and transfer previous knowledge to new situations can be considered a critical aspect of any intelligent agent. Towards this goal, we define a novel method of multitask and transfer learning that enables an autonomous agent to learn how to behave in multiple tasks simultaneously, and then generalize its knowledge to new domains. This method, termed \"Actor-Mimic\", exploits the use of deep reinforcement learning and model compression techniques to train a single policy network that learns how to act in a set of distinct tasks by using the guidance of several expert teachers. We then show that the representations learnt by the deep policy network are capable of generalizing to new tasks with no prior expert guidance, speeding up learning in novel environments. Although our method can in general be applied to a wide range of problems, we use Atari games as a testing environment to demonstrate these methods." ] }
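For reference, the sketch below shows the classic distillation loss of @cite_37: the student matches the teacher's temperature-softened class distribution. The temperature, tensor shapes, and random logits are illustrative assumptions.

```python
# Minimal knowledge-distillation loss: the student matches the teacher's
# temperature-softened class probabilities. T and the shapes are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # the T^2 factor keeps gradient magnitudes comparable across temperatures
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

student = torch.randn(8, 10, requires_grad=True)
teacher = torch.randn(8, 10)
loss = distillation_loss(student, teacher)
loss.backward()
print(float(loss))
```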
1706.09317
2950887446
A proper semantic representation for encoding side information is key to the success of zero-shot learning. In this paper, we explore two alternative semantic representations especially for zero-shot human action recognition: textual descriptions of human actions and deep features extracted from still images relevant to human actions. Such side information is accessible on the Web at little cost, which paves a new way of gaining side information for large-scale zero-shot human action recognition. We investigate different encoding methods to generate semantic representations for human actions from such side information. Based on our zero-shot visual recognition method, we conducted experiments on UCF101 and HMDB51 to evaluate the two proposed semantic representations. The results suggest that our proposed text- and image-based semantic representations outperform traditional attributes and word vectors considerably for zero-shot human action recognition. In particular, the image-based semantic representations yield favourable performance even though the representation is extracted from a small number of images per class.
Attribute-based semantic representations were first proposed for ZSL in @cite_1 ; thereafter, attributes have been employed for ZSL in many works @cite_25 @cite_14 @cite_18 @cite_7 . A set of binary attributes needs to be manually defined to represent the semantic properties of objects. As a result, each object class can be represented by a binary attribute vector, in which a value of one or zero indicates the presence or absence of each attribute, respectively. Since the attributes are shared by seen and unseen classes, knowledge transfer is enabled (a simplified attribute-based recognition sketch follows the reference block below). However, as mentioned above, the definition of attributes requires experts with domain knowledge to discriminate different classes, and the attribute annotation for a large number of classes can be subjective and labour-intensive.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_7", "@cite_1", "@cite_25" ], "mid": [ "2949823873", "2518962550", "2463762378", "2134270519", "2596142952" ], "abstract": [ "We investigate the problem of generalized zero-shot learning (GZSL). GZSL relaxes the unrealistic assumption in conventional ZSL that test data belong only to unseen novel classes. In GZSL, test data might also come from seen classes and the labeling space is the union of both types of classes. We show empirically that a straightforward application of the classifiers provided by existing ZSL approaches does not perform well in the setting of GZSL. Motivated by this, we propose a surprisingly simple but effective method to adapt ZSL approaches for GZSL. The main idea is to introduce a calibration factor to calibrate the classifiers for both seen and unseen classes so as to balance two conflicting forces: recognizing data from seen classes and those from unseen ones. We develop a new performance metric called the Area Under Seen-Unseen accuracy Curve to characterize this tradeoff. We demonstrate the utility of this metric by analyzing existing ZSL approaches applied to the generalized setting. Extensive empirical studies reveal strengths and weaknesses of those approaches on three well-studied benchmark datasets, including the large-scale ImageNet Full 2011 with 21,000 unseen categories. We complement our comparative studies in learning methods by further establishing an upper-bound on the performance limit of GZSL. There, our idea is to use class-representative visual features as the idealized semantic embeddings. We show that there is a large gap between the performance of existing approaches and the performance limit, suggesting that improving the quality of class semantic embeddings is vital to improving zero-shot learning.", "We develop a novel method for zero shot learning (ZSL) based on test-time adaptation of similarity functions learned using training data. Existing methods exclusively employ source-domain side information for recognizing unseen classes during test time. We show that for batch-mode applications, accuracy can be significantly improved by adapting these predictors to the observed test-time target-domain ensemble. We develop a novel structured prediction method for maximum a posteriori (MAP) estimation, where parameters account for test-time domain shift from what is predicted primarily using source domain information. We propose a Gaussian parameterization for the MAP problem and derive an efficient structure prediction algorithm. Empirically we test our method on four popular benchmark image datasets for ZSL, and show significant improvement over the state-of-the-art, on average, by 11.50 and 30.12 in terms of accuracy for recognition and mean average precision (mAP) for retrieval, respectively.", "Zero-shot learning for visual recognition, e.g., object and action recognition, has recently attracted a lot of attention. However, it still remains challenging in bridging the semantic gap between visual features and their underlying semantics and transferring knowledge to semantic categories unseen during learning. Unlike most of the existing zero-shot visual recognition methods, we propose a stagewise bidirectional latent embedding framework of two subsequent learning stages for zero-shot visual recognition. 
In the bottom–up stage, a latent embedding space is first created by exploring the topological and labeling information underlying training data of known classes via a proper supervised subspace learning algorithm and the latent embedding of training data are used to form landmarks that guide embedding semantics underlying unseen classes into this learned latent space. In the top–down stage, semantic representations of unseen-class labels in a given label vocabulary are then embedded to the same latent space to preserve the semantic relatedness between all different classes via our proposed semi-supervised Sammon mapping with the guidance of landmarks. Thus, the resultant latent embedding space allows for predicting the label of a test instance with a simple nearest-neighbor rule. To evaluate the effectiveness of the proposed framework, we have conducted extensive experiments on four benchmark datasets in object and action recognition, i.e., AwA, CUB-200-2011, UCF101 and HMDB51. The experimental results under comparative studies demonstrate that our proposed approach yields the state-of-the-art performance under inductive and transductive settings.", "We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.", "Due to the importance of zero-shot learning, the number of proposed approaches has increased steadily recently. We argue that it is time to take a step back and to analyze the status quo of the area. The purpose of this paper is three-fold. First, given the fact that there is no agreed upon zero-shot learning benchmark, we first define a new benchmark by unifying both the evaluation protocols and data splits. This is an important contribution as published results are often not comparable and sometimes even flawed due to, e.g. pre-training on zero-shot test classes. Second, we compare and analyze a significant number of the state-of-the-art methods in depth, both in the classic zero-shot setting but also in the more realistic generalized zero-shot setting. 
Finally, we discuss limitations of the current status of the area which can be taken as a basis for advancing it." ] }
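The sketch below gives a simplified, DAP-style instance of attribute-based zero-shot recognition in the spirit of @cite_1 : per-attribute classifiers are trained on seen classes, and a test sample is assigned to the class whose binary attribute vector best agrees with the predicted attributes. The synthetic data, attribute matrix, and agreement score are illustrative assumptions.

```python
# Simplified attribute-based zero-shot recognition: per-attribute classifiers
# trained on seen classes; unseen classes recognised by matching predicted
# attributes against class attribute vectors. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# rows: classes 0-2 are seen, class 3 is unseen; columns: binary attributes
attr = np.array([[1, 0, 1],
                 [0, 1, 1],
                 [1, 1, 0],
                 [0, 0, 1]])

def sample(cls, n):                     # toy features that weakly encode attributes
    return attr[cls] + 0.3 * rng.standard_normal((n, 3))

X_seen = np.vstack([sample(c, 50) for c in range(3)])
y_seen = np.repeat([0, 1, 2], 50)
# one binary classifier per attribute, supervised by seen-class attribute labels
clfs = [LogisticRegression().fit(X_seen, attr[y_seen, a]) for a in range(3)]

x_test = sample(3, 5)                   # samples from the unseen class
pred = np.column_stack([c.predict_proba(x_test)[:, 1] for c in clfs])
# agreement between predicted attributes and each class's attribute vector
scores = pred @ attr.T + (1 - pred) @ (1 - attr).T
print(scores.argmax(axis=1))            # mostly 3, the unseen class
```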
1706.09317
2950887446
A proper semantic representation for encoding side information is key to the success of zero-shot learning. In this paper, we explore two alternative semantic representations especially for zero-shot human action recognition: textual descriptions of human actions and deep features extracted from still images relevant to human actions. Such side information are accessible on Web with little cost, which paves a new way in gaining side information for large-scale zero-shot human action recognition. We investigate different encoding methods to generate semantic representations for human actions from such side information. Based on our zero-shot visual recognition method, we conducted experiments on UCF101 and HMDB51 to evaluate two proposed semantic representations . The results suggest that our proposed text- and image-based semantic representations outperform traditional attributes and word vectors considerably for zero-shot human action recognition. In particular, the image-based semantic representations yield the favourable performance even though the representation is extracted from a small number of images per class.
Alternatively, attributes can be mined automatically from visual features by discriminative mid-level feature learning @cite_19 @cite_0 @cite_22 @cite_11 @cite_16 , but their semantic meanings are unknown, which makes them inappropriate for direct use in ZSL (a toy illustration follows the reference block below). To enhance the attributes' discriminative power and semantic meaningfulness, the manually defined attributes and the ones automatically learned from training data are usually combined. However, data-driven attributes are usually dataset-specific and may fail to transfer to a different dataset.
{ "cite_N": [ "@cite_22", "@cite_0", "@cite_19", "@cite_16", "@cite_11" ], "mid": [ "1500937733", "2064851185", "2098411764", "", "2523479226" ], "abstract": [ "We propose a new learning method to infer a mid-level feature representation that combines the advantage of semantic attribute representations with the higher expressive power of non-semantic features. The idea lies in augmenting an existing attribute-based representation with additional dimensions for which an autoencoder model is coupled with a large-margin principle. This construction allows a smooth transition between the zero-shot regime with no training example, the unsupervised regime with training examples but without class labels, and the supervised regime with training examples and with class labels. The resulting optimization problem can be solved efficiently, because several of the necessity steps have closed-form solutions. Through extensive experiments we show that the augmented representation achieves better results in terms of object categorization accuracy than the semantic representation alone.", "In this paper we explore the idea of using high-level semantic concepts, also called attributes, to represent human actions from videos and argue that attributes enable the construction of more descriptive models for human action recognition. We propose a unified framework wherein manually specified attributes are: i) selected in a discriminative fashion so as to account for intra-class variability; ii) coherently integrated with data-driven attributes to make the attribute set more descriptive. Data-driven attributes are automatically inferred from the training data using an information theoretic approach. Our framework is built upon a latent SVM formulation where latent variables capture the degree of importance of each attribute for each action class. We also demonstrate that our attribute-based action representation can be effectively used to design a recognition procedure for classifying novel action classes for which no training samples are available. We test our approach on several publicly available datasets and obtain promising results that quantitatively demonstrate our theoretical claims.", "We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.", "", "In this letter, we propose a novel approach for learning semantics-driven attributes , which are discriminative for zero-shot visual recognition. 
Latent attributes are derived in a principled manner, aiming at maintaining class-level semantic relatedness and attribute-wise balancedness. Unlike existing methods that binarize learned real-valued attributes via a quantization stage, we directly learn the optimal binary attributes by effectively addressing a discrete optimization problem. Particularly, we propose a class-wise discrete descent algorithm, based on which latent attributes of each class are learned iteratively. Moreover, we propose to simultaneously predict multiple attributes from low-level features via multioutput neural networks (MONN), which can model intrinsic correlation among attributes and make prediction more tractable. Extensive experiments on two standard datasets clearly demonstrate the superiority of our method over the state-of-the-arts." ] }
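As a toy illustration of mining attribute-like representations directly from visual features, the sketch below uses k-means centroids as mid-level detectors and soft assignments to them as data-driven "attributes". This is one deliberately simple instantiation, not the learning procedure of any cited method; the feature dimensionality and number of clusters are assumptions.

```python
# Toy data-driven "attributes": k-means centroids act as mid-level detectors,
# and soft assignments to them become an attribute-like representation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))          # stand-in for visual features

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
dists = km.transform(X)                     # distance of each sample to each centroid
soft = np.exp(-dists)
data_driven_attrs = soft / soft.sum(axis=1, keepdims=True)
print(data_driven_attrs.shape)              # (200, 8): 8 learned "attributes"
```

The resulting dimensions are discriminative but carry no names, which is exactly why such attributes cannot be matched to unseen classes without additional semantic grounding.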
1706.09317
2950887446
A proper semantic representation for encoding side information is key to the success of zero-shot learning. In this paper, we explore two alternative semantic representations especially for zero-shot human action recognition: textual descriptions of human actions and deep features extracted from still images relevant to human actions. Such side information are accessible on Web with little cost, which paves a new way in gaining side information for large-scale zero-shot human action recognition. We investigate different encoding methods to generate semantic representations for human actions from such side information. Based on our zero-shot visual recognition method, we conducted experiments on UCF101 and HMDB51 to evaluate two proposed semantic representations . The results suggest that our proposed text- and image-based semantic representations outperform traditional attributes and word vectors considerably for zero-shot human action recognition. In particular, the image-based semantic representations yield the favourable performance even though the representation is extracted from a small number of images per class.
To alleviate the semantic gap problem, some attempts have been made to enhance the word vectors @cite_12 @cite_17 @cite_2 @cite_15 . Inoue et al. @cite_12 adapt the original word vectors to make two visually similar concepts close to each other in the adapted word vector space, by representing a concept as a weighted sum of its original word vector and the word vectors of its hypernyms (taken from the WordNet hierarchy), with the weights learned from visual resources. Alexiou et al. @cite_17 enrich the word vector representation by mining and considering synonyms of the action class labels from multiple textual sources. Mukherjee et al. @cite_2 model each class label with a distribution instead of a single word vector, so that intra-class variability can be expressed properly in the semantic representations. To address the issue of polysemy, Sandouk et al. @cite_15 learn a specific vector representation for a word together with its context; that is, the same word can have different vector representations in different contexts. Inspired by these works, our work further investigates possible side information and enabling techniques to enhance word vectors for ZSL (a toy hypernym-adaptation sketch follows the reference block below).
{ "cite_N": [ "@cite_15", "@cite_2", "@cite_12", "@cite_17" ], "mid": [ "2416900635", "2561940122", "2525183655", "2507611421" ], "abstract": [ "Zero Shot Learning (ZSL) enables a learning model to classify instances of an unseen class during training. While most research in ZSL focuses on single-label classification, few studies have been done in multi-label ZSL, where an instance is associated with a set of labels simultaneously, due to the difficulty in modeling complex semantics conveyed by a set of labels. In this paper, we propose a novel approach to multi-label ZSL via concept embedding learned from collections of public users' annotations of multimedia. Thanks to concept embedding, multi-label ZSL can be done by efficiently mapping an instance input features onto the concept embedding space in a similar manner used in single-label ZSL. Moreover, our semantic learning model is capable of embedding an out-of-vocabulary label by inferring its meaning from its co-occurring labels. Thus, our approach allows both seen and unseen labels during the concept embedding learning to be used in the aforementioned instance mapping, which makes multi-label ZSL more flexible and suitable for real applications. Experimental results of multi-label ZSL on images and music tracks suggest that our approach outperforms a state-of-the-art multi-label ZSL model and can deal with a scenario involving out-of-vocabulary labels without re-training the semantics learning model.", "", "We propose a framework of word-vector adaptation, which makes vectors of visually similar concepts close to each other. Here, word vectors are real-valued vector representation of words, e.g., word2vec representation. Our basic idea is to assume that each concept has some hypernyms that are important to determine its visual features. For example, for a concept Swallow with hypernyms Bird, Animal and Entity, we believe Bird is the most important since birds have common visual features with their feathers etc. Adapted word vectors are obtained for each word by taking a weighted sum of a given original word vector and its hypernym word vectors. Our weight optimization makes vectors of visually similar concepts close to each other, by giving a large weight for such important hypernyms. We apply the adapted word vectors to zero-shot learning on the TRECVID 2014 semantic indexing dataset. We achieved 0.083 of Mean Average Precision, which is the best performance without using TRECVID training data to the best of our knowledge.", "Zero shot learning (ZSL) provides a solution to recognising unseen classes without class labelled data for model learning. Most ZSL methods aim to learn a mapping from a visual feature space to a semantic embedding space, e.g. attribute or word vector spaces. The use of word vector space is particularly attractive as compared to attribute, it offers vast auxiliary classes with free parts embedding without human annotation. However, using the word vector embedding often provides weaker discriminative power than manually labelled attributes of the auxiliary classes. This is compounded further in zero-shot action recognition due to richer content variations among action classes. In this work we propose to explore a broader semantic contextual information in the text domain to enrich the word vector representation of action classes. We show through extensive experiments that this method improves significantly the performance of a number of existing word vector embedding ZSL methods. 
Moreover, it also outperforms attribute embedding ZSL with human annotation." ] }
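The sketch below shows the adaptation rule of @cite_12 in its simplest form: a concept's adapted vector is a convex combination of its own word vector and its hypernyms' vectors. The toy vectors, the hypernym list, and the fixed weights are illustrative assumptions; in the cited work the weights are optimised so that visually similar concepts end up close.

```python
# Hypernym-based word-vector adaptation: adapted(word) is a convex combination
# of the word's own vector and its hypernyms' vectors. Values are toy.
import numpy as np

vec = {
    "swallow": np.array([0.9, 0.1, 0.0]),
    "bird":    np.array([0.7, 0.3, 0.1]),
    "animal":  np.array([0.4, 0.4, 0.3]),
}
hypernyms = {"swallow": ["bird", "animal"]}

def adapt(word, weights):
    ws = np.asarray(weights, dtype=float)          # weight for the word itself first
    vs = np.stack([vec[word]] + [vec[h] for h in hypernyms[word]])
    return (ws / ws.sum()) @ vs                    # convex combination

# put most of the mass on the visually informative hypernym "bird"
print(adapt("swallow", [0.4, 0.5, 0.1]))
```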
1706.09278
2951581365
Learning relations based on evidence from knowledge bases relies on processing the available relation instances. Many relations, however, have clear domain and range, which we hypothesize could help learn a better, more generalizing, model. We include such information in the RESCAL model in the form of a regularization factor added to the loss function that takes into account the types (categories) of the entities that appear as arguments to relations in the knowledge base. We note increased performance compared to the baseline model in terms of mean reciprocal rank and hits@N, N = 1, 3, 10. Furthermore, we discover scenarios that significantly impact the effectiveness of the type regularizer.
A variety of latent factor models @cite_4 @cite_9 @cite_6 @cite_10 have been developed to represent entities and relations in a knowledge graph, and have been used to address the knowledge base completion (KBC) problem. Most latent factor models are trained on either knowledge graph triples or triples extracted by open-domain knowledge extraction tools @cite_10 . A notable exception is the RNN model proposed by @cite_1 , which learns path embeddings for knowledge base completion. @cite_15 propose a compositional objective function over latent factor models, which is trained on paths as well as triples. For compositional models, @cite_5 shows that incorporating intermediate entity information, in the form of latent factors, improves KBC performance. Source and target types, however, are not explicitly included (a minimal RESCAL scoring sketch follows the reference block below).
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_1", "@cite_6", "@cite_5", "@cite_15", "@cite_10" ], "mid": [ "2099752825", "", "2952155763", "", "2460319482", "1934264538", "1852412531" ], "abstract": [ "Vast amounts of structured information have been published in the Semantic Web's Linked Open Data (LOD) cloud and their size is still growing rapidly. Yet, access to this information via reasoning and querying is sometimes difficult, due to LOD's size, partial data inconsistencies and inherent noisiness. Machine Learning offers an alternative approach to exploiting LOD's data with the advantages that Machine Learning algorithms are typically robust to both noise and data inconsistencies and are able to efficiently utilize non-deterministic dependencies in the data. From a Machine Learning point of view, LOD is challenging due to its relational nature and its scale. Here, we present an efficient approach to relational learning on LOD data, based on the factorization of a sparse tensor that scales to data consisting of millions of entities, hundreds of relations and billions of known facts. Furthermore, we show how ontological knowledge can be incorporated in the factorization to improve learning results and how computation can be distributed across multiple nodes. We demonstrate that our approach is able to factorize the YAGO 2 core ontology and globally predict statements for this large knowledge base using a single dual-core desktop computer. Furthermore, we show experimentally that our approach achieves good results in several relational learning tasks that are relevant to Linked Data. Once a factorization has been computed, our model is able to predict efficiently, and without any additional training, the likelihood of any of the 4.3 ⋅ 1014 possible triples in the YAGO 2 core ontology.", "", "Knowledge base (KB) completion adds new facts to a KB by making inferences from existing facts, for example by inferring with high likelihood nationality(X,Y) from bornIn(X,Y). Most previous methods infer simple one-hop relational synonyms like this, or use as evidence a multi-hop relational path treated as an atomic feature, like bornIn(X,Z) -> containedIn(Z,Y). This paper presents an approach that reasons about conjunctions of multi-hop relations non-atomically, composing the implications of a path using a recursive neural network (RNN) that takes as inputs vector embeddings of the binary relation in the path. Not only does this allow us to generalize to paths unseen at training time, but also, with a single high-capacity RNN, to predict new relation types not seen when the compositional model was trained (zero-shot learning). We assemble a new dataset of over 52M relational triples, and show that our method improves over a traditional classifier by 11 , and a method leveraging pre-trained embeddings by 7 .", "", "Modeling relation paths has offered significant gains in embedding models for knowledge base (KB) completion. However, enumerating paths between two entities is very expensive, and existing approaches typically resort to approximation with a sampled subset. This problem is particularly acute when text is jointly modeled with KB relations and used to provide direct evidence for facts mentioned in it. In this paper, we propose the first exact dynamic programming algorithm which enables efficient incorporation of all relation paths of bounded length, while modeling both relation types and intermediate nodes in the compositional path representations. 
We conduct a theoretical analysis of the efficiency gain from the approach. Experiments on two datasets show that it addresses representational limitations in prior approaches and improves accuracy in KB completion.", "Path queries on a knowledge graph can be used to answer compositional questions such as \"What languages are spoken by people living in Lisbon?\". However, knowledge graphs often have missing facts (edges) which disrupts path queries. Recent models for knowledge base completion impute missing facts by embedding knowledge graphs in vector spaces. We show that these models can be recursively applied to answer path queries, but that they suffer from cascading errors. This motivates a new \"compositional\" training objective, which dramatically improves all models' ability to answer path queries, in some cases more than doubling accuracy. On a standard knowledge base completion task, we also demonstrate that compositional training acts as a novel form of structural regularization, reliably improving performance across all base models (reducing errors by up to 43 ) and achieving new state-of-the-art results.", "© 2013 Association for Computational Linguistics. Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a universal schema: the union of all involved schemas (surface form predicates as in OpenIE, and relations in the schemas of preexisting databases). This schema has an almost unlimited set of relations (due to surface forms), and supports integration with existing structured data (through the relation types of existing databases). To populate a database of such schema we present matrix factorization models that learn latent feature vectors for entity tuples and relations. We show that such latent models achieve substantially higher accuracy than a traditional classification approach. More importantly, by operating simultaneously on relations observed in text and in pre-existing structured DBs such as Freebase, we are able to reason about unstructured and structured data in mutually-supporting ways. By doing so our approach outperforms stateof- the-Art distant supervision." ] }
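For reference, the sketch below shows RESCAL's bilinear triple score, the base model that the type regularizer above extends: entity i has embedding a_i, relation k has matrix R_k, and score(i, k, j) = a_i^T R_k a_j. All sizes and the random parameters are illustrative; the type regularizer itself would appear as an extra term added to the training loss.

```python
# Minimal RESCAL bilinear score: score(i, k, j) = a_i^T R_k a_j.
# Dimensions and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, d = 5, 2, 4
A = rng.standard_normal((n_entities, d))          # entity embeddings
R = rng.standard_normal((n_relations, d, d))      # one matrix per relation

def score(i, k, j):
    return A[i] @ R[k] @ A[j]

print(score(0, 1, 3))
```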
1706.09278
2951581365
Learning relations based on evidence from knowledge bases relies on processing the available relation instances. Many relations, however, have clear domain and range, which we hypothesize could help learn a better, more generalizing, model. We include such information in the RESCAL model in the form of a regularization factor added to the loss function that takes into account the types (categories) of the entities that appear as arguments to relations in the knowledge base. We note increased performance compared to the baseline model in terms of mean reciprocal rank and hits@N, N = 1, 3, 10. Furthermore, we discover scenarios that significantly impact the effectiveness of the type regularizer.
@cite_11 make use of the type information and produce a variation of RESCAL that they call TRESCAL (Typed RESCAL). The type information is used to improve the efficiency of the model: when computing the loss function for a relation, the entity matrix is reduced to the entities belonging to the domain and range of that relation. Entity types as such are only implicitly incorporated, as a property shared by the entities singled out for computing the loss function.
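To make the restriction concrete, here is a minimal numpy sketch of how we understand the TRESCAL loss computation; the function and variable names are ours, not from @cite_11, and the full model additionally alternates updates of A and the per-relation matrices.

```python
import numpy as np

def trescal_loss(X_k, A, R_k, domain_idx, range_idx):
    """Reconstruction loss for one relation, restricted to typed entities.

    X_k        : (n, n) observed adjacency matrix for relation k
    A          : (n, r) entity embedding matrix
    R_k        : (r, r) relation interaction matrix
    domain_idx : indices of entities typed as the relation's domain
    range_idx  : indices of entities typed as the relation's range
    """
    A_dom = A[domain_idx]                        # subject embeddings, domain only
    A_rng = A[range_idx]                         # object embeddings, range only
    X_sub = X_k[np.ix_(domain_idx, range_idx)]   # observed block for typed pairs
    # The bilinear RESCAL reconstruction is computed only over the typed block,
    # which shrinks every matrix product and thus speeds up each update.
    return np.sum((X_sub - A_dom @ R_k @ A_rng.T) ** 2)
```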
{ "cite_N": [ "@cite_11" ], "mid": [ "2102363952" ], "abstract": [ "While relation extraction has traditionally been viewed as a task relying solely on textual data, recent work has shown that by taking as input existing facts in the form of entity-relation triples from both knowledge bases and textual data, the performance of relation extraction can be improved significantly. Following this new paradigm, we propose a tensor decomposition approach for knowledge base embedding that is highly scalable, and is especially suitable for relation extraction. By leveraging relational domain knowledge about entity type information, our learning algorithm is significantly faster than previous approaches and is better able to discover new relations missing from the database. In addition, when applied to a relation extraction task, our approach alone is comparable to several existing systems, and improves the weighted mean average precision of a state-of-theart method by 10 points when used as a subcomponent." ] }
1706.09278
2951581365
Learning relations based on evidence from knowledge bases relies on processing the available relation instances. Many relations, however, have a clear domain and range, which we hypothesize could help learn a better, more generalizing model. We include such information in the RESCAL model in the form of a regularization factor added to the loss function that takes into account the types (categories) of the entities that appear as arguments to relations in the knowledge base. We note increased performance compared to the baseline model in terms of mean reciprocal rank and hits@N, N = 1, 3, 10. Furthermore, we discover scenarios that significantly impact the effectiveness of the type regularizer.
@cite_3 builds on @cite_1 and uses an RNN to model paths that incorporate type information for the entities along the path. Each entity is represented as the sum of its entity-type embeddings, which are learned during training. Including this information leads to higher performance.
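A small sketch of the entity representation described here, under our own naming: each entity vector is the sum of its trainable type embeddings, and at every hop the path RNN consumes the relation embedding together with the following entity's vector. The actual model in @cite_3 also uses attention over multiple paths, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50                                        # embedding dimension (illustrative)
type_emb = {t: rng.normal(size=d) for t in ["person", "city", "country"]}
W_r, W_e, W_h = (rng.normal(size=(d, d)) for _ in range(3))

def entity_vector(entity_types):
    """Represent an entity as the sum of its (learned) type embeddings."""
    return np.sum([type_emb[t] for t in entity_types], axis=0)

def compose_path(relation_vecs, entity_vecs):
    """RNN-style path composition consuming a relation and the following
    intermediate entity at each hop (a simplification of the model)."""
    h = np.zeros(d)
    for r_vec, e_vec in zip(relation_vecs, entity_vecs):
        h = np.tanh(W_r @ r_vec + W_e @ e_vec + W_h @ h)
    return h
```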
{ "cite_N": [ "@cite_1", "@cite_3" ], "mid": [ "2952155763", "2466714650" ], "abstract": [ "Knowledge base (KB) completion adds new facts to a KB by making inferences from existing facts, for example by inferring with high likelihood nationality(X,Y) from bornIn(X,Y). Most previous methods infer simple one-hop relational synonyms like this, or use as evidence a multi-hop relational path treated as an atomic feature, like bornIn(X,Z) -> containedIn(Z,Y). This paper presents an approach that reasons about conjunctions of multi-hop relations non-atomically, composing the implications of a path using a recursive neural network (RNN) that takes as inputs vector embeddings of the binary relation in the path. Not only does this allow us to generalize to paths unseen at training time, but also, with a single high-capacity RNN, to predict new relation types not seen when the compositional model was trained (zero-shot learning). We assemble a new dataset of over 52M relational triples, and show that our method improves over a traditional classifier by 11 , and a method leveraging pre-trained embeddings by 7 .", "Our goal is to combine the rich multistep inference of symbolic logical reasoning with the generalization capabilities of neural networks. We are particularly interested in complex reasoning about entities and relations in text and large-scale knowledge bases (KBs). (2015) use RNNs to compose the distributed semantics of multi-hop paths in KBs; however for multiple reasons, the approach lacks accuracy and practicality. This paper proposes three significant modeling advances: (1) we learn to jointly reason about relations, entities, and entity-types; (2) we use neural attention modeling to incorporate multiple paths; (3) we learn to share strength in a single RNN that represents logical composition across all relations. On a largescale Freebase+ClueWeb prediction task, we achieve 25 error reduction, and a 53 error reduction on sparse relations due to shared strength. On chains of reasoning in WordNet we reduce error in mean quantile by 84 versus previous state-of-the-art. The code and data are available at this https URL" ] }