aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1710.02173 | 2763925030 | While clustering is one of the most popular methods for data mining, analysts lack adequate tools for quick, iterative clustering analysis, which is essential for hypothesis generation and data reasoning. We introduce Clustrophile, an interactive tool for iteratively computing discrete and continuous data clusters, rapidly exploring different choices of clustering parameters, and reasoning about clustering instances in relation to data dimensions. Clustrophile combines three basic visualizations -- a table of raw datasets, a scatter plot of planar projections, and a matrix diagram (heatmap) of discrete clusterings -- through interaction and intermediate visual encoding. Clustrophile also contributes two spatial interaction techniques, @math and @math , and a visualization method, @math , for reasoning about two-dimensional projections obtained through dimensionality reductions. | Earlier work also proposes tools that make it possible to incorporate user feedback into clustering formation. Matchmaker @cite_38 builds on techniques from @cite_44 with the ability to modify clusterings by grouping data dimensions. ClusterSculptor @cite_25 and Cluster Sculptor @cite_21 , two different tools, enable users to supervise clustering processes in various clustering methods. Schreck et al. @cite_19 propose using user feedback to bootstrap the similarity evaluation in data space (trajectories, in this case) before applying the clustering algorithm. | {
"cite_N": [
"@cite_38",
"@cite_21",
"@cite_44",
"@cite_19",
"@cite_25"
],
"mid": [
"2171575586",
"1978998933",
"2062937620",
"1990086701",
"2164223342"
],
"abstract": [
"When analyzing multidimensional, quantitative data, the comparison of two or more groups of dimensions is a common task. Typical sources of such data are experiments in biology, physics or engineering, which are conducted in different configurations and use replicates to ensure statistically significant results. One common way to analyze this data is to filter it using statistical methods and then run clustering algorithms to group similar values. The clustering results can be visualized using heat maps, which show differences between groups as changes in color. However, in cases where groups of dimensions have an a priori meaning, it is not desirable to cluster all dimensions combined, since a clustering algorithm can fragment continuous blocks of records. Furthermore, identifying relevant elements in heat maps becomes more difficult as the number of dimensions increases. To aid in such situations, we have developed Matchmaker, a visualization technique that allows researchers to arbitrarily arrange and compare multiple groups of dimensions at the same time. We create separate groups of dimensions which can be clustered individually, and place them in an arrangement of heat maps reminiscent of parallel coordinates. To identify relations, we render bundled curves and ribbons between related records in different groups. We then allow interactive drill-downs using enlarged detail views of the data, which enable in-depth comparisons of clusters between groups. To reduce visual clutter, we minimize crossings between the views. This paper concludes with two case studies. The first demonstrates the value of our technique for the comparison of clustering algorithms. In the second, biologists use our system to investigate why certain strains of mice develop liver disease while others remain healthy, informally showing the efficacy of our system when analyzing multidimensional data containing distinct groups of dimensions.",
"This paper describes Cluster Sculptor, a novel interactive clustering system that allows a user to iteratively update the cluster labels of a data set, and an associated low-dimensional projection. The system is fed by clustering results computed in a high-dimensional space, and uses a two-dimensional (2D) projection, both as support for overlaying the cluster labels, and engaging user interaction. By easily interacting with elements directly in the visualization, the user can inject his or her domain knowledge progressively. Via interactive controls, the distribution of the data in the 2D space can be used to amend the cluster labels. Reciprocally, the 2D projection can be updated so as to emphasize the current clusters. The 2D projection updates follow a smooth physical metaphor that gives insight of the process to the user. Updates can be interrupted any time, for further data inspection, or modifying the input preferences. The interest of the system is demonstrated by detailed experimental scenarios on three real data sets.",
"To date, work in microarrays, sequenced genomes and bioinformatics has focused largely on algorithmic methods for processing and manipulating vast biological data sets. Future improvements will likely provide users with guidance in selecting the most appropriate algorithms and metrics for identifying meaningful clusters-interesting patterns in large data sets, such as groups of genes with similar profiles. Hierarchical clustering has been shown to be effective in microarray data analysis for identifying genes with similar profiles and thus possibly with similar functions. Users also need an efficient visualization tool, however, to facilitate pattern extraction from microarray data sets. The Hierarchical Clustering Explorer integrates four interactive features to provide information visualization techniques that allow users to control the processes and interact with the results. Thus, hybrid approaches that combine powerful algorithms with interactive visualization tools will join the strengths of fast processors with the detailed understanding of domain experts.",
"Visual-interactive cluster analysis provides valuable tools for effectively analyzing large and complex data sets. Due to desirable properties and an inherent predisposition for visualization, the Kohonen Feature Map (or self-organizing map, or SOM) algorithm is among the most popular and widely used visual clustering techniques. However, the unsupervised nature of the algorithm may be disadvantageous in certain applications. Depending on initialization and data characteristics, cluster maps (cluster layouts) may emerge that do not comply with user preferences, expectations, or the application context. Considering SOM-based analysis of trajectory data, we propose a comprehensive visual-interactive monitoring and control framework extending the basic SOM algorithm. The framework implements the general Visual Analytics idea to effectively combine automatic data analysis with human expert supervision. It provides simple, yet effective facilities for visually monitoring and interactively controlling the trajectory clustering process at arbitrary levels of detail. The approach allows the user to leverage existing domain knowledge and user preferences, arriving at improved cluster maps. We apply the framework on a trajectory clustering problem, demonstrating its potential in combining both unsupervised (machine) and supervised (human expert) processing, in producing appropriate cluster results.",
"Cluster analysis (CA) is a powerful strategy for the exploration of high-dimensional data in the absence of a-priori hypotheses or data classification models, and the results of CA can then be used to form such models. But even though formal models and classification rules may not exist in these data exploration scenarios, domain scientists and experts generally have a vast amount of non-compiled knowledge and intuition that they can bring to bear in this effort. In CA, there are various popular mechanisms to generate the clusters, however, the results from their non-supervised deployment rarely fully agree with this expert knowledge and intuition. To this end, our paper describes a comprehensive and intuitive framework to aid scientists in the derivation of classification hierarchies in CA, using k-means as the overall clustering engine, but allowing them to tune its parameters interactively based on a non-distorted compact visual presentation of the inherent characteristics of the data in high-dimensional space. These include cluster geometry, composition, spatial relations to neighbors, and others. In essence, we provide all the tools necessary for a high-dimensional activity we call cluster sculpting, and the evolving hierarchy can then be viewed in a space-efficient radial dendrogram. We demonstrate our system in the context of the mining and classification of a large collection of millions of data items of aerosol mass spectra, but our framework readily applies to any high-dimensional CA scenario."
]
} |
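The row above surveys tools that fold user feedback into cluster formation. A minimal sketch of one such mechanism — re-weighting data dimensions before clustering, so that user-emphasized dimensions dominate the distance metric — might look like the following. This is an illustrative toy, not code from any cited tool, and `weighted_kmeans` is a hypothetical name:

```python
import random

random.seed(0)

def weighted_kmeans(points, k, weights, iters=50):
    """Plain k-means where each dimension is scaled by a user-chosen
    weight before distances are computed -- a crude stand-in for the
    kind of feedback loop the tools above support."""
    # Scale each point by the per-dimension weights.
    scaled = [[w * x for w, x in zip(weights, p)] for p in points]
    centers = random.sample(scaled, k)
    for _ in range(iters):
        # Assign each point to its nearest center in the weighted space.
        labels = [
            min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            for p in scaled
        ]
        # Recompute centers as cluster means.
        for c in range(k):
            members = [p for p, l in zip(scaled, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels
```

Setting a dimension's weight to zero tells the algorithm to ignore it entirely, so a noisy dimension can be suppressed by the analyst rather than by the algorithm.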
1710.02173 | 2763925030 | While clustering is one of the most popular methods for data mining, analysts lack adequate tools for quick, iterative clustering analysis, which is essential for hypothesis generation and data reasoning. We introduce Clustrophile, an interactive tool for iteratively computing discrete and continuous data clusters, rapidly exploring different choices of clustering parameters, and reasoning about clustering instances in relation to data dimensions. Clustrophile combines three basic visualizations -- a table of raw datasets, a scatter plot of planar projections, and a matrix diagram (heatmap) of discrete clusterings -- through interaction and intermediate visual encoding. Clustrophile also contributes two spatial interaction techniques, @math and @math , and a visualization method, @math , for reasoning about two-dimensional projections obtained through dimensionality reductions. | Prior work has also introduced techniques for comparing clustering results of different datasets or different algorithms @cite_33 @cite_12 @cite_14 @cite_44 . DICON @cite_33 encodes statistical properties of clustering instances as icons and embeds them in the plane based on similarity using multidimensional scaling. Pilhofer et al. @cite_14 propose a method for reordering categorical variables to align with each other and thus augment the visual comparison of clusterings. The recent tool XCluSim @cite_12 supports comparison of several clustering results of gene expression datasets using an approach similar to that of the Hierarchical Clustering Explorer. | {
"cite_N": [
"@cite_44",
"@cite_14",
"@cite_33",
"@cite_12"
],
"mid": [
"2062937620",
"2013172302",
"2157530472",
"2143867397"
],
"abstract": [
"To date, work in microarrays, sequenced genomes and bioinformatics has focused largely on algorithmic methods for processing and manipulating vast biological data sets. Future improvements will likely provide users with guidance in selecting the most appropriate algorithms and metrics for identifying meaningful clusters-interesting patterns in large data sets, such as groups of genes with similar profiles. Hierarchical clustering has been shown to be effective in microarray data analysis for identifying genes with similar profiles and thus possibly with similar functions. Users also need an efficient visualization tool, however, to facilitate pattern extraction from microarray data sets. The Hierarchical Clustering Explorer integrates four interactive features to provide information visualization techniques that allow users to control the processes and interact with the results. Thus, hybrid approaches that combine powerful algorithms with interactive visualization tools will join the strengths of fast processors with the detailed understanding of domain experts.",
"Classifying a set of objects into clusters can be done in numerous ways, producing different results. They can be visually compared using contingency tables [27], mosaicplots [13], fluctuation diagrams [15], tableplots [20] , (modified) parallel coordinates plots [28], Parallel Sets plots [18] or circos diagrams [19]. Unfortunately the interpretability of all these graphical displays decreases rapidly with the numbers of categories and clusterings. In his famous book A Semiology of Graphics [5] Bertin writes “the discovery of an ordered concept appears as the ultimate point in logical simplification since it permits reducing to a single instant the assimilation of series which previously required many instants of study”. Or in more everyday language, if you use good orderings you can see results immediately that with other orderings might take a lot of effort. This is also related to the idea of effect ordering [12], that data should be organised to reflect the effect you want to observe. This paper presents an efficient algorithm based on Bertin's idea and concepts related to Kendall's τ [17], which finds informative joint orders for two or more nominal classification variables. We also show how these orderings improve the various displays and how groups of corresponding categories can be detected using a top-down partitioning algorithm. Different clusterings based on data on the environmental performance of cars sold in Germany are used for illustration. All presented methods are available in the R package extracat which is used to compute the optimized orderings for the example dataset.",
"Clustering as a fundamental data analysis technique has been widely used in many analytic applications. However, it is often difficult for users to understand and evaluate multidimensional clustering results, especially the quality of clusters and their semantics. For large and complex data, high-level statistical information about the clusters is often needed for users to evaluate cluster quality while a detailed display of multidimensional attributes of the data is necessary to understand the meaning of clusters. In this paper, we introduce DICON, an icon-based cluster visualization that embeds statistical information into a multi-attribute display to facilitate cluster interpretation, evaluation, and comparison. We design a treemap-like icon to represent a multidimensional cluster, and the quality of the cluster can be conveniently evaluated with the embedded statistical information. We further develop a novel layout algorithm which can generate similar icons for similar clusters, making comparisons of clusters easier. User interaction and clutter reduction are integrated into the system to help users more effectively analyze and refine clustering results for large datasets. We demonstrate the power of DICON through a user study and a case study in the healthcare domain. Our evaluation shows the benefits of the technique, especially in support of complex multidimensional cluster analysis.",
"Background Though cluster analysis has become a routine analytic task for bioinformatics research, it is still arduous for researchers to assess the quality of a clustering result. To select the best clustering method and its parameters for a dataset, researchers have to run multiple clustering algorithms and compare them. However, such a comparison task with multiple clustering results is cognitively demanding and laborious."
]
} |
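The comparison task this row describes starts, in its simplest form, from the contingency table between two clusterings: cell (i, j) counts how many items land in cluster i under one clustering and cluster j under the other. A minimal sketch (not code from any cited system; `contingency_table` is an illustrative name):

```python
from collections import Counter

def contingency_table(labels_a, labels_b):
    """Cross-tabulate two clusterings of the same items: entry (i, j)
    counts items assigned to cluster i by A and cluster j by B."""
    counts = Counter(zip(labels_a, labels_b))
    rows = sorted(set(labels_a))  # cluster ids under clustering A
    cols = sorted(set(labels_b))  # cluster ids under clustering B
    return [[counts[(i, j)] for j in cols] for i in rows]
```

A near-diagonal table (possibly after row/column reordering, which is exactly the problem Pilhofer et al. address) indicates that the two clusterings largely agree.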
1710.02173 | 2763925030 | While clustering is one of the most popular methods for data mining, analysts lack adequate tools for quick, iterative clustering analysis, which is essential for hypothesis generation and data reasoning. We introduce Clustrophile, an interactive tool for iteratively computing discrete and continuous data clusters, rapidly exploring different choices of clustering parameters, and reasoning about clustering instances in relation to data dimensions. Clustrophile combines three basic visualizations -- a table of raw datasets, a scatter plot of planar projections, and a matrix diagram (heatmap) of discrete clusterings -- through interaction and intermediate visual encoding. Clustrophile also contributes two spatial interaction techniques, @math and @math , and a visualization method, @math , for reasoning about two-dimensional projections obtained through dimensionality reductions. | In certain cases, expert users have prior knowledge of how the projections should look. To enable user input to guide dimensionality reduction, earlier research has proposed several techniques @cite_11 @cite_20 @cite_32 @cite_13 @cite_10 @cite_26 . Enabling users to adjust the projection positions or the weights of data dimensions and distances is a common approach in earlier research for incorporating user feedback into projection computations. For example, XGvis and GGvis @cite_11 support changing the weights of dissimilarities input to the MDS stress function along with the coordinates (configuration) of the embedded points to guide the projection process. Similarly, iPCA @cite_18 enables users to interactively modify the weights of data dimensions in computing projections. Endert et al. @cite_0 apply similar ideas to an additional set of dimensionality-reduction methods while incorporating user feedback through spatial interactions. 
The spatial interactions, @math and @math , that we introduce here are developed for dynamically reasoning about dimensionality-reduction methods and the underlying data, not for incorporating user feedback. | {
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_26",
"@cite_32",
"@cite_0",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2025394193",
"2103994377",
"2070626686",
"2037133710",
"",
"2051088039",
"2125914984"
],
"abstract": [
"",
"Principal Component Analysis (PCA) is a widely used mathematical technique in many fields for factor and trend analysis, dimension reduction, etc. However, it is often considered to be a \"black box\" operation whose results are difficult to interpret and sometimes counter-intuitive to the user. In order to assist the user in better understanding and utilizing PCA, we have developed a system that visualizes the results of principal component analysis using multiple coordinated views and a rich set of user interactions. Our design philosophy is to support analysis of multivariate datasets through extensive interaction with the PCA output. To demonstrate the usefulness of our system, we performed a comparative user study with a known commercial system, SAS INSIGHT's Interactive Data Exploration. Participants in our study solved a number of high-level analysis tasks with each interface and rated the systems on ease of learning and usefulness. Based on the participants' accuracy, speed, and qualitative feedback, we observe that our system helps users to better understand relationships between the data and the calculated eigenspace, which allows the participants to more accurately analyze the data. User feedback suggests that the interactivity and transparency of our system are the key strengths of our approach.",
"Current implementations of multidimensional scaling (MDS), an approach that attempts to best represent data point similarity in a low-dimensional representation, are not suited for many of today's large-scale datasets. We propose an extension to the spring model approach that allows the user to interactively explore datasets that are far beyond the scale of previous implementations of MDS. We present MDSteer, a steerable MDS computation engine and visualization tool that progressively computes an MDS layout and handles datasets of over one million points. Our technique employs hierarchical data structures and progressive layouts to allow the user to steer the computation of the algorithm to the interesting areas of the dataset. The algorithm iteratively alternates between a layout stage in which a subselection of points are added to the set of active points affected by the MDS iteration, and a binning stage which increases the depth of the bin hierarchy and organizes the currently unplaced points into separate spatial regions. This binning strategy allows the user to select onscreen regions of the layout to focus the MDS computation into the areas of the dataset that are assigned to the selected bins. We show both real and common synthetic benchmark datasets with dimensionalities ranging from 3 to 300 and cardinalities of over one million points",
"This paper introduces an approach to exploration and discovery in high-dimensional data that incorporates a user's knowledge and questions to craft sets of projection functions meaningful to them. Unlike most prior work that defines projections based on their statistical properties, our approach creates projection functions that align with user-specified annotations. Therefore, the resulting derived dimensions represent concepts defined by the user's examples. These especially crafted projection functions, or explainers, can help find and explain relationships between the data variables and user-designated concepts. They can organize the data according to these concepts. Sets of explainers can provide multiple perspectives on the data. Our approach considers tradeoffs in choosing these projection functions, including their simplicity, expressive power, alignment with prior knowledge, and diversity. We provide techniques for creating collections of explainers. The methods, based on machine learning optimization frameworks, allow exploring the tradeoffs. We demonstrate our approach on model problems and applications in text analysis.",
"In visual analytics, sensemaking is facilitated through interactive visual exploration of data. Throughout this dynamic process, users combine their domain knowledge with the dataset to create insight. Therefore, visual analytic tools exist that aid sensemaking by providing various interaction techniques that focus on allowing users to change the visual representation through adjusting parameters of the underlying statistical model. However, we postulate that the process of sensemaking is not focused on a series of parameter adjustments, but instead, a series of perceived connections and patterns within the data. Thus, how can models for visual analytic tools be designed, so that users can express their reasoning on observations (the data), instead of directly on the model or tunable parameters? Observation level (and thus “observation”) in this paper refers to the data points within a visualization. In this paper, we explore two possible observation-level interactions, namely exploratory and expressive, within the context of three statistical methods, Probabilistic Principal Component Analysis (PPCA), Multidimensional Scaling (MDS), and Generative Topographic Mapping (GTM). We discuss the importance of these two types of observation level interactions, in terms of how they occur within the sensemaking process. Further, we present use cases for GTM, MDS, and PPCA, illustrating how observation level interaction can be incorporated into visual analytic tools.",
"",
"Visual analytics emphasizes sensemaking of large, complex datasets through interactively exploring visualizations generated by statistical models. For example, dimensionality reduction methods use various similarity metrics to visualize textual document collections in a spatial metaphor, where similarities between documents are approximately represented through their relative spatial distances to each other in a 2D layout. This metaphor is designed to mimic analysts' mental models of the document collection and support their analytic processes, such as clustering similar documents together. However, in current methods, users must interact with such visualizations using controls external to the visual metaphor, such as sliders, menus, or text fields, to directly control underlying model parameters that they do not understand and that do not relate to their analytic process occurring within the visual metaphor. In this paper, we present the opportunity for a new design space for visual analytic interaction, called semantic interaction, which seeks to enable analysts to spatially interact with such models directly within the visual metaphor using interactions that derive from their analytic process, such as searching, highlighting, annotating, and repositioning documents. Further, we demonstrate how semantic interactions can be implemented using machine learning techniques in a visual analytic tool, called ForceSPIRE, for interactive analysis of textual data within a spatial visualization. Analysts can express their expert domain knowledge about the documents by simply moving them, which guides the underlying model to improve the overall layout, taking the user's feedback into account.",
"We discuss methodology for multidimensional scaling (MDS) and its implementation in two software systems, GGvis and XGvis. MDS is a visualization technique for proximity data, that is, data in the form of N × N dissimilarity matrices. MDS constructs maps (“configurations,” “embeddings”) in IRk by interpreting the dissimilarities as distances. Two frequent sources of dissimilarities are high-dimensional data and graphs. When the dissimilarities are distances between high-dimensional objects, MDS acts as a (often nonlinear) dimension-reduction technique. When the dissimilarities are shortest-path distances in a graph, MDS acts as a graph layout technique. MDS has found recent attention in machine learning motivated by image databases (“Isomap”). MDS is also of interest in view of the popularity of “kernelizing” approaches inspired by Support Vector Machines (SVMs; “kernel PCA”).This article discusses the following general topics: (1) the stability and multiplicity of MDS solutions; (2) the analysis of struc..."
]
} |
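The weight-adjustment idea this row attributes to XGvis/GGvis and iPCA can be sketched as gradient descent on a weighted MDS stress, where the per-pair weights are the knobs a user would tune. This is an illustrative toy under that assumption, not the cited systems' actual optimizer, and `weighted_mds` is a hypothetical name:

```python
import numpy as np

def weighted_mds(D, W, dim=2, steps=500, lr=0.01, seed=0):
    """Gradient descent on the weighted MDS stress
    sum_ij W[i,j] * (||x_i - x_j|| - D[i,j])**2,
    where W holds user-adjustable weights on each dissimilarity."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    X = rng.standard_normal((n, dim))  # random initial configuration
    for _ in range(steps):
        diff = X[:, None, :] - X[None, :, :]             # pairwise x_i - x_j
        dist = np.sqrt((diff ** 2).sum(-1)) + np.eye(n)  # eye avoids /0 on diagonal
        coef = W * (dist - D) / dist                     # per-pair stress derivative term
        np.fill_diagonal(coef, 0.0)
        grad = 4 * (coef[:, :, None] * diff).sum(axis=1)
        X -= lr * grad
    return X
```

Raising W[i, j] makes the layout work harder to honor that particular dissimilarity, which is the essence of steering a projection through weights rather than through opaque model parameters.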
1710.02196 | 2763374915 | Neural networks have been used prominently in several machine learning and statistics applications. In general, the underlying optimization of neural networks is non-convex which makes their performance analysis challenging. In this paper, we take a novel approach to this problem by asking whether one can constrain neural network weights to make its optimization landscape have good theoretical properties while at the same time, be a good approximation for the unconstrained one. For two-layer neural networks, we provide affirmative answers to these questions by introducing Porcupine Neural Networks (PNNs) whose weight vectors are constrained to lie over a finite set of lines. We show that most local optima of PNN optimizations are global while we have a characterization of regions where bad local optimizers may exist. Moreover, our theoretical and empirical results suggest that an unconstrained neural network can be approximated using a polynomially-large PNN. | To explain the success of neural networks, some references study their ability to approximate smooth functions @cite_26 @cite_5 @cite_29 @cite_12 @cite_15 @cite_27 @cite_39 , while others focus on the benefits of having more layers @cite_35 @cite_22 . Over-parameterized networks, where the number of parameters is larger than the number of training samples, have been studied in @cite_13 @cite_17 . However, such architectures can cause generalization issues in practice @cite_28 . | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_22",
"@cite_28",
"@cite_29",
"@cite_39",
"@cite_27",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2281746805",
"2166116275",
"2739309726",
"2950220847",
"2952250352",
"",
"2139055047",
"2051579606",
"2592091545",
"2963417959",
"2949940189",
"2608609325"
],
"abstract": [
"For any positive integer @math , there exist neural networks with @math layers, @math nodes per layer, and @math distinct parameters which can not be approximated by networks with @math layers unless they are exponentially large --- they must possess @math nodes. This result is proved here for a class of nodes termed \"semi-algebraic gates\" which includes the common choices of ReLU, maximum, indicator, and piecewise polynomial functions, therefore establishing benefits of depth against not just standard networks with ReLU gates, but also convolutional networks with ReLU and maximization gates, sum-product networks, and boosted decision trees (in this last case with a stronger separation: @math total tree nodes are required).",
"Approximation properties of a class of artificial neural networks are established. It is shown that feedforward networks with one layer of sigmoidal nonlinearities achieve integrated squared error of order O(1 n), where n is the number of nodes. The approximated function is assumed to have a bound on the first moment of the magnitude distribution of the Fourier transform. The nonlinear parameters associated with the sigmoidal nodes, as well as the parameters of linear combination, are adjusted in the approximation. In contrast, it is shown that for series expansions with n terms, in which only the parameters of linear combination are adjusted, the integrated squared approximation error cannot be made smaller than order 1 n sup 2 d uniformly for functions satisfying the same smoothness assumption, where d is the dimension of the input to the function. For the class of functions examined, the approximation rate and the parsimony of the parameterization of the networks are shown to be advantageous in high-dimensional settings. >",
"Recently there has been much interest in understanding why deep neural networks are preferred to shallow networks. We show that, for a large class of piecewise smooth functions, the number of neurons needed by a shallow network to approximate a function is exponentially larger than the corresponding number of neurons needed by a deep network for a given degree of function approximation. First, we consider univariate functions on a bounded interval and require a neural network to achieve an approximation error of @math uniformly over the interval. We show that shallow networks (i.e., networks whose depth does not depend on @math ) require @math neurons while deep networks (i.e., networks whose depth grows with @math ) require @math neurons. We then extend these results to certain classes of important multivariate functions. Our results are derived for neural networks which use a combination of rectifier linear units (ReLUs) and binary step units, two of the most popular type of activation functions. Our analysis builds on a simple observation: the multiplication of two bits can be represented by a ReLU.",
"Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.",
"We establish @math and @math error bounds for functions of many variables that are approximated by linear combinations of ReLU (rectified linear unit) and squared ReLU ridge functions with @math and @math controls on their inner and outer parameters. With the squared ReLU ridge function, we show that the @math approximation error is inversely proportional to the inner layer @math sparsity and it need only be sublinear in the outer layer @math sparsity. Our constructions are obtained using a variant of the Jones-Barron probabilistic method, which can be interpreted as either stratified sampling with proportionate allocation or two-stage cluster sampling. We also provide companion error lower bounds that reveal near optimality of our constructions. Despite the sparsity assumptions, we showcase the richness and flexibility of these ridge combinations by defining a large family of functions, in terms of certain spectral conditions, that are particularly well approximated by them.",
"",
"An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated to the network adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.",
"We consider the problem of approximating a smooth target function and its derivatives by networks involving superpositions and translations of a fixed activation function. The approximation is with respect to the sup-norm and the rate is shown to be of order O(n sup -1 2 ); that is, the rate is independent of the dimension d. The results apply to neural and wavelet networks and extend the work of Barren(see Proc. 7th Yale Workshop on Adaptive and Learning Systems, May, 1992, and ibid., vol.39, p.930, 1993). The approach involves probabilistic methods based on central limit theorems for empirical processes indexed by classes of functions. >",
"Deep neural nets have caused a revolution in many classification tasks. A related ongoing revolution---also theoretically not understood---concerns their ability to serve as generative models for complicated types of data such as images and texts. These models are trained using ideas like variational autoencoders and Generative Adversarial Networks. We take a first cut at explaining the expressivity of multilayer nets by giving a sufficient criterion for a function to be approximable by a neural network with @math hidden layers. A key ingredient is Barron's Theorem Barron1993 , which gives a Fourier criterion for approximability of a function by a neural network with 1 hidden layer. We show that a composition of @math functions which satisfy certain Fourier conditions (\"Barron functions\") can be approximated by a @math -layer neural network. For probability distributions, this translates into a criterion for a probability distribution to be approximable in Wasserstein distance---a natural metric on probability distributions---by a neural network applied to a fixed base distribution (e.g., multivariate gaussian). Building up recent lower bound work, we also give an example function that shows that composition of Barron functions is more expressive than Barron functions alone.",
"In this paper, we study the problem of learning a shallow artificial neural network that best fits a training data set. We study this problem in the over-parameterized regime where the numbers of observations are fewer than the number of parameters in the model. We show that with the quadratic activations, the optimization landscape of training, such shallow neural networks, has certain favorable characteristics that allow globally optimal models to be found efficiently using a variety of local search heuristics. This result holds for an arbitrary training data of input output pairs. For differentiable activation functions, we also show that gradient descent, when suitably initialized, converges at a linear rate to a globally optimal model. This result focuses on a realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted weight coefficients.",
"Estimation of functions of @math variables is considered using ridge combinations of the form @math where the activation function @math is a function with bounded value and derivative. These include single-hidden layer neural networks, polynomials, and sinusoidal models. From a sample of size @math of possibly noisy values at random sites @math , the minimax mean square error is examined for functions in the closure of the @math hull of ridge functions with activation @math . It is shown to be of order @math to a fractional power (when @math is of smaller order than @math ), and to be of order @math to a fractional power (when @math is of larger order than @math ). Dependence on constraints @math and @math on the @math norms of inner parameter @math and outer parameter @math , respectively, is also examined. Also, lower and upper bounds on the fractional power are given. The heart of the analysis is development of information-theoretic packing numbers for these classes of functions.",
"While the optimization problem behind deep neural networks is highly non-convex, it is frequently observed in practice that training deep networks seems possible without getting stuck in suboptimal points. It has been argued that this is the case as all local minima are close to being globally optimal. We show that this is (almost) true, in fact almost all local minima are globally optimal, for a fully connected network with squared loss and analytic activation function given that the number of hidden units of one layer of the network is larger than the number of training points and the network structure from this layer on is pyramidal."
]
} |
1710.02196 | 2763374915 | Neural networks have been used prominently in several machine learning and statistics applications. In general, the underlying optimization of neural networks is non-convex which makes their performance analysis challenging. In this paper, we take a novel approach to this problem by asking whether one can constrain neural network weights to make its optimization landscape have good theoretical properties while at the same time, be a good approximation for the unconstrained one. For two-layer neural networks, we provide affirmative answers to these questions by introducing Porcupine Neural Networks (PNNs) whose weight vectors are constrained to lie over a finite set of lines. We show that most local optima of PNN optimizations are global while we have a characterization of regions where bad local optimizers may exist. Moreover, our theoretical and empirical results suggest that an unconstrained neural network can be approximated using a polynomially-large PNN. | References @cite_11 @cite_8 @cite_14 @cite_34 have studied the convergence of the local search algorithms such as gradient descent methods to the global optimum of the neural network optimization with zero hidden neurons and a single output. In this case, the loss function of the neural network optimization has a single local optimizer which is the same as the global optimum. However, for neural networks with hidden neurons, the landscape of the loss function is more complicated than the case with no hidden neurons. | {
"cite_N": [
"@cite_8",
"@cite_14",
"@cite_34",
"@cite_11"
],
"mid": [
"2949979820",
"",
"2613481513",
"2514392868"
],
"abstract": [
"Stochastic convex optimization is a basic and well studied primitive in machine learning. It is well known that convex and Lipschitz functions can be minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which updates according to the direction of the gradients, rather than the gradients themselves. In this paper we analyze a stochastic version of NGD and prove its convergence to a global minimum for a wider class of functions: we require the functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens the con- cept of unimodality to multidimensions and allows for certain types of saddle points, which are a known hurdle for first-order optimization methods such as gradient descent. Locally-Lipschitz functions are only required to be Lipschitz in a small region around the optimum. This assumption circumvents gradient explosion, which is another known hurdle for gradient descent variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic normalized gradient descent algorithm provably requires a minimal minibatch size.",
"",
"In this paper we study the problem of learning Rectified Linear Units (ReLUs) which are functions of the form @math with @math denoting the weight vector. We study this problem in the high-dimensional regime where the number of observations are fewer than the dimension of the weight vector. We assume that the weight vector belongs to some closed set (convex or nonconvex) which captures known side-information about its structure. We focus on the realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to a planted weight vector. We show that projected gradient descent, when initialization at 0, converges at a linear rate to the planted model with a number of samples that is optimal up to numerical constants. Our results on the dynamics of convergence of these very shallow neural nets may provide some insights towards understanding the dynamics of deeper architectures.",
"Most high-dimensional estimation and prediction methods propose to minimize a cost function (empirical risk) that is written as a sum of losses associated to each data point. In this paper we focus on the case of non-convex losses, which is practically important but still poorly understood. Classical empirical process theory implies uniform convergence of the empirical risk to the population risk. While uniform convergence implies consistency of the resulting M-estimator, it does not ensure that the latter can be computed efficiently. In order to capture the complexity of computing M-estimators, we propose to study the landscape of the empirical risk, namely its stationary points and their properties. We establish uniform convergence of the gradient and Hessian of the empirical risk to their population counterparts, as soon as the number of samples becomes larger than the number of unknown parameters (modulo logarithmic factors). Consequently, good properties of the population risk can be carried to the empirical risk, and we can establish one-to-one correspondence of their stationary points. We demonstrate that in several problems such as non-convex binary classification, robust regression, and Gaussian mixture model, this result implies a complete characterization of the landscape of the empirical risk, and of the convergence properties of descent algorithms. We extend our analysis to the very high-dimensional setting in which the number of parameters exceeds the number of samples, and provide a characterization of the empirical risk landscape under a nearly information-theoretically minimal condition. Namely, if the number of samples exceeds the sparsity of the unknown parameters vector (modulo logarithmic factors), then a suitable uniform convergence result takes place. We apply this result to non-convex binary classification and robust regression in very high-dimension."
]
} |
1710.02196 | 2763374915 | Neural networks have been used prominently in several machine learning and statistics applications. In general, the underlying optimization of neural networks is non-convex which makes their performance analysis challenging. In this paper, we take a novel approach to this problem by asking whether one can constrain neural network weights to make its optimization landscape have good theoretical properties while at the same time, be a good approximation for the unconstrained one. For two-layer neural networks, we provide affirmative answers to these questions by introducing Porcupine Neural Networks (PNNs) whose weight vectors are constrained to lie over a finite set of lines. We show that most local optima of PNN optimizations are global while we have a characterization of regions where bad local optimizers may exist. Moreover, our theoretical and empirical results suggest that an unconstrained neural network can be approximated using a polynomially-large PNN. | Several work has studied the risk landscape of neural network optimizations for more complex structures under various model assumptions @cite_24 @cite_37 @cite_40 @cite_1 @cite_19 @cite_4 @cite_9 @cite_10 @cite_25 @cite_33 @cite_21 . Reference @cite_24 shows that in the linear neural network optimization, the population risk landscape does not have any bad local optima. Reference @cite_37 extends these results and provides necessary and sufficient conditions for a critical point of the loss function to be a global minimum. Reference @cite_40 shows that for a two-layer neural network with leaky activation functions, the gradient descent method on a modified loss function converges to a global optimizer of the modified loss function which can be different from the original global optimum. 
Under an independent activations assumption, reference @cite_1 simplifies the loss function of a neural network optimization to a polynomial and shows that local optimizers obtain approximately the same objective values as the global ones. This result has been extended by reference @cite_24 to show that all local minima are global minima in a nonlinear network. However, the underlying assumption of having independent activations at neurons usually are not satisfied in practice. | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_33",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_24",
"@cite_19",
"@cite_40",
"@cite_10",
"@cite_25"
],
"mid": [
"2736030546",
"2593709294",
"2618398196",
"2625063094",
"1839868949",
"1899249567",
"2963446085",
"2750924312",
"2399994860",
"2952318479",
"2587741277"
],
"abstract": [
"We study the error landscape of deep linear and nonlinear neural networks with the squared error loss. Minimizing the loss of a deep linear neural network is a nonconvex problem, and despite recent progress, our understanding of this loss surface is still incomplete. For deep linear networks, we present necessary and sufficient conditions for a critical point of the risk function to be a global minimum. Surprisingly, our conditions provide an efficiently checkable test for global optimality, while such tests are typically intractable in nonconvex optimization. We further extend these results to deep nonlinear neural networks and prove similar sufficient conditions for global optimality, albeit in a more limited function space setting.",
"In this paper, we explore theoretical properties of training a two-layered ReLU network @math with centered @math -dimensional spherical Gaussian input @math ( @math =ReLU). We train our network with gradient descent on @math to mimic the output of a teacher network with the same architecture and fixed parameters @math . We show that its population gradient has an analytical formula, leading to interesting theoretical analysis of critical points and convergence behaviors. First, we prove that critical points outside the hyperplane spanned by the teacher parameters (\"out-of-plane\") are not isolated and form manifolds, and characterize in-plane critical-point-free regions for two ReLU case. On the other hand, convergence to @math for one ReLU node is guaranteed with at least @math probability, if weights are initialized randomly with standard deviation upper-bounded by @math , consistent with empirical practice. For network with many ReLU nodes, we prove that an infinitesimal perturbation of weight initialization results in convergence towards @math (or its permutation), a phenomenon known as spontaneous symmetric-breaking (SSB) in physics. We assume no independence of ReLU activations. Simulation verifies our findings.",
"In recent years, stochastic gradient descent (SGD) based techniques has become the standard tools for training neural networks. However, formal theoretical understanding of why SGD can train neural networks in practice is largely missing. In this paper, we make progress on understanding this mystery by providing a convergence analysis for SGD on a rich subset of two-layer feedforward networks with ReLU activations. This subset is characterized by a special structure called \"identity mapping\". We prove that, if input follows from Gaussian distribution, with standard @math initialization of the weights, SGD converges to the global minimum in polynomial number of steps. Unlike normal vanilla networks, the \"identity mapping\" makes our network asymmetric and thus the global minimum is unique. To complement our theory, we are also able to show experimentally that multi-layer networks with this mapping have better performance compared with normal vanilla networks. Our convergence theorem differs from traditional non-convex optimization techniques. We show that SGD converges to optimal in \"two phases\": In phase I, the gradient points to the wrong direction, however, a potential function @math gradually decreases. Then in phase II, SGD enters a nice one point convex region and converges. We also show that the identity mapping is necessary for convergence, as it moves the initial point to a better place for optimization. Experiment verifies our claims.",
"In this paper, we consider regression problems with one-hidden-layer neural networks (1NNs). We distill some properties of activation functions that lead to @math in the neighborhood of the ground-truth parameters for the 1NN squared-loss objective. Most popular nonlinear activation functions satisfy the distilled properties, including rectified linear units (ReLUs), leaky ReLUs, squared ReLUs and sigmoids. For activation functions that are also smooth, we show @math guarantees of gradient descent under a resampling rule. For homogeneous activations, we show tensor methods are able to initialize the parameters to fall into the local strong convexity region. As a result, tensor initialization followed by gradient descent is guaranteed to recover the ground truth with sample complexity @math and computational complexity @math for smooth homogeneous activations with high probability, where @math is the dimension of the input, @math ( @math ) is the number of hidden nodes, @math is a conditioning property of the ground-truth parameter matrix between the input layer and the hidden layer, @math is the targeted precision and @math is the number of samples. To the best of our knowledge, this is the first work that provides recovery guarantees for 1NNs with both sample complexity and computational complexity @math in the input dimension and @math in the precision.",
"Author(s): Janzamin, M; Sedghi, H; Anandkumar, A | Abstract: Training neural networks is a challenging non-convex optimization problem, and backpropagation or gradient descent can get stuck in spurious local optima. We propose a novel algorithm based on tensor decomposition for guaranteed training of two-layer neural networks. We provide risk bounds for our proposed method, with a polynomial sample complexity in the relevant parameters, such as input dimension and number of neurons. While learning arbitrary target functions is NP-hard, we provide transparent conditions on the function and the input for learnability. Our training method is based on tensor decomposition, which provably converges to the global optimum, under a set of mild non-degeneracy conditions. It consists of simple embarrassingly parallel linear and multi-linear operations, and is competitive with standard stochastic gradient descent (SGD), in terms of computational complexity. Thus, we propose a computationally efficient method with guaranteed risk bounds for training neural networks with one hidden layer.",
"We study the connection between the highly non-convex loss function of a simple model of the fully-connected feed-forward neural network and the Hamiltonian of the spherical spin-glass model under the assumptions of: i) variable independence, ii) redundancy in network parametrization, and iii) uniformity. These assumptions enable us to explain the complexity of the fully decoupled neural network through the prism of the results from random matrix theory. We show that for large-size decoupled networks the lowest critical values of the random loss function form a layered structure and they are located in a well-defined band lower-bounded by the global minimum. The number of local minima outside that band diminishes exponentially with the size of the network. We empirically verify that the mathematical model exhibits similar behavior as the computer simulations, despite the presence of high dependencies in real networks. We conjecture that both simulated annealing and SGD converge to the band of low critical points, and that all critical points found there are local minima of high quality measured by the test error. This emphasizes a major difference between largeand small-size networks where for the latter poor quality local minima have nonzero probability of being recovered. Finally, we prove that recovering the global minimum becomes harder as the network size increases and that it is in practice irrelevant as global minimum often leads to overfitting.",
"In this paper, we prove a conjecture published in 1989 and also partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. For an expected loss function of a deep nonlinear neural network, we prove the following statements under the independence assumption adopted from recent work: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) the property of saddle points differs for shallow networks (with three layers) and deeper networks (with more than three layers). Moreover, we prove that the same four statements hold for deep linear neural networks with any depth, any widths and no unrealistic assumptions. As a result, we present an instance, for which we can answer to the following question: how difficult to directly train a deep model in theory? It is more difficult than the classical machine learning models (because of the non-convexity), but not too difficult (because of the nonexistence of poor local minima and the property of the saddle points). We note that even though we have advanced the theoretical foundations of deep learning, there is still a gap between theory and practice.",
"In this paper, we use dynamical system to analyze the nonlinear weight dynamics of two-layered bias-free networks in the form of @math , where @math is ReLU nonlinearity. We assume that the input @math follow Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters @math using @math loss. We first show that when @math , the nonlinear dynamics can be written in close form, and converges to @math with at least @math probability, if random weight initializations of proper standard derivation ( @math ) is used, verifying empirical practice. For networks with many ReLU nodes ( @math ), we apply our close form dynamics and prove that when the teacher parameters @math forms orthonormal bases, (1) a symmetric weight initialization yields a convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to @math without local minima. To our knowledge, this is the first proof that shows global convergence in nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size with @math loss. Simulations verify our theoretical analysis.",
"We use smoothed analysis techniques to provide guarantees on the training loss of Multilayer Neural Networks (MNNs) at differentiable local minima. Specifically, we examine MNNs with piecewise linear activation functions, quadratic loss and a single output, under mild over-parametrization. We prove that for a MNN with one hidden layer, the training error is zero at every differentiable local minimum, for almost every dataset and dropout-like noise realization. We then extend these results to the case of more than one hidden layer. Our theoretical guarantees assume essentially nothing on the training data, and are verified numerically. These results suggest why the highly non-convex loss of such MNNs can be easily optimized using local updates (e.g., stochastic gradient descent), as observed empirically.",
"Deep learning models are often successfully trained using gradient descent, despite the worst case hardness of the underlying non-convex optimization problem. The key question is then under what conditions can one prove that optimization will succeed. Here we provide a strong result of this kind. We consider a neural net with one hidden layer and a convolutional structure with no overlap and a ReLU activation function. For this architecture we show that learning is NP-complete in the general case, but that when the input distribution is Gaussian, gradient descent converges to the global optimum in polynomial time. To the best of our knowledge, this is the first global optimality guarantee of gradient descent on a convolutional neural network with ReLU activations.",
"We study the efficacy of learning neural networks with neural networks by the (stochastic) gradient descent method. While gradient descent enjoys empirical success in a variety of applications, there is a lack of theoretical guarantees that explains the practical utility of deep learning. We focus on two-layer neural networks with a linear activation on the output node. We show that under some mild assumptions and certain classes of activation functions, gradient descent does learn the parameters of the neural network and converges to the global minima. Using a node-wise gradient descent algorithm, we show that learning can be done in finite, sometimes @math , time and sample complexity."
]
} |
1710.02196 | 2763374915 | Neural networks have been used prominently in several machine learning and statistics applications. In general, the underlying optimization of neural networks is non-convex which makes their performance analysis challenging. In this paper, we take a novel approach to this problem by asking whether one can constrain neural network weights to make its optimization landscape have good theoretical properties while at the same time, be a good approximation for the unconstrained one. For two-layer neural networks, we provide affirmative answers to these questions by introducing Porcupine Neural Networks (PNNs) whose weight vectors are constrained to lie over a finite set of lines. We show that most local optima of PNN optimizations are global while we have a characterization of regions where bad local optimizers may exist. Moreover, our theoretical and empirical results suggest that an unconstrained neural network can be approximated using a polynomially-large PNN. | References @cite_19 @cite_4 @cite_9 consider a two-layer neural network with Gaussian inputs under a matched (realizable) model where the output is generated from a network with planted weights. Moreover, they assume the number of neurons in the hidden layer is smaller than the dimension of inputs. This critical assumption makes the loss function positive-definite in a small neighborhood near the global optimum. Then, reference @cite_9 provides a tensor-based method to initialize the local search algorithm in that neighborhood which guarantees its convergence to the global optimum. In our problem formulation, the number of hidden neurons can be larger than the dimension of inputs as it is often the case in practice. Moreover, we characterize risk landscapes for a certain family of neural networks in all parameter regions, not just around the global optimizer. This can guide us towards understanding the reason behind the success of local search methods in practice. | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_4"
],
"mid": [
"2750924312",
"2625063094",
"2593709294"
],
"abstract": [
"In this paper, we use dynamical system to analyze the nonlinear weight dynamics of two-layered bias-free networks in the form of @math , where @math is ReLU nonlinearity. We assume that the input @math follow Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters @math using @math loss. We first show that when @math , the nonlinear dynamics can be written in close form, and converges to @math with at least @math probability, if random weight initializations of proper standard derivation ( @math ) is used, verifying empirical practice. For networks with many ReLU nodes ( @math ), we apply our close form dynamics and prove that when the teacher parameters @math forms orthonormal bases, (1) a symmetric weight initialization yields a convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to @math without local minima. To our knowledge, this is the first proof that shows global convergence in nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size with @math loss. Simulations verify our theoretical analysis.",
"In this paper, we consider regression problems with one-hidden-layer neural networks (1NNs). We distill some properties of activation functions that lead to @math in the neighborhood of the ground-truth parameters for the 1NN squared-loss objective. Most popular nonlinear activation functions satisfy the distilled properties, including rectified linear units (ReLUs), leaky ReLUs, squared ReLUs and sigmoids. For activation functions that are also smooth, we show @math guarantees of gradient descent under a resampling rule. For homogeneous activations, we show tensor methods are able to initialize the parameters to fall into the local strong convexity region. As a result, tensor initialization followed by gradient descent is guaranteed to recover the ground truth with sample complexity @math and computational complexity @math for smooth homogeneous activations with high probability, where @math is the dimension of the input, @math ( @math ) is the number of hidden nodes, @math is a conditioning property of the ground-truth parameter matrix between the input layer and the hidden layer, @math is the targeted precision and @math is the number of samples. To the best of our knowledge, this is the first work that provides recovery guarantees for 1NNs with both sample complexity and computational complexity @math in the input dimension and @math in the precision.",
"In this paper, we explore theoretical properties of training a two-layered ReLU network @math with centered @math -dimensional spherical Gaussian input @math ( @math =ReLU). We train our network with gradient descent on @math to mimic the output of a teacher network with the same architecture and fixed parameters @math . We show that its population gradient has an analytical formula, leading to interesting theoretical analysis of critical points and convergence behaviors. First, we prove that critical points outside the hyperplane spanned by the teacher parameters (\"out-of-plane\") are not isolated and form manifolds, and characterize in-plane critical-point-free regions for two ReLU case. On the other hand, convergence to @math for one ReLU node is guaranteed with at least @math probability, if weights are initialized randomly with standard deviation upper-bounded by @math , consistent with empirical practice. For network with many ReLU nodes, we prove that an infinitesimal perturbation of weight initialization results in convergence towards @math (or its permutation), a phenomenon known as spontaneous symmetric-breaking (SSB) in physics. We assume no independence of ReLU activations. Simulation verifies our findings."
]
} |
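The one-ReLU convergence claim in the last abstract above is easy to probe numerically: train a single ReLU student on fresh Gaussian inputs to match a fixed teacher. This is an illustrative sketch with arbitrary dimension, step size, batch size, and seeds; it is not the authors' code, and batch SGD here stands in for the population-gradient analysis:

```python
import numpy as np

d, lr = 10, 0.5
relu = lambda z: np.maximum(z, 0.0)
w_star = np.random.default_rng(7).standard_normal(d)  # fixed teacher weights

def train(seed, steps=2000, batch=256):
    """SGD on the squared loss between student relu(x.w) and teacher relu(x.w*)."""
    rng = np.random.default_rng(seed)
    w = 0.001 * rng.standard_normal(d)        # small random init, as in the abstract
    for _ in range(steps):
        X = rng.standard_normal((batch, d))   # fresh spherical Gaussian inputs
        err = relu(X @ w) - relu(X @ w_star)
        # gradient of the squared loss; ReLU' is the indicator 1{x.w > 0}
        w -= lr * X.T @ (err * (X @ w > 0)) / batch
    return np.linalg.norm(w - w_star)

dists = [train(s) for s in range(5)]
print(min(dists))  # at least one random init should land near the teacher
```

Consistent with the theory, runs from small random initialization typically recover `w_star` almost exactly, since the gradient noise vanishes at the interpolating solution.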
1710.02196 | 2763374915 | Neural networks have been used prominently in several machine learning and statistics applications. In general, the underlying optimization of neural networks is non-convex which makes their performance analysis challenging. In this paper, we take a novel approach to this problem by asking whether one can constrain neural network weights to make its optimization landscape have good theoretical properties while at the same time, be a good approximation for the unconstrained one. For two-layer neural networks, we provide affirmative answers to these questions by introducing Porcupine Neural Networks (PNNs) whose weight vectors are constrained to lie over a finite set of lines. We show that most local optima of PNN optimizations are global while we have a characterization of regions where bad local optimizers may exist. Moreover, our theoretical and empirical results suggest that an unconstrained neural network can be approximated using a polynomially-large PNN. | For a neural network with a single non-overlapping convolutional layer, reference @cite_10 shows that all local optimizers of the loss function are global optimizers as well. They also show that in the overlapping case, the problem is NP-hard when inputs are not Gaussian. Moreover, reference @cite_25 studies this problem with non-standard activation functions, while reference @cite_33 considers the case where the weights from the hidden layer to the output are close to the identity. Other related works include improper learning models using kernel based approaches @cite_6 @cite_38 and a method of moments estimator using tensor decomposition @cite_21 . | {
"cite_N": [
"@cite_38",
"@cite_33",
"@cite_21",
"@cite_6",
"@cite_10",
"@cite_25"
],
"mid": [
"2952110295",
"2618398196",
"1839868949",
"2953243802",
"2952318479",
"2587741277"
],
"abstract": [
"We study the improper learning of multi-layer neural networks. Suppose that the neural network to be learned has @math hidden layers and that the @math -norm of the incoming weights of any neuron is bounded by @math . We present a kernel-based method, such that with probability at least @math , it learns a predictor whose generalization error is at most @math worse than that of the neural network. The sample complexity and the time complexity of the presented method are polynomial in the input dimension and in @math , where @math is a function depending on @math and on the activation function, independent of the number of neurons. The algorithm applies to both sigmoid-like activation functions and ReLU-like activation functions. It implies that any sufficiently sparse neural network is learnable in polynomial time.",
"In recent years, stochastic gradient descent (SGD) based techniques have become standard tools for training neural networks. However, formal theoretical understanding of why SGD can train neural networks in practice is largely missing. In this paper, we make progress on understanding this mystery by providing a convergence analysis for SGD on a rich subset of two-layer feedforward networks with ReLU activations. This subset is characterized by a special structure called \"identity mapping\". We prove that, if the input follows a Gaussian distribution, with standard @math initialization of the weights, SGD converges to the global minimum in a polynomial number of steps. Unlike normal vanilla networks, the \"identity mapping\" makes our network asymmetric and thus the global minimum is unique. To complement our theory, we are also able to show experimentally that multi-layer networks with this mapping have better performance compared with normal vanilla networks. Our convergence theorem differs from traditional non-convex optimization techniques. We show that SGD converges to the optimum in \"two phases\": In phase I, the gradient points in the wrong direction, however, a potential function @math gradually decreases. Then in phase II, SGD enters a nice one-point convex region and converges. We also show that the identity mapping is necessary for convergence, as it moves the initial point to a better place for optimization. Experiments verify our claims.",
"Author(s): Janzamin, M; Sedghi, H; Anandkumar, A | Abstract: Training neural networks is a challenging non-convex optimization problem, and backpropagation or gradient descent can get stuck in spurious local optima. We propose a novel algorithm based on tensor decomposition for guaranteed training of two-layer neural networks. We provide risk bounds for our proposed method, with a polynomial sample complexity in the relevant parameters, such as input dimension and number of neurons. While learning arbitrary target functions is NP-hard, we provide transparent conditions on the function and the input for learnability. Our training method is based on tensor decomposition, which provably converges to the global optimum, under a set of mild non-degeneracy conditions. It consists of simple embarrassingly parallel linear and multi-linear operations, and is competitive with standard stochastic gradient descent (SGD), in terms of computational complexity. Thus, we propose a computationally efficient method with guaranteed risk bounds for training neural networks with one hidden layer.",
"We give the first dimension-efficient algorithms for learning Rectified Linear Units (ReLUs), which are functions of the form @math with @math . Our algorithm works in the challenging Reliable Agnostic learning model of Kalai, Kanade, and Mansour (2009) where the learner is given access to a distribution @math on labeled examples but the labeling may be arbitrary. We construct a hypothesis that simultaneously minimizes the false-positive rate and the loss on inputs given positive labels by @math , for any convex, bounded, and Lipschitz loss function. The algorithm runs in polynomial-time (in @math ) with respect to any distribution on @math (the unit sphere in @math dimensions) and for any error parameter @math (this yields a PTAS for a question raised by F. Bach on the complexity of maximizing ReLUs). These results are in contrast to known efficient algorithms for reliably learning linear threshold functions, where @math must be @math and strong assumptions are required on the marginal distribution. We can compose our results to obtain the first set of efficient algorithms for learning constant-depth networks of ReLUs. Our techniques combine kernel methods and polynomial approximations with a \"dual-loss\" approach to convex programming. As a byproduct we obtain a number of applications including the first set of efficient algorithms for \"convex piecewise-linear fitting\" and the first efficient algorithms for noisy polynomial reconstruction of low-weight polynomials on the unit sphere.",
"Deep learning models are often successfully trained using gradient descent, despite the worst case hardness of the underlying non-convex optimization problem. The key question is then under what conditions can one prove that optimization will succeed. Here we provide a strong result of this kind. We consider a neural net with one hidden layer and a convolutional structure with no overlap and a ReLU activation function. For this architecture we show that learning is NP-complete in the general case, but that when the input distribution is Gaussian, gradient descent converges to the global optimum in polynomial time. To the best of our knowledge, this is the first global optimality guarantee of gradient descent on a convolutional neural network with ReLU activations.",
"We study the efficacy of learning neural networks with neural networks by the (stochastic) gradient descent method. While gradient descent enjoys empirical success in a variety of applications, there is a lack of theoretical guarantees that explains the practical utility of deep learning. We focus on two-layer neural networks with a linear activation on the output node. We show that under some mild assumptions and certain classes of activation functions, gradient descent does learn the parameters of the neural network and converges to the global minima. Using a node-wise gradient descent algorithm, we show that learning can be done in finite, sometimes @math , time and sample complexity."
]
} |
1710.01916 | 2762741980 | The visual recognition of transitive actions comprising human-object interactions is a key component enabling artificial systems to operate in natural environments. This challenging task requires, in addition to the recognition of articulated body actions, the extraction of semantic elements from the scene such as the identity of the manipulated objects. In this paper, we present a self-organizing neural network for the recognition of human-object interactions from RGB-D videos. Our model consists of a hierarchy of Grow When Required (GWR) networks which learn prototypical representations of body motion patterns and objects, also accounting for the development of action-object mappings in an unsupervised fashion. To demonstrate this ability, we report experimental results on a dataset of daily activities collected for the purpose of this study as well as on a publicly available benchmark dataset. In line with neurophysiological studies, our self-organizing architecture shows higher neural activation for congruent action-object pairs learned during training sessions with respect to artificially created incongruent ones. We show that our model achieves good classification accuracy on the benchmark dataset in an unsupervised fashion, showing competitive performance with respect to strictly supervised state-of-the-art approaches. | One important goal of human activity recognition in machine learning and computer vision is to automatically detect and analyze human activities from the information acquired from visual sensing devices such as RGB cameras and range sensors. The literature suggests a conceptual categorization of human activities into four different levels depending on the complexity: gestures, actions, interactions, and group activities @cite_38 @cite_14 @cite_24 . Gestures are elementary movements of a person's body part and are the atomic components describing the meaningful motion of a person, e.g. or . 
Actions are single-person activities that may be composed of multiple gestures such as and . Interactions are human activities that involve a person and one (or more) objects. For instance, a is a human-object interaction. Finally, group activities are the activities performed by groups composed of multiple persons or objects, e.g. . | {
"cite_N": [
"@cite_24",
"@cite_38",
"@cite_14"
],
"mid": [
"2056339039",
"1983705368",
"2038746778"
],
"abstract": [
"Abstract Human activity recognition has been an important area of computer vision research since the 1980s. Various approaches have been proposed with a great portion of them addressing this issue via conventional cameras. The past decade has witnessed a rapid development of 3D data acquisition techniques. This paper summarizes the major techniques in human activity recognition from 3D data with a focus on techniques that use depth data. Broad categories of algorithms are identified based upon the use of different features. The pros and cons of the algorithms in each category are analyzed and the possible direction of future research is indicated.",
"Human activity recognition is an important area of computer vision research. Its applications include surveillance systems, patient monitoring systems, and a variety of systems that involve interactions between persons and electronic devices such as human-computer interfaces. Most of these applications require an automated recognition of high-level activities, composed of multiple simple (or atomic) actions of persons. This article provides a detailed overview of various state-of-the-art research papers on human activity recognition. We discuss both the methodologies developed for simple human actions and those for high-level activities. An approach-based taxonomy is chosen that compares the advantages and limitations of each approach. Recognition methodologies for an analysis of the simple actions of a single person are first presented in the article. Space-time volume approaches and sequential approaches that represent and recognize activities directly from input images are discussed. Next, hierarchical recognition methodologies for high-level activities are presented and compared. Statistical approaches, syntactic approaches, and description-based approaches for hierarchical recognition are discussed in the article. In addition, we further discuss the papers on the recognition of human-object interactions and group activities. Public datasets designed for the evaluation of the recognition methodologies are illustrated in our article as well, comparing the methodologies' performances. This review will provide the impetus for future research in more productive areas.",
"Abstract This paper presents an overview of state-of-the-art methods in activity recognition using semantic features. Unlike low-level features, semantic features describe inherent characteristics of activities. Therefore, semantics make the recognition task more reliable especially when the same actions look visually different due to the variety of action executions. We define a semantic space including the most popular semantic features of an action namely the human body (pose and poselet), attributes, related objects, and scene context. We present methods exploiting these semantic features to recognize activities from still images and video data as well as four groups of activities: atomic actions, people interactions, human–object interactions, and group activities. Furthermore, we provide potential applications of semantic approaches along with directions for future research."
]
} |
1710.01916 | 2762741980 | The visual recognition of transitive actions comprising human-object interactions is a key component enabling artificial systems to operate in natural environments. This challenging task requires, in addition to the recognition of articulated body actions, the extraction of semantic elements from the scene such as the identity of the manipulated objects. In this paper, we present a self-organizing neural network for the recognition of human-object interactions from RGB-D videos. Our model consists of a hierarchy of Grow When Required (GWR) networks which learn prototypical representations of body motion patterns and objects, also accounting for the development of action-object mappings in an unsupervised fashion. To demonstrate this ability, we report experimental results on a dataset of daily activities collected for the purpose of this study as well as on a publicly available benchmark dataset. In line with neurophysiological studies, our self-organizing architecture shows higher neural activation for congruent action-object pairs learned during training sessions with respect to artificially created incongruent ones. We show that our model achieves good classification accuracy on the benchmark dataset in an unsupervised fashion, showing competitive performance with respect to strictly supervised state-of-the-art approaches. | Understanding human-object interactions requires the integration of complex relationships between features of human body action and object identity. From a computational perspective, it is not clear how to link architectures specialized in object recognition and motion recognition, e.g., how to bind different types of objects and hand arm movements. Recently, proposed a physiologically inspired model for the recognition of transitive hand-actions such as grasping, placing, and holding. 
Nevertheless, this model works with visual data acquired in a constrained environment, i.e., videos showing a hand grasping balls of different sizes with a uniform background, with the role of the identity of the object in transitive action recognition being unclear. Similar models have been tested in robotics, accomplishing the recognition of grip apertures, affordances, or hand action classification @cite_46 @cite_23 . | {
"cite_N": [
"@cite_46",
"@cite_23"
],
"mid": [
"2080114002",
"2066830245"
],
"abstract": [
"This paper addresses the problem of extracting view-invariant visual features for the recognition of object-directed actions and introduces a computational model of how these visual features are processed in the brain. In particular, in the test-bed setting of reach-to-grasp actions, grip aperture is identified as a good candidate for inclusion into a parsimonious set of hand high-level features describing overall hand movement during reach-to-grasp actions. The computational model NeGOI (neural network architecture for measuring grip aperture in an observer-independent way) for extracting grip aperture in a view-independent fashion was developed on the basis of functional hypotheses about cortical areas that are involved in visual processing. An assumption built into NeGOI is that grip aperture can be measured from the superposition of a small number of prototypical hand shapes corresponding to predefined grip-aperture sizes. The key idea underlying the NeGOI model is to introduce view-independent units (VIP units) that are selective for prototypical hand shapes, and to integrate the output of VIP units in order to compute grip aperture. The distinguishing traits of the NEGOI architecture are discussed together with results of tests concerning its view-independence and grip-aperture recognition properties. The overall functional organization of NEGOI model is shown to be coherent with current functional models of the ventral visual stream, up to and including temporal area STS. Finally, the functional role of the NeGOI model is examined from the perspective of a biologically plausible architecture which provides a parsimonious set of high-level and view-independent visual features as input to mirror systems.",
"Typical patterns of hand-joint covariation arising in the context of grasping actions enable one to provide simplified descriptions of these actions in terms of small sets of hand-joint parameters. The computational model of mirror mechanisms introduced here hypothesizes that mirror neurons are crucially involved in coding and making this simplified motor information available for both action recognition and control processes. In particular, grasping action recognition processes are modeled in terms of a visuo-motor loop enabling one to make iterated use of mirror-coded motor information. In simulation experiments concerning the classification of reach-to-grasp actions, mirror-coded information was found to simplify the processing of visual inputs and to improve action recognition results with respect to recognition procedures that are solely based on visual processing. The visuo-motor loop involved in action recognition is a distinctive feature of this model which is coherent with the direct matching hypothesis. Moreover, the visuo-motor loop sets the model introduced here apart from those computational models that identify mirror neuron activity in action observation with the final outcome of computational processes unidirectionally flowing from sensory (and usually visual) to motor systems."
]
} |
1710.01985 | 2761300635 | Many data sources can be interpreted as time-series, and a key problem is to identify which pairs out of a large collection of signals are highly correlated. We expect that there will be few, large, interesting correlations, while most signal pairs do not have any strong correlation. We abstract this as the problem of identifying the highly correlated pairs in a collection of n mostly pairwise uncorrelated random variables, where observations of the variables arrives as a stream. Dimensionality reduction can remove dependence on the number of observations, but further techniques are required to tame the quadratic (in n) cost of a search through all possible pairs. We develop a new algorithm for rapidly finding large correlations based on sketch techniques with an added twist: we quickly generate sketches of random combinations of signals, and use these in concert with ideas from coding theory to decode the identity of correlated pairs. We prove correctness and compare performance and effectiveness with the best LSH (locality sensitive hashing) based approach. | The best known Euclidean LSH algorithms have @math (data independent, @cite_12 ) and @math (data dependent, @cite_1 ). This gives us a time and space cost of @math for fixed @math , even if @math goes to @math and @math goes to 0. | {
"cite_N": [
"@cite_1",
"@cite_12"
],
"mid": [
"2953209208",
"2147717514"
],
"abstract": [
"We show an optimal data-dependent hashing scheme for the approximate near neighbor problem. For an @math -point data set in a @math -dimensional space our data structure achieves query time @math and space @math , where @math for the Euclidean space and approximation @math . For the Hamming space, we obtain an exponent of @math . Our result completes the direction set forth in [AINR14] who gave a proof-of-concept that data-dependent hashing can outperform classical Locality Sensitive Hashing (LSH). In contrast to [AINR14], the new bound is not only optimal, but in fact improves over the best (optimal) LSH data structures [IM98,AI06] for all approximation factors @math . From the technical perspective, we proceed by decomposing an arbitrary dataset into several subsets that are, in a certain sense, pseudo-random.",
"We present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces. For data sets of size n living in R d , the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree. The article is based on the material from the authors' STOC'98 and FOCS'01 papers. It unifies, generalizes and simplifies the results from those papers."
]
} |
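The data-independent Euclidean LSH cited above (the @cite_12 line of work) hashes a point by projecting it onto random Gaussian directions and bucketing the result. A minimal illustrative sketch, with arbitrary dimension, bucket width, and seeds, not code from any cited paper:

```python
import numpy as np

def make_e2lsh(dim, w=4.0, n_hashes=8, seed=0):
    """Bank of Euclidean LSH functions h(x) = floor((a.x + b) / w)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n_hashes, dim))   # Gaussian projection directions
    b = rng.uniform(0.0, w, size=n_hashes)     # random offsets in [0, w)
    return lambda x: tuple(np.floor((A @ x + b) / w).astype(int))

h = make_e2lsh(dim=16)
rng = np.random.default_rng(1)
x = rng.standard_normal(16)
near = x + 0.01 * rng.standard_normal(16)      # a very close neighbor of x
far = 10 * rng.standard_normal(16)             # a distant point

# Near pairs agree on most hash coordinates; distant pairs rarely do.
# This collision-probability gap is what LSH-based search exploits.
agree_near = sum(u == v for u, v in zip(h(x), h(near)))
agree_far = sum(u == v for u, v in zip(h(x), h(far)))
print(agree_near, agree_far)
```

The @math exponents quoted in the record govern how this per-hash collision gap translates into overall query time and space.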
1710.01985 | 2761300635 | Many data sources can be interpreted as time-series, and a key problem is to identify which pairs out of a large collection of signals are highly correlated. We expect that there will be few, large, interesting correlations, while most signal pairs do not have any strong correlation. We abstract this as the problem of identifying the highly correlated pairs in a collection of n mostly pairwise uncorrelated random variables, where observations of the variables arrives as a stream. Dimensionality reduction can remove dependence on the number of observations, but further techniques are required to tame the quadratic (in n) cost of a search through all possible pairs. We develop a new algorithm for rapidly finding large correlations based on sketch techniques with an added twist: we quickly generate sketches of random combinations of signals, and use these in concert with ideas from coding theory to decode the identity of correlated pairs. We prove correctness and compare performance and effectiveness with the best LSH (locality sensitive hashing) based approach. | As the algorithm only requires access to matched columns of @math and @math one at a time, in the special case of @math this approach can be used in the model to build a sketch of @math . In particular, we can build a sketch of the covariance matrix @math in this streaming model, from input observation matrix @math , with update time cost @math ( @math amortized, since @math dominates @math ) and space usage @math . To recover dominant entries from these sketches, Pagh describes an approach (building on @cite_9 ) that uses @math sketches of sub-matrices of @math , along with error correcting codes, to discover the identity of a small number of entries which dominate the Frobenius norm of the product, with high probability. This process runs in @math time and space. Putting these pieces together provides a solution to a covariance outliers version of our problem in the model. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2020301027"
],
"abstract": [
"A Euclidean approximate sparse recovery system consists of parameters k, N, an m-by-N measurement matrix, Φ, and a decoding algorithm, D. Given a vector, x, the system approximates x by x̂ = D(Φx), which must satisfy ||x - x̂||_2 ≤ C ||x - x_k||_2, where x_k denotes the optimal k-term approximation to x. (The output x̂ may have more than k terms). For each vector x, the system must succeed with probability at least 3/4. Among the goals in designing such systems are minimizing the number m of measurements and the runtime of the decoding algorithm, D. In this paper, we give a system with m = O(k log(N/k)) measurements--matching a lower bound, up to a constant factor--and decoding time k log^O(1) N, matching a lower bound up to log(N) factors. We also consider the encode time (i.e., the time to multiply Φ by x), the time to update measurements (i.e., the time to multiply Φ by a 1-sparse x), and the robustness and stability of the algorithm (adding noise before and after the measurements). Our encode and update times are optimal up to log(k) factors. The columns of Φ have at most O(log^2(k) log(N/k)) non-zeros, each of which can be found in constant time. Our full result, an FPRAS, is as follows. If x = x_k + ν_1, where ν_1 and ν_2 (below) are arbitrary vectors (regarded as noise), then, setting x̂ = D(Φx + ν_2), and for properly normalized ν, we get ||x̂ - x||_2^2 ≤ (1+ε)||ν_1||_2^2 + ε||ν_2||_2^2, using O((k/ε) log(N/k)) measurements and (k/ε) log^O(1)(N) time for decoding."
]
} |
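The dimensionality-reduction half of the pipeline described above can be illustrated with a plain AMS/JL-style sketch: project standardized signals to k dimensions with a shared random ±1 matrix, so that inner products of sketches approximate correlations. This is an illustrative toy with arbitrary sizes and seeds, not the paper's algorithm; in particular it omits the sketching of random signal combinations and the coding-theoretic decoding of correlated-pair identities:

```python
import numpy as np

def standardize(x):
    """Zero-mean, unit-norm version of x, so dot products equal sample correlations."""
    return (x - x.mean()) / (x.std() * np.sqrt(x.size))

def sketch(x, k=1024, seed=0):
    """AMS/JL-style sketch: a shared random +/-1 projection to k dimensions,
    so <sketch(u), sketch(v)> approximates <u, v> with error ~ 1/sqrt(k)."""
    rng = np.random.default_rng(seed)               # shared seed => shared projection
    S = rng.choice([-1.0, 1.0], size=(k, x.size)) / np.sqrt(k)
    return S @ x

rng = np.random.default_rng(42)
m = 2000                                            # number of observations
a = rng.standard_normal(m)
b = 0.9 * a + np.sqrt(1 - 0.9 ** 2) * rng.standard_normal(m)  # corr(a, b) ~ 0.9
c = rng.standard_normal(m)                                     # corr(a, c) ~ 0

sa, sb, sc = (sketch(standardize(v)) for v in (a, b, c))
print(float(sa @ sb), float(sa @ sc))   # ~0.9 and ~0.0, up to sketch noise
```

In a streaming setting the same sketches can be maintained incrementally, one observation at a time, which is what removes the dependence on the number of observations.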
1710.01952 | 2752404600 | We present a new compressed representation of free trajectories of moving objects. It combines a partial-sums-based structure that retrieves in constant time the position of the object at any instant, with a hierarchical minimum-bounding-boxes representation that allows determining if the object is seen in a certain rectangular area during a time period. Combined with spatial snapshots at regular intervals, the representation is shown to outperform classical ones by orders of magnitude in space, and also to outperform previous compressed representations in time performance, when using the same amount of space. | A lossy way to reduce size is to generate a new trajectory that approximates the original one, by keeping the most representative points. The best known method of this type is the Douglas-Peucker algorithm @cite_11 . Other strategies record speed and direction, discarding points that can be reasonably predicted with this data @cite_19 . A lossless way to reduce space is to use differential encodings of the consecutive values @math , @math , and time @cite_12 @cite_14 @cite_1 . | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_19",
"@cite_12",
"@cite_11"
],
"mid": [
"1647934908",
"1969839198",
"1973355601",
"2065663378",
"1981934656"
],
"abstract": [
"The need to store vast amounts of trajectory data becomes more problematic as GPS-based tracking devices become increasingly prevalent. There are two commonly used approaches for compressing trajectory data. The first is the line generalisation approach which aims to fit the trajectory using a series of line segments. The second is to store the initial data point and then store the remaining data points as a sequence of successive deltas. The line generalisation approach is only effective when given a large error margin, and existing delta compression algorithms do not permit lossy compression. Consequently there is an uncovered gap in which users expect a good compression ratio by giving away only a small error margin. This paper fills this gap by extending the delta compression approach to allow users to trade a small maximum error margin for large improvements to the compression ratio. In addition, alternative techniques are extensively studied for the following two key components of any delta-based approach: predicting the value of the next data point and encoding leading zeros. We propose a new trajectory compression system called Trajic based on the results of the study. Experimental results show that Trajic produces 1.5 times smaller compressed data than a straight-forward delta compression algorithm for lossless compression and produces 9.4 times smaller compressed data than a state-of-the-art line generalisation algorithm when using a small maximum error bound of 1 meter.",
"The last decade has witnessed the prevalence of sensor and GPS technologies that produce a high volume of trajectory data representing the motion history of moving objects. However some characteristics of trajectories such as variable lengths and asynchronous sampling rates make it difficult to fit into traditional database systems that are disk-based and tuple-oriented. Motivated by the success of column store and recent development of in-memory databases, we try to explore the potential opportunities of boosting the performance of trajectory data processing by designing a novel trajectory storage within main memory. In contrast to most existing trajectory indexing methods that keep consecutive samples of the same trajectory in the same disk page, we partition the database into frames in which the positions of all moving objects at the same time instant are stored together and aligned in main memory. We found this column-wise storage to be surprisingly well suited for in-memory computing since most frames can be stored in highly compressed form, which is pivotal for increasing the memory throughput and reducing CPU-cache miss. The independence between frames also makes them natural working units when parallelizing data processing on a multi-core environment. Lastly we run a variety of common trajectory queries on both real and synthetic datasets in order to demonstrate advantages and study the limitations of our proposed storage.",
"In this work we investigate the quality bounds for the data stored in Moving Objects Databases (MOD) in the settings in which mobile units can perform an on-board data reduction in real time. It has been demonstrated that line simplification techniques, when properly applied to the large volumes of data pertaining to the past trajectories of the moving objects, result in substantial storage savings while guaranteeing deterministic error bounds to the queries posed to the MOD. On the other hand, it has also been demonstrated that if moving objects establish an agreement with the MOD regarding the (im)precision tolerance, significant savings can be achieved in transmission when updating the location-in-time information. In this paper we take a first step towards analyzing the quality of the history in the making in a MOD by correlating the (impact of the) agreement between the server and the moving objects for on-line updates in real time with the error bounds of the data that becomes a representation of the past trajectories as time evolves.",
"The rise of GPS and broadband-speed wireless devices has led to tremendous excitement about a range of applications broadly characterized as “location based services”. Current database storage systems, however, are inadequate for manipulating the very large and dynamic spatio-temporal data sets required to support such services. Proposals in the literature either present new indices without discussing how to cluster data, potentially resulting in many disk seeks for lookups of densely packed objects, or use static quadtrees or other partitioning structures, which become rapidly suboptimal as the data or queries evolve. As a result of these performance limitations, we built TrajStore, a dynamic storage system optimized for efficiently retrieving all data in a particular spatiotemporal region. TrajStore maintains an optimal index on the data and dynamically co-locates and compresses spatially and temporally adjacent segments on disk. By letting the storage layer evolve with the index, the system adapts to incoming queries and data and is able to answer most queries via a very limited number of I/Os, even when the queries target regions containing hundreds or thousands of different trajectories.",
"All digitizing methods, as a general rule, record lines with far more data than is necessary for accurate graphic reproduction or for computer analysis. Two algorithms to reduce the number of points required to represent the line and, if desired, produce caricatures, are presented and compared with the most promising methods so far suggested. Line reduction will form a major part of automated generalization. Regle generale, les methodes numeriques enregistrent des lignes avec beaucoup plus de donnees qu'il n'est necessaire a la reproduction graphique precise ou a la recherche par ordinateur. L'auteur presente deux algorithmes pour reduire le nombre de points necessaires pour representer la ligne et produire des caricatures si desire, et les compare aux methodes les plus prometteuses suggerees jusqu'ici. La reduction de la ligne constituera une partie importante de la generalisation automatique."
]
} |
1710.01952 | 2752404600 | We present a new compressed representation of free trajectories of moving objects. It combines a partial-sums-based structure that retrieves in constant time the position of the object at any instant, with a hierarchical minimum-bounding-boxes representation that allows determining if the object is seen in a certain rectangular area during a time period. Combined with spatial snapshots at regular intervals, the representation is shown to outperform classical ones by orders of magnitude in space, and also to outperform previous compressed representations in time performance, when using the same amount of space. | Spatio-temporal indexes can be classified into three types. The first is a classic multidimensional spatial index, usually the R-tree, augmented with a temporal dimension. For example, the 3DR-tree @cite_5 uses three-dimensional Minimum Bounding Boxes (MBBs), where the third dimension is time, to index segments of trajectories. A second approach is the multiversion R-tree, which creates an R-tree for each timestamp and a B-tree to select the relevant R-trees. The best-known index of this family is the MV3R-tree @cite_18 . The third type of index partitions the space statically, and then a temporal index is built for each of the spatial partitions @cite_7 . | {
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_7"
],
"mid": [
"1995940203",
"1553193704",
"2100946521"
],
"abstract": [
"Multimedia applications usually involve a large number of multimedia objects (texts, images, sounds, etc.). An important issue in this context is the specification of spatial and temporal relationships among these objects. In this paper we define such a model, based on a set of spatial and temporal relationships between objects participating in multimedia applications. Our work exploits existing approaches for spatial and temporal relationships. We extend these relationships in order to cover the specific requirements of multimedia applications and we integrate the results in a uniform framework for spatio-temporal composition representation. Another issue is the efficient handling of queries related to the spatio-temporal relationships among the objects during the authoring process. Such queries may be very costly and appropriate indexing schemes are needed so as to handle them efficiently. We propose efficient such schemes, based on multidimensional (spatial) data structures, for large multimedia applications that involve thousands of objects. Evaluation models of the proposed schemes are also presented, as well as hints for the selection of the most appropriate one, according to the multimedia author's requirements.",
"",
"With the rapid increase in the use of inexpensive, location-aware sensors in a variety of new applications, large amounts of time-sequenced location data will soon be accumulated. Efficient indexing techniques for managing these large volumes of trajectory data sets are urgently needed. The key requirements for a good trajectory indexing technique is that it must support both searches and inserts efficiently. This paper proposes a new indexing mechanism called SETI, a Scalable and Efficient Trajectory Index, that meets these requirements. SETI uses a simple two-level index structure to decouple the indexing of the spatial and the temporal dimensions. This decoupling makes both searches and inserts very efficient. Based on an actual implementation, we demonstrate that SETI clearly outperforms two previously proposed trajectory indexing mechanisms, namely the 3D R-tree and the TB-tree. Unlike previously proposed trajectory indexing structures, SETI is a logical indexing structure that uses existing spatial indexing structures, such as R-trees, without any modifications. Consequently, DBMSs that currently support R-trees can easily implement SETI, making it a both a practical and an efficient choice for indexing trajectory data sets."
]
} |
1710.01952 | 2752404600 | We present a new compressed representation of free trajectories of moving objects. It combines a partial-sums-based structure that retrieves in constant time the position of the object at any instant, with a hierarchical minimum-bounding-boxes representation that allows determining if the object is seen in a certain rectangular area during a time period. Combined with spatial snapshots at regular intervals, the representation is shown to outperform classical ones by orders of magnitude in space, and also to outperform previous compressed representations in time performance, when using the same amount of space. | The closest predecessor of our work, GraCT @cite_9 , assumes regular timestamps and stores trajectories using two components. At regular time instants, it represents the position of all the objects in a structure called . The positions of objects between snapshots are represented in a structure called . | {
"cite_N": [
"@cite_9"
],
"mid": [
"2521266394"
],
"abstract": [
"Much research has been published about trajectory management on the ground or at the sea, but compression or indexing of flight trajectories have usually been less explored. However, air traffic management is a challenge because airspace is becoming more and more congested, and large flight data collections must be preserved and exploited for varied purposes. This paper proposes 3DGraCT, a new method for representing these flight trajectories. It extends the GraCT compact data structure to cope with a third dimension (altitude), while retaining its space time complexities. 3DGraCT improves space requirements of traditional spatio-temporal data structures by two orders of magnitude, being competitive for the considered types of queries, even leading the comparison for a particular one."
]
} |
1710.01952 | 2752404600 | We present a new compressed representation of free trajectories of moving objects. It combines a partial-sums-based structure that retrieves in constant time the position of the object at any instant, with a hierarchical minimum-bounding-boxes representation that allows determining if the object is seen in a certain rectangular area during a time period. Combined with spatial snapshots at regular intervals, the representation is shown to outperform classical ones by orders of magnitude in space, and also to outperform previous compressed representations in time performance, when using the same amount of space. | Let us denote @math the snapshot representing the position of all the objects at timestamp @math . Between two consecutive snapshots @math and @math , there is a log for each object, which is denoted @math , where @math is the identifier of the object. The log stores the differences of positions compressed with RePair @cite_15 , a grammar-based compressor. In order to speed up the queries over the resulting sequence, the nonterminals are enriched with additional information, mainly the MBB of the trajectory segment encoded by the nonterminal. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2602771387"
],
"abstract": [
"Dictionary-based modeling is a mechanism used in many practical compression schemes. In most implementations of dictionary-based compression the encoder operates on-line, incrementally inferring its dictionary of available phrases from previous parts of the message. An alternative approach is to use the full message to infer a complete dictionary in advance, and include an explicit representation of the dictionary as part of the compressed message. In this investigation, we develop a compression scheme that is a combination of a simple but powerful phrase derivation method and a compact dictionary encoding. The scheme is highly efficient, particularly in decompression, and has characteristics that make it a favorable choice when compressed data is to be searched directly. We describe data structures and algorithms that allow our mechanism to operate in linear time and space."
]
} |
1710.01952 | 2752404600 | We present a new compressed representation of free trajectories of moving objects. It combines a partial-sums-based structure that retrieves in constant time the position of the object at any instant, with a hierarchical minimum-bounding-boxes representation that allows determining if the object is seen in a certain rectangular area during a time period. Combined with spatial snapshots at regular intervals, the representation is shown to outperform classical ones by orders of magnitude in space, and also to outperform previous compressed representations in time performance, when using the same amount of space. | ScdcCT was implemented as a classical compressed baseline to compare against GraCT @cite_9 . It uses the same components, snapshots and logs, but the logs are compressed with differences and not with grammars. The differences are compressed using @math -Dense Codes @cite_2 , a fast-to-decode variable-length code that has low redundancy over the zero-order empirical entropy of the sequence. This exploits the fact that short movements to contiguous cells are more frequent than movements to distant cells. | {
"cite_N": [
"@cite_9",
"@cite_2"
],
"mid": [
"2521266394",
"2069906566"
],
"abstract": [
"Much research has been published about trajectory management on the ground or at the sea, but compression or indexing of flight trajectories have usually been less explored. However, air traffic management is a challenge because airspace is becoming more and more congested, and large flight data collections must be preserved and exploited for varied purposes. This paper proposes 3DGraCT, a new method for representing these flight trajectories. It extends the GraCT compact data structure to cope with a third dimension (altitude), while retaining its space time complexities. 3DGraCT improves space requirements of traditional spatio-temporal data structures by two orders of magnitude, being competitive for the considered types of queries, even leading the comparison for a particular one.",
"Variants of Huffman codes where words are taken as the source symbols are currently the most attractive choices to compress natural language text databases. In particular, Tagged Huffman Code by offers fast direct searching on the compressed text and random access capabilities, in exchange for producing around 11 larger compressed files. This work describes End-Tagged Dense Code and (s, c)-Dense Code, two new semistatic statistical methods for compressing natural language texts. These techniques permit simpler and faster encoding and obtain better compression ratios than Tagged Huffman Code, while maintaining its fast direct search and random access capabilities. We show that Dense Codes improve Tagged Huffman Code compression ratio by about 10 , reaching only 0.6 overhead over the optimal Huffman compression ratio. Being simpler, Dense Codes are generated 45 to 60 faster than Huffman codes. This makes Dense Codes a very attractive alternative to Huffman code variants for various reasons: they are simpler to program, faster to build, of almost optimal size, and as fast and easy to search as the best Huffman variants, which are not so close to the optimal size."
]
} |
1710.01802 | 2762547056 | In this paper, we present an automatic system for the analysis and labeling of structural scenes, floor plan drawings in Computer-aided Design (CAD) format. The proposed system applies a fusion strategy to detect and recognize various components of CAD floor plans, such as walls, doors, windows and other ambiguous assets. Technically, a general rule-based filter parsing method is first adopted to extract effective information from the original floor plan. Then, an image-processing based recovery method is employed to correct information extracted in the first step. Our proposed method is fully automatic and real-time. Such an analysis system provides high accuracy and is also evaluated on a public website that, on average, achieves more than ten thousand effective uses per day and reaches a relatively high satisfaction rate. | This process analyses an input raster floor plan image and extracts layout information, which is referred to as a parse process. Referring to Yin's survey @cite_30 , the challenges in this step are explained in Table . Graphical document analysis technology is required to analyse and parse image floor plans, which includes two main steps: (1) removing noise, such as text and annotations; and (2) graphical symbol recognition. The cleaning step focuses on removing noise and other irrelevant information to improve image quality. In the graphical symbol recognition step, the system categorises the recognised symbols by identifying certain information, including location, orientation and scale. Compared to other graphical documents, floor plans have certain distinguishable features. For example, various line shapes (curved or straight) represent walls in floor plans. Another difference is that the architectural symbols are made up of simple geometric primitives. Typically, to handle such input, graphics recognition is integrated with vectorisation. (Table ) | {
"cite_N": [
"@cite_30"
],
"mid": [
"2102861042"
],
"abstract": [
"Automatically generating 3D building models from 2D architectural drawings has many useful applications in the architecture engineering and construction community. This survey of model generation from paper and CAD-based architectural drawings covers the common pipeline and compares various algorithms for each step of the process."
]
} |
1710.01918 | 2763439328 | This paper investigates the incentive mechanism design from a novel and practically important perspective in which mobile users as contributors do not join simultaneously and a requester desires large efforts from early contributors. A two-stage Tullock contest framework is constructed: at the second stage the potential contributors compete for splittable reward by exerting efforts, and at the first stage the requester can orchestrate the incentive mechanism to maximize his crowdsensing efficiency given the rewarding budget. A general reward discrimination mechanism is developed for timeliness-sensitive crowdsensing where an earlier contributor usually has a larger maximum achievable reward and thus allocates more efforts. Owing to the lack of joining time information, two practical implementations, namely earliest-n and termination time, are announced to the contributors. For each of them, we formulate a Stackelberg Bayesian game in which the joining time of a contributor is his type and not available to his opponents. The uniqueness of Bayesian Nash equilibrium (BNE) is proved in each strategy. To maximize the requester's efficiency, we compute the optimal number of rewarded contributors in the earliest-n scheme and the optimal deadline in the termination time scheme. Our contest framework is applicable not only to the closed crowdsensing with fixed number of contributors, but also to the open crowdsensing that the arrival of contributors is governed by a stochastic process. Extensive simulations manifest that with appropriate reward discriminations, the requester is able to achieve a much higher efficiency with the optimal selection of the number of rewarded contributors and the termination time. | There is a growing literature on the optimal design of incentive mechanisms in crowdsourcing and mobile crowdsensing applications. 
For instance, @cite_0 @cite_10 targeted incentivizing high-quality user-generated content in online question and answer forums. @cite_5 @cite_6 @cite_2 @cite_19 provided entertainment-like or monetary incentives for workers to label tasks in MTurk. The proliferation of mobile handheld devices has triggered a variety of crowdsensing technologies. Pioneering sensing systems include NoiseTube @cite_20 for noise monitoring, SignalGuru @cite_8 for traffic monitoring, and some others @cite_6 . We categorize the literature on incentive mechanism design into two groups according to their methodologies: one is the auction-based approach and the other is the game-theoretic approach. | {
"cite_N": [
"@cite_8",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_10",
"@cite_20"
],
"mid": [
"",
"2064783900",
"2124810512",
"",
"2140890285",
"1601808502",
"",
"1583075929"
],
"abstract": [
"",
"With the rich set of embedded sensors installed in smartphones and the large number of mobile users, we witness the emergence of many innovative commercial mobile crowdsensing applications that combine the power of mobile technology with crowdsourcing to deliver time-sensitive and location-dependent information to their customers. Motivated by these real-world applications, we consider the task selection problem for heterogeneous users with different initial locations, movement costs, movement speeds, and reputation levels. Computing the social surplus maximization task allocation turns out to be an NP-hard problem. Hence we focus on the distributed case, and propose an asynchronous and distributed task selection (ADTS) algorithm to help the users plan their task selections on their own. We prove the convergence of the algorithm, and further characterize the computation time for users' updates in the algorithm. Simulation results suggest that the ADTS scheme achieves the highest Jain's fairness index and coverage comparing with several benchmark algorithms, while yielding similar user payoff to a greedy centralized benchmark. Finally, we illustrate how mobile users coordinate under the ADTS scheme based on some practical movement time data derived from Google Maps.",
"In this paper, we provide a simple game-theoretic model of an online question and answer forum. We focus on factual questions in which user responses aggregate while a question remains open. Each user has a unique piece of information and can decide when to report this information. The asker prefers to receive information sooner rather than later, and will stop the process when satisfied with the cumulative value of the posted information. We consider two distinct cases: a complements case, in which each successive piece of information is worth more to the asker than the previous one; and a substitutes case, in which each successive piece of information is worth less than the previous one. A best-answer scoring rule is adopted to model Yahoo! Answers, and is effective for substitutes information, where it isolates an equilibrium in which all users respond in the first round. But we find that this rule is ineffective for complements information, isolating instead an equilibrium in which all users respond in the final round. In addressing this, we demonstrate that an approval-voting scoring rule and a proportional-share scoring rule can enable the most efficient equilibrium with complements information, under certain conditions, by providing incentives for early responders as well as the user who submits the final answer.",
"",
"Crowdsourcing systems, in which tasks are electronically distributed to numerous \"information piece-workers\", have emerged as an effective paradigm for human-powered solving of large scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way such as majority voting. In this paper, we consider a general model of such crowdsourcing tasks, and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm significantly outperforms majority voting and, in fact, is asymptotically optimal through comparison to an oracle that knows the reliability of every worker.",
"Crowdsourcing systems allocate tasks to a group of workers over the Internet, which have become an effective paradigm for human-powered problem solving such as image classification, optical character recognition and proofreading. In this paper, we focus on incentivizing crowd workers to label a set of binary tasks under strict budget constraint. We properly profile the tasks' difficulty levels and workers' quality in crowdsourcing systems, where the collected labels are aggregated with sequential Bayesian approach. To stimulate workers to undertake crowd labeling tasks, the interaction between workers and the platform is modeled as a reverse auction. We reveal that the platform utility maximization could be intractable, for which an incentive mechanism that determines the winning bid and payments with polynomial-time computation complexity is developed. Moreover, we theoretically prove that our mechanism is truthful, individually rational and budget feasible. Through extensive simulations, we demonstrate that our mechanism utilizes budget efficiently to achieve high platform utility with polynomial computation complexity.",
"",
"In this paper we present a new approach for the assessment of noise pollution involving the general public. The goal of this project is to turn GPS-equipped mobile phones into noise sensors that enable citizens to measure their personal exposure to noise in their everyday environment. Thus each user can contribute by sharing their geo-localised measurements and further personal annotation to produce a collective noise map."
]
} |
1710.01918 | 2763439328 | This paper investigates the incentive mechanism design from a novel and practically important perspective in which mobile users as contributors do not join simultaneously and a requester desires large efforts from early contributors. A two-stage Tullock contest framework is constructed: at the second stage the potential contributors compete for splittable reward by exerting efforts, and at the first stage the requester can orchestrate the incentive mechanism to maximize his crowdsensing efficiency given the rewarding budget. A general reward discrimination mechanism is developed for timeliness-sensitive crowdsensing where an earlier contributor usually has a larger maximum achievable reward and thus allocates more efforts. Owing to the lack of joining time information, two practical implementations, namely earliest-n and termination time, are announced to the contributors. For each of them, we formulate a Stackelberg Bayesian game in which the joining time of a contributor is his type and not available to his opponents. The uniqueness of Bayesian Nash equilibrium (BNE) is proved in each strategy. To maximize the requester's efficiency, we compute the optimal number of rewarded contributors in the earliest-n scheme and the optimal deadline in the termination time scheme. Our contest framework is applicable not only to the closed crowdsensing with fixed number of contributors, but also to the open crowdsensing that the arrival of contributors is governed by a stochastic process. Extensive simulations manifest that with appropriate reward discriminations, the requester is able to achieve a much higher efficiency with the optimal selection of the number of rewarded contributors and the termination time. | DiPalantino and Vojnovic connected crowdsourcing to an all-pay auction model in which users select among, and subsequently compete in, multiple contests offering various rewards @cite_26 . 
Singla and Krause @cite_3 presented a near-optimal posted-price mechanism for online budgeted procurement in crowdsourcing platforms. Authors in @cite_5 modeled the interaction between workers and the crowdsourcer as a reverse auction for task labelling under a strict budget constraint. designed an auction-based incentive mechanism for mobile phones to collect and analyze data @cite_1 . designed incentive mechanisms based on reverse combinatorial auctions that approximately maximized the social welfare with a guaranteed approximation ratio @cite_28 . studied an incentive mechanism based on all-pay auction with more realistic factors such as information asymmetry, population uncertainty and risk aversion @cite_17 . | {
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_1",
"@cite_3",
"@cite_5",
"@cite_17"
],
"mid": [
"2124576904",
"",
"1970756365",
"2251049298",
"1601808502",
""
],
"abstract": [
"In this paper we present and analyze a model in which users select among, and subsequently compete in, a collection of contests offering various rewards. The objective is to capture the essential features of a crowdsourcing system, an environment in which diverse tasks are presented to a large community. We aim to demonstrate the precise relationship between incentives and participation in such systems. We model contests as all-pay auctions with incomplete information; as a consequence of revenue equivalence, our model may also be interpreted more broadly as one in which users select among auctions of heterogeneous goods. We present two regimes in which we find an explicit correspondence in equilibrium between the offered rewards and the users' participation levels. The regimes respectively model situations in which different contests require similar or unrelated skills. Principally, we find that rewards yield logarithmically diminishing returns with respect to participation levels. We compare these results to empirical data from the crowdsourcing site Taskcn.com; we find that as we condition the data on more experienced users, the model more closely conforms to the empirical data.",
"",
"Mobile phone sensing is a new paradigm which takes advantage of the pervasive smartphones to collect and analyze data beyond the scale of what was previously possible. In a mobile phone sensing system, the platform recruits smartphone users to provide sensing service. Existing mobile phone sensing applications and systems lack good incentive mechanisms that can attract more user participation. To address this issue, we design incentive mechanisms for mobile phone sensing. We consider two system models: the platform-centric model where the platform provides a reward shared by participating users, and the user-centric model where users have more control over the payment they will receive. For the platform-centric model, we design an incentive mechanism using a Stackelberg game, where the platform is the leader while the users are the followers. We show how to compute the unique Stackelberg Equilibrium, at which the utility of the platform is maximized, and none of the users can improve its utility by unilaterally deviating from its current strategy. For the user-centric model, we design an auction-based incentive mechanism, which is computationally efficient, individually rational, profitable, and truthful. Through extensive simulations, we evaluate the performance and validate the theoretical properties of our incentive mechanisms.",
"What price should be offered to a worker for a task in an online labor market? How can one enable workers to express the amount they desire to receive for the task completion? Designing optimal pricing policies and determining the right monetary incentives is central to maximizing requester's utility and workers' profits. Yet, current crowdsourcing platforms only offer a limited capability to the requester in designing the pricing policies and often rules of thumb are used to price tasks. This limitation could result in inefficient use of the requester's budget or workers becoming disinterested in the task. In this paper, we address these questions and present mechanisms using the approach of regret minimization in online learning. We exploit a link between procurement auctions and multi-armed bandits to design mechanisms that are budget feasible, achieve near-optimal utility for the requester, are incentive compatible (truthful) for workers and make minimal assumptions about the distribution of workers' true costs. Our main contribution is a novel, no-regret posted price mechanism, BP-UCB, for budgeted procurement in stochastic online settings. We prove strong theoretical guarantees about our mechanism, and extensively evaluate it in simulations as well as on real data from the Mechanical Turk platform. Compared to the state of the art, our approach leads to a 180 increase in utility.",
"Crowdsourcing systems allocate tasks to a group of workers over the Internet, which have become an effective paradigm for human-powered problem solving such as image classification, optical character recognition and proofreading. In this paper, we focus on incentivizing crowd workers to label a set of binary tasks under strict budget constraint. We properly profile the tasks' difficulty levels and workers' quality in crowdsourcing systems, where the collected labels are aggregated with sequential Bayesian approach. To stimulate workers to undertake crowd labeling tasks, the interaction between workers and the platform is modeled as a reverse auction. We reveal that the platform utility maximization could be intractable, for which an incentive mechanism that determines the winning bid and payments with polynomial-time computation complexity is developed. Moreover, we theoretically prove that our mechanism is truthful, individually rational and budget feasible. Through extensive simulations, we demonstrate that our mechanism utilizes budget efficiently to achieve high platform utility with polynomial computation complexity.",
""
]
} |
1710.01918 | 2763439328 | This paper investigates the incentive mechanism design from a novel and practically important perspective in which mobile users as contributors do not join simultaneously and a requester desires large efforts from early contributors. A two-stage Tullock contest framework is constructed: at the second stage the potential contributors compete for splittable reward by exerting efforts, and at the first stage the requester can orchestrate the incentive mechanism to maximize his crowdsensing efficiency given the rewarding budget. A general reward discrimination mechanism is developed for timeliness-sensitive crowdsensing where an earlier contributor usually has a larger maximum achievable reward and thus allocates more efforts. Owing to the lack of joining time information, two practical implementations, namely earliest-n and termination time, are announced to the contributors. For each of them, we formulate a Stackelberg Bayesian game in which the joining time of a contributor is his type and not available to his opponents. The uniqueness of Bayesian Nash equilibrium (BNE) is proved in each strategy. To maximize the requester's efficiency, we compute the optimal number of rewarded contributors in the earliest-n scheme and the optimal deadline in the termination time scheme. Our contest framework is applicable not only to the closed crowdsensing with fixed number of contributors, but also to the open crowdsensing that the arrival of contributors is governed by a stochastic process. Extensive simulations manifest that with appropriate reward discriminations, the requester is able to achieve a much higher efficiency with the optimal selection of the number of rewarded contributors and the termination time. | Authors in @cite_0 studied the question of designing incentives for online Q&A forums. 
Ghosh and McAfee modeled the economics of incentivizing high-quality user-generated content as a noncooperative game, and investigated the highest quality achievable at the Nash equilibrium (NE) under an elimination mechanism @cite_10 . @cite_1 studied a platform-centric incentive mechanism, based on a Stackelberg game framework, for maximizing the total effort from contributors in mobile crowdsensing. Authors in @cite_16 presented an optimal Tullock contest model for crowdsensing with incomplete information. | {
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_16",
"@cite_10"
],
"mid": [
"2124810512",
"1970756365",
"1536369969",
""
],
"abstract": [
"In this paper, we provide a simple game-theoretic model of an online question and answer forum. We focus on factual questions in which user responses aggregate while a question remains open. Each user has a unique piece of information and can decide when to report this information. The asker prefers to receive information sooner rather than later, and will stop the process when satisfied with the cumulative value of the posted information. We consider two distinct cases: a complements case, in which each successive piece of information is worth more to the asker than the previous one; and a substitutes case, in which each successive piece of information is worth less than the previous one. A best-answer scoring rule is adopted to model Yahoo! Answers, and is effective for substitutes information, where it isolates an equilibrium in which all users respond in the first round. But we find that this rule is ineffective for complements information, isolating instead an equilibrium in which all users respond in the final round. In addressing this, we demonstrate that an approval-voting scoring rule and a proportional-share scoring rule can enable the most efficient equilibrium with complements information, under certain conditions, by providing incentives for early responders as well as the user who submits the final answer.",
"Mobile phone sensing is a new paradigm which takes advantage of the pervasive smartphones to collect and analyze data beyond the scale of what was previously possible. In a mobile phone sensing system, the platform recruits smartphone users to provide sensing service. Existing mobile phone sensing applications and systems lack good incentive mechanisms that can attract more user participation. To address this issue, we design incentive mechanisms for mobile phone sensing. We consider two system models: the platform-centric model where the platform provides a reward shared by participating users, and the user-centric model where users have more control over the payment they will receive. For the platform-centric model, we design an incentive mechanism using a Stackelberg game, where the platform is the leader while the users are the followers. We show how to compute the unique Stackelberg Equilibrium, at which the utility of the platform is maximized, and none of the users can improve its utility by unilaterally deviating from its current strategy. For the user-centric model, we design an auction-based incentive mechanism, which is computationally efficient, individually rational, profitable, and truthful. Through extensive simulations, we evaluate the performance and validate the theoretical properties of our incentive mechanisms.",
"Incentive mechanisms for crowdsourcing have been extensively studied under the framework of all-pay auctions. Along a distinct line, this paper proposes to use Tullock contests as an alternative tool to design incentive mechanisms for crowdsourcing. We are inspired by the conduciveness of Tullock contests to attracting user entry (yet not necessarily a higher revenue) in other domains. In this paper, we explore a new dimension in optimal Tullock contest design, by superseding the contest prize — which is fixed in conventional Tullock contests — with a prize function that is dependent on the (unknown) winner's contribution, in order to maximize the crowdsourcer's utility. We show that this approach leads to attractive practical advantages: (a) it is well-suited for rapid prototyping in fully distributed web agents and smartphone apps; (b) it overcomes the disincentive to participate caused by players' antagonism to an increasing number of rivals. Furthermore, we optimize conventional, fixed-prize Tullock contests to construct the most superior benchmark to compare against our mechanism. Through extensive evaluations, we show that our mechanism significantly outperforms the optimal benchmark, by over three folds on the crowdsourcer's utility cum profit and up to nine folds on the players' social welfare.",
""
]
} |
1710.01918 | 2763439328 | This paper investigates the incentive mechanism design from a novel and practically important perspective in which mobile users as contributors do not join simultaneously and a requester desires large efforts from early contributors. A two-stage Tullock contest framework is constructed: at the second stage the potential contributors compete for a splittable reward by exerting efforts, and at the first stage the requester can orchestrate the incentive mechanism to maximize his crowdsensing efficiency given the rewarding budget. A general reward discrimination mechanism is developed for timeliness-sensitive crowdsensing where an earlier contributor usually has a larger maximum achievable reward and thus allocates more effort. Owing to the lack of joining time information, two practical implementations, namely earliest-n and termination time, are announced to the contributors. For each of them, we formulate a Stackelberg Bayesian game in which the joining time of a contributor is his type and not available to his opponents. The uniqueness of the Bayesian Nash equilibrium (BNE) is proved in each strategy. To maximize the requester's efficiency, we compute the optimal number of rewarded contributors in the earliest-n scheme and the optimal deadline in the termination time scheme. Our contest framework is applicable not only to closed crowdsensing with a fixed number of contributors, but also to open crowdsensing in which the arrival of contributors is governed by a stochastic process. Extensive simulations show that with appropriate reward discrimination, the requester is able to achieve a much higher efficiency with the optimal selection of the number of rewarded contributors and the termination time. | The studies most relevant to ours are @cite_1 @cite_16 , which utilized Tullock-like contest models. In the pioneering work @cite_1 , crowdsensing was modeled as a complete-information Tullock game. 
@cite_16 investigated the same contest model with incomplete information, which admits a BNE, and presented an optimal prize function to maximize the total effort. Neither study considers the timeliness of contributions. We rigorously show how an appropriate form of contest model can be chosen, and propose two novel Stackelberg Bayesian contest mechanisms to incentivize early-joining contestants to contribute more effort. | {
"cite_N": [
"@cite_16",
"@cite_1"
],
"mid": [
"1536369969",
"1970756365"
],
"abstract": [
"Incentive mechanisms for crowdsourcing have been extensively studied under the framework of all-pay auctions. Along a distinct line, this paper proposes to use Tullock contests as an alternative tool to design incentive mechanisms for crowdsourcing. We are inspired by the conduciveness of Tullock contests to attracting user entry (yet not necessarily a higher revenue) in other domains. In this paper, we explore a new dimension in optimal Tullock contest design, by superseding the contest prize — which is fixed in conventional Tullock contests — with a prize function that is dependent on the (unknown) winner's contribution, in order to maximize the crowdsourcer's utility. We show that this approach leads to attractive practical advantages: (a) it is well-suited for rapid prototyping in fully distributed web agents and smartphone apps; (b) it overcomes the disincentive to participate caused by players' antagonism to an increasing number of rivals. Furthermore, we optimize conventional, fixed-prize Tullock contests to construct the most superior benchmark to compare against our mechanism. Through extensive evaluations, we show that our mechanism significantly outperforms the optimal benchmark, by over three folds on the crowdsourcer's utility cum profit and up to nine folds on the players' social welfare.",
"Mobile phone sensing is a new paradigm which takes advantage of the pervasive smartphones to collect and analyze data beyond the scale of what was previously possible. In a mobile phone sensing system, the platform recruits smartphone users to provide sensing service. Existing mobile phone sensing applications and systems lack good incentive mechanisms that can attract more user participation. To address this issue, we design incentive mechanisms for mobile phone sensing. We consider two system models: the platform-centric model where the platform provides a reward shared by participating users, and the user-centric model where users have more control over the payment they will receive. For the platform-centric model, we design an incentive mechanism using a Stackelberg game, where the platform is the leader while the users are the followers. We show how to compute the unique Stackelberg Equilibrium, at which the utility of the platform is maximized, and none of the users can improve its utility by unilaterally deviating from its current strategy. For the user-centric model, we design an auction-based incentive mechanism, which is computationally efficient, individually rational, profitable, and truthful. Through extensive simulations, we evaluate the performance and validate the theoretical properties of our incentive mechanisms."
]
} |
1710.01840 | 2763068805 | We present hardware, perception, and planning tools that enable a modular robot to autonomously deploy passive structures to reach otherwise-inaccessible regions of the environment to accomplish high-level tasks. An environment characterization algorithm identifies features in the environment that can be augmented to create a path between two disconnected regions of the environment. Specially-designed building blocks enable the robot to create structures that can augment the identified features to expand the reachable space of the robot. A high-level planner reasons about the high-level task, robot locomotion capabilities, and the environment to decide if and where to augment the environment in order to perform the desired task. These autonomous, perception-driven augmentation tools extend the adaptive capabilities of modular robot systems. | The authors of @cite_1 @cite_6 present hardware and algorithms for building amorphous ramps in unstructured environments by depositing foam with a tracked mobile robot. Amorphous ramps are built in response to the environment to allow a small mobile robot to surmount large, irregularly shaped obstacles. Our work is similar in spirit, but places an emphasis on autonomy and high-level locomotion and manipulation tasks rather than construction. | {
"cite_N": [
"@cite_1",
"@cite_6"
],
"mid": [
"1989482479",
"2115536039"
],
"abstract": [
"We present a locally reactive algorithm to construct arbitrary shapes with amorphous materials. The goal is to provide methods for robust robotic construction in unstructured, cluttered terrain, where deliberative approaches with pre-fabricated construction elements are difficult to apply. Amorphous materials provide a simple way to interface with existing obstacles, as well as irregularly shaped previous depositions. The local reactive nature of these algorithms allows robots to recover from disturbances, operate in dynamic environments, and provides a way to work with scalable robot teams.",
"We present a model of construction using iterative amorphous depositions and give a distributed algorithm to reliably build ramps in unstructured environments. The relatively simple local strategy for interacting with irregularly shaped, partially built structures gives rise robust adaptive global properties. We illustrate the algorithm in both the single robot and multi-robot case via simulation and describe how to solve key technical challenges to implementing this algorithm via a robotic prototype."
]
} |
1710.01840 | 2763068805 | We present hardware, perception, and planning tools that enable a modular robot to autonomously deploy passive structures to reach otherwise-inaccessible regions of the environment to accomplish high-level tasks. An environment characterization algorithm identifies features in the environment that can be augmented to create a path between two disconnected regions of the environment. Specially-designed building blocks enable the robot to create structures that can augment the identified features to expand the reachable space of the robot. A high-level planner reasons about the high-level task, robot locomotion capabilities, and the environment to decide if and where to augment the environment in order to perform the desired task. These autonomous, perception-driven augmentation tools extend the adaptive capabilities of modular robot systems. | Modular self-reconfigurable robot (MSRR) systems are composed of simple repeated robot elements (called modules) that connect together to form larger robotic structures. These robots can self-reconfigure, rearranging their constituent modules to form different morphologies, and changing their abilities to match the needs of the task and environment @cite_10 . Our work leverages recent systems that integrate the low-level capabilities of an MSRR system into a design library, accomplish high-level user-specified tasks by synthesizing library elements into a reactive state machine @cite_3 , and operate autonomously in unknown environments using perception tools for environment exploration and characterization @cite_15 . | {
"cite_N": [
"@cite_15",
"@cite_10",
"@cite_3"
],
"mid": [
"2951669995",
"",
"2774832050"
],
"abstract": [
"The theoretical ability of modular robots to reconfigure in response to complex tasks in a priori unknown environments has frequently been cited as an advantage, but has never been experimentally demonstrated. For the first time, we present a system that integrates perception, high-level mission planning, and modular robot hardware, allowing a modular robot to autonomously reconfigure in response to an a priori unknown environment in order to complete high-level tasks. Three hardware experiments validate the system, and demonstrate a modular robot autonomously exploring, reconfiguring, and manipulating objects to complete high-level tasks in unknown environments. We present system architecture, software and hardware in a general framework that enables modular robots to solve tasks in unknown environments using autonomous, reactive reconfiguration. The physical robot is composed of modules that support multiple robot configurations. An onboard 3D sensor provides information about the environment and informs exploration, reconfiguration decision making and feedback control. A centralized high-level mission planner uses information from the environment and the user-specified task description to autonomously compose low-level controllers to perform locomotion, reconfiguration, and other behaviors. A novel, centralized self-reconfiguration method is used to change robot configurations as needed.",
"",
"The advantage of modular self-reconfigurable robot systems is their flexibility, but this advantage can only be realized if appropriate configurations (shapes) and behaviors (controlling programs) can be selected for a given task. In this paper, we present an integrated system for addressing high-level tasks with modular robots, and demonstrate that it is capable of accomplishing challenging, multi-part tasks in hardware experiments. The system consists of four tightly integrated components: (1) a high-level mission planner, (2) a large design library spanning a wide set of functionality, (3) a design and simulation tool for populating the library with new configurations and behaviors, and (4) modular robot hardware. This paper builds on earlier work by (in: Robotics: science and systems, 2016), extending the original system to include environmentally adaptive parametric behaviors, which integrate motion planners and feedback controllers with the system."
]
} |
1710.01840 | 2763068805 | We present hardware, perception, and planning tools that enable a modular robot to autonomously deploy passive structures to reach otherwise-inaccessible regions of the environment to accomplish high-level tasks. An environment characterization algorithm identifies features in the environment that can be augmented to create a path between two disconnected regions of the environment. Specially-designed building blocks enable the robot to create structures that can augment the identified features to expand the reachable space of the robot. A high-level planner reasons about the high-level task, robot locomotion capabilities, and the environment to decide if and where to augment the environment in order to perform the desired task. These autonomous, perception-driven augmentation tools extend the adaptive capabilities of modular robot systems. | Our work extends the SMORES-EP hardware system by introducing passive pieces that are manipulated and traversed by the modules. Terada and Murata @cite_7 present a lattice-style modular system with two parts: structure modules and an assembler robot. Like many lattice-style modular systems, the assembler robot can only move on the structure modules, and not in an unstructured environment. Other lattice-style modular robot systems create structures out of the robots themselves. M-blocks @cite_4 form 3D structures out of robot cubes which rotate over the structure. The authors of @cite_11 present rectangular boat robots that self-assemble into floating structures, like a bridge. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_11"
],
"mid": [
"",
"1585107097",
"1984426410"
],
"abstract": [
"",
"The main difficulty of construction automation is the variety of materials. To cope with this problem, we propose a concept of automated assembly system for a modular structure. This system uses passive building blocks called \"structure modules\" and an assembler robot that is specialized to handle them. This \"modular\" concept drastically simplifies structural construction and allows automation. This paper describes the hardware design of the structure module and the assembler robots. Especially, the connection mechanism, the essential element of the system, is explained in detail. We built a prototype model based on the design to evaluate its automatic construction capability. Some experimental results illustrate the feasibility of the proposed method.",
"We present the methodology, algorithms, system design, and experiments addressing the self-assembly of large teams of autonomous robotic boats into floating platforms. Identical self-propelled robotic boats autonomously dock together and form connected structures with controllable variable stiffness. These structures can self-reconfigure into arbitrary shapes limited only by the number of rectangular elements assembled in brick-like patterns. An @math complexity algorithm automatically generates assembly plans which maximize opportunities for parallelism while constructing operator-specified target configurations with @math components. The system further features an @math complexity algorithm for the concurrent assignment and planning of trajectories from @math free robots to the growing structure. Such peer-to-peer assembly among modular robots compares favorably to a single active element assembling passive components in terms of both construction rate and potential robustness through redundancy. We describe hardware and software techniques to facilitate reliable docking of elements in the presence of estimation and actuation errors, and we consider how these local variable stiffness connections may be used to control the structural properties of the larger assembly. Assembly experiments validate these ideas in a fleet of 0.5 m long modular robotic boats with onboard thrusters, active connectors, and embedded computers."
]
} |
1710.01840 | 2763068805 | We present hardware, perception, and planning tools that enable a modular robot to autonomously deploy passive structures to reach otherwise-inaccessible regions of the environment to accomplish high-level tasks. An environment characterization algorithm identifies features in the environment that can be augmented to create a path between two disconnected regions of the environment. Specially-designed building blocks enable the robot to create structures that can augment the identified features to expand the reachable space of the robot. A high-level planner reasons about the high-level task, robot locomotion capabilities, and the environment to decide if and where to augment the environment in order to perform the desired task. These autonomous, perception-driven augmentation tools extend the adaptive capabilities of modular robot systems. | @cite_12 present a system in which a mobile robot manipulates specially designed cubes to build functional structures. The robot explores an unknown environment, performing 2D SLAM and visually recognizing blocks and gaps in the ground. Blocks are pushed into gaps to create bridges to previously inaccessible areas. In a "real but contrived experimental design" @cite_12 , a robot is tasked with building a three-block tower, and autonomously uses two blocks to build a bridge to a region with three blocks, retrieving them to complete its task. Where the Magnenat system is limited to manipulating blocks in a specifically designed environment, our work presents hardware, perception, and high-level planning tools that are more general, providing the ability to complete high-level tasks involving locomotion and manipulation in realistic human environments. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2046351921"
],
"abstract": [
"The goal of creating machines that autonomously perform useful work in a safe, robust and intelligent manner continues to motivate robotics research. Achieving this autonomy requires capabilities for understanding the environment, physically interacting with it, predicting the outcomes of actions and reasoning with this knowledge. Such intelligent physical interaction was at the centre of early robotic investigations and remains an open topic."
]
} |
1710.01823 | 2763605854 | This work has been funded in part by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI 12 RC 2289 (INSIGHT) and by Enterprise Ireland and the IDA under the Technology Centre Programme [Grant TC-2012-009] | @cite_7 have used a Dynamic Hierarchical Dirichlet Process to track topics over time; documents within an epoch are exchangeable, but their ordering across epochs is maintained. They also applied this to longitudinal (NIPS) papers to track emerging and decaying topics (worth noting for tracking changing topics around compliance issues). | {
"cite_N": [
"@cite_7"
],
"mid": [
"2951657789"
],
"abstract": [
"Topic models have proven to be a useful tool for discovering latent structures in document collections. However, most document collections often come as temporal streams and thus several aspects of the latent structure such as the number of topics, the topics' distribution and popularity are time-evolving. Several models exist that model the evolution of some but not all of the above aspects. In this paper we introduce infinite dynamic topic models, iDTM, that can accommodate the evolution of all the aforementioned aspects. Our model assumes that documents are organized into epochs, where the documents within each epoch are exchangeable but the order between the documents is maintained across epochs. iDTM allows for unbounded number of topics: topics can die or be born at any epoch, and the representation of each topic can evolve according to a Markovian dynamics. We use iDTM to analyze the birth and evolution of topics in the NIPS community and evaluated the efficacy of our model on both simulated and real datasets with favorable outcome."
]
} |
1710.01823 | 2763605854 | This work has been funded in part by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI 12 RC 2289 (INSIGHT) and by Enterprise Ireland and the IDA under the Technology Centre Programme [Grant TC-2012-009] | Most work on taxonomy generation in the legal domain has involved manual construction of concept hierarchies by legal experts @cite_2 . This task, besides being both tedious and costly in terms of time and qualified human resources, is also not easily adaptable to changes. Systems for automatic legal-domain taxonomy creation have, on the contrary, received very little attention so far. Only @cite_4 worked on a similar task, developing a machine learning-based system for scalable document classification. They constructed hierarchical topic schemes of areas of law and used proprietary methods of scoring and ranking to classify documents. However, this work was filed as a patent and is not freely available. | {
"cite_N": [
"@cite_4",
"@cite_2"
],
"mid": [
"2098269849",
"232172822"
],
"abstract": [
"An economic, scalable machine learning system and process perform document (concept) classification with high accuracy using large topic schemes, including large hierarchical topic schemes. One or more highly relevant classification topics is suggested for a-given document (concept) to be classified. The invention includes training and concept classification processes. The invention also provides methods that may be used as part of the training and or concept classification processes, including: a method of scoring the relevance of features in training concepts, a method of ranking concepts based on relevance score, and a method of voting on topics associated with an input concept. In a preferred embodiment, the invention is applied to the legal (case law) domain, classifying legal concepts (rules of law) according to a proprietary legal topic classification scheme (a hierarchical scheme of areas of law).",
"Document management is not often handled appropriately by organisations, if at all. Despite that, and despite the lack of structure in documents, organisations must face regulations that require owning a document collection with semantic content. The technique based on taxonomies and folksonomies can easily produce an adequate semantic classification for documents. It requires an adequate setup among the domain experts that apply it. The approach we propose uses Lean Kanban to coordinate the phases of definition, validation and implementation of taxonomies and folksonomies. It helps organisations to create a semantic classification of existing document resources, making them ready to be used in ways that were not possible before. At the same time, it helps to improve the quality of work of the organisation itself, adding speed to document search."
]
} |
1710.02121 | 2761820942 | In this paper, a quick and efficient method is presented for grasping unknown objects in clutter. The grasping method relies on real-time superquadric (SQ) representation of partial-view objects and incomplete object modelling, well suited for unknown symmetric objects in cluttered scenarios, and is followed by optimized antipodal grasping. The incomplete object models are processed through a mirroring algorithm that assumes symmetry to first create an approximate complete model, which is then fitted with an SQ representation. The grasping algorithm is designed for maximum force balance and stability, taking advantage of the quick retrieval of dimension and surface curvature information from the SQ parameters. The pose of the SQs with respect to the direction of gravity is calculated and used, together with the parameters of the SQs and the specification of the gripper, to select the best direction of approach and contact points. The SQ fitting method has been tested on custom datasets containing objects in isolation as well as in clutter. The grasping algorithm is evaluated on a PR2 and real-time results are presented. Initial results indicate that though the method is based on simplistic shape information, it outperforms other learning-based grasping algorithms that also work in clutter in terms of time-efficiency and accuracy. | A considerable amount of work exists in the area of finding feasible grasping points on novel objects using vision-based systems. Initial efforts using 2D images to find grasping points are presented in @cite_9 . The arrival of economical RGB-D cameras allowed interpreting objects in 3D, which led to object identification as point clouds and derived parameters. 
Different models have been utilized for this task, such as representing objects by implicit polynomials and algebraic invariants @cite_14 , spherical harmonics @cite_3 @cite_8 , geons @cite_20 , generalized cylinders @cite_32 , symmetry-seeking models @cite_15 mgp_approx_symm_sig_06 , blob models @cite_28 , union of balls @cite_11 and hyperquadrics @cite_21 , to name a few. | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_32",
"@cite_3",
"@cite_15",
"@cite_20",
"@cite_11"
],
"mid": [
"2156463490",
"",
"",
"2041376653",
"2131374595",
"2000500206",
"1989357090",
"2107198582",
"122304603",
"2018613909"
],
"abstract": [
"We treat the use of more complex higher degree polynomial curves and surfaces of degree higher than 2, which have many desirable properties for object recognition and position estimation, and attack the instability problem arising in their use with partial and noisy data. The scenario discussed in this paper is one where we have a set of objects that are modeled as implicit polynomial functions, or a set of representations of classes of objects with each object in a class modeled as an implicit polynomial function, stored in the database. Then, given partial data from one of the objects, we want to recognize the object (or the object class) or collect more data in order to get better parameter estimates for more reliable recognition. Two problems arising in this scenario are discussed: 1) the problem of recognizing these polynomials by comparing them in terms of their coefficients; and 2) the problem of where to collect data so as to improve the parameter estimates as quickly as possible. We use an asymptotic Bayesian approximation for solving the two problems. The intrinsic dimensionality of polynomials and the use of the Mahalanobis distance are discussed.",
"",
"",
"We consider the problem of grasping novel objects, specifically objects that are being seen for the first time through vision. Grasping a previously unknown object, one for which a 3-d model is not available, is a challenging problem. Furthermore, even if given a model, one still has to decide where to grasp the object. We present a learning algorithm that neither requires nor tries to build a 3-d model of the object. Given two (or more) images of an object, our algorithm attempts to identify a few points in each image corresponding to good locations at which to grasp the object. This sparse set of points is then triangulated to obtain a 3-d location at which to attempt a grasp. This is in contrast to standard dense stereo, which tries to triangulate every single point in an image (and often fails to return a good 3-d model). Our algorithm for identifying grasp locations from an image is trained by means of supervised learning, using synthetic images for the training set. We demonstrate this approach on two robotic manipulation platforms. Our algorithm successfully grasps a wide variety of objects, such as plates, tape rolls, jugs, cellphones, keys, screwdrivers, staplers, a thick coil of wire, a strangely shaped power horn and others, none of which were seen in the training set. We also apply our method to the task of unloading items from dishwashers.",
"The shape representation and modeling based on implicit functions have received considerable attention in computer vision literature. In this paper, we propose extended hyperquadrics, as a generalization of hyperquadrics developed by Hanson, for modeling global geometric shapes. The extended hyperquadrics can strengthen the representation power of hyperquadrics, especially for the object with concavities. We discuss the distance measures between extended hyperquadric surfaces and given data set and their minimization to obtain the optimum model parameters. We present several experimental results for fitting extended hyperquadrics to 3D real and synthetic data. We demonstrate that extended hyperquadrics can model more complex shapes than hyperquadrics, maintaining many desirable properties of hyperquadrics.",
"Generalized cylinder (GC) is a class of parametric shapes that is very flexible and capable of modeling many different types of real-world objects, and have subsequently been the focus of considerable research in the vision community. Most of the related works proposed previously have dealt with the recovery of 3D shape description of objects based on the GC representation from one or more 2D image data. Different from the objective of the previous works, in this paper, we will propose a new approach to obtain a GC-based shape description of 3D objects. The proposed approach of deriving the GC axis is a further extension of the potential-based skeletonization approach presented in (IEEE Trans. Pattern Anal. Mach. Intell. 22(11) (2000) 1241). Simulation results demonstrate that the derived GC representation will yield better approximation of object shape than that based on simpler subclasses of GC since there is, in principle, no restriction on the topology of the GC axis and the shape of the cross-sections.",
"In this paper, we present a new and efficient spherical harmonics decomposition for spherical functions defining 3D triangulated objects. Such spherical functions are intrinsically associated to star-shaped objects. However, our results can be extended to any triangular object after segmentation into star-shaped surface patches and recomposition of the results in the implicit framework. There is thus no restriction about the genus number of the object. We demonstrate that the evaluation of the spherical harmonics coefficients can be performed by a Monte Carlo integration over the edges, which makes the computation more accurate and faster than previous techniques, and provides a better control over the precision error in contrast to the voxel-based methods. We present several applications of our research, including fast spectral surface reconstruction from point clouds, local surface smoothing and interactive geometric texture transfer.",
"We propose models of 3D shape which may be viewed as deformable bodies composed of simulated elastic material. In contrast to traditional, purely geometric models of shape, deformable models are active—their shapes change in response to externally applied forces. We develop a deformable model for 3D shape which has a preference for axial symmetry. Symmetry is represented even though the model does not belong to a parametric shape family such as (generalized) cylinders. Rather, a symmetry-seeking property is designed into internal forces that constrain the deformations of the model. We develop a framework for 3D object reconstruction based on symmetry-seeking models. Instances of these models are formed from monocular image data through the action of external forces derived from the data. The forces proposed in this paper deform the model in space so that the shape of its projection into the image plane is consistent with the 2D silhouette of an object of interest. The effectiveness of our approach is demonstrated using natural images.",
"",
"Typical tasks of future service robots involve grasping and manipulating a large variety of objects differing in size and shape. Generating stable grasps on 3D objects is considered to be a hard problem, since many parameters such as hand kinematics, object geometry, material properties and forces have to be taken into account. This results in a high-dimensional space of possible grasps that cannot be searched exhaustively. We believe that the key to find stable grasps in an efficient manner is to use a special representation of the object geometry that can be easily analyzed. In this paper, we present a novel grasp planning method that evaluates local symmetry properties of objects to generate only candidate grasps that are likely to be of good quality. We achieve this by computing the medial axis which represents a 3D object as a union of balls. We analyze the symmetry information contained in the medial axis and use a set of heuristics to generate geometrically and kinematically reasonable candidate grasps. These candidate grasps are tested for force-closure. We present the algorithm and show experimental results on various object models using an anthropomorphic hand of a humanoid robot in simulation."
]
} |
1710.02121 | 2761820942 | In this paper, a quick and efficient method is presented for grasping unknown objects in clutter. The grasping method relies on real-time superquadric (SQ) representation of partial view objects and incomplete object modelling, well suited for unknown symmetric objects in cluttered scenarios which is followed by optimized antipodal grasping. The incomplete object models are processed through a mirroring algorithm that assumes symmetry to first create an approximate complete model and then fit for SQ representation. The grasping algorithm is designed for maximum force balance and stability, taking advantage of the quick retrieval of dimension and surface curvature information from the SQ parameters. The pose of the SQs with respect to the direction of gravity is calculated and used together with the parameters of the SQs and specification of the gripper, to select the best direction of approach and contact points. The SQ fitting method has been tested on custom datasets containing objects in isolation as well as in clutter. The grasping algorithm is evaluated on a PR2 and real time results are presented. Initial results indicate that though the method is based on simplistic shape information, it outperforms other learning based grasping algorithms that also work in clutter in terms of time-efficiency and accuracy. | The representation of objects as SQ is found in @cite_22 and later in @cite_2 . The work in @cite_6 shows fast object representation with SQ for household environments. An approach for fast and efficient pose recovery of objects using SQ has been presented in @cite_30 . In the work of @cite_25 the authors try to fit SQ to piled objects. | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_6",
"@cite_2",
"@cite_25"
],
"mid": [
"2024577416",
"2023751161",
"2118334589",
"159399829",
"1535992759"
],
"abstract": [
"Rapidly acquiring the shape and pose information of unknown objects is an essential characteristic of modern robotic systems in order to perform efficient manipulation tasks. In this work, we present a framework for 3D geometric shape recovery and pose estimation from unorganized point cloud data. We propose a low latency multi-scale voxelization strategy that rapidly fits superquadrics to single view 3D point clouds. As a result, we are able to quickly and accurately estimate the shape and pose parameters of relevant objects in a scene. We evaluate our approach on two datasets of common household objects collected using Microsoft's Kinect sensor. We also compare our work to the state of the art and achieve comparable results in less computational time. Our experimental results demonstrate the efficacy of our approach.",
"A new and powerful family of parametric shapes extends the basic quadric surfaces and solids, yielding a variety of useful forms.",
"Fast detection of objects in a home or office environment is relevant for robotic service and assistance applications. In this work we present the automatic localization of a wide variety of differently shaped objects scanned with a laser range sensor from one view in a cluttered setting. The daily-life objects are modeled using approximated superquadrics, which can be obtained from showing the object or another modeling process. Detection is based on a hierarchical RANSAC search to obtain fast detection results and the voting of sorted quality-of-fit criteria. The probabilistic search starts from low resolution and refines hypotheses at increasingly higher resolution levels. Criteria for object shape and the relationship of object parts together with a ranking procedure and a ranked voting process result in a combined ranking of hypothesis using a minimum number of parameters. Experiments from cluttered table top scenes demonstrate the effectiveness and robustness of the approach, feasible for real world object localization and robot grasp planning.",
"",
"Fast robotic unloading of piled deformable box-like objects (e.g. box-like sacks), is undoubtedly of great importance to the industry. Existing systems although fast, can only deal with layered, neatly placed configurations of such objects. In this paper we discuss an approach which deals with both neatly placed and jumbled configurations of objects. We use a time of flight laser sensor mounted on the hand of a robot for data acquisition. Target objects are modeled with globally deformed superquadrics. Object vertices are detected and superquadric seeds are placed at these vertices. Seed refinement via region growing results in accurate object recovery. Our system exhibits a plethora of advantages the most important of which its speed. Experiments demonstrate that our system can be used for object unloading in real time, when a multi-processor computer is employed."
]
} |
1710.02121 | 2761820942 | In this paper, a quick and efficient method is presented for grasping unknown objects in clutter. The grasping method relies on real-time superquadric (SQ) representation of partial view objects and incomplete object modelling, well suited for unknown symmetric objects in cluttered scenarios which is followed by optimized antipodal grasping. The incomplete object models are processed through a mirroring algorithm that assumes symmetry to first create an approximate complete model and then fit for SQ representation. The grasping algorithm is designed for maximum force balance and stability, taking advantage of the quick retrieval of dimension and surface curvature information from the SQ parameters. The pose of the SQs with respect to the direction of gravity is calculated and used together with the parameters of the SQs and specification of the gripper, to select the best direction of approach and contact points. The SQ fitting method has been tested on custom datasets containing objects in isolation as well as in clutter. The grasping algorithm is evaluated on a PR2 and real time results are presented. Initial results indicate that though the method is based on simplistic shape information, it outperforms other learning based grasping algorithms that also work in clutter in terms of time-efficiency and accuracy. | Recent literature suggests that using a single-view point cloud to fit superquadrics (SQ) can lead to erroneous shape and pose estimation. In general, 3D sensors are noisy and obtain only a partial view of the object, from a single viewpoint. In order to obtain a full model of the object, several strategies have been developed. In @cite_31 , the shape of the object is completed from the partial view and a mesh of the completed model is generated for grasping. Generating a full model from a partial model can be approached in several ways, mainly symmetry detection @cite_4 and symmetry planes @cite_12 @cite_23 @cite_18 . 
Extrusion-based object completion has also been proposed in @cite_10 . @cite_17 presents a different strategy of changing the viewpoint in a controlled manner and registering all the partial views to create a complete model. This technique provides good results, yet it is not very suitable for real time systems and is prone to errors due to registration of several views. In addition, results are not satisfactory when the working environment becomes densely cluttered. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_23",
"@cite_31",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2156583822",
"2060206980",
"2109384047",
"1572975105",
"2083624211",
"2097307110",
""
],
"abstract": [
"We consider the problem of grasp and manipulation planning when the state of the world is only partially observable. Specifically, we address the task of picking up unknown objects from a table top. The proposed approach to object shape prediction aims at closing the knowledge gaps in the robot's understanding of the world. A completed state estimate of the environment can then be provided to a simulator in which stable grasps and collision-free movements are planned.",
"\"Symmetry is a complexity-reducing concept [...]; seek it every-where.\" - Alan J. PerlisMany natural and man-made objects exhibit significant symmetries or contain repeated substructures. This paper presents a new algorithm that processes geometric models and efficiently discovers and extracts a compact representation of their Euclidean symmetries. These symmetries can be partial, approximate, or both. The method is based on matching simple local shape signatures in pairs and using these matches to accumulate evidence for symmetries in an appropriate transformation space. A clustering stage extracts potential significant symmetries of the object, followed by a verification step. Based on a statistical sampling analysis, we provide theoretical guarantees on the success rate of our algorithm. The extracted symmetry graph representation captures important high-level information about the structure of a geometric model which in turn enables a large set of further processing operations, including shape compression, segmentation, consistent editing, symmetrization, indexing for retrieval, etc.",
"In this paper we present a method for building models for grasping from a single 3D snapshot of a scene composed of objects of daily use in human living environments. We employ fast shape estimation, probabilistic model fitting and verification methods capable of dealing with different kinds of symmetries, and combine these with a triangular mesh of the parts that have no other representation to model previously unseen objects of arbitrary shape. Our approach is enhanced by the information given by the geometric clues about different parts of objects which serve as prior information for the selection of the appropriate reconstruction method. While we designed our system for grasping based on single view 3D data, its generality allows us to also use the combination of multiple views. We present two application scenarios that require complete geometric models: grasp planning and locating objects in camera images.",
"In this paper we present an approach for creating complete shape representations from a single depth image for robot grasping. We introduce algorithms for completing partial point clouds based on the analysis of symmetry and extrusion patterns in observed shapes. Identified patterns are used to generate a complete mesh of the object, which is, in turn, used for grasp planning. The approach allows robots to predict the shape of objects and include invisible regions into the grasp planning step. We show that the identification of shape patterns, such as extrusions, can be used for fast generation and optimization of grasps. Finally, we present experiments performed with our humanoid robot executing pick-up tasks based on single depth images and discuss the applications and shortcomings of our approach.",
"In this paper, we propose modelling objects using extrusion-based representations, which can be used to complete partial point clouds. These extrusion-based representations are particularly well-suited for modelling basic household objects that robots will often need to manipulate. In order to efficiently complete a partial point cloud, we first detect planar reflection symmetries. These symmetries are then used to determine initial candidates for extruded shapes in the point clouds. These candidate solutions are then used to locally search for a suitable set of parameters to complete the point cloud. The proposed method was tested on real data of household objects and it successfully detected the extruded shapes of the objects. By using the extrusion-based representation, the system could accurately capture various details of the objects' shapes.",
"We describe a technique for reconstructing probable occluded surfaces from 3D range images. The technique exploits the fact that many objects possess shape symmetries that can be recognized even from partial 3D views. Our approach identifies probable symmetries and uses them to attend the partial 3D shape model into the occluded space. To accommodate objects consisting of multiple parts, we describe a technique for segmenting objects into parts characterized by different symmetries. Results are provided for a real-world database of 3D range images of common objects, acquired through an active stereo rig",
""
]
} |
1710.02121 | 2761820942 | In this paper, a quick and efficient method is presented for grasping unknown objects in clutter. The grasping method relies on real-time superquadric (SQ) representation of partial view objects and incomplete object modelling, well suited for unknown symmetric objects in cluttered scenarios which is followed by optimized antipodal grasping. The incomplete object models are processed through a mirroring algorithm that assumes symmetry to first create an approximate complete model and then fit for SQ representation. The grasping algorithm is designed for maximum force balance and stability, taking advantage of the quick retrieval of dimension and surface curvature information from the SQ parameters. The pose of the SQs with respect to the direction of gravity is calculated and used together with the parameters of the SQs and specification of the gripper, to select the best direction of approach and contact points. The SQ fitting method has been tested on custom datasets containing objects in isolation as well as in clutter. The grasping algorithm is evaluated on a PR2 and real time results are presented. Initial results indicate that though the method is based on simplistic shape information, it outperforms other learning based grasping algorithms that also work in clutter in terms of time-efficiency and accuracy. | The calculation of feasible grasping points on a point cloud or a mesh is by nature iterative, hence computationally expensive. In @cite_24 , a large set of grasps is generated directly from the point cloud and evaluated using convolutional neural networks, obtaining good grasp success results. Such methods avoid the need for robust segmentation but cannot assure the assignment of the grasp to a target object. The method, denoted Grasp Pose Detection (GPD), has been combined with object pose detection in @cite_0 . 
A similar approach uses Height Accumulated Features (HAF) @cite_1 , where local topographical information from the point cloud is retrieved to calculate antipodal grasps. | {
"cite_N": [
"@cite_24",
"@cite_1",
"@cite_0"
],
"mid": [
"2290564286",
"1899217968",
""
],
"abstract": [
"This paper considers the problem of grasp pose detection in point clouds. We follow a general algorithmic structure that first generates a large set of 6-DOF grasp candidates and then classifies each of them as a good or a bad grasp. Our focus in this paper is on improving the second step by using depth sensor scans from large online datasets to train a convolutional neural network. We propose two new representations of grasp candidates, and we quantify the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models. Our analysis shows that a more informative grasp candidate representation as well as pretraining and prior knowledge significantly improve grasp detection. We evaluate our approach on a Baxter Research Robot and demonstrate an average grasp success rate of 93 in dense clutter. This is a 20 improvement compared to our prior work.",
"We present a system for grasping unknown objects, even from piles or cluttered scenes, given a point cloud. Our method is based on the topography of a given scene and abstracts grasp-relevant structures to enable machine learning techniques for grasping tasks. We describe how Height Accumulated Features HAF and their extension, Symmetry Height Accumulated Features, extract grasp relevant local shapes. We investigate grasp quality using an F-score metric. We demonstrate the gain and the expressive power of HAF by comparing its trained classifier with one that resulted from training on simple height grids. An efficient way to calculate HAF is presented. We describe how the trained grasp classifier is used to explore the whole grasp space and introduce a heuristic to find the most robust grasp. We show how to use our approach to adapt the gripper opening width before grasping. In robotic experiments we demonstrate different aspects of our system on three robot platforms: a Schunk seven-degree-of-freedom arm, a PR2 and a Kuka LWR arm. We perform tasks to grasp single objects, autonomously unload a box and clear the table. Thereby we show that our approach is easily adaptable and robust with respect to different manipulators. As part of the experiments we compare our algorithm with a state-of-the-art method and show significant improvements. Concrete examples are used to illustrate the benefit of our approach compared with established grasp approaches. Finally, we show advantages of the symbiosis between our approach and object recognition.",
""
]
} |
1710.02121 | 2761820942 | In this paper, a quick and efficient method is presented for grasping unknown objects in clutter. The grasping method relies on real-time superquadric (SQ) representation of partial view objects and incomplete object modelling, well suited for unknown symmetric objects in cluttered scenarios which is followed by optimized antipodal grasping. The incomplete object models are processed through a mirroring algorithm that assumes symmetry to first create an approximate complete model and then fit for SQ representation. The grasping algorithm is designed for maximum force balance and stability, taking advantage of the quick retrieval of dimension and surface curvature information from the SQ parameters. The pose of the SQs with respect to the direction of gravity is calculated and used together with the parameters of the SQs and specification of the gripper, to select the best direction of approach and contact points. The SQ fitting method has been tested on custom datasets containing objects in isolation as well as in clutter. The grasping algorithm is evaluated on a PR2 and real time results are presented. Initial results indicate that though the method is based on simplistic shape information, it outperforms other learning based grasping algorithms that also work in clutter in terms of time-efficiency and accuracy. | A different set of methods use the fitting of object models to the point clouds to generate smaller or simpler sets of grasp points. In @cite_7 , the calculation of grasp regions is combined with path planning. Curvature-based grasping using antipodal points for differentiable curves, in both convex and concave segments, was studied in @cite_29 , while in @cite_26 , a grasping energy function is used to calculate antipodal grasping points using local modelling of the surface. | {
"cite_N": [
"@cite_29",
"@cite_26",
"@cite_7"
],
"mid": [
"2125387318",
"2132476906",
"2023106861"
],
"abstract": [
"It is well known that antipodal grasps can be achieved on curved objects in the presence of friction. This paper presents an efficient algorithm that finds, up to numerical resolution, all pairs of antipodal points on a closed, simple, and twice continuously differentiable plane curve. Dissecting the curve into segments everywhere convex or everywhere concave, the algorithm marches simultaneously on a pair of such segments with provable convergence and interleaves marching with numerical bisection. It makes use of new insights into the differential geometry at two antipodal points. We have avoided resorting to traditional nonlinear programming which would neither be quite as efficient nor guarantee to find all antipodal points. Dissection and the coupling of marching with bisection introduced in this paper are potentially applicable to many optimization problems involving curves and curved shapes.",
"Two-finger antipodal point grasping of arbitrarily shaped smooth 2-D and 3-D objects is considered. An object function is introduced that maps a finger contact space to the object surface. Conditions are developed to identify the feasible grasping region, F, in the finger contact space. A \"grasping energy function\", E, is introduced which is proportional to the distance between two grasping points. The antipodal points correspond to critical points of E in F. Optimization and or continuation techniques are used to find these critical points. In particular, global optimization techniques are applied to find the \"maximal\" or \"minimal\" grasp. Further, modeling techniques are introduced for representing 2-D and 3-D objects using B-spline curves and spherical product surfaces. >",
"Traditionally, grasp and arm motion planning are considered as separate tasks. This paper presents an integrated approach that only requires the initial configuration of the robotic arm and the pose of the target object to simultaneously plan a good hand pose and arm trajectory to grasp the object. The planner exploits the concept of independent contact regions to look for the best possible grasp. The goal poses for the end effector are obtained using two different methods: one that biases a sampling approach towards favorable regions using principal component analysis, and another one that considers the capabilities of the robotic arm to decide the most promising hand poses. The proposed method is evaluated using different scenarios for the humanoid robot Spacejustin."
]
} |
1710.02121 | 2761820942 | In this paper, a quick and efficient method is presented for grasping unknown objects in clutter. The grasping method relies on real-time superquadric (SQ) representation of partial view objects and incomplete object modelling, well suited for unknown symmetric objects in cluttered scenarios which is followed by optimized antipodal grasping. The incomplete object models are processed through a mirroring algorithm that assumes symmetry to first create an approximate complete model and then fit for SQ representation. The grasping algorithm is designed for maximum force balance and stability, taking advantage of the quick retrieval of dimension and surface curvature information from the SQ parameters. The pose of the SQs with respect to the direction of gravity is calculated and used together with the parameters of the SQs and specification of the gripper, to select the best direction of approach and contact points. The SQ fitting method has been tested on custom datasets containing objects in isolation as well as in clutter. The grasping algorithm is evaluated on a PR2 and real time results are presented. Initial results indicate that though the method is based on simplistic shape information, it outperforms other learning based grasping algorithms that also work in clutter in terms of time-efficiency and accuracy. | A third set of methods relies on the recognition of objects and comparison to a database of objects with optimized grasping points already included as features of the object @cite_18 @cite_19 . | {
"cite_N": [
"@cite_19",
"@cite_18"
],
"mid": [
"1510186039",
"2156583822"
],
"abstract": [
"A robotic grasping simulator, called Graspit!, is presented as versatile tool for the grasping community. The focus of the grasp analysis has been on force-closure grasps, which are useful for pick-and-place type tasks. This work discusses the different types of world elements and the general robot definition, and presented the robot library. The paper also describes the user interface of Graspit! and present the collision detection and contact determination system. The grasp analysis and visualization method were also presented that allow a user to evaluate a grasp and compute optimal grasping forces. A brief overview of the dynamic simulation system was provided.",
"We consider the problem of grasp and manipulation planning when the state of the world is only partially observable. Specifically, we address the task of picking up unknown objects from a table top. The proposed approach to object shape prediction aims at closing the knowledge gaps in the robot's understanding of the world. A completed state estimate of the environment can then be provided to a simulator in which stable grasps and collision-free movements are planned."
]
} |
1710.02081 | 2762644402 | Recent direct visual odometry and SLAM algorithms have demonstrated impressive levels of precision. However, they require a photometric camera calibration in order to achieve competitive results. Hence, the respective algorithm cannot be directly applied to an off-the-shelf-camera or to a video sequence acquired with an unknown camera. In this work we propose a method for online photometric calibration which enables to process auto exposure videos with visual odometry precisions that are on par with those of photometrically calibrated videos. Our algorithm recovers the exposure times of consecutive frames, the camera response function, and the attenuation factors of the sensor irradiance due to vignetting. Gain robust KLT feature tracks are used to obtain scene point correspondences as input to a nonlinear optimization framework. We show that our approach can reliably calibrate arbitrary video sequences by evaluating it on datasets for which full photometric ground truth is available. We further show that our calibration can improve the performance of a state-of-the-art direct visual odometry method that works solely on pixel intensities, calibrating for photometric parameters in an online fashion in realtime. | If the exposure of the camera can be controlled manually, the photometric calibration can be obtained by acquiring multiple images taken under different exposures @cite_12 @cite_14 and then estimating a vignetting map by taking images of a uniformly colored surface @cite_16 @cite_5 . However, for many video cameras the exposure times are automatically chosen and cannot be influenced by the user. Furthermore, one might want to run a visual odometry or SLAM algorithm on datasets where no photometric calibration is provided and no access to the camera is given. In these cases, it is necessary to use an algorithm that can provide calibrations for arbitrary video sequences. | {
"cite_N": [
"@cite_5",
"@cite_14",
"@cite_12",
"@cite_16"
],
"mid": [
"",
"2130700878",
"2069281566",
"2464674920"
],
"abstract": [
"",
"A simple algorithm is described that computes the radiometric response function of an imaging system, from images of an arbitrary scene taken using different exposures. The exposure is varied by changing either the aperture setting or the shutter speed. The algorithm does not require precise estimates of the exposures used. Rough estimates of the ratios of the exposures (e.g. F-number settings on an inexpensive lens) are sufficient for accurate recovery of the response function as well as the actual exposure ratios. The computed response function is used to fuse the multiple images into a single high dynamic range radiance image. Robustness is tested using a variety of scenes and cameras as well as noisy synthetic images generated using 100 randomly selected response curves. Automatic rejection of image areas that have large vignetting effects or temporal scene variations make the algorithm applicable to not just photographic but also video cameras.",
"We present a method of recovering high dynamic range radiance maps from photographs taken with conventional imaging equipment. In our method, multiple photographs of the scene are taken with different amounts of exposure. Our algorithm uses these differently exposed photographs to recover the response function of the imaging process, up to factor of scale, using the assumption of reciprocity. With the known response function, the algorithm can fuse the multiple photographs into a single, high dynamic range radiance map whose pixel values are proportional to the true radiance values in the scene. We demonstrate our method on images acquired with both photochemical and digital imaging processes. We discuss how this work is applicable in many areas of computer graphics involving digitized photographs, including image-based modeling, image compositing, and image processing. Lastly, we demonstrate a few applications of having high dynamic range radiance maps, such as synthesizing realistic motion blur and simulating the response of the human visual system.",
"We present a dataset for evaluating the tracking accuracy of monocular visual odometry and SLAM methods. It contains 50 real-world sequences comprising more than 100 minutes of video, recorded across dozens of different environments -- ranging from narrow indoor corridors to wide outdoor scenes. All sequences contain mostly exploring camera motion, starting and ending at the same position. This allows to evaluate tracking accuracy via the accumulated drift from start to end, without requiring ground truth for the full sequence. In contrast to existing datasets, all sequences are photometrically calibrated. We provide exposure times for each frame as reported by the sensor, the camera response function, and dense lens attenuation factors. We also propose a novel, simple approach to non-parametric vignette calibration, which requires minimal set-up and is easy to reproduce. Finally, we thoroughly evaluate two existing methods (ORB-SLAM and DSO) on the dataset, including an analysis of the effect of image resolution, camera field of view, and the camera motion direction."
]
} |
1710.02081 | 2762644402 | Recent direct visual odometry and SLAM algorithms have demonstrated impressive levels of precision. However, they require a photometric camera calibration in order to achieve competitive results. Hence, the respective algorithm cannot be directly applied to an off-the-shelf-camera or to a video sequence acquired with an unknown camera. In this work we propose a method for online photometric calibration which enables to process auto exposure videos with visual odometry precisions that are on par with those of photometrically calibrated videos. Our algorithm recovers the exposure times of consecutive frames, the camera response function, and the attenuation factors of the sensor irradiance due to vignetting. Gain robust KLT feature tracks are used to obtain scene point correspondences as input to a nonlinear optimization framework. We show that our approach can reliably calibrate arbitrary video sequences by evaluating it on datasets for which full photometric ground truth is available. We further show that our calibration can improve the performance of a state-of-the-art direct visual odometry method that works solely on pixel intensities, calibrating for photometric parameters in an online fashion in realtime. | Multiple image approaches focus on offline applications such as panorama stitching of only a few input images @cite_8 @cite_15 for which the runtime of the algorithm is not critical and a large number of pixel correspondences can be acquired easily by aligning image pairs. Those approaches are not well suited for providing an online calibration of videos that can exhibit arbitrary motion. Nevertheless, we can adapt their underlying optimization strategies for the photometric parameter optimization. | {
"cite_N": [
"@cite_15",
"@cite_8"
],
"mid": [
"2133132458",
"2123315723"
],
"abstract": [
"Nonuniform exposures often affect imaging systems, e.g., owing to vignetting. Moreover, the sensor’s radiometric response may be nonlinear. These characteristics hinder photometric measurements. They are particularly annoying in image mosaicking, in which images are stitched to enhance the field of view. Mosaics suffer from seams stemming from radiometric inconsistencies between raw images. Prior methods feathered the seams but did not address their root cause. We handle these problems in a unified framework. We suggest a method for simultaneously estimating the radiometric response and the camera nonuniformity, based on a frame sequence acquired during camera motion. The estimated functions are then compensated for. This permits image mosaicking, in which no seams are apparent. There is no need to resort to dedicated seam-feathering methods. Fundamental ambiguities associated with this estimation problem are stated.",
"In many computer vision systems, it is assumed that the image brightness of a point directly reflects the scene radiance of the point. However, the assumption does not hold in most cases due to nonlinear camera response function, exposure changes, and vignetting. The effects of these factors are most visible in image mosaics and textures of 3D models where colors look inconsistent and notable boundaries exist. In this paper, we propose a full radiometric calibration algorithm that includes robust estimation of the radiometric response function, exposures, and vignetting. By decoupling the effect of vignetting from the response function estimation, we approach each process in a manner that is robust to noise and outliers. We verify our algorithm with both synthetic and real data, which shows significant improvement compared to existing methods. We apply our estimation results to radiometrically align images for seamless mosaics and 3D model textures. We also use our method to create high dynamic range (HDR) mosaics that are more representative of the scene than normal mosaics."
]
} |
1710.02081 | 2762644402 | Recent direct visual odometry and SLAM algorithms have demonstrated impressive levels of precision. However, they require a photometric camera calibration in order to achieve competitive results. Hence, the respective algorithm cannot be directly applied to an off-the-shelf-camera or to a video sequence acquired with an unknown camera. In this work we propose a method for online photometric calibration which enables to process auto exposure videos with visual odometry precisions that are on par with those of photometrically calibrated videos. Our algorithm recovers the exposure times of consecutive frames, the camera response function, and the attenuation factors of the sensor irradiance due to vignetting. Gain robust KLT feature tracks are used to obtain scene point correspondences as input to a nonlinear optimization framework. We show that our approach can reliably calibrate arbitrary video sequences by evaluating it on datasets for which full photometric ground truth is available. We further show that our calibration can improve the performance of a state-of-the-art direct visual odometry method that works solely on pixel intensities, calibrating for photometric parameters in an online fashion in realtime. | Our algorithm builds on the work of @cite_1 , applying their nonlinear estimation formulation to arbitrary video sequences using gain robust feature tracking, recovering response function, vignetting, exposure times and radiances of the tracked scene points. We track features with large radial motion across multiple frames in order to recover the vignetting reliably. In the case of vignetted video, we do not require any exposure change to calibrate for the parameters, in contrast to methods which only estimate for a response function. 
We verify the effectiveness and accuracy of our algorithm by recovering the photometric parameters of the TUM Mono VO dataset @cite_16 where full calibration ground truth is available as well as on manually disturbed artificial sequences of the ICL-NUIM dataset @cite_6 . Furthermore, we show that using our algorithm in parallel to a visual odometry or visual SLAM method can significantly enhance its performance when running on datasets with photometric disturbances. Our method can also be used to improve the results of other methods in computer vision that rely on the brightness constancy assumption, such as for example many implementations for the optical flow problem @cite_4 . | {
"cite_N": [
"@cite_16",
"@cite_1",
"@cite_6",
"@cite_4"
],
"mid": [
"2464674920",
"2139851675",
"2058535340",
"2118877769"
],
"abstract": [
"We present a dataset for evaluating the tracking accuracy of monocular visual odometry and SLAM methods. It contains 50 real-world sequences comprising more than 100 minutes of video, recorded across dozens of different environments -- ranging from narrow indoor corridors to wide outdoor scenes. All sequences contain mostly exploring camera motion, starting and ending at the same position. This allows to evaluate tracking accuracy via the accumulated drift from start to end, without requiring ground truth for the full sequence. In contrast to existing datasets, all sequences are photometrically calibrated. We provide exposure times for each frame as reported by the sensor, the camera response function, and dense lens attenuation factors. We also propose a novel, simple approach to non-parametric vignette calibration, which requires minimal set-up and is easy to reproduce. Finally, we thoroughly evaluate two existing methods (ORB-SLAM and DSO) on the dataset, including an analysis of the effect of image resolution, camera field of view, and the camera motion direction.",
"We discuss calibration and removal of “vignetting” (radial falloff) and exposure (gain) variations from sequences of images. Even when the response curve is known, spatially varying ambiguities prevent us from recovering the vignetting, exposure, and scene radiances uniquely. However, the vignetting and exposure variations can nonetheless be removed from the images without resolving these ambiguities or the previously known scale and gamma ambiguities. Applications include panoramic image mosaics, photometry for material reconstruction, image-based rendering, and preprocessing for correlation-based vision algorithms.",
"We introduce the Imperial College London and National University of Ireland Maynooth (ICL-NUIM) dataset for the evaluation of visual odometry, 3D reconstruction and SLAM algorithms that typically use RGB-D data. We present a collection of handheld RGB-D camera sequences within synthetically generated environments. RGB-D sequences with perfect ground truth poses are provided as well as a ground truth surface model that enables a method of quantitatively evaluating the final map or surface reconstruction accuracy. Care has been taken to simulate typically observed real-world artefacts in the synthetic imagery by modelling sensor noise in both RGB and depth data. While this dataset is useful for the evaluation of visual odometry and SLAM trajectory estimation, our main focus is on providing a method to benchmark the surface reconstruction accuracy which to date has been missing in the RGB-D community despite the plethora of ground truth RGB-D datasets available.",
"Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted for use in a stereo vision system."
]
} |
1710.01417 | 2762362435 | Robots are required to execute increasingly complex instructions in dynamic environments, which can lead to a disconnect between the user's intent and the robot's representation of the instructions. In this paper we present a natural language instruction grounding framework which uses formal synthesis to enable the robot to identify necessary environment assumptions for the task to be successful. These assumptions are then expressed via natural language questions referencing objects in the environment. The user is prompted to confirm or reject the assumption. We demonstrate our approach on two tabletop pick-and-place tasks. | Existing work that leveraged formal synthesis to generate verbal feedback @cite_21 relied on manually-defined groundings for actions, whereas our model learns these symbols. Other work in formally representing robot instructions uses Combinatorial Categorical Grammars to infer logical representations of navigation instructions @cite_11 . | {
"cite_N": [
"@cite_21",
"@cite_11"
],
"mid": [
"2295357975",
"46490633"
],
"abstract": [
"This paper addresses the challenge of enabling non-expert users to command robots to perform complex high-level tasks using natural language. It describes an integrated system that combines the power of formal methods with the accessibility of natural language, providing correct-by-construction controllers for high-level specifications that can be implemented, and easy-to-understand feedback to the user on those that cannot be achieved. This is among the first works to close this feedback loop, enabling users to interact with the robot in order to identify a succinct cause of failure and obtain the desired controller. The supported language and logical capabilities are illustrated using examples involving a robot assistant in a hospital.",
"As robots become more ubiquitous and capable of performing complex tasks, the importance of enabling untrained users to interact with them has increased. In response, unconstrained natural-language interaction with robots has emerged as a significant research area. We discuss the problem of parsing natural language commands to actions and control structures that can be readily implemented in a robot execution system. Our approach learns a parser based on example pairs of English commands and corresponding control language expressions. We evaluate this approach in the context of following route instructions through an indoor environment, and demonstrate that our system can learn to translate English commands into sequences of desired actions, while correctly capturing the semantic intent of statements involving complex control structures. The procedural nature of our formal representation allows a robot to interpret route instructions online while moving through a previously unknown environment."
]
} |
1710.01457 | 2951176974 | An intuition on human segmentation is that when a human is moving in a video, the video-context (e.g., appearance and motion clues) may potentially infer reasonable mask information for the whole human body. Inspired by this, based on popular deep convolutional neural networks (CNN), we explore a very-weakly supervised learning framework for human segmentation task, where only an imperfect human detector is available along with massive weakly-labeled YouTube videos. In our solution, the video-context guided human mask inference and CNN based segmentation network learning iterate to mutually enhance each other until no further improvement gains. In the first step, each video is decomposed into supervoxels by the unsupervised video segmentation. The superpixels within the supervoxels are then classified as human or non-human by graph optimization with unary energies from the imperfect human detection results and the predicted confidence maps by the CNN trained in the previous iteration. In the second step, the video-context derived human masks are used as direct labels to train CNN. Extensive experiments on the challenging PASCAL VOC 2012 semantic segmentation benchmark demonstrate that the proposed framework has already achieved superior results than all previous weakly-supervised methods with object class or bounding box annotations. In addition, by augmenting with the annotated masks from PASCAL VOC 2012, our method reaches a new state-of-the-art performance on the human segmentation task. | Unsupervised video segmentation focused on extracting coherent groups of supervoxels by considering the appearance and temporal consistency. These methods tend to over-segment an object into multiple parts and provide a mid-level space-time grouping, which cannot be directly used for object segmentation. Recent approaches proposed to upgrade the supervoxels to object-level segments @cite_30 @cite_23 . 
Their performance is often limited by the incorrect segment masks. To minimize human efforts, some image-based attempts @cite_12 @cite_11 @cite_35 @cite_32 @cite_16 @cite_29 have been devoted to learning reliable models with very few labeled data for object detection. Among these methods, the semantic relationships @cite_12 were further used to provide more constraints on selecting instances. In addition, the video-based approaches @cite_22 @cite_15 @cite_4 @cite_9 utilized motion cues and appearance correlations within video frames to augment the model training. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_4",
"@cite_22",
"@cite_29",
"@cite_9",
"@cite_32",
"@cite_23",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"2105297725",
"2081613070",
"1927251054",
"1973054923",
"",
"",
"2160160833",
"589665618",
"219040644",
"1908985308",
"1964763677",
"88868203"
],
"abstract": [
"The ubiquitous availability of Internet video offers the vision community the exciting opportunity to directly learn localized visual concepts from real-world imagery. Unfortunately, most such attempts are doomed because traditional approaches are ill-suited, both in terms of their computational characteristics and their inability to robustly contend with the label noise that plagues uncurated Internet content. We present CRANE, a weakly supervised algorithm that is specifically designed to learn under such conditions. First, we exploit the asymmetric availability of real-world training data, where small numbers of positive videos tagged with the concept are supplemented with large quantities of unreliable negative data. Second, we ensure that CRANE is robust to label noise, both in terms of tagged videos that fail to contain the concept as well as occasional negative videos that do. Finally, CRANE is highly parallelizable, making it practical to deploy at large scale without sacrificing the quality of the learned solution. Although CRANE is general, this paper focuses on segment annotation, where we show state-of-the-art pixel-level segmentation results on two datasets, one of which includes a training set of spatiotemporal segments from more than 20,000 videos.",
"Recognition is graduating from labs to real-world applications. While it is encouraging to see its potential being tapped, it brings forth a fundamental challenge to the vision researcher: scalability. How can we learn a model for any concept that exhaustively covers all its appearance variations, while requiring minimal or no human supervision for compiling the vocabulary of visual variance, gathering the training images and annotations, and learning the models? In this paper, we introduce a fully-automated approach for learning extensive models for a wide range of variations (e.g. actions, interactions, attributes and beyond) within any concept. Our approach leverages vast resources of online books to discover the vocabulary of variance, and intertwines the data collection and modeling steps to alleviate the need for explicit human supervision in training the models. Our approach organizes the visual knowledge about a concept in a convenient and useful way, enabling a variety of applications across vision and NLP. Our online system has been queried by users to learn models for several interesting concepts including breakfast, Gandhi, beautiful, etc. To date, our system has models available for over 50,000 variations within 150 concepts, and has annotated more than 10 million images with bounding boxes.",
"Despite the promising performance of conventional fully supervised algorithms, semantic segmentation has remained an important, yet challenging task. Due to the limited availability of complete annotations, it is of great interest to design solutions for semantic segmentation that take into account weakly labeled data, which is readily available at a much larger scale. Contrasting the common theme to develop a different algorithm for each type of weak annotation, in this work, we propose a unified approach that incorporates various forms of weak supervision - image level tags, bounding boxes, and partial labels - to produce a pixel-wise labeling. We conduct a rigorous evaluation on the challenging Siftflow dataset for various weakly labeled settings, and show that our approach outperforms the state-of-the-art by 12% on per-class accuracy, while maintaining comparable per-pixel accuracy.",
"Object detectors are typically trained on a large set of still images annotated by bounding-boxes. This paper introduces an approach for learning object detectors from real-world web videos known only to contain objects of a target class. We propose a fully automatic pipeline that localizes objects in a set of videos of the class and learns a detector for it. The approach extracts candidate spatio-temporal tubes based on motion segmentation and then selects one tube per video jointly over all videos. To compare to the state of the art, we test our detector on still images, i.e., Pascal VOC 2007. We observe that frames extracted from web videos can differ significantly in terms of quality to still images taken by a good camera. Thus, we formulate the learning from videos as a domain adaptation task. We show that training from a combination of weakly annotated videos and fully annotated still images using domain adaptation improves the performance of a detector trained from still images alone.",
"",
"",
"We propose a method to expand the visual coverage of training sets that consist of a small number of labeled examples using learned attributes. Our optimization formulation discovers category specific attributes as well as the images that have high confidence in terms of the attributes. In addition, we propose a method to stably capture example-specific attributes for a small sized training set. Our method adds images to a category from a large unlabeled image pool, and leads to significant improvement in category recognition accuracy evaluated on a large-scale dataset, Image Net.",
"A major challenge in video segmentation is that the foreground object may move quickly in the scene at the same time its appearance and shape evolves over time. While pairwise potentials used in graph-based algorithms help smooth labels between neighboring (super)pixels in space and time, they offer only a myopic view of consistency and can be misled by inter-frame optical flow errors. We propose a higher order supervoxel label consistency potential for semi-supervised foreground segmentation. Given an initial frame with manual annotation for the foreground object, our approach propagates the foreground region through time, leveraging bottom-up supervoxels to guide its estimates towards long-range coherent regions. We validate our approach on three challenging datasets and achieve state-of-the-art results.",
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.",
"The long-standing goal of localizing every object in an image remains elusive. Manually annotating objects is quite expensive despite crowd engineering innovations. Current state-of-the-art automatic object detectors can accurately detect at most a few objects per image. This paper brings together the latest advancements in object detection and in crowd engineering into a principled framework for accurately and efficiently localizing objects in images. The input to the system is an image to annotate and a set of annotation constraints: desired precision, utility and or human cost of the labeling. The output is a set of object annotations, informed by human feedback and computer vision. Our model seamlessly integrates multiple computer vision models with multiple sources of human input in a Markov Decision Process. We empirically validate the effectiveness of our human-in-the-loop labeling approach on the ILSVRC2014 object detection dataset.",
"We propose NEIL (Never Ending Image Learner), a computer program that runs 24 hours per day and 7 days per week to automatically extract visual knowledge from Internet data. NEIL uses a semi-supervised learning algorithm that jointly discovers common sense relationships (e.g., \"Corolla is a kind of looks similar to Car\", \"Wheel is a part of Car\") and labels instances of the given visual categories. It is an attempt to develop the world's largest visual structured knowledge base with minimum human labeling effort. As of 10th October 2013, NEIL has been continuously running for 2.5 months on 200 core cluster (more than 350K CPU hours) and has an ontology of 1152 object categories, 1034 scene categories and 87 attributes. During this period, NEIL has discovered more than 1700 relationships and has labeled more than 400K visual instances.",
"We consider the problem of semi-supervised bootstrap learning for scene categorization. Existing semi-supervised approaches are typically unreliable and face semantic drift because the learning task is under-constrained. This is primarily because they ignore the strong interactions that often exist between scene categories, such as the common attributes shared across categories as well as the attributes which make one scene different from another. The goal of this paper is to exploit these relationships and constrain the semi-supervised learning problem. For example, the knowledge that an image is an auditorium can improve labeling of amphitheaters by enforcing constraint that an amphitheater image should have more circular structures than an auditorium image. We propose constraints based on mutual exclusion, binary attributes and comparative attributes and show that they help us to constrain the learning problem and avoid semantic drift. We demonstrate the effectiveness of our approach through extensive experiments, including results on a very large dataset of one million images."
]
} |
1710.01494 | 2755255259 | Convolutional neural network (CNN) based approaches are the state of the art in various computer vision tasks including face recognition. Considerable research effort is currently being directed toward further improving CNNs by focusing on model architectures and training techniques. However, studies systematically exploring the strengths and weaknesses of existing deep models for face recognition are still relatively scarce. In this paper, we try to fill this gap and study the effects of different covariates on the verification performance of four recent CNN models using the Labelled Faces in the Wild dataset. Specifically, we investigate the influence of covariates related to image quality and model characteristics, and analyse their impact on the face verification performance of different deep CNN models. Based on comprehensive and rigorous experimentation, we identify the strengths and weaknesses of the deep learning models, and present key areas for potential future research. Our results indicate that high levels of noise, blur, missing pixels, and brightness have a detrimental effect on the verification performance of all models, whereas the impact of contrast changes and compression artefacts is limited. We find that the descriptor-computation strategy and colour information does not have a significant influence on performance. | An example of work studying the impact of various image-quality covariates on the performance of several deep CNN models was presented by Dodge and Karam in @cite_8 . Here, the authors explored the influence of noise, blur, contrast, and JPEG compression on the performance of four deep neural network models applied to the general image classification task. The authors concluded that noise and blur are the most detrimental factors. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2337024056"
],
"abstract": [
"Image quality is an important practical challenge that is often overlooked in the design of machine vision systems. Commonly, machine vision systems are trained and tested on high quality image datasets, yet in practical applications the input images can not be assumed to be of high quality. Recently, deep neural networks have obtained state-of-the-art performance on many machine vision tasks. In this paper we provide an evaluation of 4 state-of-the-art deep neural network models for image classification under quality distortions. We consider five types of quality distortions: blur, noise, contrast, JPEG, and JPEG2000 compression. We show that the existing networks are susceptible to these quality distortions, particularly to blur and noise. These results enable future work in developing deep neural networks that are more invariant to quality distortions."
]
} |
1710.01494 | 2755255259 | Convolutional neural network (CNN) based approaches are the state of the art in various computer vision tasks including face recognition. Considerable research effort is currently being directed toward further improving CNNs by focusing on model architectures and training techniques. However, studies systematically exploring the strengths and weaknesses of existing deep models for face recognition are still relatively scarce. In this paper, we try to fill this gap and study the effects of different covariates on the verification performance of four recent CNN models using the Labelled Faces in the Wild dataset. Specifically, we investigate the influence of covariates related to image quality and model characteristics, and analyse their impact on the face verification performance of different deep CNN models. Based on comprehensive and rigorous experimentation, we identify the strengths and weaknesses of the deep learning models, and present key areas for potential future research. Our results indicate that high levels of noise, blur, missing pixels, and brightness have a detrimental effect on the verification performance of all models, whereas the impact of contrast changes and compression artefacts is limited. We find that the descriptor-computation strategy and colour information does not have a significant influence on performance. | @cite_40 compared traditional machine learning models and deep learning models on equal footing by using the same data augmentation and preprocessing techniques that are commonly used with convolutional neural networks on traditional machine learning models. The authors also explored the importance of color information, but focused on the impact of color on traditional models rather than on its role in deep learning. The main finding of this work was that deep learning models have an edge over traditional machine learning models.
However, data augmentation, color information, and other preprocessing tasks were found to be important, as these approaches also helped to improve the performance of traditional machine learning models. | {
"cite_N": [
"@cite_40"
],
"mid": [
"1994002998"
],
"abstract": [
"The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. Source code and models to reproduce the experiments in the paper is made publicly available."
]
} |
1710.01494 | 2755255259 | Convolutional neural network (CNN) based approaches are the state of the art in various computer vision tasks including face recognition. Considerable research effort is currently being directed toward further improving CNNs by focusing on model architectures and training techniques. However, studies systematically exploring the strengths and weaknesses of existing deep models for face recognition are still relatively scarce. In this paper, we try to fill this gap and study the effects of different covariates on the verification performance of four recent CNN models using the Labelled Faces in the Wild dataset. Specifically, we investigate the influence of covariates related to image quality and model characteristics, and analyse their impact on the face verification performance of different deep CNN models. Based on comprehensive and rigorous experimentation, we identify the strengths and weaknesses of the deep learning models, and present key areas for potential future research. Our results indicate that high levels of noise, blur, missing pixels, and brightness have a detrimental effect on the verification performance of all models, whereas the impact of contrast changes and compression artefacts is limited. We find that the descriptor-computation strategy and colour information does not have a significant influence on performance. | An alternative view on covariate analyses involving deep models was recently presented in @cite_34 . In this work, the authors compare and evaluate several deep convolutional neural network architectures from the perspective of visual psychophysics.
In the context of the object recognition task, they use procedurally rendered images of 3-D models of objects corresponding to the ImageNet object classes to determine the "canonical views" learned by deep convolutional neural networks and to measure the networks' performance when viewing the objects from different angles and distances or when the images are subjected to deformations such as random linear occlusion of the object bounding box, Gaussian blur, and brightness changes. The main point made by the authors is that model comparison must be conducted under variations of the input data; in other words, the analysis of the robustness of the models should be used as a methodological tool for model comparison. | {
"cite_N": [
"@cite_34"
],
"mid": [
"2552164092"
],
"abstract": [
"By providing substantial amounts of data and standardized evaluation protocols, datasets in computer vision have helped fuel advances across all areas of visual recognition. But even in light of breakthrough results on recent benchmarks, it is still fair to ask if our recognition algorithms are doing as well as we think they are. The vision sciences at large make use of a very different evaluation regime known as Visual Psychophysics to study visual perception. Psychophysics is the quantitative examination of the relationships between controlled stimuli and the behavioral responses they elicit in experimental test subjects. Instead of using summary statistics to gauge performance, psychophysics directs us to construct item-response curves made up of individual stimulus responses to find perceptual thresholds, thus allowing one to identify the exact point at which a subject can no longer reliably recognize the stimulus class. In this article, we introduce a comprehensive evaluation framework for visual recognition models that is underpinned by this methodology. Over millions of procedurally rendered 3D scenes and 2D images, we compare the performance of well-known convolutional neural networks. Our results bring into question recent claims of human-like performance, and provide a path forward for correcting newly surfaced algorithmic deficiencies."
]
} |
1710.01494 | 2755255259 | Convolutional neural network (CNN) based approaches are the state of the art in various computer vision tasks including face recognition. Considerable research effort is currently being directed toward further improving CNNs by focusing on model architectures and training techniques. However, studies systematically exploring the strengths and weaknesses of existing deep models for face recognition are still relatively scarce. In this paper, we try to fill this gap and study the effects of different covariates on the verification performance of four recent CNN models using the Labelled Faces in the Wild dataset. Specifically, we investigate the influence of covariates related to image quality and model characteristics, and analyse their impact on the face verification performance of different deep CNN models. Based on comprehensive and rigorous experimentation, we identify the strengths and weaknesses of the deep learning models, and present key areas for potential future research. Our results indicate that high levels of noise, blur, missing pixels, and brightness have a detrimental effect on the verification performance of all models, whereas the impact of contrast changes and compression artefacts is limited. We find that the descriptor-computation strategy and colour information does not have a significant influence on performance. | Our work builds on the preliminary results reported in @cite_25 and @cite_0 and extends our previous results to face verification experiments on the LFW dataset and a wide range of image-quality and model-related covariates. The analysis includes a larger number of deep CNN models and is significantly more comprehensive in terms of amount of analyzed factors. | {
"cite_N": [
"@cite_0",
"@cite_25"
],
"mid": [
"2963958000",
"2511484725"
],
"abstract": [
"Deep learning based approaches have been dominating the face recognition field due to the significant performance improvement they have provided on the challenging wild datasets. These approaches have been extensively tested on such unconstrained datasets, on the Labeled Faces in the Wild and YouTube Faces, to name a few. However, their capability to handle individual appearance variations caused by factors such as head pose, illumination, occlusion, and misalignment has not been thoroughly assessed till now. In this paper, we present a comprehensive study to evaluate the performance of deep learning based face representation under several conditions including the varying head pose angles, upper and lower face occlusion, changing illumination of different strengths, and misalignment due to erroneous facial feature localization. Two successful and publicly available deep learning models, namely VGG-Face and Lightened CNN have been utilized to extract face representations. The obtained results show that although deep learning provides a powerful representation for face recognition, it can still benefit from preprocessing, for example, for pose and illumination normalization in order to achieve better performance under various conditions. Particularly, if these variations are not included in the dataset used to train the deep learning model, the role of preprocessing becomes more crucial. Experimental results also show that deep learning based representation is robust to misalignment and can tolerate facial feature localization errors up to 10% of the interocular distance.",
"Face recognition approaches that are based on deep convolutional neural networks (CNN) have been dominating the field. The performance improvements they have provided in the so called in-the-wild datasets are significant, however, their performance under image quality degradations have not been assessed, yet. This is particularly important, since in real-world face recognition applications, images may contain various kinds of degradations due to motion blur, noise, compression artifacts, color distortions, and occlusion. In this work, we have addressed this problem and analyzed the influence of these image degradations on the performance of deep CNN-based face recognition approaches using the standard LFW closed-set identification protocol. We have evaluated three popular deep CNN models, namely, the AlexNet, VGG-Face, and GoogLeNet. Results have indicated that blur, noise, and occlusion cause a significant decrease in performance, while deep CNN models are found to be robust to distortions, such as color distortions and change in color balance."
]
} |
1710.01494 | 2755255259 | Convolutional neural network (CNN) based approaches are the state of the art in various computer vision tasks including face recognition. Considerable research effort is currently being directed toward further improving CNNs by focusing on model architectures and training techniques. However, studies systematically exploring the strengths and weaknesses of existing deep models for face recognition are still relatively scarce. In this paper, we try to fill this gap and study the effects of different covariates on the verification performance of four recent CNN models using the Labelled Faces in the Wild dataset. Specifically, we investigate the influence of covariates related to image quality and model characteristics, and analyse their impact on the face verification performance of different deep CNN models. Based on comprehensive and rigorous experimentation, we identify the strengths and weaknesses of the deep learning models, and present key areas for potential future research. Our results indicate that high levels of noise, blur, missing pixels, and brightness have a detrimental effect on the verification performance of all models, whereas the impact of contrast changes and compression artefacts is limited. We find that the descriptor-computation strategy and colour information does not have a significant influence on performance. | The research presented in @cite_42 also belongs to the group of model-analysis work. Here, the authors present an evaluation of the performance of their convolutional neural network in the presence of image transformations and deformations in the context of unsupervised image representation learning. | {
"cite_N": [
"@cite_42"
],
"mid": [
"2148349024"
],
"abstract": [
"Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101)."
]
} |
1710.01494 | 2755255259 | Convolutional neural network (CNN) based approaches are the state of the art in various computer vision tasks including face recognition. Considerable research effort is currently being directed toward further improving CNNs by focusing on model architectures and training techniques. However, studies systematically exploring the strengths and weaknesses of existing deep models for face recognition are still relatively scarce. In this paper, we try to fill this gap and study the effects of different covariates on the verification performance of four recent CNN models using the Labelled Faces in the Wild dataset. Specifically, we investigate the influence of covariates related to image quality and model characteristics, and analyse their impact on the face verification performance of different deep CNN models. Based on comprehensive and rigorous experimentation, we identify the strengths and weaknesses of the deep learning models, and present key areas for potential future research. Our results indicate that high levels of noise, blur, missing pixels, and brightness have a detrimental effect on the verification performance of all models, whereas the impact of contrast changes and compression artefacts is limited. We find that the descriptor-computation strategy and colour information does not have a significant influence on performance. | They conclude that combining several sources of image transformations can allow convolutional neural networks to better learn general image representations in an unsupervised manner. Similarly to that work, in this paper we study the effects of image deformations on the learned image representations. However, different from @cite_42 , we assess several convolutional neural networks trained in a supervised manner. | {
"cite_N": [
"@cite_42"
],
"mid": [
"2148349024"
],
"abstract": [
"Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101)."
]
} |
1710.01494 | 2755255259 | Convolutional neural network (CNN) based approaches are the state of the art in various computer vision tasks including face recognition. Considerable research effort is currently being directed toward further improving CNNs by focusing on model architectures and training techniques. However, studies systematically exploring the strengths and weaknesses of existing deep models for face recognition are still relatively scarce. In this paper, we try to fill this gap and study the effects of different covariates on the verification performance of four recent CNN models using the Labelled Faces in the Wild dataset. Specifically, we investigate the influence of covariates related to image quality and model characteristics, and analyse their impact on the face verification performance of different deep CNN models. Based on comprehensive and rigorous experimentation, we identify the strengths and weaknesses of the deep learning models, and present key areas for potential future research. Our results indicate that high levels of noise, blur, missing pixels, and brightness have a detrimental effect on the verification performance of all models, whereas the impact of contrast changes and compression artefacts is limited. We find that the descriptor-computation strategy and colour information does not have a significant influence on performance. | Another work from this group was presented by Zeiler and Fergus in @cite_12 . Here, the authors studied the effects of image covariates including rotation, translation, and scale in the context of interpreting and understanding the internal representations produced by deep convolutional neural networks trained on the ImageNet object classification task. In their experiments, the invariance of their convolutional neural network to the studied covariates was found to increase significantly with network depth. 
They also found the deep neural network features to increase in discriminative power with network depth in the context of transfer learning. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2952186574"
],
"abstract": [
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets."
]
} |
1710.01494 | 2755255259 | Convolutional neural network (CNN) based approaches are the state of the art in various computer vision tasks including face recognition. Considerable research effort is currently being directed toward further improving CNNs by focusing on model architectures and training techniques. However, studies systematically exploring the strengths and weaknesses of existing deep models for face recognition are still relatively scarce. In this paper, we try to fill this gap and study the effects of different covariates on the verification performance of four recent CNN models using the Labelled Faces in the Wild dataset. Specifically, we investigate the influence of covariates related to image quality and model characteristics, and analyse their impact on the face verification performance of different deep CNN models. Based on comprehensive and rigorous experimentation, we identify the strengths and weaknesses of the deep learning models, and present key areas for potential future research. Our results indicate that high levels of noise, blur, missing pixels, and brightness have a detrimental effect on the verification performance of all models, whereas the impact of contrast changes and compression artefacts is limited. We find that the descriptor-computation strategy and colour information does not have a significant influence on performance. | More recently, Lenc and Vedaldi @cite_16 evaluate how well the properties of equivariance, invariance, and equivalence are preserved in the presence of image transformations by various image representation models, including deep convolutional neural networks. The transformations studied include rotation, mirroring, and affine transformations of the input images.
Among their findings, representations based on deep convolutional neural networks were found to be better than the other studied representations at learning either invariance or equivariance to the studied transformations, depending on the training objective. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2949074271"
],
"abstract": [
"Despite the importance of image representations such as histograms of oriented gradients and deep Convolutional Neural Networks (CNN), our theoretical understanding of them remains limited. Aiming at filling this gap, we investigate three key mathematical properties of representations: equivariance, invariance, and equivalence. Equivariance studies how transformations of the input image are encoded by the representation, invariance being a special case where a transformation has no effect. Equivalence studies whether two representations, for example two different parametrisations of a CNN, capture the same visual information or not. A number of methods to establish these properties empirically are proposed, including introducing transformation and stitching layers in CNNs. These methods are then applied to popular representations to reveal insightful aspects of their structure, including clarifying at which layers in a CNN certain geometric invariances are achieved. While the focus of the paper is theoretical, direct applications to structured-output regression are demonstrated too."
]
} |
1710.01453 | 2545239134 | Sketch portrait generation benefits a wide range of applications such as digital entertainment and law enforcement. Although plenty of efforts have been dedicated to this task, several issues still remain unsolved for generating vivid and detail-preserving personal sketch portraits. For example, quite a few artifacts may exist in synthesizing hairpins and glasses, and textural details may be lost in the regions of hair or mustache. Moreover, the generalization ability of current systems is somewhat limited since they usually require elaborately collecting a dictionary of examples or carefully tuning features components. In this paper, we present a novel representation learning framework that generates an end-to-end photo-sketch mapping through structure and texture decomposition. In the training stage, we first decompose the input face photo into different components according to their representational contents (i.e., structural and textural parts) by using a pre-trained convolutional neural network (CNN). Then, we utilize a branched fully CNN for learning structural and textural representations, respectively. In addition, we design a sorted matching mean square error metric to measure texture patterns in the loss function. In the stage of sketch rendering, our approach automatically generates structural and textural representations for the input photo and produces the final result via a probabilistic fusion scheme. Extensive experiments on several challenging benchmarks suggest that our approach outperforms example-based synthesis algorithms in terms of both perceptual and objective metrics. In addition, the proposed method also has better generalization ability across data set without additional training. | Most works in sketch portrait generation focus on two kinds of sketches, namely profile sketches @cite_17 and shading sketches @cite_18 . 
Compared with the former, the shading sketches not only use lines to reflect the overall profiles, but also capture the textural parts via shading. Thus, the shading sketches are more challenging to model. We mainly study the automatic generation of shading sketches in this paper. | {
"cite_N": [
"@cite_18",
"@cite_17"
],
"mid": [
"2149481809",
"2115592720"
],
"abstract": [
"Automatic retrieval of face images from police mug-shot databases is critically important for law enforcement agencies. It can effectively help investigators to locate or narrow down potential suspects. However, in many cases, a photo image of a suspect is not available and the best substitute is often a sketch drawing based on the recollection of an eyewitness. We present a novel photo retrieval system using face sketches. By transforming a photo image into a sketch, we reduce the difference between photo and sketch significantly, thus allowing effective matching between the two. Experiments over a data set containing 188 people clearly demonstrate the efficacy of the algorithm.",
"This paper presents a hierarchical-compositional model of human faces, as a three-layer AND-OR graph to account for the structural variabilities over multiple resolutions. In the AND-OR graph, an AND-node represents a decomposition of certain graphical structure, which expands to a set of OR-nodes with associated relations; an OR-node serves as a switch variable pointing to alternative AND-nodes. Faces are then represented hierarchically: The first layer treats each face as a whole, the second layer refines the local facial parts jointly as a set of individual templates, and the third layer further divides the face into 15 zones and models detail facial features such as eye corners, marks, or wrinkles. Transitions between the layers are realized by measuring the minimum description length (MDL) given the complexity of an input face image. Diverse face representations are formed by drawing from dictionaries of global faces, parts, and skin detail features. A sketch captures the most informative part of a face in a much more concise and potentially robust representation. However, generating good facial sketches is extremely challenging because of the rich facial details and large structural variations, especially in the high-resolution images. The representing power of our generative model is demonstrated by reconstructing high-resolution face images and generating the cartoon facial sketches. Our model is useful for a wide variety of applications, including recognition, nonphotorealisitc rendering, superresolution, and low-bit rate face coding."
]
} |
1710.01453 | 2545239134 | Sketch portrait generation benefits a wide range of applications such as digital entertainment and law enforcement. Although plenty of efforts have been dedicated to this task, several issues still remain unsolved for generating vivid and detail-preserving personal sketch portraits. For example, quite a few artifacts may exist in synthesizing hairpins and glasses, and textural details may be lost in the regions of hair or mustache. Moreover, the generalization ability of current systems is somewhat limited since they usually require elaborately collecting a dictionary of examples or carefully tuning features components. In this paper, we present a novel representation learning framework that generates an end-to-end photo-sketch mapping through structure and texture decomposition. In the training stage, we first decompose the input face photo into different components according to their representational contents (i.e., structural and textural parts) by using a pre-trained convolutional neural network (CNN). Then, we utilize a branched fully CNN for learning structural and textural representations, respectively. In addition, we design a sorted matching mean square error metric to measure texture patterns in the loss function. In the stage of sketch rendering, our approach automatically generates structural and textural representations for the input photo and produces the final result via a probabilistic fusion scheme. Extensive experiments on several challenging benchmarks suggest that our approach outperforms example-based synthesis algorithms in terms of both perceptual and objective metrics. In addition, the proposed method also has better generalization ability across data set without additional training. | Several methods add a refinement step to recover vital details of the input photo to improve the visual quality and face recognition performance. 
@cite_27 applied a support vector regression (SVR) based model to synthesize the high-frequency information. Similarly, @cite_23 proposed a method called SNS-SRE with two steps, i.e., sparse neighbor selection (SNS) to get an initial estimation and sparse representation based enhancement (SRE) for further improvement. Nevertheless, these post-processing steps may introduce side effects, e.g., the results of SNS-SRE deviate from the sketch style and look more like natural gray-level images. | {
"cite_N": [
"@cite_27",
"@cite_23"
],
"mid": [
"2010037226",
"1981902088"
],
"abstract": [
"The existing face sketch-photo synthesis methods trend to lose some vital details more or less. In this paper, we propose a novel sketch-photo synthesis approach based on support vector regression (SVR) to handle this difficulty. First, we utilize an existing method to acquire the initial estimate of the synthesized image. Then, the final synthesized image is obtained by combining the initial estimate and the SVR based high frequency information together to further enhance the quality of synthesized image. Experimental results on the benchmark database and our new constructed database demonstrate that the proposed method can achieve significant improvement on perceptual quality. Moreover, the synthesized face images can obtain higher recognition rate when used in retrieval system.",
"Sketch-photo synthesis plays an important role in sketch-based face photo retrieval and photo-based face sketch retrieval systems. In this paper, we propose an automatic sketch-photo synthesis and retrieval algorithm based on sparse representation. The proposed sketch-photo synthesis method works at patch level and is composed of two steps: sparse neighbor selection (SNS) for an initial estimate of the pseudoimage (pseudosketch or pseudophoto) and sparse-representation-based enhancement (SRE) for further improving the quality of the synthesized image. SNS can find closely related neighbors adaptively and then generate an initial estimate for the pseudoimage. In SRE, a coupled sparse representation model is first constructed to learn the mapping between sketch patches and photo patches, and a patch-derivative-based sparse representation method is subsequently applied to enhance the quality of the synthesized photos and sketches. Finally, four retrieval modes, namely, sketch-based, photo-based, pseudosketch-based, and pseudophoto-based retrieval are proposed, and a retrieval algorithm is developed by using sparse representation. Extensive experimental results illustrate the effectiveness of the proposed face sketch-photo synthesis and retrieval algorithms."
]
} |
1710.01453 | 2545239134 | Sketch portrait generation benefits a wide range of applications such as digital entertainment and law enforcement. Although plenty of efforts have been dedicated to this task, several issues still remain unsolved for generating vivid and detail-preserving personal sketch portraits. For example, quite a few artifacts may exist in synthesizing hairpins and glasses, and textural details may be lost in the regions of hair or mustache. Moreover, the generalization ability of current systems is somewhat limited since they usually require elaborately collecting a dictionary of examples or carefully tuning features components. In this paper, we present a novel representation learning framework that generates an end-to-end photo-sketch mapping through structure and texture decomposition. In the training stage, we first decompose the input face photo into different components according to their representational contents (i.e., structural and textural parts) by using a pre-trained convolutional neural network (CNN). Then, we utilize a branched fully CNN for learning structural and textural representations, respectively. In addition, we design a sorted matching mean square error metric to measure texture patterns in the loss function. In the stage of sketch rendering, our approach automatically generates structural and textural representations for the input photo and produces the final result via a probabilistic fusion scheme. Extensive experiments on several challenging benchmarks suggest that our approach outperforms example-based synthesis algorithms in terms of both perceptual and objective metrics. In addition, the proposed method also has better generalization ability across data set without additional training. | The convolutional neural network (CNN) has been widely used in computer vision. Its typical structure contains a series of convolutional layers, pooling layers, and fully connected layers.
Recently, CNNs have achieved great success in large-scale object localization @cite_30 @cite_5 , detection @cite_29 , recognition @cite_14 @cite_28 @cite_16 @cite_32 , and classification @cite_1 @cite_41 . | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_28",
"@cite_41",
"@cite_29",
"@cite_1",
"@cite_32",
"@cite_5",
"@cite_16"
],
"mid": [
"2963542991",
"2192598490",
"2109255472",
"",
"2068730032",
"2163605009",
"1951304353",
"",
"2353169560"
],
"abstract": [
"Abstract: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.",
"Understanding human activity is very challenging even with the recently developed 3D depth sensors. To solve this problem, this work investigates a novel deep structured model, which adaptively decomposes an activity instance into temporal parts using the convolutional neural networks. Our model advances the traditional deep learning approaches in two aspects. First, we incorporate latent temporal structure into the deep model, accounting for large temporal variations of diverse human activities. In particular, we utilize the latent variables to decompose the input activity into a number of temporally segmented sub-activities, and accordingly feed them into the parts (i.e. sub-networks) of the deep architecture. Second, we incorporate a radius---margin bound as a regularization term into our deep model, which effectively improves the generalization performance for classification. For model training, we propose a principled learning algorithm that iteratively (i) discovers the optimal latent variables (i.e. the ways of activity decomposition) for all training instances, (ii) updates the classifiers based on the generated features, and (iii) updates the parameters of multi-layer neural networks. In the experiments, our approach is validated on several complex scenarios for human activity recognition and demonstrates superior performances over other state-of-the-art approaches.",
"Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 @math 224) input image. This requirement is “artificial” and may reduce the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with another pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 @math faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.",
"",
"Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012). The winning model on the localization sub-task was a network that predicts a single bounding box and a confidence score for each object category in the image. Such a model captures the whole-image context around the objects but cannot handle multiple instances of the same object in the image without naively replicating the number of outputs for each instance. In this work, we propose a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest. The model naturally handles a variable number of instances for each class and allows for cross-class generalization at the highest levels of the network. We are able to obtain competitive recognition performance on VOC2007 and ILSVRC2012, while using only the top few predicted locations in each image and a small number of neural network evaluations.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.",
"Extracting informative image features and learning effective approximate hashing functions are two crucial steps in image retrieval. Conventional methods often study these two steps separately, e.g., learning hash functions from a predefined hand-crafted feature space. Meanwhile, the bit lengths of output hashing codes are preset in the most previous methods, neglecting the significance level of different bits and restricting their practical flexibility. To address these issues, we propose a supervised learning framework to generate compact and bit-scalable hashing codes directly from raw images. We pose hashing learning as a problem of regularized similarity learning. In particular, we organize the training images into a batch of triplet samples, each sample containing two images with the same label and one with a different label. With these triplet samples, we maximize the margin between the matched pairs and the mismatched pairs in the Hamming space. In addition, a regularization term is introduced to enforce the adjacency consistency, i.e., images of similar appearances should have similar codes. The deep convolutional neural network is utilized to train the model in an end-to-end fashion, where discriminative image features and hash functions are simultaneously optimized. Furthermore, each bit of our hashing codes is unequally weighted, so that we can manipulate the code lengths by truncating the insignificant bits. Our framework outperforms state-of-the-arts on public benchmarks of similar image search and also achieves promising results in the application of person re-identification in surveillance. It is also shown that the generated bit-scalable hashing codes well preserve the discriminative powers with shorter code lengths.",
"",
"Cross-domain visual data matching is one of the fundamental problems in many real-world vision tasks, e.g., matching persons across ID photos and surveillance videos. Conventional approaches to this problem usually involves two steps: i) projecting samples from different domains into a common space, and ii) computing (dis-)similarity in this space based on a certain distance. In this paper, we present a novel pairwise similarity measure that advances existing models by i) expanding traditional linear projections into affine transformations and ii) fusing affine Mahalanobis distance and Cosine similarity by a data-driven combination. Moreover, we unify our similarity measure with feature representation learning via deep convolutional neural networks. Specifically, we incorporate the similarity measure matrix into the deep architecture, enabling an end-to-end way of model optimization. We extensively evaluate our generalized similarity model in several challenging cross-domain matching tasks: person re-identification under different views and face verification over different modalities (i.e., faces from still images and videos, older and younger faces, and sketch and photo portraits). The experimental results demonstrate superior performance of our model over other state-of-the-art methods."
]
} |
1710.01453 | 2545239134 | Sketch portrait generation benefits a wide range of applications such as digital entertainment and law enforcement. Although plenty of efforts have been dedicated to this task, several issues still remain unsolved for generating vivid and detail-preserving personal sketch portraits. For example, quite a few artifacts may exist in synthesizing hairpins and glasses, and textural details may be lost in the regions of hair or mustache. Moreover, the generalization ability of current systems is somewhat limited since they usually require elaborately collecting a dictionary of examples or carefully tuning feature components. In this paper, we present a novel representation learning framework that generates an end-to-end photo-sketch mapping through structure and texture decomposition. In the training stage, we first decompose the input face photo into different components according to their representational contents (i.e., structural and textural parts) by using a pre-trained convolutional neural network (CNN). Then, we utilize a branched fully convolutional network for learning structural and textural representations, respectively. In addition, we design a sorted matching mean square error metric to measure texture patterns in the loss function. In the stage of sketch rendering, our approach automatically generates structural and textural representations for the input photo and produces the final result via a probabilistic fusion scheme. Extensive experiments on several challenging benchmarks suggest that our approach outperforms example-based synthesis algorithms in terms of both perceptual and objective metrics. In addition, the proposed method also has better generalization ability across data sets without additional training. | Researchers also adopted CNNs to produce dense predictions. An intuitive strategy is to attach the output maps to the topmost layer for directly learning global predictions.
For example, @cite_21 adopted this strategy for generic object extraction, and @cite_37 applied a similar configuration for pedestrian parsing. Nevertheless, this strategy often produces coarse outputs, since the number of network parameters grows dramatically when enlarging the output maps. To produce finer outputs, @cite_30 applied another network that refined coarse predictions using information from local patches in the depth prediction task. A similar idea was also proposed by @cite_11 , which separately learns global and local processes and uses a fusion network to fuse them into the final estimation of the surface normal. Surprisingly, the global information can be omitted in some situations, e.g., @cite_42 @cite_24 applied a CNN with only three convolutional layers for image super-resolution. Though this network has a small receptive field and is trained on local patch samples, it works well thanks to the strict alignment of samples in this specific task. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_21",
"@cite_42",
"@cite_24",
"@cite_11"
],
"mid": [
"2963542991",
"2153410696",
"1993164181",
"54257720",
"",
"1899309388"
],
"abstract": [
"Abstract: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.",
"We propose a new Deep Decompositional Network (DDN) for parsing pedestrian images into semantic regions, such as hair, head, body, arms, and legs, where the pedestrians can be heavily occluded. Unlike existing methods based on template matching or Bayesian inference, our approach directly maps low-level visual features to the label maps of body parts with DDN, which is able to accurately estimate complex pose variations with good robustness to occlusions and background clutters. DDN jointly estimates occluded regions and segments body parts by stacking three types of hidden layers: occlusion estimation layers, completion layers, and decomposition layers. The occlusion estimation layers estimate a binary mask, indicating which part of a pedestrian is invisible. The completion layers synthesize low-level features of the invisible part from the original features and the occlusion mask. The decomposition layers directly transform the synthesized visual features to label maps. We devise a new strategy to pre-train these hidden layers, and then fine-tune the entire network using the stochastic gradient descent. Experimental results show that our approach achieves better segmentation accuracy than the state-of-the-art methods on pedestrian images with or without occlusions. Another important contribution of this paper is that it provides a large scale benchmark human parsing dataset that includes 3,673 annotated samples collected from 171 surveillance videos. It is 20 times larger than existing public datasets.",
"In this paper, we investigate a novel reconfigurable part-based model, namely And-Or graph model, to recognize object shapes in images. Our proposed model consists of four layers: leaf-nodes at the bottom are local classifiers for detecting contour fragments; or-nodes above the leaf-nodes function as the switches to activate their child leaf-nodes, making the model reconfigurable during inference; and-nodes in a higher layer capture holistic shape deformations; one root-node on the top, which is also an or-node, activates one of its child and-nodes to deal with large global variations (e.g. different poses and views). We propose a novel structural optimization algorithm to discriminatively train the And-Or model from weakly annotated data. This algorithm iteratively determines the model structures (e.g. the nodes and their layouts) along with the parameter learning. On several challenging datasets, our model demonstrates the effectiveness to perform robust shape-based object detection against background clutter and outperforms the other state-of-the-art approaches. We also release a new shape database with annotations, which includes more than @math challenging shape instances, for recognition and detection.",
"We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.",
"",
"In the past few years, convolutional neural nets (CNN) have shown incredible promise for learning visual representations. In this paper, we use CNNs for the task of predicting surface normals from a single image. But what is the right architecture? We propose to build upon the decades of hard work in 3D scene understanding to design a new CNN architecture for the task of surface normal estimation. We show that incorporating several constraints (man-made, Manhattan world) and meaningful intermediate representations (room layout, edge labels) in the architecture leads to state of the art performance on surface normal estimation. We also show that our network is quite robust and show state of the art results on other datasets as well without any fine-tuning."
]
} |
1710.01370 | 2764026153 | With the rising popularity of Augmented and Virtual Reality, there is a need for representing humans as virtual avatars in various application domains ranging from remote telepresence and games to medical applications. Besides explicitly modelling 3D avatars, sensing approaches that create person-specific avatars are becoming popular. However, affordable solutions typically suffer from low visual quality, and professional solutions are often too expensive to be deployed in nonprofit projects. We present an open-source project, BodyDigitizer, which aims at providing both build instructions and configuration software for a high-resolution photogrammetry-based 3D body scanner. Our system encompasses up to 96 Raspberry Pi cameras, active LED lighting, a sturdy frame construction and open-source configuration software. We demonstrate the applicability of the body scanner in a nonprofit Mixed Reality health project. The detailed build instructions and software are available at this http URL. | proposed to scan a human body with a single Kinect @cite_7 , but the results were of low quality as Cui's approach did not handle non-rigid movement or use color information. @cite_10 fitted a SCAPE model @cite_21 to the 3D and image silhouette data from a Kinect, but failed to reproduce personalized details. used multiple stationary depth-sensors in combination with a turntable @cite_1 . simplified this setup to a single depth-sensor @cite_20 . For this turntable-based approach, open-source build instructions have been made available @cite_8 . @cite_16 relaxed scanning requirements further and presented an approach that works with a handheld scanner. However, all those approaches result in long acquisition times, which can have a negative impact on the scanning result (due to human motion) or be unacceptable for a given application scenario (e.g., scanning patients with bodily disorders for medical applications). | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_16",
"@cite_10",
"@cite_20"
],
"mid": [
"1977216098",
"",
"1989191365",
"2008550072",
"2001358217",
"2158179171",
"1846486145"
],
"abstract": [
"We describe a method for 3D object scanning by aligning depth and color scans which were taken from around an object with a Kinect camera. Our easy-to-use, cost-effective scanning solution could make 3D scanning technology more accessible to everyday users and turn 3D shape models into a much more widely used asset for many new applications, for instance in community web platforms or online shopping.",
"",
"We introduce the SCAPE method (Shape Completion and Animation for PEople)---a data-driven method for building a human shape model that spans variation in both subject shape and pose. The method is based on a representation that incorporates both articulated and non-rigid deformations. We learn a pose deformation model that derives the non-rigid surface deformation as a function of the pose of the articulated skeleton. We also learn a separate model of variation based on body shape. Our two models can be combined to produce 3D surface models with realistic muscle deformation for different people in different poses, when neither appear in the training set. We show how the model can be used for shape completion --- generating a complete surface mesh given a limited set of markers specifying the target shape. We present applications of shape completion to partial view completion and motion capture animation. In particular, our method is capable of constructing a high-quality animated surface model of a moving person, with realistic muscle deformation, using just a single static scan and a marker motion capture sequence of the person.",
"Depth camera such as Microsoft Kinect, is much cheaper than conventional 3D scanning devices, and thus it can be acquired for everyday users easily. However, the depth data captured by Kinect over a certain distance is of extreme low quality. In this paper, we present a novel scanning system for capturing 3D full human body models by using multiple Kinects. To avoid the interference phenomena, we use two Kinects to capture the upper part and lower part of a human body respectively without overlapping region. A third Kinect is used to capture the middle part of the human body from the opposite direction. We propose a practical approach for registering the various body parts of different views under non-rigid deformation. First, a rough mesh template is constructed and used to deform successive frames pairwisely. Second, global alignment is performed to distribute errors in the deformation space, which can solve the loop closure problem efficiently. Misalignment caused by complex occlusion can also be handled reasonably by our global alignment algorithm. The experimental results have shown the efficiency and applicability of our system. Our system obtains impressive results in a few minutes with low price devices, thus is practically useful for generating personalized avatars for everyday users. Our system has been used for 3D human animation and virtual try on, and can further facilitate a range of home-oriented virtual reality (VR) applications.",
"In this paper we present a novel autonomous pipeline to build a personalized parametric model (pose-driven avatar) using a single depth sensor. Our method first captures a few high-quality scans of the user rotating herself at multiple poses from different views. We fit each incomplete scan using template fitting techniques with a generic human template, and register all scans to every pose using global consistency constraints. After registration, these watertight models with different poses are used to train a parametric model in a fashion similar to the SCAPE method. Once the parametric model is built, it can be used as an animitable avatar or more interestingly synthesizing dynamic 3D models from single-view depth videos. Experimental results demonstrate the effectiveness of our system to produce dynamic models.",
"The 3D shape of the human body is useful for applications in fitness, games and apparel. Accurate body scanners, however, are expensive, limiting the availability of 3D body models. We present a method for human shape reconstruction from noisy monocular image and range data using a single inexpensive commodity sensor. The approach combines low-resolution image silhouettes with coarse range data to estimate a parametric model of the body. Accurate 3D shape estimates are obtained by combining multiple monocular views of a person moving in front of the sensor. To cope with varying body pose, we use a SCAPE body model which factors 3D body shape and pose variations. This enables the estimation of a single consistent shape while allowing pose to vary. Additionally, we describe a novel method to minimize the distance between the projected 3D body contour and the image silhouette that uses analytic derivatives of the objective function. We propose a simple method to estimate standard body measurements from the recovered SCAPE model and show that the accuracy of our method is competitive with commercial body scanning systems costing orders of magnitude more.",
"We present a novel scanning system for capturing a full 3D human body model using just a single depth camera and no auxiliary equipment. We claim that data captured from a single Kinect is sufficient to produce a good quality full 3D human model. In this setting, the challenges we face are the sensor's low resolution with random noise and the subject's non-rigid movement when capturing the data. To overcome these challenges, we develop an improved super-resolution algorithm that takes color constraints into account. We then align the super-resolved scans using a combination of automatic rigid and non-rigid registration. As the system is of low price and obtains impressive results in several minutes, full 3D human body scanning technology can now become more accessible to everyday users at home."
]
} |
1710.01370 | 2764026153 | With the rising popularity of Augmented and Virtual Reality, there is a need for representing humans as virtual avatars in various application domains ranging from remote telepresence and games to medical applications. Besides explicitly modelling 3D avatars, sensing approaches that create person-specific avatars are becoming popular. However, affordable solutions typically suffer from low visual quality, and professional solutions are often too expensive to be deployed in nonprofit projects. We present an open-source project, BodyDigitizer, which aims at providing both build instructions and configuration software for a high-resolution photogrammetry-based 3D body scanner. Our system encompasses up to 96 Raspberry Pi cameras, active LED lighting, a sturdy frame construction and open-source configuration software. We demonstrate the applicability of the body scanner in a nonprofit Mixed Reality health project. The detailed build instructions and software are available at this http URL. | There have also been recent advances in real-time capture @cite_12 . However, even these state-of-the-art approaches typically suffer from visible spatial and temporal artifacts, which might be undesirable for various applications. Further, for some applications the hard real-time constraints might not be necessary. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2532511219"
],
"abstract": [
"We present an end-to-end system for augmented and virtual reality telepresence, called Holoportation. Our system demonstrates high-quality, real-time 3D reconstructions of an entire space, including people, furniture and objects, using a set of new depth cameras. These 3D models can also be transmitted in real-time to remote users. This allows users wearing virtual or augmented reality displays to see, hear and interact with remote participants in 3D, almost as if they were present in the same physical space. From an audio-visual perspective, communicating and interacting with remote users edges closer to face-to-face communication. This paper describes the Holoportation technical system in full, its key interactive capabilities, the application scenarios it enables, and an initial qualitative study of using this new communication medium."
]
} |
1710.01168 | 2761785940 | Fine-grained image classification is to recognize hundreds of subcategories in each basic-level category. Existing methods employ discriminative localization to find the key distinctions between similar subcategories. However, they generally have two limitations: 1) discriminative localization relies on region proposal methods to hypothesize the locations of discriminative regions, which are time-consuming and the bottleneck of improving classification speed and 2) the training of discriminative localization depends on object or part annotations which are heavily labor-consuming and the obstacle of marching toward practical application. It is highly challenging to address the two limitations simultaneously , while existing methods only focus on one of them. Therefore, we propose a weakly supervised discriminative localization approach (WSDL) for fast fine-grained image classification to address the two limitations at the same time, and its main advantages are: 1) multi-level attention guided localization learning is proposed to localize discriminative regions with different focuses automatically, without using object and part annotations, avoiding the labor consumption. Different level attentions focus on different characteristics of the image, which are complementary and boost classification accuracy and 2) @math -pathway end-to-end discriminative localization network is proposed to improve classification speed, which simultaneously localizes multiple different discriminative regions for one image to boost classification accuracy, and shares full-image convolutional features generated by a region proposal network to accelerate the process of generating region proposals as well as reduce the computation of convolutional operation. Both are jointly employed to simultaneously improve classification speed and eliminate dependence on object and part annotations. 
Compared with state-of-the-art methods on two widely used fine-grained image classification data sets, our WSDL approach achieves the best classification accuracy and efficiency. | Fine-grained image classification is one of the most fundamental and challenging open problems in computer vision, and has drawn extensive attention in both academia and industry. Early works @cite_24 @cite_55 focus on the design of feature representations and classifiers based on basic low-level descriptors, such as SIFT @cite_15 . However, the performance of these methods is limited by the handcrafted features. Recently, deep learning has achieved great success in the domains of computer vision, speech recognition, natural language processing and so on. Inspired by this, many researchers have begun to study the problem of fine-grained image classification with deep learning @cite_10 @cite_57 @cite_14 @cite_20 @cite_36 , and have achieved great progress. | {
"cite_N": [
"@cite_14",
"@cite_36",
"@cite_55",
"@cite_24",
"@cite_57",
"@cite_15",
"@cite_10",
"@cite_20"
],
"mid": [
"56385144",
"2479109623",
"1988898685",
"2159797991",
"2604134068",
"2151103935",
"1928906481",
"2289708887"
],
"abstract": [
"Semantic part localization can facilitate fine-grained categorization by explicitly isolating subtle appearance differences associated with specific object parts. Methods for pose-normalized representations have been proposed, but generally presume bounding box annotations at test time due to the difficulty of object detection. We propose a model for fine-grained categorization that overcomes these limitations by leveraging deep convolutional features computed on bottom-up region proposals. Our method learns whole-object and part detectors, enforces learned geometric constraints between them, and predicts a fine-grained category from a pose-normalized representation. Experiments on the Caltech-UCSD bird dataset confirm that our method outperforms state-of-the-art fine-grained categorization methods in an end-to-end evaluation without requiring a bounding box at test time.",
"Recognizing fine-grained sub-categories such as birds and dogs is extremely challenging due to the highly localized and subtle differences in some specific parts. Most previous works rely on object part level annotations to build part-based representation, which is demanding in practical applications. This paper proposes an automatic fine-grained recognition approach which is free of any object part annotation at both training and testing stages. Our method explores a unified framework based on two steps of deep filter response picking. The first picking step is to find distinctive filters which respond to specific patterns significantly and consistently, and learn a set of part detectors via iteratively alternating between new positive sample mining and part model retraining. The second picking step is to pool deep filter responses via spatially weighted combination of Fisher Vectors. We conditionally pick deep filter responses to encode them into the final representation, which considers the importance of filter responses themselves. Integrating all these techniques produces a much more powerful framework, and experiments conducted on CUB-200-2011 and Stanford Dogs demonstrate the superiority of our proposed algorithm over the existing methods.",
"This paper targets fine-grained image categorization by learning a category-specific dictionary for each category and a shared dictionary for all the categories. Such category-specific dictionaries encode subtle visual differences among different categories, while the shared dictionary encodes common visual patterns among all the categories. To this end, we impose incoherence constraints among the different dictionaries in the objective of feature coding. In addition, to make the learnt dictionary stable, we also impose the constraint that each dictionary should be self-incoherent. Our proposed dictionary learning formulation not only applies to fine-grained classification, but also improves conventional basic-level object categorization and other tasks such as event recognition. Experimental results on five data sets show that our method can outperform the state-of-the-art fine-grained image categorization frameworks as well as sparse coding based dictionary learning frameworks. All these results demonstrate the effectiveness of our method.",
"In image classification tasks, one of the most successful algorithms is the bag-of-features (BoFs) model. Although the BoF model has many advantages, such as simplicity, generality, and scalability, it still suffers from several drawbacks, including the limited semantic description of local descriptors, lack of robust structures upon single visual words, and missing of efficient spatial weighting. To overcome these shortcomings, various techniques have been proposed, such as extracting multiple descriptors, spatial context modeling, and interest region detection. Though they have been proven to improve the BoF model to some extent, there still lacks a coherent scheme to integrate each individual module together. To address the problems above, we propose a novel framework with spatial pooling of complementary features. Our model expands the traditional BoF model on three aspects. First, we propose a new scheme for combining texture and edge-based local features together at the descriptor extraction level. Next, we build geometric visual phrases to model spatial context upon complementary features for midlevel image representation. Finally, based on a smoothed edgemap, a simple and effective spatial weighting scheme is performed to capture the image saliency. We test the proposed framework on several benchmark data sets for image classification. The extensive results show the superior performance of our algorithm over the state-of-the-art methods.",
"Fine-grained image classification is challenging due to the large intra-class variance and small inter-class variance, aiming at recognizing hundreds of sub-categories belonging to the same basic-level category. Since two different sub-categories are distinguished only by the subtle differences in some specific parts, semantic part localization is crucial for fine-grained image classification. Most previous works improve the accuracy by looking for the semantic parts, but rely heavily upon the use of the object or part annotations of images, whose labeling is costly. Recently, some researchers have begun to focus on recognizing sub-categories via weakly supervised part detection instead of using the expensive annotations. However, these works ignore the spatial relationship between the object and its parts as well as the interaction of the parts, both of which are helpful for promoting part selection. Therefore, this paper proposes a weakly supervised part selection method with spatial constraints for fine-grained image classification, which is free of using any bounding box or part annotations. We first learn a whole-object detector automatically to localize the object through jointly using saliency extraction and co-segmentation. Then two spatial constraints are proposed to select the distinguished parts. The first spatial constraint, called box constraint, defines the relationship between the object and its parts, and aims to ensure that the selected parts are definitely located in the object region, and have the largest overlap with the object region. The second spatial constraint, called parts constraint, defines the relationship among the object's parts, and aims to reduce the parts' overlap with each other to avoid information redundancy and ensure the selected parts are the most distinguishing parts from other categories. 
Combining the two spatial constraints promotes part selection significantly and achieves a notable improvement on fine-grained image classification. Experimental results on the CUB-200-2011 dataset demonstrate the superiority of our method even compared with those methods using expensive annotations.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"Fine-grained classification is challenging because categories can only be discriminated by subtle and local differences. Variances in the pose, scale or rotation usually make the problem more difficult. Most fine-grained classification systems follow the pipeline of finding foreground object or object parts (where) to extract discriminative features (what).",
"In this paper, we propose a fine-grained image categorization system with easy deployment. We do not use any object part annotation (weakly supervised) in the training or in the testing stage, but only class labels for training images. Fine-grained image categorization aims to classify objects with only subtle distinctions (e.g., two breeds of dogs that look alike). Most existing works heavily rely on object part detectors to build the correspondence between object parts, which require accurate object or object part annotations at least for training images. The need for expensive object annotations prevents the wide usage of these methods. Instead, we propose to generate multi-scale part proposals from object proposals, select useful part proposals, and use them to compute a global image representation for categorization. This is specially designed for the weakly supervised fine-grained categorization task, because useful parts have been shown to play a critical role in existing annotation-dependent works, but accurate part detectors are hard to acquire. With the proposed image representation, we can further detect and visualize the key (most discriminative) parts in objects of different classes. In the experiments, the proposed weakly supervised method achieves comparable or better accuracy than the state-of-the-art weakly supervised methods and most existing annotation-dependent methods on three challenging datasets. Its success suggests that it is not always necessary to learn expensive object part detectors in fine-grained image categorization."
]
} |
1710.01168 | 2761785940 | Fine-grained image classification aims to recognize hundreds of subcategories in each basic-level category. Existing methods employ discriminative localization to find the key distinctions between similar subcategories. However, they generally have two limitations: 1) discriminative localization relies on region proposal methods to hypothesize the locations of discriminative regions, which are time-consuming and the bottleneck for improving classification speed, and 2) the training of discriminative localization depends on object or part annotations, which are heavily labor-consuming and an obstacle to practical application. It is highly challenging to address the two limitations simultaneously, while existing methods only focus on one of them. Therefore, we propose a weakly supervised discriminative localization approach (WSDL) for fast fine-grained image classification to address the two limitations at the same time, and its main advantages are: 1) multi-level attention guided localization learning is proposed to localize discriminative regions with different focuses automatically, without using object and part annotations, avoiding the labor consumption. Different-level attentions focus on different characteristics of the image, which are complementary and boost classification accuracy, and 2) an @math -pathway end-to-end discriminative localization network is proposed to improve classification speed, which simultaneously localizes multiple different discriminative regions for one image to boost classification accuracy, and shares full-image convolutional features generated by a region proposal network to accelerate the process of generating region proposals and reduce the computation of convolutional operations. Both are jointly employed to simultaneously improve classification speed and eliminate dependence on object and part annotations. 
Compared with state-of-the-art methods on two widely used fine-grained image classification data sets, our WSDL approach achieves the best classification accuracy and efficiency. | Since the discriminative characteristics generally localize in the regions of the object and its parts, most existing works follow a two-stage pipeline: first localize the object and parts, and then extract their features to train classifiers. For the first stage, some works @cite_31 @cite_1 directly utilize human annotations (i.e., the object bounding box and part locations) to localize the object and parts. Since human annotations are labor-consuming, some researchers have begun to utilize them only in the training phase. The Part-based R-CNN @cite_14 is proposed to directly utilize the object and part annotations to learn whole-object and part detectors with geometric constraints between them. This framework is widely used in fine-grained image classification. | {
"cite_N": [
"@cite_31",
"@cite_14",
"@cite_1"
],
"mid": [
"1980526845",
"56385144",
"2118696714"
],
"abstract": [
"As a special topic in computer vision, fine-grained visual categorization (FGVC) has been attracting growing attention these years. Different with traditional image classification tasks in which objects have large inter-class variation, the visual concepts in the fine-grained datasets, such as hundreds of bird species, often have very similar semantics. Due to the large inter-class similarity, it is very difficult to classify the objects without locating really discriminative features, therefore it becomes more important for the algorithm to make full use of the part information in order to train a robust model. In this paper, we propose a powerful flowchart named Hierarchical Part Matching (HPM) to cope with fine-grained classification tasks. We extend the Bag-of-Features (BoF) model by introducing several novel modules to integrate into image representation, including foreground inference and segmentation, Hierarchical Structure Learning (HSL), and Geometric Phrase Pooling (GPP). We verify in experiments that our algorithm achieves the state-of-the-art classification accuracy in the Caltech-UCSD-Birds-200-2011 dataset by making full use of the ground-truth part annotations.",
"Semantic part localization can facilitate fine-grained categorization by explicitly isolating subtle appearance differences associated with specific object parts. Methods for pose-normalized representations have been proposed, but generally presume bounding box annotations at test time due to the difficulty of object detection. We propose a model for fine-grained categorization that overcomes these limitations by leveraging deep convolutional features computed on bottom-up region proposals. Our method learns whole-object and part detectors, enforces learned geometric constraints between them, and predicts a fine-grained category from a pose-normalized representation. Experiments on the Caltech-UCSD bird dataset confirm that our method outperforms state-of-the-art fine-grained categorization methods in an end-to-end evaluation without requiring a bounding box at test time.",
"From a set of images in a particular domain, labeled with part locations and class, we present a method to automatically learn a large and diverse set of highly discriminative intermediate features that we call Part-based One-vs.-One Features (POOFs). Each of these features specializes in discrimination between two particular classes based on the appearance at a particular part. We demonstrate the particular usefulness of these features for fine-grained visual categorization with new state-of-the-art results on bird species identification using the Caltech UCSD Birds (CUB) dataset and parity with the best existing results in face verification on the Labeled Faces in the Wild (LFW) dataset. Finally, we demonstrate the particular advantage of POOFs when training data is scarce."
]
} |
1710.01168 | 2761785940 | Fine-grained image classification aims to recognize hundreds of subcategories in each basic-level category. Existing methods employ discriminative localization to find the key distinctions between similar subcategories. However, they generally have two limitations: 1) discriminative localization relies on region proposal methods to hypothesize the locations of discriminative regions, which are time-consuming and the bottleneck for improving classification speed, and 2) the training of discriminative localization depends on object or part annotations, which are heavily labor-consuming and an obstacle to practical application. It is highly challenging to address the two limitations simultaneously, while existing methods only focus on one of them. Therefore, we propose a weakly supervised discriminative localization approach (WSDL) for fast fine-grained image classification to address the two limitations at the same time, and its main advantages are: 1) multi-level attention guided localization learning is proposed to localize discriminative regions with different focuses automatically, without using object and part annotations, avoiding the labor consumption. Different-level attentions focus on different characteristics of the image, which are complementary and boost classification accuracy, and 2) an @math -pathway end-to-end discriminative localization network is proposed to improve classification speed, which simultaneously localizes multiple different discriminative regions for one image to boost classification accuracy, and shares full-image convolutional features generated by a region proposal network to accelerate the process of generating region proposals and reduce the computation of convolutional operations. Both are jointly employed to simultaneously improve classification speed and eliminate dependence on object and part annotations. 
Compared with state-of-the-art methods on two widely used fine-grained image classification data sets, our WSDL approach achieves the best classification accuracy and efficiency. | Recently, fine-grained image classification methods have begun to focus on how to achieve promising performance without using any object or part annotations. The first work under such a weakly supervised setting is the two-level attention model @cite_10 , which utilizes the attention mechanism of CNNs to select region proposals corresponding to the object and parts, and achieves promising results even compared with methods relying on the object and part annotations. Inspired by this work, the approach of @cite_36 incorporates deep convolutional filters for both part selection and description. He and Peng @cite_57 integrate two spatial constraints to improve the performance of part selection. | {
"cite_N": [
"@cite_36",
"@cite_57",
"@cite_10"
],
"mid": [
"2479109623",
"2604134068",
"1928906481"
],
"abstract": [
"Recognizing fine-grained sub-categories such as birds and dogs is extremely challenging due to the highly localized and subtle differences in some specific parts. Most previous works rely on object part level annotations to build part-based representation, which is demanding in practical applications. This paper proposes an automatic fine-grained recognition approach which is free of any object part annotation at both training and testing stages. Our method explores a unified framework based on two steps of deep filter response picking. The first picking step is to find distinctive filters which respond to specific patterns significantly and consistently, and learn a set of part detectors via iteratively alternating between new positive sample mining and part model retraining. The second picking step is to pool deep filter responses via spatially weighted combination of Fisher Vectors. We conditionally pick deep filter responses to encode them into the final representation, which considers the importance of filter responses themselves. Integrating all these techniques produces a much more powerful framework, and experiments conducted on CUB-200-2011 and Stanford Dogs demonstrate the superiority of our proposed algorithm over the existing methods.",
"Fine-grained image classification is challenging due to the large intra-class variance and small inter-class variance, aiming at recognizing hundreds of sub-categories belonging to the same basic-level category. Since two different sub-categories are distinguished only by the subtle differences in some specific parts, semantic part localization is crucial for fine-grained image classification. Most previous works improve the accuracy by looking for the semantic parts, but rely heavily upon the use of the object or part annotations of images, whose labeling is costly. Recently, some researchers have begun to focus on recognizing sub-categories via weakly supervised part detection instead of using the expensive annotations. However, these works ignore the spatial relationship between the object and its parts as well as the interaction of the parts, both of which are helpful for promoting part selection. Therefore, this paper proposes a weakly supervised part selection method with spatial constraints for fine-grained image classification, which is free of using any bounding box or part annotations. We first learn a whole-object detector automatically to localize the object through jointly using saliency extraction and co-segmentation. Then two spatial constraints are proposed to select the distinguished parts. The first spatial constraint, called box constraint, defines the relationship between the object and its parts, and aims to ensure that the selected parts are definitely located in the object region, and have the largest overlap with the object region. The second spatial constraint, called parts constraint, defines the relationship among the object's parts, and aims to reduce the parts' overlap with each other to avoid information redundancy and ensure the selected parts are the most distinguishing parts from other categories. 
Combining the two spatial constraints promotes part selection significantly and achieves a notable improvement on fine-grained image classification. Experimental results on the CUB-200-2011 dataset demonstrate the superiority of our method even compared with those methods using expensive annotations.",
"Fine-grained classification is challenging because categories can only be discriminated by subtle and local differences. Variances in the pose, scale or rotation usually make the problem more difficult. Most fine-grained classification systems follow the pipeline of finding foreground object or object parts (where) to extract discriminative features (what)."
]
} |
1710.01168 | 2761785940 | Fine-grained image classification aims to recognize hundreds of subcategories in each basic-level category. Existing methods employ discriminative localization to find the key distinctions between similar subcategories. However, they generally have two limitations: 1) discriminative localization relies on region proposal methods to hypothesize the locations of discriminative regions, which are time-consuming and the bottleneck for improving classification speed, and 2) the training of discriminative localization depends on object or part annotations, which are heavily labor-consuming and an obstacle to practical application. It is highly challenging to address the two limitations simultaneously, while existing methods only focus on one of them. Therefore, we propose a weakly supervised discriminative localization approach (WSDL) for fast fine-grained image classification to address the two limitations at the same time, and its main advantages are: 1) multi-level attention guided localization learning is proposed to localize discriminative regions with different focuses automatically, without using object and part annotations, avoiding the labor consumption. Different-level attentions focus on different characteristics of the image, which are complementary and boost classification accuracy, and 2) an @math -pathway end-to-end discriminative localization network is proposed to improve classification speed, which simultaneously localizes multiple different discriminative regions for one image to boost classification accuracy, and shares full-image convolutional features generated by a region proposal network to accelerate the process of generating region proposals and reduce the computation of convolutional operations. Both are jointly employed to simultaneously improve classification speed and eliminate dependence on object and part annotations. 
Compared with state-of-the-art methods on two widely used fine-grained image classification data sets, our WSDL approach achieves the best classification accuracy and efficiency. | Object detection is one of the most fundamental and challenging open problems in computer vision; it not only recognizes objects but also localizes them in images. Like fine-grained image classification, early works are mainly based on basic low-level features, such as SIFT @cite_15 and HOG @cite_41 . However, from 2010 onward, the progress of object detection based on these handcrafted features slowed down. Due to the great success of deep learning in the ImageNet LSVRC-2012 competition, deep learning has been widely employed in computer vision, including object detection. We divide CNN-based object detection methods into two groups according to the annotations used: (1) supervised object detection, which needs the ground-truth bounding box of the object, and (2) weakly supervised object detection, which does not need the ground-truth bounding box of the object and only needs image-level labels. | {
"cite_N": [
"@cite_41",
"@cite_15"
],
"mid": [
"2161969291",
"2151103935"
],
"abstract": [
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance."
]
} |
1710.00775 | 2964301018 | A graph environment must be explored by a collection of mobile robots. Some of the robots, a priori unknown, may turn out to be unreliable. The graph is weighted and each node is assigned a deadline. The exploration is successful if each node of the graph is visited before its deadline by a reliable robot. The edge weight corresponds to the time needed by a robot to traverse the edge. Given the number of robots which may crash, is it possible to design an algorithm that will always guarantee the exploration, independently of the choice of the subset of unreliable robots by the adversary? We find the optimal time during which the graph may be explored. Our approach permits finding the maximal number of robots that may turn out to be unreliable while the graph is still guaranteed to be explored. We concentrate on line graphs and rings, for which we give positive results. We start with the case of the collections involving only reliable robots. We give algorithms finding optimal times needed for exploration when the robots are assigned to fixed initial positions as well as when such starting positions may be determined by the algorithm. We extend our consideration to the case when some number of robots may be unreliable. Our most surprising result is that solving the line exploration problem with robots at given positions, which may involve crash-faulty ones, is NP-hard. The same problem has polynomial solutions for a ring and for the case when the initial robots' positions on the line are arbitrary. The exploration problem is shown to be NP-hard for star graphs, even when the team consists of only two reliable robots. | Searching a graph with one or more searchers has been widely studied in the mathematics literature (see, e.g., @cite_8 for a survey). 
There is extensive literature on linear search (referring to searching a line in the continuous or discrete model), e.g., see @cite_6 for optimal deterministic linear search and @cite_15 for algorithms incorporating a turn cost incurred when a robot changes direction during the search. Variants of search using collections of robots have also been investigated. The robots can employ either wireless communication (at any distance) or face-to-face communication, where communication is only possible among co-located robots. For example, the evacuation problem of @cite_9 is essentially a search problem where the search is completed only when the target is reached by the last robot. Linear group search in the face-to-face communication model has also been studied with robots that either operate at the same speed or with a pair of robots having distinct maximal speeds @cite_11 @cite_7 . Linear search with multiple robots where some fraction of the robots may exhibit either crash faults or Byzantine faults is studied in @cite_10 and @cite_18 , respectively. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_15",
"@cite_10",
"@cite_11"
],
"mid": [
"",
"32433637",
"2106518318",
"1766799846",
"2019729116",
"1995434882",
"2477849570",
"2548659391"
],
"abstract": [
"",
"In this paper we consider the group search problem, or evacuation problem, in which k mobile entities (MEs) located on the line perform search for a specific destination. The MEs are initially placed at the same origin on the line L and the target is located at an unknown distance d, either to the left or to the right from the origin. All MEs must simultaneously occupy the destination, and the goal is to minimize the time necessary for this to happen. The problem with k = 1 is known as the cow-path problem, and the time required for this problem is known to be 9d − o(d) in the worst case (when the cow moves at unit speed); it is also known that this is the case for k ≥ 1 unit-speed MEs. In this paper we present a clear argument for this claim by showing a rather counter-intuitive result. Namely, independent of the number of MEs, group search cannot be performed faster than in time 9d − o(d). We also examine the case of k = 2 MEs with different speeds, showing a surprising result that the bound of 9d can be achieved when one ME has unit speed, and the other ME moves with speed at least 1/3.",
"Graph searching encompasses a wide variety of combinatorial problems related to the problem of capturing a fugitive residing in a graph using the minimum number of searchers. In this annotated bibliography, we give an elementary classification of problems and results related to graph searching and provide a source of bibliographical references on this field.",
"Assume that two robots are located at the centre of a unit disk. Their goal is to evacuate from the disk through an exit at an unknown location on the boundary of the disk. At any time the robots can move anywhere they choose on the disk, independently of each other, with maximum speed @math . The robots can cooperate by exchanging information whenever they meet. We study algorithms for the two robots to minimize the evacuation time: the time when both robots reach the exit. In [CGGKMP14] the authors gave an algorithm defining trajectories for the two robots yielding evacuation time at most @math and also proved that any algorithm has evacuation time at least @math . We improve both the upper and lower bounds on the evacuation time of a unit disk. Namely, we present a new non-trivial algorithm whose evacuation time is at most @math and show that any algorithm has evacuation time at least @math . To achieve the upper bound, we designed an algorithm which non-intuitively proposes a forced meeting between the two robots, even if the exit has not been found by either of them.",
"In this paper we initiate a new area of study dealing with the best way to search a possibly unbounded region for an object. The model for our search algorithms is that we must pay costs proportional to the distance of the next probe position relative to our current position. This model is meant to give a realistic cost measure for a robot moving in the plane. We also examine the effect of decreasing the amount of a priori information given to search problems. Problems of this type are very simple analogues of non-trivial problems on searching an unbounded region, processing digitized images, and robot navigation. We show that for some simple search problems, knowing the general direction of the goal is much more informative than knowing the distance to the goal.",
"We consider the problem of searching for an object on a line at an unknown distance OPT from the original position of the searcher, in the presence of a cost of d for each time the searcher changes direction. This is a generalization of the well-studied linear-search problem. We describe a strategy that is guaranteed to find the object at a cost of at most 9 · OPT + 2d, which has the optimal competitive ratio 9 with respect to OPT plus the minimum corresponding additive term. Our argument for upper and lower bound uses an infinite linear program, which we solve by experimental solution of an infinite series of approximating finite linear programs, estimating the limits, and solving the resulting recurrences for an explicit proof of optimality. We feel that this technique is interesting in its own right and should help solve other searching problems. In particular, we consider the star search or cowpath problem with turn cost, where the hidden object is placed on one of m rays emanating from the original position of the searcher. For this problem we give a tight bound of (1 + 2m^m/(m − 1)^(m−1)) OPT + m((m/(m − 1))^(m−1) − 1) d. We also discuss tradeoffs between the corresponding coefficients and we consider randomized strategies on the line.",
"We consider the problem of searching on a line using n mobile robots, of which at most f are faulty, and the remaining are reliable. The robots start at the same location and move in parallel along the line with the same speed. There is a target placed on the line at a location unknown to the robots. Reliable robots can find the target when they reach its location, but faulty robots cannot detect the target. Our goal is to design a parallel algorithm minimizing the competitive ratio, represented by the worst case ratio between the time of arrival of the first reliable robot at the target, and the distance from the source to the target. If n ≥ 2f+2, there is a simple algorithm with competitive ratio 1. For f Our search algorithm is easily seen to be optimal for the case n = f+1. We also show that as n tends to ∞ the competitive ratio of our algorithm for the case n = 2f+1 approaches 3 and this is optimal. More precisely, we show that asymptotically, the competitive ratio of our proportional schedule algorithm A(2f+1, f) is at most 3 + 4 ln n/n, while any search algorithm has a lower bound of 3 + 2 ln n/n on its competitive ratio.",
"Two mobile robots are initially placed at the same point on an infinite line. Each robot may move on the line in either direction not exceeding its maximal speed. The robots need to find a stationary target placed at an unknown location on the line. The search is completed when both robots arrive at the target point. The target is discovered at the moment when either robot arrives at its position. The robot knowing the placement of the target may communicate it to the other robot. We look for the algorithm with the shortest possible search time (i.e. the worst-case time at which both robots meet at the target) measured as a function of the target distance from the origin (i.e. the time required to travel directly from the starting point to the target at unit velocity)."
]
} |
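The 9d worst-case time for the cow-path problem quoted in the abstracts above is achieved by the classic doubling (zig-zag) strategy: walk 1 unit right, 2 units left, 4 units right, and so on, returning through the origin each time. The following sketch (illustrative, not taken from any of the cited papers) simulates that strategy for a unit-speed searcher and lets one check that the competitive ratio stays below 9 for targets at distance at least 1:

```python
def cowpath_time(x):
    """Time for a unit-speed searcher using the doubling (zig-zag)
    strategy to reach a target at signed position x != 0 on the line."""
    pos, t, i = 0.0, 0.0, 0
    while True:
        # Turning points alternate: +1, -2, +4, -8, ...
        turn = (2.0 ** i) * (1 if i % 2 == 0 else -1)
        lo, hi = min(pos, turn), max(pos, turn)
        if lo <= x <= hi:            # target lies on this leg
            return t + abs(x - pos)
        t += abs(turn - pos)         # walk the full leg
        pos = turn
        i += 1
```

For example, a target just past a turning point (the bad case) at x = 4.001 is found only two phases later, at time 34.001, a ratio of about 8.5; as the target distance grows the worst-case ratio approaches 9, matching the 9d − o(d) bound.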
1710.00775 | 2964301018 | A graph environment must be explored by a collection of mobile robots. Some of the robots, a priori unknown, may turn out to be unreliable. The graph is weighted and each node is assigned a deadline. The exploration is successful if each node of the graph is visited before its deadline by a reliable robot. The edge weight corresponds to the time needed by a robot to traverse the edge. Given the number of robots which may crash, is it possible to design an algorithm, which will always guarantee the exploration, independently of the choice of the subset of unreliable robots by the adversary? We find the optimal time, during which the graph may be explored. Our approach permits to find the maximal number of robots, which may turn out to be unreliable, and the graph is still guaranteed to be explored. We concentrate on line graphs and rings, for which we give positive results. We start with the case of the collections involving only reliable robots. We give algorithms finding optimal times needed for exploration when the robots are assigned to fixed initial positions as well as when such starting positions may be determined by the algorithm. We extend our consideration to the case when some number of robots may be unreliable. Our most surprising result is that solving the line exploration problem with robots at given positions, which may involve crash-faulty ones, is NP-hard. The same problem has polynomial solutions for a ring and for the case when the initial robots' positions on the line are arbitrary. The exploration problem is shown to be NP-hard for star graphs, even when the team consists of only two reliable robots. | The (Directed) Rural Postman Problem (DRPP) is a general case of the Chinese Postman Problem where a subset of the set of arcs of a given (directed) graph is 'required' to be traversed at minimum cost. 
@cite_3 presents a branch and bound algorithm for the exact solution of the DRPP based on bounds computed from Lagrangian Relaxation. @cite_4 studies the polyhedron associated with the Rural Postman Problem and characterizes its facial structure. @cite_2 gives a survey of the directed and undirected rural postman problem and also discusses applications. | {
"cite_N": [
"@cite_4",
"@cite_3",
"@cite_2"
],
"mid": [
"2072370153",
"74564317",
"2126141749"
],
"abstract": [
"Abstract In this paper we study the polyhedron associated with the Rural Postman Problem (RPP). Because the RPP is NP-hard, we cannot expect to find a complete description of the rural postman polyhedron of a general graph, but a partial knowledge of such a description frequently proves to be useful for both theoretical and computational purposes. We have tried to characterize the facial structure of this unbounded full-dimensional polyhedron. Sets of valid inequalities inducing facets have been studied as well as their use in a cutting-plane algorithm. The application of this algorithm to a set of RPP instances taken from the literature and two instances of larger size taken from a real world graph is described. All these instances were solved to optimality.",
"The Directed Rural Postman Problem (DRPP) is a general case of the Chinese Postman Problem where a subset of the set of arcs of a given directed graph is ‘required’ to be traversed at minimum cost. If this subset does not form a weakly connected graph but forms a number of disconnected components the problem is NP-Complete, and is also a generalization of the asymmetric Travelling Salesman Problem. In this paper we present a branch and bound algorithm for the exact solution of the DRPP based on bounds computed from Lagrangean Relaxation (with shortest spanning arborescence sub-problems) and on the fathoming of some of the tree nodes by the solution of minimum cost flow problems. Computational results are given for graphs of up to 80 vertices, 179 arcs and 71 ‘required’ arcs.",
"This is the second half of a two-part survey on arc routing problems. The first part appeared in the March-April 1995 issue of this journal. Here, the rural postman problem RPP is reviewed. The paper is organized as follows: applications, the undirected RPP, the directed RPP, the stacker crane problem, and the capacitated arc routing problem."
]
} |
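For intuition about the DRPP objective, tiny instances can be solved by brute force: pick an order for the required arcs, traverse each in its prescribed direction, and connect consecutive arcs (and the depot) by shortest paths. This hypothetical sketch is not the branch-and-bound of @cite_3; it assumes an all-pairs shortest-path matrix `dist` is already available:

```python
from itertools import permutations

def drpp_brute_force(dist, required, depot=0):
    """Minimum cost of a closed walk from `depot` traversing every
    required arc (u, v, cost) in the direction u -> v.
    dist[u][v] is the shortest-path distance from u to v."""
    base = sum(c for _, _, c in required)       # required arcs are always paid
    best = float('inf')
    for perm in permutations(required):
        pos, cost = depot, base
        for u, v, _ in perm:
            cost += dist[pos][u]                # connect to the arc's tail
            pos = v                             # traverse the arc
        cost += dist[pos][depot]                # return to the depot
        best = min(best, cost)
    return best
```

The factorial enumeration is only viable for a handful of required arcs, which is exactly why the cited exact methods rely on Lagrangean bounds and cutting planes instead.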
1710.00775 | 2964301018 | A graph environment must be explored by a collection of mobile robots. Some of the robots, a priori unknown, may turn out to be unreliable. The graph is weighted and each node is assigned a deadline. The exploration is successful if each node of the graph is visited before its deadline by a reliable robot. The edge weight corresponds to the time needed by a robot to traverse the edge. Given the number of robots which may crash, is it possible to design an algorithm, which will always guarantee the exploration, independently of the choice of the subset of unreliable robots by the adversary? We find the optimal time, during which the graph may be explored. Our approach permits to find the maximal number of robots, which may turn out to be unreliable, and the graph is still guaranteed to be explored. We concentrate on line graphs and rings, for which we give positive results. We start with the case of the collections involving only reliable robots. We give algorithms finding optimal times needed for exploration when the robots are assigned to fixed initial positions as well as when such starting positions may be determined by the algorithm. We extend our consideration to the case when some number of robots may be unreliable. Our most surprising result is that solving the line exploration problem with robots at given positions, which may involve crash-faulty ones, is NP-hard. The same problem has polynomial solutions for a ring and for the case when the initial robots' positions on the line are arbitrary. The exploration problem is shown to be NP-hard for star graphs, even when the team consists of only two reliable robots. 
| A scheduling problem considered by the research community concerns @math jobs, each to be processed by a single machine, subject to arbitrary given precedence constraints; associated with each job @math is a known processing time @math and a monotone nondecreasing cost function @math , giving the cost that is incurred by the completion of that job at time @math . @cite_19 gives an efficient computational procedure for the problem of finding a sequence which will minimize the maximum of the incurred costs. Further, @cite_19 also studies a class of time-constrained vehicle routing and scheduling problems that may be encountered in several transportation distribution environments. In the single-vehicle scheduling problem with time window constraints, a vehicle has to visit a set of sites on a graph, and each site must be visited after its ready time but no later than its deadline. @cite_16 studies the problem of minimizing the total time taken to visit all sites. @cite_13 considers the problem of determining whether there exists a schedule on two identical processors that executes each task in the time interval between its start-time and deadline and presents an @math algorithm that constructs such a schedule whenever one exists. | {
"cite_N": [
"@cite_19",
"@cite_16",
"@cite_13"
],
"mid": [
"2115299891",
"2042251310",
"2001364911"
],
"abstract": [
"Suppose n jobs are each to be processed by a single machine, subject to arbitrary given precedence constraints. Associated with each job j is a known processing time a_j, and a monotone nondecreasing cost function c_j(t), giving the cost that is incurred by the completion of that job at time t. The problem is to find a sequence which will minimize the maximum of the incurred costs. An efficient computational procedure is given for this problem, generalizing and simplifying previous results of the present author and J. M. Moore.",
"In the single-vehicle scheduling problem with time window constraints, a vehicle has to visit a set of sites on a graph, and each site must be visited after its ready time but no later than its deadline. The goal is to minimize the total time taken to visit all sites. We prove the conjecture proposed by : if the topological graph is a straight line, the problems are NP-hard for both part and tour version. In addition, we give an O(n2) algorithm to solve a special case where all n sites have a common ready time. This algorithm illustrates a duality relationship between the vehicle scheduling problems with arbitrary ready times and that with arbitrary deadlines on a straight line. Copyright © 1999 John Wiley & Sons, Ltd.",
"Given a set @math of tasks, each @math having execution time 1, an integer start-time @math and a deadline @math , along with precedence constraints among the tasks, we examine the problem of determining whether there exists a schedule on two identical processors that executes each task in the time interval between its start-time and deadline. We present an @math algorithm that constructs such a schedule whenever one exists. The algorithm may also be used in a binary search mode to find the shortest such schedule or to find a schedule that minimizes maximum “tardiness”. A number of natural extensions of this problem are seen to be @math complete and hence probably intractable."
]
} |
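The procedure of @cite_19 (often called Lawler's rule) schedules backwards from T = Σ a_j: among the jobs all of whose successors are already scheduled, the one whose cost at time T is smallest is placed last, and T is decreased by its processing time. A minimal Python sketch of that backwards rule (an illustrative reading of the abstract, not the paper's own code):

```python
def lawler_min_max_cost(p, cost, succ):
    """Minimize the maximum incurred cost under precedence constraints.
    p: job -> processing time; cost: job -> callable c_j(t);
    succ: job -> set of successors (j must precede every k in succ[j]).
    Returns (order, max cost of that order)."""
    remaining = set(p)
    T = sum(p.values())
    order = []
    while remaining:
        # Jobs with no unscheduled successor may be placed last.
        eligible = [j for j in remaining if not (succ.get(j, set()) & remaining)]
        j = min(eligible, key=lambda j: cost[j](T))
        order.append(j)
        T -= p[j]
        remaining.discard(j)
    order.reverse()
    # Evaluate the maximum cost of the resulting sequence.
    t, mx = 0, float('-inf')
    for j in order:
        t += p[j]
        mx = max(mx, cost[j](t))
    return order, mx
```

With two unit-time jobs where job 1 costs its completion time and job 2 costs nothing, the rule puts job 2 last and achieves maximum cost 1.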
1710.00775 | 2964301018 | A graph environment must be explored by a collection of mobile robots. Some of the robots, a priori unknown, may turn out to be unreliable. The graph is weighted and each node is assigned a deadline. The exploration is successful if each node of the graph is visited before its deadline by a reliable robot. The edge weight corresponds to the time needed by a robot to traverse the edge. Given the number of robots which may crash, is it possible to design an algorithm, which will always guarantee the exploration, independently of the choice of the subset of unreliable robots by the adversary? We find the optimal time, during which the graph may be explored. Our approach permits to find the maximal number of robots, which may turn out to be unreliable, and the graph is still guaranteed to be explored. We concentrate on line graphs and rings, for which we give positive results. We start with the case of the collections involving only reliable robots. We give algorithms finding optimal times needed for exploration when the robots are assigned to fixed initial positions as well as when such starting positions may be determined by the algorithm. We extend our consideration to the case when some number of robots may be unreliable. Our most surprising result is that solving the line exploration problem with robots at given positions, which may involve crash-faulty ones, is NP-hard. The same problem has polynomial solutions for a ring and for the case when the initial robots' positions on the line are arbitrary. The exploration problem is shown to be NP-hard for star graphs, even when the team consists of only two reliable robots. | The author of @cite_17 resolves the complexity status of the well-known Traveling Repairman Problem on a line (Line-TRP) with general processing times at the request locations and deadline restrictions by showing that it is strongly NP-complete. 
@cite_0 considers the problem of finding a lower and an upper bound for the minimum number of vehicles needed to serve all locations of the multiple traveling salesman problem with time windows in two types of precedence graphs: the start-time precedence graph and the end-time precedence graph. @cite_1 considers the pinwheel'', a formalization of a scheduling problem arising in satellite transmissions whereby a piece of information is transmitted for a set duration, then the satellite proceeds with another piece of information while a ground station receiving from several such satellites and wishing to avoid data loss faces a real-time scheduling problem on whether a useful'' representation of the corresponding schedule exists. | {
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_17"
],
"mid": [
"843108676",
"2118217956",
"2038044577"
],
"abstract": [
"This paper deals with finding a lower and an upper bound for the minimum number of vehicles needed to serve all locations of the multiple traveling salesman problem with time windows. Two types of precedence graphs are introduced: the start-time precedence graph and the end-time precedence graph. The bounds are generated by covering the precedence graph with a minimum number of paths. Instances for which the bounds are tight are presented, as well as instances for which the bounds can be arbitrarily bad. The closeness of such instances is discussed.",
"Some satellites transmit a piece of information for a set duration, then proceed with another piece of information. A ground station receiving from several such satellites and wishing to avoid data loss faces a real-time scheduling problem. The pinwheel is a formalization of this problem. Given a multiset A of integers (a_1, a_2, ..., a_n), a successful schedule S is an infinite sequence over (1, 2, ..., n) such that any subsequence of a_i consecutive symbols contains at least one i (1 ≤ i ≤ n).",
"This paper resolves the complexity status of the well-known Traveling Repairman Problem on a line (Line-TRP) with general processing times at the request locations and deadline restrictions. It has long remained an open research question whether an exact solution procedure with pseudo-polynomial running time can be developed for this version of the Traveling Repairman Problem that was known to be at least binary NP-hard. The presented proof of strong NP-completeness of the problem is provided by a reduction from 3-PARTITION. Since recent literature provides significant new results for further variants of the Line-TRP and the Line-TSP, a brief updated overview of the complexity status of the different variants is given. Another major contribution is that a practically applicable exact best-first search Branch&Bound approach that optimally solves instances of real-world size in reasonable time is proposed. By applying sophisticated dominance rules as well as lower bounds, the number of enumerated partial solutions is kept limited. The efficiency of the new approach and the applied instruments is validated by a computational study."
]
} |
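A pinwheel instance A = (a_1, ..., a_n) thus asks for an infinite sequence in which every window of a_i consecutive slots contains symbol i. A small checker for a proposed cyclic schedule makes the definition concrete (an illustrative sketch; symbols are 0-indexed here, unlike the abstract):

```python
def is_pinwheel_schedule(a, cycle):
    """Check that repeating `cycle` forever serves instance `a`:
    every window of a[i] consecutive slots must contain symbol i."""
    m = len(cycle)
    ext = cycle * 2                       # covers every cyclic window
    for i, w in enumerate(a):
        if w > m:
            # A window longer than the cycle sees the whole cycle.
            if i not in cycle:
                return False
            continue
        if any(i not in ext[s:s + w] for s in range(m)):
            return False
    return True
```

For instance, the cycle 0,1 serves A = (2, 2), and 0,1,0,0 serves A = (2, 4); a necessary condition for schedulability is the density bound Σ 1/a_i ≤ 1.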
1710.00775 | 2964301018 | A graph environment must be explored by a collection of mobile robots. Some of the robots, a priori unknown, may turn out to be unreliable. The graph is weighted and each node is assigned a deadline. The exploration is successful if each node of the graph is visited before its deadline by a reliable robot. The edge weight corresponds to the time needed by a robot to traverse the edge. Given the number of robots which may crash, is it possible to design an algorithm, which will always guarantee the exploration, independently of the choice of the subset of unreliable robots by the adversary? We find the optimal time, during which the graph may be explored. Our approach permits to find the maximal number of robots, which may turn out to be unreliable, and the graph is still guaranteed to be explored. We concentrate on line graphs and rings, for which we give positive results. We start with the case of the collections involving only reliable robots. We give algorithms finding optimal times needed for exploration when the robots are assigned to fixed initial positions as well as when such starting positions may be determined by the algorithm. We extend our consideration to the case when some number of robots may be unreliable. Our most surprising result is that solving the line exploration problem with robots at given positions, which may involve crash-faulty ones, is NP-hard. The same problem has polynomial solutions for a ring and for the case when the initial robots' positions on the line are arbitrary. The exploration problem is shown to be NP-hard for star graphs, even when the team consists of only two reliable robots. | The work of @cite_22 is very related to our work in that jobs are located on a line. Each job has an associated processing time, and whose execution has to start within a prespecified time window. 
The paper considers the problems of minimizing (a) the time by which all jobs are executed (traveling salesman problem), and (b) the sum of the waiting times of the jobs (traveling repairman problem). Also related is the research on graphs with dynamically evolving links (also known as time-varying graphs), which has been explored extensively in theoretical computer science (e.g., see @cite_21 @cite_20 @cite_5 ).
"cite_N": [
"@cite_5",
"@cite_21",
"@cite_22",
"@cite_20"
],
"mid": [
"2120741723",
"2952844013",
"1995284651",
""
],
"abstract": [
"In this paper we investigate distributed computation in dynamic networks in which the network topology changes from round to round. We consider a worst-case model in which the communication links for each round are chosen by an adversary, and nodes do not know who their neighbors for the current round are before they broadcast their messages. The model captures mobile networks and wireless networks, in which mobility and interference render communication unpredictable. In contrast to much of the existing work on dynamic networks, we do not assume that the network eventually stops changing; we require correctness and termination even in networks that change continually. We introduce a stability property called T-interval connectivity (for T ≥ 1), which stipulates that for every T consecutive rounds there exists a stable connected spanning subgraph. For T = 1 this means that the graph is connected in every round, but changes arbitrarily between rounds. We show that in 1-interval connected graphs it is possible for nodes to determine the size of the network and compute any computable function of their initial inputs in O(n^2) rounds using messages of size O(log n + d), where d is the size of the input to a single node. Further, if the graph is T-interval connected for T > 1, the computation can be sped up by a factor of T, and any function can be computed in O(n + n^2/T) rounds using messages of size O(log n + d). We also give two lower bounds on the token dissemination problem, which requires the nodes to disseminate k pieces of information to all the nodes in the network. The T-interval connected dynamic graph model is a novel model, which we believe opens new avenues for research in the theory of distributed computing in wireless, mobile and dynamic networks.",
"The past few years have seen intensive research efforts carried out in some apparently unrelated areas of dynamic systems -- delay-tolerant networks, opportunistic-mobility networks, social networks -- obtaining closely related insights. Indeed, the concepts discovered in these investigations can be viewed as parts of the same conceptual universe; and the formal models proposed so far to express some specific concepts are components of a larger formal description of this universe. The main contribution of this paper is to integrate the vast collection of concepts, formalisms, and results found in the literature into a unified framework, which we call TVG (for time-varying graphs). Using this framework, it is possible to express directly in the same formalism not only the concepts common to all those different areas, but also those specific to each. Based on this definitional work, employing both existing results and original observations, we present a hierarchical classification of TVGs; each class corresponds to a significant property examined in the distributed computing literature. We then examine how TVGs can be used to study the evolution of network properties, and propose different techniques, depending on whether the indicators for these properties are a-temporal (as in the majority of existing studies) or temporal. Finally, we briefly discuss the introduction of randomness in TVGs.",
"Consider a complete directed graph in which each arc has a given length. There is a set of jobs, each job i located at some node of the graph, with an associated processing time h_i, and whose execution has to start within a prespecified time window [r_i, d_i]. We have a single server that can move on the arcs of the graph, at unit speed, and that has to execute all of the jobs within their respective time windows. We consider the following two problems: (a) minimize the time by which all jobs are executed (traveling salesman problem) and (b) minimize the sum of the waiting times of the jobs (traveling repairman problem). We focus on the following two special cases: (a) The jobs are located on a line and (b) the number of nodes of the graph is bounded by some integer constant B. Furthermore, we consider in detail the special cases where (a) all of the processing times are 0, (b) all of the release times r_i are 0, and (c) all of the deadlines d_i are infinite. For many of the resulting problem combinations, we settle their complexity either by establishing NP-completeness or by presenting polynomial (or pseudopolynomial) time algorithms. Finally, we derive algorithms for the case where, for any time t, the number of jobs that can be executed at that time is bounded.",
""
]
} |
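On tiny instances, the single-server line model with time windows described in @cite_22 can be checked by simply enumerating visiting orders. The sketch below is hypothetical (not an algorithm from the cited work): each job is a tuple (position, processing time, ready time, start deadline), the server moves at unit speed, and a job must start within its window.

```python
from itertools import permutations

def line_tsp_tw(jobs, start=0.0):
    """Minimum completion time over all visiting orders, or None if no
    order lets every job start inside its window [r, d]."""
    best = None
    for perm in permutations(jobs):
        t, pos = 0.0, start
        feasible = True
        for x, p, r, d in perm:
            t += abs(x - pos)          # travel at unit speed
            pos = x
            t = max(t, r)              # wait for the ready time if early
            if t > d:                  # too late to start this job
                feasible = False
                break
            t += p                     # process the job
        if feasible and (best is None or t < best):
            best = t
    return best
```

With jobs at −1 and 2 (zero processing, loose windows) the best order visits −1 first for a makespan of 4, while a job whose deadline precedes the travel time needed to reach it makes the instance infeasible. The n! enumeration only illustrates the model; the cited papers give the actual complexity classifications.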
1710.00925 | 2963644257 | Estimating the head pose of a person is a crucial problem that has a large amount of applications such as aiding in gaze estimation, modeling attention, fitting 3D models to video and performing face alignment. Traditionally head pose is computed by estimating some keypoints from the target face and solving the 2D to 3D correspondence problem with a mean human head model. We argue that this is a fragile method because it relies entirely on landmark detection performance, the extraneous head model and an ad-hoc fitting step. We present an elegant and robust way to determine pose by training a multi-loss convolutional neural network on 300W-LP, a large synthetically expanded dataset, to predict intrinsic Euler angles (yaw, pitch and roll) directly from image intensities through joint binned pose classification and regression. We present empirical tests on common in-the-wild pose benchmark datasets which show state-of-the-art results. Additionally we test our method on a dataset usually used for pose estimation using depth and start to close the gap with state-of-the-art depth pose methods. We open-source our training and testing code as well as release our pre-trained models. | Recently, facial landmark detectors which have become very accurate @cite_19 @cite_1 @cite_32 , have been popular for the task of pose estimation. | {
"cite_N": [
"@cite_19",
"@cite_1",
"@cite_32"
],
"mid": [
"2605105738",
"2964014798",
"2589255576"
],
"abstract": [
"This paper investigates how far a very deep neural network is from attaining close to saturating performance on existing 2D and 3D face alignment datasets. To this end, we make the following 5 contributions: (a) we construct, for the first time, a very strong baseline by combining a state-of-the-art architecture for landmark localization with a state-of-the-art residual block, train it on a very large yet synthetically expanded 2D facial landmark dataset and finally evaluate it on all other 2D facial landmark datasets. (b) We create a guided by 2D landmarks network which converts 2D landmark annotations to 3D and unifies all existing datasets, leading to the creation of LS3D-W, the largest and most challenging 3D facial landmark dataset to date (~230,000 images). (c) Following that, we train a neural network for 3D face alignment and evaluate it on the newly introduced LS3D-W. (d) We further look into the effect of all “traditional” factors affecting face alignment performance like large pose, initialization and resolution, and introduce a “new” one, namely the size of the network. (e) We show that both 2D and 3D face alignment networks achieve performance of remarkable accuracy which is probably close to saturating the datasets used. Training and testing code as well as the dataset can be downloaded from https://www.adrianbulat.com/face-alignment",
"Face alignment, which fits a face model to an image and extracts the semantic meanings of facial pixels, has been an important topic in the CV community. However, most algorithms are designed for faces in small to medium poses (below 45°), lacking the ability to align faces in large poses up to 90°. The challenges are three-fold: Firstly, the commonly used landmark-based face model assumes that all the landmarks are visible and is therefore not suitable for profile views. Secondly, the face appearance varies more dramatically across large poses, ranging from frontal view to profile view. Thirdly, labelling landmarks in large poses is extremely challenging since the invisible landmarks have to be guessed. In this paper, we propose a solution to the three problems in a new alignment framework, called 3D Dense Face Alignment (3DDFA), in which a dense 3D face model is fitted to the image via a convolutional neural network (CNN). We also propose a method to synthesize large-scale training samples in profile views to solve the third problem of data labelling. Experiments on the challenging AFLW database show that our approach achieves significant improvements over state-of-the-art methods.",
"Keypoint detection is one of the most important pre-processing steps in tasks such as face modeling, recognition and verification. In this paper, we present an iterative method for Keypoint Estimation and Pose prediction of unconstrained faces by Learning Efficient H-CNN Regressors (KEPLER) for addressing the face alignment problem. Recent state of the art methods have shown improvements in face keypoint detection by employing Convolution Neural Networks (CNNs). Although a simple feed forward neural network can learn the mapping between input and output spaces, it cannot learn the inherent structural dependencies. We present a novel architecture called H-CNN (Heatmap-CNN) which captures structured global and local features and thus favors accurate keypoint detection. H-CNN is jointly trained on the visibility, fiducials and 3D-pose of the face. As the iterations proceed, the error decreases making the gradients small and thus requiring efficient training of DCNNs to mitigate this. KEPLER performs global corrections in pose and fiducials for the first four iterations followed by local corrections in a subsequent stage. As a by-product, KEPLER also provides 3D pose (pitch, yaw and roll) of the face accurately. In this paper, we show that without using any 3D information, KEPLER outperforms state of the art methods for alignment on challenging datasets such as AFW [38] and AFLW [17]."
]
} |
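The multi-loss scheme described in the abstract above (joint binned pose classification and regression) combines a cross-entropy over angle bins with a regression penalty on the softmax-expected angle. A pure-Python sketch of that combined loss follows; the bin width, angle range, and weight alpha here are illustrative choices, not the paper's exact hyperparameters:

```python
import math

def binned_pose_loss(logits, angle, bin_width=3.0, lo=-99.0, alpha=0.5):
    """Cross-entropy over angle bins plus MSE between the ground-truth
    angle and the expectation of the softmax over bin centers."""
    # Numerically stable softmax over the bin logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    Z = sum(exps)
    probs = [e / Z for e in exps]
    # Expected angle = sum of probability * bin center.
    centers = [lo + bin_width * (i + 0.5) for i in range(len(logits))]
    expected = sum(p * c for p, c in zip(probs, centers))
    # Classification term: negative log-likelihood of the true bin.
    gt_bin = min(int((angle - lo) // bin_width), len(logits) - 1)
    ce = -math.log(probs[gt_bin])
    # Regression term on the continuous angle prediction.
    mse = (expected - angle) ** 2
    return ce + alpha * mse
```

With uniform logits over four 3°-wide bins covering [−6°, 6°) and a ground-truth angle of 0°, the expected angle is exactly 0, so the loss reduces to the cross-entropy ln 4.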
1710.00925 | 2963644257 | Estimating the head pose of a person is a crucial problem that has a large amount of applications such as aiding in gaze estimation, modeling attention, fitting 3D models to video and performing face alignment. Traditionally head pose is computed by estimating some keypoints from the target face and solving the 2D to 3D correspondence problem with a mean human head model. We argue that this is a fragile method because it relies entirely on landmark detection performance, the extraneous head model and an ad-hoc fitting step. We present an elegant and robust way to determine pose by training a multi-loss convolutional neural network on 300W-LP, a large synthetically expanded dataset, to predict intrinsic Euler angles (yaw, pitch and roll) directly from image intensities through joint binned pose classification and regression. We present empirical tests on common in-the-wild pose benchmark datasets which show state-of-the-art results. Additionally we test our method on a dataset usually used for pose estimation using depth and start to close the gap with state-of-the-art depth pose methods. We open-source our training and testing code as well as release our pre-trained models. | Also recently, work has developed on estimating head pose using neural networks. @cite_4 presents an in-depth study of relatively shallow networks trained using a regression loss on the AFLW dataset. In KEPLER @cite_32 the authors present a modified GoogleNet architecture which predicts facial keypoints and pose jointly. They use the coarse pose supervision from the AFLW dataset in order to improve landmark detection. Two works dwell on building one network to fulfill various prediction tasks regarding facial analysis. Hyperface @cite_30 is a CNN that sets out to detect faces, determine gender, find landmarks and estimate head pose at once. 
It does this by using an R-CNN @cite_16 based approach and a modified AlexNet architecture which fuses intermediate convolutional layer outputs and adds separate fully-connected networks to predict each subtask. All-In-One Convolutional Neural Network @cite_28 for Face Analysis adds smile detection, age estimation and face recognition to the aforementioned prediction tasks. We compare our results to all of these works. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_28",
"@cite_32",
"@cite_16"
],
"mid": [
"2963377935",
"2621061298",
"2548780814",
"2589255576",
"2102605133"
],
"abstract": [
"We present an algorithm for simultaneous face detection, landmarks localization, pose estimation and gender recognition using deep convolutional neural networks (CNN). The proposed method called, HyperFace, fuses the intermediate layers of a deep CNN using a separate CNN followed by a multi-task learning algorithm that operates on the fused features. It exploits the synergy among the tasks which boosts up their individual performances. Additionally, we propose two variants of HyperFace: (1) HyperFace-ResNet that builds on the ResNet-101 model and achieves significant improvement in performance, and (2) Fast-HyperFace that uses a high recall fast face detector for generating region proposals to improve the speed of the algorithm. Extensive experiments show that the proposed models are able to capture both global and local information in faces and performs significantly better than many competitive algorithms for each of these four tasks.",
"Abstract Head pose estimation is an old problem that is recently receiving new attention because of possible applications in human-robot interaction, augmented reality and driving assistance. However, most of the existing work has been tested in controlled environments and is not robust enough for real-world applications. In order to handle these limitations we propose an approach based on Convolutional Neural Networks (CNNs) supplemented with the most recent techniques adopted from the deep learning community. We evaluate the performance of four architectures on recently released in-the-wild datasets. Moreover, we investigate the use of dropout and adaptive gradient methods giving a contribution to their ongoing validation. The results show that joining CNNs and adaptive gradient methods leads to the state-of-the-art in unconstrained head pose estimation.",
"We present a multi-purpose algorithm for simultaneous face detection, face alignment, pose estimation, gender recognition, smile detection, age estimation and face recognition using a single deep convolutional neural network (CNN). The proposed method employs a multi-task learning framework that regularizes the shared parameters of CNN and builds a synergy among different domains and tasks. Extensive experiments show that the network has a better understanding of face and achieves state-of-the-art result for most of these tasks",
"Keypoint detection is one of the most important pre-processing steps in tasks such as face modeling, recognition and verification. In this paper, we present an iterative method for Keypoint Estimation and Pose prediction of unconstrained faces by Learning Efficient H-CNN Regressors (KEPLER) for addressing the face alignment problem. Recent state of the art methods have shown improvements in face keypoint detection by employing Convolution Neural Networks (CNNs). Although a simple feed forward neural network can learn the mapping between input and output spaces, it cannot learn the inherent structural dependencies. We present a novel architecture called H-CNN (Heatmap-CNN) which captures structured global and local features and thus favors accurate keypoint detection. H-CNN is jointly trained on the visibility, fiducials and 3D-pose of the face. As the iterations proceed, the error decreases making the gradients small and thus requiring efficient training of DCNNs to mitigate this. KEPLER performs global corrections in pose and fiducials for the first four iterations followed by local corrections in a subsequent stage. As a by-product, KEPLER also provides 3D pose (pitch, yaw and roll) of the face accurately. In this paper, we show that without using any 3D information, KEPLER outperforms state of the art methods for alignment on challenging datasets such as AFW [38] and AFLW [17].",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn."
]
} |
1710.00925 | 2963644257 | Estimating the head pose of a person is a crucial problem that has a large amount of applications such as aiding in gaze estimation, modeling attention, fitting 3D models to video and performing face alignment. Traditionally head pose is computed by estimating some keypoints from the target face and solving the 2D to 3D correspondence problem with a mean human head model. We argue that this is a fragile method because it relies entirely on landmark detection performance, the extraneous head model and an ad-hoc fitting step. We present an elegant and robust way to determine pose by training a multi-loss convolutional neural network on 300W-LP, a large synthetically expanded dataset, to predict intrinsic Euler angles (yaw, pitch and roll) directly from image intensities through joint binned pose classification and regression. We present empirical tests on common in-the-wild pose benchmark datasets which show state-of-the-art results. Additionally we test our method on a dataset usually used for pose estimation using depth and start to close the gap with state-of-the-art depth pose methods. We open-source our training and testing code as well as release our pre-trained models. | @cite_6 also argue for landmark-free head pose estimation. They regress 3D head pose using a simple CNN and focus on facial alignment using the predicted head pose. They demonstrate the success of their approach by improving facial recognition accuracy using their facial alignment pipeline. They do not directly evaluate their head pose estimation results. This differs from our work since we directly evaluate and compare our head pose results extensively on annotated datasets. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2964171387"
],
"abstract": [
"We show how a simple convolutional neural network (CNN) can be trained to accurately and robustly regress 6 degrees of freedom (6DoF) 3D head pose, directly from image intensities. We further explain how this FacePoseNet (FPN) can be used to align faces in 2D and 3D as an alternative to explicit facial landmark detection for these tasks. We claim that in many cases the standard means of measuring landmark detector accuracy can be misleading when comparing different face alignments. Instead, we compare our FPN with existing methods by evaluating how they affect face recognition accuracy on the IJB-A and IJB-B benchmarks: using the same recognition pipeline, but varying the face alignment method. Our results show that (a) better landmark detection accuracy measured on the 300W benchmark does not necessarily imply better face recognition accuracy. (b) Our FPN provides superior 2D and 3D face alignment on both benchmarks. Finally, (c), FPN aligns faces at a small fraction of the computational cost of comparably accurate landmark detectors. For many purposes, FPN is thus a far faster and far more accurate face alignment method than using facial landmark detectors."
]
} |
1710.00920 | 2762899171 | We present a deep learning framework for real-time speech-driven 3D facial animation from just raw waveforms. Our deep neural network directly maps an input sequence of speech audio to a series of micro facial action unit activations and head rotations to drive a 3D blendshape face model. In particular, our deep model is able to learn the latent representations of time-varying contextual information and affective states within the speech. Hence, our model not only activates appropriate facial action units at inference to depict different utterance generating actions, in the form of lip movements, but also, without any assumption, automatically estimates emotional intensity of the speaker and reproduces her ever-changing affective states by adjusting strength of facial unit activations. For example, in a happy speech, the mouth opens wider than normal, while other facial units are relaxed; or in a surprised state, both eyebrows raise higher. Experiments on a diverse audiovisual corpus of different actors across a wide range of emotional states show interesting and promising results of our approach. Being speaker-independent, our generalized model is readily applicable to various tasks in human-machine interaction and animation. | Talking head animation is a research topic where an avatar is animated to imitate human talking. Various approaches have been developed to synthesize a face model driven by either speech audio @cite_32 @cite_26 @cite_3 or transcripts @cite_38 @cite_5 . Essentially, every talking head animation technique develops a mapping from input speech to visual features, and can be formulated as a classification or regression task. Classification approaches usually identify phonetic units (phonemes) from speech and map them to visual units (visemes) based on specific rules, and animation is generated by morphing the corresponding key images. 
On the other hand, regression approaches can directly generate visual parameters and their trajectories from input features. Early research on talking heads used Hidden Markov Models (HMMs) with some success @cite_2 @cite_1 , despite certain limitations of the HMM framework such as trajectory over-smoothing. | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_2",
"@cite_5"
],
"mid": [
"2043440654",
"2114336453",
"2113713975",
"1779294284",
"71477782",
"",
"2150133190"
],
"abstract": [
"In this paper we investigate the development of an expressive facial animation system from publicly available components. There is a great body of work on face modeling, facial animation and conversational agents. However, most of the current research either targets a specific aspect of a conversational agent or is tailored to systems that are not publicly available. We propose a high quality facial animation system that can be easily built based on affordable off-the-shelf components. The proposed system is modular, extensible, efficient and suitable for a wide range of applications that require expressive speaking avatars. We demonstrate the effectiveness of the system with two applications: (a) a text-to-speech synthesizer with expression control and (b) a conversational agent that can react to simple phrases.",
"This paper presents an articulatory modelling approach to convert acoustic speech into realistic mouth animation. We directly model the movements of articulators, such as lips, tongue, and teeth, using a dynamic Bayesian network (DBN)-based audio-visual articulatory model (AVAM). A multiple-stream structure with a shared articulator layer is adopted in the model to synchronously associate the two building blocks of speech, i.e., audio and video. This model not only describes the synchronization between visual articulatory movements and audio speech, but also reflects the linguistic fact that different articulators evolve asynchronously. We also present a Baum-Welch DBN inversion (DBNI) algorithm to generate optimal facial parameters from audio given the trained AVAM under maximum likelihood (ML) criterion. Extensive objective and subjective evaluations on the JEWEL audio-visual dataset demonstrate that compared with phonemic HMM approaches, facial parameters estimated by our approach follow the true parameters more accurately, and the synthesized facial animation sequences are so lively that 38% of them are indistinguishable",
"We propose a new 3D photo-realistic talking head with a personalized, photo realistic appearance. Different head motions and facial expressions can be freely controlled and rendered. It extends our prior, high-quality, 2D photo-realistic talking head to 3D. Around 20-minutes of audio-visual 2D video are first recorded with read prompted sentences spoken by a speaker. We use a 2D-to-3D reconstruction algorithm to automatically adapt a general 3D head mesh model to the individual. In training, super feature vectors consisting of 3D geometry, texture and speech are formed to train a statistical, multi-streamed, Hidden Markov Model (HMM). The HMM is then used to synthesize both the trajectories of geometry animation and dynamic texture. The 3D talking head animation can be controlled by the rendered geometric trajectory while the facial expressions and articulator movements are rendered with the dynamic 2D image sequences. Head motions and facial expression can also be separately controlled by manipulating corresponding parameters. The new 3D talking head has many useful applications such as voice-agent, tele-presence, gaming, social networking, etc. Index Terms: audio visual synthesis, 3D, photo-realistic, talking head",
"This paper proposes a deep bidirectional long short-term memory approach in modeling the long contextual, nonlinear mapping between audio and visual streams for video-realistic talking head. In training stage, an audio-visual stereo database is firstly recorded as a subject talking to a camera. The audio streams are converted into acoustic feature, i.e. Mel-Frequency Cepstrum Coefficients (MFCCs), and their textual labels are also extracted. The visual streams, in particular, the lower face region, are compactly represented by active appearance model (AAM) parameters by which the shape and texture variations can be jointly modeled. Given pairs of the audio and visual parameter sequence, a DBLSTM model is trained to learn the sequence mapping from audio to visual space. For any unseen speech audio, whether it is original recorded or synthesized by text-to-speech (TTS), the trained DBLSTM model can predict a convincing AAM parameter trajectory for the lower face animation. To further improve the realism of the proposed talking head, the trajectory tiling method is adopted to use the DBLSTM predicted AAM trajectory as a guide to select a smooth real sample image sequence from the recorded database. We then stitch the selected lower face image sequence back to a background face video of the same subject, resulting in a video-realistic talking head. Experimental results show that the proposed DBLSTM approach outperforms the existing HMM-based approach in both objective and subjective evaluations.",
"",
"",
"Lifelike talking faces for interactive services are an exciting new modality for man-machine interactions. Recent developments in speech synthesis and computer animation enable the real-time synthesis of faces that look and behave like real people, opening opportunities to make interactions with computers more like face-to-face conversations. This paper focuses on the technologies for creating lifelike talking heads, illustrating the two main approaches: model-based animations and sample-based animations. The traditional model-based approach uses three-dimensional wire-frame models, which can be animated from high-level parameters such as muscle actions, lip postures, and facial expressions. The sample-based approach, on the other hand, concatenates segments of recorded videos, instead of trying to model the dynamics of the animations in detail. Recent advances in image analysis enable the creation of large databases of mouth and eye images, suited for sample-based animations. The sample-based approach tends to generate more naturally looking animations at the expense of a larger size and less flexibility than the model-based animations. Beside lip articulation, a talking head must show appropriate head movements, in order to appear natural. We illustrate how such \"visual prosody\" is analyzed and added to the animations. Finally, we present four applications where the use of face animation in interactive services results in engaging user interfaces and an increased level of trust between user and machine. Using an RTP-based protocol, face animation can be driven with only 800 bits s in addition to the rate for transmitting audio."
]
} |
1710.00920 | 2762899171 | We present a deep learning framework for real-time speech-driven 3D facial animation from just raw waveforms. Our deep neural network directly maps an input sequence of speech audio to a series of micro facial action unit activations and head rotations to drive a 3D blendshape face model. In particular, our deep model is able to learn the latent representations of time-varying contextual information and affective states within the speech. Hence, our model not only activates appropriate facial action units at inference to depict different utterance generating actions, in the form of lip movements, but also, without any assumption, automatically estimates emotional intensity of the speaker and reproduces her ever-changing affective states by adjusting strength of facial unit activations. For example, in a happy speech, the mouth opens wider than normal, while other facial units are relaxed; or in a surprised state, both eyebrows raise higher. Experiments on a diverse audiovisual corpus of different actors across a wide range of emotional states show interesting and promising results of our approach. Being speaker-independent, our generalized model is readily applicable to various tasks in human-machine interaction and animation. | In recent years, deep neural networks have been successfully applied to speech synthesis @cite_30 @cite_7 and facial animation @cite_33 @cite_34 @cite_32 with superior performance. This is because deep neural networks (DNNs) are able to learn correlations in high-dimensional input data and, in the case of recurrent neural networks (RNNs), long-term relations, as well as the highly non-linear mapping between input and output features. @cite_19 propose a system using a DNN to estimate active appearance model (AAM) coefficients from input phonemes, which generalizes well to different speech and languages, and the resulting face shapes can be retargeted to drive 3D face models. 
@cite_13 use long short-term memory (LSTM) RNNs to predict 2D lip landmarks from input acoustic features, which are used to synthesize lip movements. @cite_32 use both acoustic and text features to estimate active appearance model (AAM) coefficients of the mouth area, which are then grafted onto an actual image to produce a photo-realistic talking head. @cite_25 propose a deep convolutional neural network (CNN) that jointly takes audio autocorrelation coefficients and emotional state to output an entire 3D face shape. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_7",
"@cite_32",
"@cite_19",
"@cite_34",
"@cite_13",
"@cite_25"
],
"mid": [
"2134973740",
"2142487393",
"2102003408",
"1779294284",
"2737658251",
"2296650210",
"2738406145",
"2739192055"
],
"abstract": [
"Deep Neural Network (DNN), which can model a long-span, intricate transform compactly with a deep-layered structure, has recently been investigated for parametric TTS synthesis with a fairly large corpus (33,000 utterances) [6]. In this paper, we examine DNN TTS synthesis with a moderate size corpus of 5 hours, which is more commonly used for parametric TTS training. DNN is used to map input text features into output acoustic features (LSP, F0 and V/UV). Experimental results show that DNN can outperform the conventional HMM, which is trained in ML first and then refined by MGE. Both objective and subjective measures indicate that DNN can synthesize speech better than HMM-based baseline. The improvement is mainly on the prosody, i.e., the RMSE of natural and generated F0 trajectories by DNN is improved by 2 Hz. This benefit is likely from the key characteristics of DNN, which can exploit feature correlations, e.g., between F0 and spectrum, without using a more restricted, e.g. diagonal Gaussian probability family. Our experimental results also show: the layer-wise BP pre-training can drive weights to a better starting point than random initialization and result in a more effective DNN; state boundary info is important for training DNN to yield better synthesized speech; and a hyperbolic tangent activation function in DNN hidden layers yields faster convergence than a sigmoidal one.",
"This paper presents a deep neural network (DNN) approach for head motion synthesis, which can automatically predict head movement of a speaker from his her speech. Specifically, we realize speech-to-head-motion mapping by learning a DNN from audio-visual broadcast news data. We first show that a generatively pre-trained neural network significantly outperforms a conventional randomly initialized network. We then demonstrate that filter bank (FBank) features outperform mel frequency cepstral coefficients (MFCC) and linear prediction coefficients (LPC) in head motion prediction. Finally, we discover that extra training data from other speakers used in the pre-training stage can improve the head motion prediction performance of a target speaker. Our promising results in speech-to-head-motion prediction can be used in talking avatar animation.",
"Conventional approaches to statistical parametric speech synthesis typically use decision tree-clustered context-dependent hidden Markov models (HMMs) to represent probability densities of speech parameters given texts. Speech parameters are generated from the probability densities to maximize their output probabilities, then a speech waveform is reconstructed from the generated parameters. This approach is reasonably effective but has a couple of limitations, e.g. decision trees are inefficient to model complex context dependencies. This paper examines an alternative scheme that is based on a deep neural network (DNN). The relationship between input texts and their acoustic realizations is modeled by a DNN. The use of the DNN can address some limitations of the conventional approach. Experimental results show that the DNN-based systems outperformed the HMM-based systems with similar numbers of parameters.",
"This paper proposes a deep bidirectional long short-term memory approach in modeling the long contextual, nonlinear mapping between audio and visual streams for video-realistic talking head. In training stage, an audio-visual stereo database is firstly recorded as a subject talking to a camera. The audio streams are converted into acoustic feature, i.e. Mel-Frequency Cepstrum Coefficients (MFCCs), and their textual labels are also extracted. The visual streams, in particular, the lower face region, are compactly represented by active appearance model (AAM) parameters by which the shape and texture variations can be jointly modeled. Given pairs of the audio and visual parameter sequence, a DBLSTM model is trained to learn the sequence mapping from audio to visual space. For any unseen speech audio, whether it is original recorded or synthesized by text-to-speech (TTS), the trained DBLSTM model can predict a convincing AAM parameter trajectory for the lower face animation. To further improve the realism of the proposed talking head, the trajectory tiling method is adopted to use the DBLSTM predicted AAM trajectory as a guide to select a smooth real sample image sequence from the recorded database. We then stitch the selected lower face image sequence back to a background face video of the same subject, resulting in a video-realistic talking head. Experimental results show that the proposed DBLSTM approach outperforms the existing HMM-based approach in both objective and subjective evaluations.",
"We introduce a simple and effective deep learning approach to automatically generate natural looking speech animation that synchronizes to input speech. Our approach uses a sliding window predictor that learns arbitrary nonlinear mappings from phoneme label input sequences to mouth movements in a way that accurately captures natural motion and visual coarticulation effects. Our deep learning approach enjoys several attractive properties: it runs in real-time, requires minimal parameter tuning, generalizes well to novel input speech sequences, is easily edited to create stylized and emotional speech, and is compatible with existing animation retargeting approaches. One important focus of our work is to develop an effective approach for speech animation that can be easily integrated into existing production pipelines. We provide a detailed description of our end-to-end approach, including machine learning design decisions. Generalized speech animation results are demonstrated over a wide range of animation clips on a variety of characters and voices, including singing and foreign language input. Our approach can also generate on-demand speech animation in real-time from user speech input.",
"We propose a new photo-realistic, voice driven only (i.e. no linguistic info of the voice input is needed) talking head. The core of the new talking head is a context-dependent, multilayer, Deep Neural Network (DNN), which is discriminatively trained over hundreds of hours, speaker independent speech data. The trained DNN is then used to map acoustic speech input to 9,000 tied “senone” states probabilistically. For each photo-realistic talking head, an HMM-based lips motion synthesizer is trained over the speaker’s audio visual training data where states are statistically mapped to the corresponding lips images. In test, for given speech input, DNN predicts the likely states in their posterior probabilities and photo-realistic lips animation is then rendered through the DNN predicted state lattice. The DNN trained on English, speaker independent data has also been tested with other language input, e.g. Mandarin, Spanish, etc. to mimic the lips movements cross-lingually. Subjective experiments show that lip motions thus rendered for 15 non-English languages are highly synchronized with the audio input and photo-realistic to human eyes perceptually.",
"Given audio of President Barack Obama, we synthesize a high quality video of him speaking with accurate lip sync, composited into a target video clip. Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes. Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what he appears to be saying in a target video to match the input audio track. Our approach produces photorealistic results.",
"We present a machine learning technique for driving 3D facial animation by audio input in real time and with low latency. Our deep neural network learns a mapping from input waveforms to the 3D vertex coordinates of a face model, and simultaneously discovers a compact, latent code that disambiguates the variations in facial expression that cannot be explained by the audio alone. During inference, the latent code can be used as an intuitive control for the emotional state of the face puppet. We train our network with 3--5 minutes of high-quality animation data obtained using traditional, vision-based performance capture methods. Even though our primary goal is to model the speaking style of a single actor, our model yields reasonable results even when driven with audio from other speakers with different gender, accent, or language, as we demonstrate with a user study. The results are applicable to in-game dialogue, low-cost localization, virtual reality avatars, and telepresence."
]
} |
1710.00920 | 2762899171 | We present a deep learning framework for real-time speech-driven 3D facial animation from just raw waveforms. Our deep neural network directly maps an input sequence of speech audio to a series of micro facial action unit activations and head rotations to drive a 3D blendshape face model. In particular, our deep model is able to learn the latent representations of time-varying contextual information and affective states within the speech. Hence, our model not only activates appropriate facial action units at inference to depict different utterance generating actions, in the form of lip movements, but also, without any assumption, automatically estimates emotional intensity of the speaker and reproduces her ever-changing affective states by adjusting strength of facial unit activations. For example, in a happy speech, the mouth opens wider than normal, while other facial units are relaxed; or in a surprised state, both eyebrows raise higher. Experiments on a diverse audiovisual corpus of different actors across a wide range of emotional states show interesting and promising results of our approach. Being speaker-independent, our generalized model is readily applicable to various tasks in human-machine interaction and animation. | In terms of the underlying face model, these approaches can be categorized into image-based @cite_10 @cite_5 @cite_39 @cite_2 @cite_26 @cite_32 and model-based @cite_4 @cite_9 @cite_20 @cite_28 @cite_33 @cite_35 approaches. Image-based approaches compose photo-realistic output by concatenating short clips or stitching different regions from a sample database together. However, their performance and quality are limited by the number of samples in the database, so it is difficult to generalize to a large corpus of speech, which would require a tremendous number of image samples to cover all possible facial appearances. 
In contrast, although lacking in photo-realism, model-based approaches enjoy the flexibility of a deformable model controlled by only a small set of parameters, as well as more straightforward modeling. @cite_8 propose a mapping from acoustic features to the blending weights of a blendshape model @cite_0 . This face model allows an emotional representation that can be inferred from speech, without explicitly defining the emotion as input or artificially adding emotion to the face model in post-processing. Our approach likewise enjoys the flexibility of a blendshape model for 3D face reconstruction from speech. | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_4",
"@cite_33",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_32",
"@cite_39",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_10",
"@cite_20"
],
"mid": [
"",
"2114336453",
"2237250383",
"2142487393",
"2745771616",
"219197677",
"2107813907",
"1779294284",
"2120654454",
"2017107803",
"",
"2150133190",
"2147885303",
""
],
"abstract": [
"",
"This paper presents an articulatory modelling approach to convert acoustic speech into realistic mouth animation. We directly model the movements of articulators, such as lips, tongue, and teeth, using a dynamic Bayesian network (DBN)-based audio-visual articulatory model (AVAM). A multiple-stream structure with a shared articulator layer is adopted in the model to synchronously associate the two building blocks of speech, i.e., audio and video. This model not only describes the synchronization between visual articulatory movements and audio speech, but also reflects the linguistic fact that different articulators evolve asynchronously. We also present a Baum-Welch DBN inversion (DBNI) algorithm to generate optimal facial parameters from audio given the trained AVAM under maximum likelihood (ML) criterion. Extensive objective and subjective evaluations on the JEWEL audio-visual dataset demonstrate that compared with phonemic HMM approaches, facial parameters estimated by our approach follow the true parameters more accurately, and the synthesized facial animation sequences are so lively that 38% of them are indistinguishable",
"In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness.",
"This paper presents a deep neural network (DNN) approach for head motion synthesis, which can automatically predict head movement of a speaker from his her speech. Specifically, we realize speech-to-head-motion mapping by learning a DNN from audio-visual broadcast news data. We first show that a generatively pre-trained neural network significantly outperforms a conventional randomly initialized network. We then demonstrate that filter bank (FBank) features outperform mel frequency cepstral coefficients (MFCC) and linear prediction coefficients (LPC) in head motion prediction. Finally, we discover that extra training data from other speakers used in the pre-training stage can improve the head motion prediction performance of a target speaker. Our promising results in speech-to-head-motion prediction can be used in talking avatar animation.",
"We introduce a long short-term memory recurrent neural network (LSTM-RNN) approach for real-time facial animation, which automatically estimates head rotation and facial action unit activations of a speaker from just her speech. Specifically, the time-varying contextual non-linear mapping between audio stream and visual facial movements is realized by training a LSTM neural network on a large audio-visual data corpus. In this work, we extract a set of acoustic features from input audio, including Mel-scaled spectrogram, Mel frequency cepstral coefficients and chromagram that can effectively represent both contextual progression and emotional intensity of the speech. Output facial movements are characterized by 3D rotation and blending expression weights of a blendshape model, which can be used directly for animation. Thus, even though our model does not explicitly predict the affective states of the target speaker, her emotional manifestation is recreated via expression weights of the face model. Experiments on an evaluation dataset of different speakers across a wide range of affective states demonstrate promising results of our approach in real-time speech-driven facial animation.",
"This paper describes our initial work in developing a real-time audio-visual Chinese speech synthesizer with a 3D expressive avatar. The avatar model is parameterized according to the MPEG-4 facial animation standard [1]. This standard offers a compact set of facial animation parameters (FAPs) and feature points (FPs) to enable realization of 20 Chinese visemes and 7 facial expressions (i.e. 27 target facial configurations). The Xface [2] open source toolkit enables us to define the influence zone for each FP and the deformation function that relates them. Hence we can easily animate a large number of coordinates in the 3D model by specifying values for a small set of FAPs and their FPs. FAP values for 27 target facial configurations were estimated from available corpora. We extended the dominance blending approach to effect animations for coarticulated visemes superposed with expression changes. We selected six sentiment-carrying text messages and synthesized expressive visual speech (for all expressions, in randomized order) with neutral audio speech. A perceptual experiment involving 11 subjects shows that they can identify the facial expression that matches the text message’s sentiment 85 of the time.",
"This paper presents a method for photo-realistic animation of any face shown in a single image or a video. The technique does not require example data of the person’s mouth movements, and the image to be animated is not restricted in pose and illumination. Video reanimation allows for head rotations and speech in the original sequence, yet neither of these motions is required. In order to animate novel faces, the system transfers mouth movements and expressions across individuals, based a common representation of different identities and facial expressions in a vector space of 3D shapes and textures. This space is computed from 3D scans of different neutral faces, and scans of facial expressions. The 3D model’s versatility with respect to pose and illumination is conveyed to photo-realistic image and video processing by a framework of analysis and synthesis algorithms: The system automatically estimates 3D shape, pose and other rendering parameters from single images, and tracks head pose and mouth movements in video. Reanimated with new mouth movements, the 3D face is rendered into the original images.",
"This paper proposes a deep bidirectional long short-term memory approach in modeling the long contextual, nonlinear mapping between audio and visual streams for video-realistic talking head. In training stage, an audio-visual stereo database is firstly recorded as a subject talking to a camera. The audio streams are converted into acoustic feature, i.e. Mel-Frequency Cepstrum Coefficients (MFCCs), and their textual labels are also extracted. The visual streams, in particular, the lower face region, are compactly represented by active appearance model (AAM) parameters by which the shape and texture variations can be jointly modeled. Given pairs of the audio and visual parameter sequence, a DBLSTM model is trained to learn the sequence mapping from audio to visual space. For any unseen speech audio, whether it is original recorded or synthesized by text-to-speech (TTS), the trained DBLSTM model can predict a convincing AAM parameter trajectory for the lower face animation. To further improve the realism of the proposed talking head, the trajectory tiling method is adopted to use the DBLSTM predicted AAM trajectory as a guide to select a smooth real sample image sequence from the recorded database. We then stitch the selected lower face image sequence back to a background face video of the same subject, resulting in a video-realistic talking head. Experimental results show that the proposed DBLSTM approach outperforms the existing HMM-based approach in both objective and subjective evaluations.",
"We describe how to create with machine learning techniques a generative, speech animation module. A human subject is first recorded using a videocamera as he she utters a predetermined speech corpus. After processing the corpus automatically, a visual speech module is learned from the data that is capable of synthesizing the human subject's mouth uttering entirely novel utterances that were not recorded in the original video. The synthesized utterance is re-composited onto a background sequence which contains natural head and eye movement. The final output is videorealistic in the sense that it looks like a video camera recording of the subject. At run time, the input to the system can be either real audio sequences or synthetic audio produced by a text-to-speech system, as long as they have been phonetically aligned.The two key contributions of this paper are 1) a variant of the multidimensional morphable model (MMM) to synthesize new, previously unseen mouth configurations from a small set of mouth image prototypes; and 2) a trajectory synthesis technique based on regularization, which is automatically trained from the recorded video corpus, and which is capable of synthesizing trajectories in MMM space corresponding to any desired utterance.",
"We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.",
"",
"Lifelike talking faces for interactive services are an exciting new modality for man-machine interactions. Recent developments in speech synthesis and computer animation enable the real-time synthesis of faces that look and behave like real people, opening opportunities to make interactions with computers more like face-to-face conversations. This paper focuses on the technologies for creating lifelike talking heads, illustrating the two main approaches: model-based animations and sample-based animations. The traditional model-based approach uses three-dimensional wire-frame models, which can be animated from high-level parameters such as muscle actions, lip postures, and facial expressions. The sample-based approach, on the other hand, concatenates segments of recorded videos, instead of trying to model the dynamics of the animations in detail. Recent advances in image analysis enable the creation of large databases of mouth and eye images, suited for sample-based animations. The sample-based approach tends to generate more naturally looking animations at the expense of a larger size and less flexibility than the model-based animations. Beside lip articulation, a talking head must show appropriate head movements, in order to appear natural. We illustrate how such \"visual prosody\" is analyzed and added to the animations. Finally, we present four applications where the use of face animation in interactive services results in engaging user interfaces and an increased level of trust between user and machine. Using an RTP-based protocol, face animation can be driven with only 800 bits/s in addition to the rate for transmitting audio.",
"Video Rewrite uses existing footage to create automatically new video of a person mouthing words that she did not speak in the original footage. This technique is useful in movie dubbing, for example, where the movie sequence can be modified to sync the actors’ lip motions to the new soundtrack. Video Rewrite automatically labels the phonemes in the training data and in the new audio track. Video Rewrite reorders the mouth images in the training footage to match the phoneme sequence of the new audio track. When particular phonemes are unavailable in the training footage, Video Rewrite selects the closest approximations. The resulting sequence of mouth images is stitched into the background footage. This stitching process automatically corrects for differences in head position and orientation between the mouth images and the background footage. Video Rewrite uses computer-vision techniques to track points on the speaker’s mouth in the training footage, and morphing techniques to combine these mouth gestures into the final video sequence. The new video combines the dynamics of the original actor’s articulations with the mannerisms and setting dictated by the background footage. Video Rewrite is the first facial-animation system to automate all the labeling and assembly tasks required to resync existing footage to a new soundtrack.",
""
]
} |
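The blendshape models discussed in the row above represent a face as a neutral mesh plus a weighted sum of expression offsets, so an animation system only has to predict the weights. A minimal NumPy sketch of that linear formulation follows; the shapes and weights are toy values for illustration, not taken from any cited model.

```python
import numpy as np

# Toy blendshape model: a "mesh" of 4 vertices in 3D.
neutral = np.zeros((4, 3))            # neutral face
blendshapes = np.stack([
    np.eye(4, 3),                     # hypothetical "smile" offset
    -np.eye(4, 3),                    # hypothetical "frown" offset
])

def blend(weights):
    """Linear blendshape model: face = neutral + sum_i w_i * B_i."""
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return neutral + (w * blendshapes).sum(axis=0)

face = blend([0.5, 0.25])             # a mild mixed expression
```

Because the model is linear in the weights, a speech-driven system can animate the mesh frame by frame simply by emitting one small weight vector per audio frame.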
1710.00920 | 2762899171 | We present a deep learning framework for real-time speech-driven 3D facial animation from just raw waveforms. Our deep neural network directly maps an input sequence of speech audio to a series of micro facial action unit activations and head rotations to drive a 3D blendshape face model. In particular, our deep model is able to learn the latent representations of time-varying contextual information and affective states within the speech. Hence, our model not only activates appropriate facial action units at inference to depict different utterance generating actions, in the form of lip movements, but also, without any assumption, automatically estimates emotional intensity of the speaker and reproduces her ever-changing affective states by adjusting strength of facial unit activations. For example, in a happy speech, the mouth opens wider than normal, while other facial units are relaxed; or in a surprised state, both eyebrows raise higher. Experiments on a diverse audiovisual corpus of different actors across a wide range of emotional states show interesting and promising results of our approach. Being speaker-independent, our generalized model is readily applicable to various tasks in human-machine interaction and animation. | Convolutional neural networks @cite_14 have achieved great success in many vision tasks, e.g. image classification and segmentation. Their efficient filter design allows deeper networks and enables learning features directly from data while remaining robust to noise and small shifts, thus usually yielding better performance than prior modeling techniques. In recent years, CNNs have also been employed in speech recognition tasks that directly model the raw waveforms by taking advantage of the locality and translation invariance in time @cite_31 @cite_12 @cite_24 and frequency domain @cite_6 @cite_11 @cite_23 @cite_27 @cite_37 @cite_18 .
In this work, we also employ convolutions in the time-frequency domain, and formulate an end-to-end deep neural network that directly maps input waveforms to blendshape weights. | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_18",
"@cite_6",
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_31",
"@cite_12",
"@cite_11"
],
"mid": [
"1600744878",
"1538131130",
"",
"2036242736",
"1542280630",
"2033310064",
"",
"2399733683",
"2963175699",
"2155273149"
],
"abstract": [
"Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space. In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4–6% relative improvement in WER over an LSTM, the strongest of the three individual models.",
"",
"",
"We develop and present a novel deep convolutional neural network architecture, where heterogeneous pooling is used to provide constrained frequency-shift invariance in the speech spectrogram while minimizing speech-class confusion induced by such invariance. The design of the pooling layer is guided by domain knowledge about how speech classes would change when formant frequencies are modified. The convolution and heterogeneous-pooling layers are followed by a fully connected multi-layer neural network to form a deep architecture interfaced to an HMM for continuous speech recognition. During training, all layers of this entire deep net are regularized using a variant of the “dropout” technique. Experimental evaluation demonstrates the effectiveness of both heterogeneous pooling and dropout regularization. On the TIMIT phonetic recognition task, we have achieved an 18.7 phone error rate, lowest on this standard task reported in the literature with a single system and with no use of information about speaker identity. Preliminary experiments on large vocabulary speech recognition in a voice search task also show error rate reduction using heterogeneous pooling in the deep convolutional neural network.",
"Standard deep neural network-based acoustic models for automatic speech recognition (ASR) rely on hand-engineered input features, typically log-mel filterbank magnitudes. In this paper, we describe a convolutional neural network - deep neural network (CNN-DNN) acoustic model which takes raw multichannel waveforms as input, i.e. without any preceding feature extraction, and learns a similar feature representation through supervised training. By operating directly in the time domain, the network is able to take advantage of the signal's fine time structure that is discarded when computing filterbank magnitude features. This structure is especially useful when analyzing multichannel inputs, where timing differences between input channels can be used to localize a signal in space. The first convolutional layer of the proposed model naturally learns a filterbank that is selective in both frequency and direction of arrival, i.e. a bank of bandpass beamformers with an auditory-like frequency scale. When trained on data corrupted with noise coming from different spatial locations, the network learns to filter them out by steering nulls in the directions corresponding to the noise sources. Experiments on a simulated multichannel dataset show that the proposed acoustic model outperforms a DNN that uses log-mel filterbank magnitude features under noisy and reverberant conditions.",
"Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations which exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech compared to Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks. Specifically, we focus on how many convolutional layers are needed, what is an appropriate number of hidden units, what is the best pooling strategy. Second, we investigate how to incorporate speaker-adapted features, which cannot directly be modeled by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allow for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the-art results in these 3 tasks.",
"",
"The automatic recognition of spontaneous emotions from speech is a challenging task. On the one hand, acoustic features need to be robust enough to capture the emotional content for various styles of speaking, and while on the other, machine learning algorithms need to be insensitive to outliers while being able to model the context. Whereas the latter has been tackled by the use of Long Short-Term Memory (LSTM) networks, the former is still under very active investigations, even though more than a decade of research has provided a large set of acoustic descriptors. In this paper, we propose a solution to the problem of ‘context-aware’ emotional relevant feature extraction, by combining Convolutional Neural Networks (CNNs) with LSTM networks, in order to automatically learn the best representation of the speech signal directly from the raw time representation. In this novel work on the so-called end-to-end speech emotion recognition, we show that the use of the proposed topology significantly outperforms the traditional approaches based on signal processing techniques for the prediction of spontaneous and natural emotions on the RECOLA database.",
"In hybrid hidden Markov model artificial neural networks (HMM ANN) automatic speech recognition (ASR) system, the phoneme class conditional probabilities are estimated by first extracting acoustic features from the speech signal based on prior knowledge such as, speech perception or and speech production knowledge, and, then modeling the acoustic features with an ANN. Recent advances in machine learning techniques, more specifically in the field of image processing and text processing, have shown that such divide and conquer strategy (i.e., separating feature extraction and modeling steps) may not be necessary. Motivated from these studies, in the framework of convolutional neural networks (CNNs), this paper investigates a novel approach, where the input to the ANN is raw speech signal and the output is phoneme class conditional probability estimates. On TIMIT phoneme recognition task, we study different ANN architectures to show the benefit of CNNs and compare the proposed approach against conventional approach where, spectral-based feature MFCC is extracted and modeled by a multilayer perceptron. Our studies show that the proposed approach can yield comparable or better phoneme recognition performance when compared to the conventional approach. It indicates that CNNs can learn features relevant for phoneme classification automatically from the raw speech signal.",
"Convolutional Neural Networks (CNN) have shown success in achieving translation invariance for many image processing tasks. The success is largely attributed to the use of local filtering and max-pooling in the CNN architecture. In this paper, we propose to apply CNN to speech recognition within the framework of hybrid NN-HMM model. We propose to use local filtering and max-pooling in frequency domain to normalize speaker variance to achieve higher multi-speaker speech recognition performance. In our method, a pair of local filtering layer and max-pooling layer is added at the lowest end of neural network (NN) to normalize spectral variations of speech signals. In our experiments, the proposed CNN architecture is evaluated in a speaker independent speech recognition task using the standard TIMIT data sets. Experimental results show that the proposed CNN method can achieve over 10% relative error reduction in the core TIMIT test sets when comparing with a regular NN using the same number of hidden layers and weights. Our results also show that the best result of the proposed CNN model is better than previously published results on the same TIMIT test sets that use a pre-trained deep NN model.",
]
} |
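The time-domain locality these CNN systems exploit reduces, at its simplest, to a strided 1-D convolution of learned filters over the raw waveform, producing one feature vector per time frame. The sketch below uses random filter values purely for illustration; a real front end would learn the filters by backpropagation.

```python
import numpy as np

def conv1d_frames(waveform, filters, stride):
    """Strided 1-D convolution: each output row is one time frame,
    each column one filter response (a crude learned 'filter bank')."""
    k = filters.shape[1]
    starts = range(0, len(waveform) - k + 1, stride)
    frames = np.stack([waveform[s:s + k] for s in starts])  # (T, k)
    return frames @ filters.T                               # (T, n_filters)

wave = np.sin(np.linspace(0, 20 * np.pi, 800))              # toy 'audio'
filt = np.random.default_rng(0).standard_normal((8, 64))    # 8 random filters
feats = conv1d_frames(wave, filt, stride=32)                # (24, 8) features
```

Stacking further convolutions (or a recurrent layer) on top of `feats`, and regressing the final layer onto blendshape weights, gives the overall shape of the end-to-end mapping described above.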
1710.00978 | 2762223719 | Automatic feature learning algorithms are at the forefront of modern day machine learning research. We present a novel algorithm, supervised Q-walk, which applies Q-learning to generate random walks on graphs such that the walks prove to be useful for learning node features suitable for tackling with the node classification problem. We present another novel algorithm, k-hops neighborhood based confidence values learner, which learns confidence values of labels for unlabelled nodes in the network without first learning the node embedding. These confidence values aid in learning an apt reward function for Q-learning. We demonstrate the efficacy of supervised Q-walk approach over existing state-of-the-art random walk based node embedding learners in solving the single multi-label multi-class node classification problem using several real world datasets. Summarising, our approach represents a novel state-of-the-art technique to learn features, for nodes in networks, tailor-made for dealing with the node classification problem. | The field of graph analytics is growing rapidly due to the emergence of large datasets in social network analysis @cite_21 @cite_22 @cite_4 , communication networks @cite_10 @cite_16 , etc. The node classification problem @cite_3 has previously been approached from several perspectives, such as factorization-based and random-walk-based approaches. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_21",
"@cite_3",
"@cite_16",
"@cite_10"
],
"mid": [
"2146591355",
"2432978112",
"2155461593",
"",
"2142517301",
""
],
"abstract": [
"A large body of work has been devoted to defining and identifying clusters or communities in social and information networks, i.e., in graphs in which the nodes represent underlying social entities and the edges represent some sort of interaction between pairs of nodes. Most such research begins with the premise that a community or a cluster should be thought of as a set of nodes that has more and or better connections between its members than to the remainder of the network. In this paper, we explore from a novel perspective several questions related to identifying meaningful communities in large social and information networks, and we come to several striking conclusions. Rather than defining a procedure to extract sets of nodes from a graph and then attempting to interpret these sets as \"real\" communities, we employ approximation algorithms for the graph-partitioning problem to characterize as a function of size the statistical and structural properties of partitions of graphs that could plausibly be i...",
"The processes by which communities come together, attract new members, and develop over time is a central research issue in the social sciences - political movements, professional organizations, and religious denominations all provide fundamental examples of such communities. In the digital domain, on-line groups are becoming increasingly prominent due to the growth of community and social networking sites such as MySpace and LiveJournal. However, the challenge of collecting and analyzing large-scale time-resolved data on social groups and communities has left most basic questions about the evolution of such groups largely unresolved: what are the structural features that influence whether individuals will join communities, which communities will grow rapidly, and how do the overlaps among pairs of communities change over time.Here we address these questions using two large sources of data: friendship links and community membership on LiveJournal, and co-authorship and conference publications in DBLP. Both of these datasets provide explicit user-defined communities, where conferences serve as proxies for communities in DBLP. We study how the evolution of these communities relates to properties such as the structure of the underlying social networks. We find that the propensity of individuals to join communities, and of communities to grow rapidly, depends in subtle ways on the underlying network structure. For example, the tendency of an individual to join a community is influenced not just by the number of friends he or she has within the community, but also crucially by how those friends are connected to one another. We use decision-tree techniques to identify the most significant structural determinants of these properties. We also develop a novel methodology for measuring movement of individuals between communities, and show how such movements are closely aligned with changes in the topics of interest within the communities.",
"Our personal social networks are big and cluttered, and currently there is no good way to organize them. Social networking sites allow users to manually categorize their friends into social circles (e.g. 'circles' on Google+, and 'lists' on Facebook and Twitter), however they are laborious to construct and must be updated whenever a user's network grows. We define a novel machine learning task of identifying users' social circles. We pose the problem as a node clustering problem on a user's ego-network, a network of connections between her friends. We develop a model for detecting circles that combines network structure as well as user profile information. For each circle we learn its members and the circle-specific user profile similarity metric. Modeling node membership to multiple circles allows us to detect overlapping as well as hierarchically nested circles. Experiments show that our model accurately identifies circles on a diverse set of data from Facebook, Google+, and Twitter for all of which we obtain hand-labeled ground-truth.",
"",
"Relations between users on social media sites often reflect a mixture of positive (friendly) and negative (antagonistic) interactions. In contrast to the bulk of research on social networks that has focused almost exclusively on positive interpretations of links between people, we study how the interplay between positive and negative relationships affects the structure of on-line social networks. We connect our analyses to theories of signed networks from social psychology. We find that the classical theory of structural balance tends to capture certain common patterns of interaction, but that it is also at odds with some of the fundamental phenomena we observe --- particularly related to the evolving, directed nature of these on-line networks. We then develop an alternate theory of status that better explains the observed edge signs and provides insights into the underlying social mechanisms. Our work provides one of the first large-scale evaluations of theories of signed networks using on-line datasets, as well as providing a perspective for reasoning about social media sites.",
""
]
} |
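The random-walk family of node-embedding methods referenced in the row above starts from plain walks like the sketch below, whose node sequences are then fed to a skip-gram model as "sentences". This toy version uses uniform transitions; the paper's supervised Q-walk would replace the uniform choice with a learned Q-value-guided policy. The graph here is a hypothetical example.

```python
import random

# Toy undirected graph as an adjacency list (hypothetical example).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def random_walk(graph, start, length, rng):
    """Uniform random walk of `length` nodes starting at `start`."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(graph[walk[-1]]))  # uniform next-hop choice
    return walk

rng = random.Random(42)
walks = [random_walk(graph, n, 5, rng) for n in graph]
```

In DeepWalk-style pipelines, many such walks per node are generated and the co-occurrence of nodes within a window defines the embedding objective.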
1710.00978 | 2762223719 | Automatic feature learning algorithms are at the forefront of modern day machine learning research. We present a novel algorithm, supervised Q-walk, which applies Q-learning to generate random walks on graphs such that the walks prove to be useful for learning node features suitable for tackling with the node classification problem. We present another novel algorithm, k-hops neighborhood based confidence values learner, which learns confidence values of labels for unlabelled nodes in the network without first learning the node embedding. These confidence values aid in learning an apt reward function for Q-learning. We demonstrate the efficacy of supervised Q-walk approach over existing state-of-the-art random walk based node embedding learners in solving the single multi-label multi-class node classification problem using several real world datasets. Summarising, our approach represents a novel state-of-the-art technique to learn features, for nodes in networks, tailor-made for dealing with the node classification problem. | Factorization-based techniques represent the edges of a network as matrices, which are factorized to obtain the embeddings. The matrix representation and its factorization are performed using various techniques @cite_0 @cite_20 @cite_17 @cite_18 @cite_6 . These methods may suffer from scalability issues on large graph datasets, and sparse matrix representations need special attention. | {
"cite_N": [
"@cite_18",
"@cite_6",
"@cite_0",
"@cite_20",
"@cite_17"
],
"mid": [
"2090891622",
"2387462954",
"2053186076",
"2156718197",
"2142535891"
],
"abstract": [
"In this paper, we present GraRep , a novel model for learning vertex representations of weighted graphs. This model learns low dimensional vectors to represent vertices appearing in a graph and, unlike existing work, integrates global structural information of the graph into the learning process. We also formally analyze the connections between our work and several previous research efforts, including the DeepWalk model of as well as the skip-gram model with negative sampling of We conduct experiments on a language network, a social network as well as a citation network and show that our learned global representations can be effectively used as features in tasks such as clustering, classification and visualization. Empirical results demonstrate that our representation significantly outperforms other state-of-the-art methods in such tasks.",
"Graph embedding algorithms embed a graph into a vector space where the structure and the inherent properties of the graph are preserved. The existing graph embedding methods cannot preserve the asymmetric transitivity well, which is a critical property of directed graphs. Asymmetric transitivity depicts the correlation among directed edges, that is, if there is a directed path from u to v, then there is likely a directed edge from u to v. Asymmetric transitivity can help in capturing structures of graphs and recovering from partially observed graphs. To tackle this challenge, we propose the idea of preserving asymmetric transitivity by approximating high-order proximity which are based on asymmetric transitivity. In particular, we develop a novel graph embedding algorithm, High-Order Proximity preserved Embedding (HOPE for short), which is scalable to preserve high-order proximities of large scale graphs and capable of capturing the asymmetric transitivity. More specifically, we first derive a general formulation that cover multiple popular high-order proximity measurements, then propose a scalable embedding algorithm to approximate the high-order proximity measurements based on their general formulation. Moreover, we provide a theoretical upper bound on the RMSE (Root Mean Squared Error) of the approximation. Our empirical experiments on a synthetic dataset and three real-world datasets demonstrate that HOPE can approximate the high-order proximities significantly better than the state-of-art algorithms and outperform the state-of-art algorithms in tasks of reconstruction, link prediction and vertex recommendation.",
"Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in",
"Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low dimensional manifold embedded in a higher dimensional space. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality preserving properties and a natural connection to clustering. Several applications are considered.",
"Natural graphs, such as social networks, email graphs, or instant messaging patterns, have become pervasive through the internet. These graphs are massive, often containing hundreds of millions of nodes and billions of edges. While some theoretical models have been proposed to study such graphs, their analysis is still difficult due to the scale and nature of the data. We propose a framework for large-scale graph decomposition and inference. To resolve the scale, our framework is distributed so that the data are partitioned over a shared-nothing set of machines. We propose a novel factorization technique that relies on partitioning a graph so as to minimize the number of neighboring vertices rather than edges across partitions. Our decomposition is based on a streaming algorithm. It is network-aware as it adapts to the network topology of the underlying computational hardware. We use local copies of the variables and an efficient asynchronous communication protocol to synchronize the replicated values in order to perform most of the computation without having to incur the cost of network communication. On a graph of 200 million vertices and 10 billion edges, derived from an email communication network, our algorithm retains convergence properties while allowing for almost linear scalability in the number of computers."
]
} |
1710.00978 | 2762223719 | Automatic feature learning algorithms are at the forefront of modern day machine learning research. We present a novel algorithm, supervised Q-walk, which applies Q-learning to generate random walks on graphs such that the walks prove to be useful for learning node features suitable for tackling with the node classification problem. We present another novel algorithm, k-hops neighborhood based confidence values learner, which learns confidence values of labels for unlabelled nodes in the network without first learning the node embedding. These confidence values aid in learning an apt reward function for Q-learning. We demonstrate the efficacy of supervised Q-walk approach over existing state-of-the-art random walk based node embedding learners in solving the single multi-label multi-class node classification problem using several real world datasets. Summarising, our approach represents a novel state-of-the-art technique to learn features, for nodes in networks, tailor-made for dealing with the node classification problem. | Random walk based approaches perform random walks on networks to obtain the embeddings. Two popular techniques are DeepWalk @cite_25 and node2vec @cite_8 . node2vec @cite_8 is a semi-supervised algorithmic framework which showcases strategies to perform random walks such that nodes which are homophilic and or structurally equivalent end up getting similar embeddings. The random walks are guided by a heuristic which involves computing distance of the next possible nodes from the previous node given the current node. DeepWalk can be considered as a special case of node2vec with @math and @math where @math are hyperparameters in node2vec which decide the tradeoff between depth-first and breadth-first sampling. | {
"cite_N": [
"@cite_25",
"@cite_8"
],
"mid": [
"2154851992",
"2366141641"
],
"abstract": [
"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.",
"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks."
]
} |
1710.00935 | 2951308125 | This paper proposes a method to modify traditional convolutional neural networks (CNNs) into interpretable CNNs, in order to clarify knowledge representations in high conv-layers of CNNs. In an interpretable CNN, each filter in a high conv-layer represents a certain object part. We do not need any annotations of object parts or textures to supervise the learning process. Instead, the interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. Our method can be applied to different types of CNNs with different structures. The clear knowledge representation in an interpretable CNN can help people understand the logics inside a CNN, i.e., based on which patterns the CNN makes the decision. Experiments showed that filters in an interpretable CNN were more semantically meaningful than those in traditional CNNs. | The interpretability and the discrimination power are two important properties of a model @cite_18 . In recent years, different methods are developed to explore the semantics hidden inside a CNN. Many statistical methods @cite_26 @cite_0 @cite_36 have been proposed to analyze CNN features. | {
"cite_N": [
"@cite_0",
"@cite_18",
"@cite_26",
"@cite_36"
],
"mid": [
"2949667497",
"2610018085",
"1673923490",
"1661149683"
],
"abstract": [
"Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.",
"We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a broad data set of visual concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are given labels across a range of objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability of units is equivalent to random linear combinations of units, then we apply our method to compare the latent representations of various networks when trained to solve different supervised and self-supervised training tasks. We further analyze the effect of training iterations, compare networks trained with different initializations, examine the impact of network depth and width, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.",
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",
"We introduce an approach for analyzing the variation of features generated by convolutional neural networks (CNNs) trained on large image datasets with respect to scene factors that occur in natural images. Such factors may include object style, 3D viewpoint, color, and scene lighting configuration. Our approach analyzes CNN feature responses with respect to different scene factors by controlling for them via rendering using a large database of 3D CAD models. The rendered images are presented to a trained CNN and responses for different layers are studied with respect to the input scene factors. We perform a linear decomposition of the responses based on knowledge of the input scene factors and analyze the resulting components. In particular, we quantify their relative importance in the CNN responses and visualize them using principal component analysis. We show qualitative and quantitative results of our study on three trained CNNs: AlexNet [18], Places [43], and Oxford VGG [8]. We observe important differences across the different networks and CNN layers with respect to different scene factors and object categories. Finally, we demonstrate that our analysis based on computer-generated imagery translates to the network representation of natural images."
]
} |
1710.00935 | 2951308125 | This paper proposes a method to modify traditional convolutional neural networks (CNNs) into interpretable CNNs, in order to clarify knowledge representations in high conv-layers of CNNs. In an interpretable CNN, each filter in a high conv-layer represents a certain object part. We do not need any annotations of object parts or textures to supervise the learning process. Instead, the interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. Our method can be applied to different types of CNNs with different structures. The clear knowledge representation in an interpretable CNN can help people understand the logics inside a CNN, i.e., based on which patterns the CNN makes the decision. Experiments showed that filters in an interpretable CNN were more semantically meaningful than those in traditional CNNs. | Visualization of filters in a CNN is the most direct way of exploring the pattern hidden inside a neural unit. @cite_13 @cite_24 @cite_19 showed the appearance that maximized the score of a given unit. up-convolutional nets @cite_29 were used to invert CNN feature maps to images. | {
"cite_N": [
"@cite_24",
"@cite_19",
"@cite_29",
"@cite_13"
],
"mid": [
"2949987032",
"2962851944",
"2273348943",
"2952186574"
],
"abstract": [
"Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG and SIFT more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.",
"This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].",
"Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets."
]
} |
1710.00935 | 2951308125 | This paper proposes a method to modify traditional convolutional neural networks (CNNs) into interpretable CNNs, in order to clarify knowledge representations in high conv-layers of CNNs. In an interpretable CNN, each filter in a high conv-layer represents a certain object part. We do not need any annotations of object parts or textures to supervise the learning process. Instead, the interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. Our method can be applied to different types of CNNs with different structures. The clear knowledge representation in an interpretable CNN can help people understand the logics inside a CNN, i.e., based on which patterns the CNN makes the decision. Experiments showed that filters in an interpretable CNN were more semantically meaningful than those in traditional CNNs. | Some studies go beyond passive visualization and actively retrieve certain units from CNNs for different applications. Like the extraction of mid-level features @cite_5 from images, pattern retrieval mainly learns mid-level representations from conv-layers. Zhou @cite_2 @cite_37 selected units from feature maps to describe "scenes". Simon discovered objects from feature maps of unlabeled images @cite_21 , and selected a certain filter to describe each semantic part in a supervised fashion @cite_11 . @cite_38 extracted certain neural units from a filter's feature map to describe an object part in a weakly-supervised manner. @cite_35 used a gradient-based method to interpret visual question-answering models. Studies of @cite_27 @cite_15 @cite_30 @cite_8 selected neural units with specific meanings from CNNs for various applications. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_37",
"@cite_35",
"@cite_8",
"@cite_21",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_11"
],
"mid": [
"2741266309",
"2954346764",
"2950328304",
"2521737809",
"2739510858",
"2949820118",
"2738171447",
"1899185266",
"2951702175",
"2736351703",
"2949194058"
],
"abstract": [
"This paper addresses the problem of automatically inferring personality traits of people talking to a camera. As in many other computer vision problems, Convolutional Neural Networks (CNN) models have shown impressive results. However, despite of the success in terms of performance, it is unknown what internal representation emerges in the CNN. This paper presents a deep study on understanding why CNN models are performing surprisingly well in this complex problem. We use current techniques on CNN model interpretability, combined with face detection and Action Unit (AUs) recognition systems, to perform our quantitative studies. Our results show that: (1) face provides most of the discriminative information for personality trait inference, and (2) the internal CNN representations mainly analyze key face regions such as eyes, nose, and mouth. Finally, we study the contribution of AUs for personality trait inference, showing the influence of certain AUs in the facial trait judgments.",
"This paper proposes a learning strategy that extracts object-part concepts from a pre-trained convolutional neural network (CNN), in an attempt to 1) explore explicit semantics hidden in CNN units and 2) gradually grow a semantically interpretable graphical model on the pre-trained CNN for hierarchical object understanding. Given part annotations on very few (e.g., 3-12) objects, our method mines certain latent patterns from the pre-trained CNN and associates them with different semantic parts. We use a four-layer And-Or graph to organize the mined latent patterns, so as to clarify their internal semantic hierarchy. Our method is guided by a small number of part annotations, and it achieves superior performance (about 13%-107% improvement) in part center prediction on the PASCAL VOC and ImageNet datasets.",
"In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them",
"Deep neural networks have shown striking progress and obtained state-of-the-art results in many AI research fields in the recent years. However, it is often unsatisfying to not know why they predict what they do. In this paper, we address the problem of interpreting Visual Question Answering (VQA) models. Specifically, we are interested in finding what part of the input (pixels in images or words in questions) the VQA model focuses on while answering the question. To tackle this problem, we use two visualization techniques -- guided backpropagation and occlusion -- to find important words in the question and important regions in the image. We then present qualitative and quantitative analyses of these importance maps. We found that even without explicit attention mechanisms, VQA models may sometimes be implicitly attending to relevant regions in the image, and often to appropriate words in the question.",
"The predictive power of neural networks often costs model interpretability. Several techniques have been developed for explaining model outputs in terms of input features; however, it is difficult to translate such interpretations into actionable insight. Here, we propose a framework to analyze predictions in terms of the model's internal features by inspecting information flow through the network. Given a trained network and a test image, we select neurons by two metrics, both measured over a set of images created by perturbations to the input image: (1) magnitude of the correlation between the neuron activation and the network output and (2) precision of the neuron activation. We show that the former metric selects neurons that exert large influence over the network output while the latter metric selects neurons that activate on generalizable features. By comparing the sets of neurons selected by these two metrics, our framework suggests a way to investigate the internal attention mechanisms of convolutional neural networks.",
"Part models of object categories are essential for challenging recognition tasks, where differences in categories are subtle and only reflected in appearances of small parts of the object. We present an approach that is able to learn part models in a completely unsupervised manner, without part annotations and even without given bounding boxes during learning. The key idea is to find constellations of neural activation patterns computed using convolutional neural networks. In our experiments, we outperform existing approaches for fine-grained recognition on the CUB200-2011, NA birds, Oxford PETS, and Oxford Flowers dataset in case no part or bounding box annotations are available and achieve state-of-the-art performance for the Stanford Dog dataset. We also show the benefits of neural constellation models as a data augmentation technique for fine-tuning. Furthermore, our paper unites the areas of generic and fine-grained classification, since our approach is suitable for both scenarios. The source code of our method is available online at this http URL",
"Recent work has demonstrated the emergence of semantic object-part detectors in activation patterns of convolutional neural networks (CNNs), but did not account for the distributed multi-layer neural activations in such networks. In this work, we propose a novel method to extract distributed patterns of activations from a CNN and show that such patterns correspond to high-level visual attributes. We propose an unsupervised learning module that sits above a pre-trained CNN and learns distributed activation patterns of the network. We utilize elastic non-negative matrix factorization to analyze the responses of a pretrained CNN to an input image and extract salient image regions. The corresponding patterns of neural activations for the extracted salient regions are then clustered via unsupervised deep embedding for clustering (DEC) framework. We demonstrate that these distributed activations contain high-level image features that could be explicitly used for image classification.",
"With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.",
"The goal of this paper is to discover a set of discriminative patches which can serve as a fully unsupervised mid-level visual representation. The desired patches need to satisfy two requirements: 1) to be representative, they need to occur frequently enough in the visual world; 2) to be discriminative, they need to be different enough from the rest of the visual world. The patches could correspond to parts, objects, \"visual phrases\", etc. but are not restricted to be any one of them. We pose this as an unsupervised discriminative clustering problem on a huge dataset of image patches. We use an iterative procedure which alternates between clustering and training discriminative classifiers, while applying careful cross-validation at each step to prevent overfitting. The paper experimentally demonstrates the effectiveness of discriminative patches as an unsupervised mid-level visual representation, suggesting that it could be used in place of visual words for many tasks. Furthermore, discriminative patches can also be used in a supervised regime, such as scene classification, where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset.",
"Video blogs (vlogs) are a popular media form for people to present themselves. If a vlogger is a job candidate, vlog content can be useful for automatically assessing the candidate's traits, as well as potential interviewability. Using a dataset from the CVPR ChaLearn competition, we build a model predicting Big Five personality trait scores and interviewability of vloggers, explicitly targeting explainability of the system output to humans without technical background. We use human-explainable features as input, and a linear model for the system's building blocks. Four multimodal feature representations are constructed to capture facial expression, movement, and linguistic usage. For each, PCA is used for dimensionality reduction and simple linear regression for the predictive model. Our system's accuracy lies in the middle of the quantitative competition chart, while we can trace back the reasoning behind each score and generate a qualitative analysis report per video.",
"Current fine-grained classification approaches often rely on a robust localization of object parts to extract localized feature representations suitable for discrimination. However, part localization is a challenging task due to the large variation of appearance and pose. In this paper, we show how pre-trained convolutional neural networks can be used for robust and efficient object part discovery and localization without the necessity to actually train the network on the current dataset. Our approach called \"part detector discovery\" (PDD) is based on analyzing the gradient maps of the network outputs and finding activation centers spatially related to annotated semantic parts or bounding boxes. This allows us not just to obtain excellent performance on the CUB200-2011 dataset, but in contrast to previous approaches also to perform detection and bird classification jointly without requiring a given bounding box annotation during testing and ground-truth parts during training. The code is available at this http URL and this https URL"
]
} |
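The discriminative-patch abstract above alternates between clustering image patches and training discriminative classifiers. The clustering half of such a loop can be sketched with plain k-means over patch descriptors; this is a toy numpy stand-in under assumed inputs, not the authors' pipeline:

```python
import numpy as np

def kmeans(X, k, n_iter=20):
    """Plain k-means with deterministic farthest-point initialization:
    a toy stand-in for the clustering step of an alternating
    cluster-then-classify loop over patch descriptors X of shape (n, d)."""
    # Farthest-point initialization: start at X[0], then repeatedly add
    # the point farthest from all chosen centers.
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):
        # Assignment step: each descriptor goes to its nearest center.
        labels = np.argmin(
            np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), axis=1)
        # Update step: recompute each non-empty cluster's mean.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

In the full procedure described in the abstract, each resulting cluster would then seed a discriminative classifier, with careful cross-validation between the two steps to prevent overfitting.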
1710.00935 | 2951308125 | This paper proposes a method to modify traditional convolutional neural networks (CNNs) into interpretable CNNs, in order to clarify knowledge representations in high conv-layers of CNNs. In an interpretable CNN, each filter in a high conv-layer represents a certain object part. We do not need any annotations of object parts or textures to supervise the learning process. Instead, the interpretable CNN automatically assigns an object part to each filter in a high conv-layer during the learning process. Our method can be applied to different types of CNNs with different structures. The clear knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e., based on which patterns the CNN makes its decision. Experiments showed that filters in an interpretable CNN were more semantically meaningful than those in traditional CNNs. | Many methods have been developed to diagnose the representations of a black-box model. The LIME method proposed by Ribeiro et al. @cite_14 , influence functions @cite_28 , and gradient-based visualization methods @cite_25 , @cite_20 , and @cite_22 extracted image regions that were responsible for each network output, in order to interpret network representations. These methods require people to manually check the image regions accountable for the label prediction of each test image. @cite_31 extracted relationships between representations of various categories from a CNN. Lakkaraju et al. @cite_6 and Zhang et al. @cite_23 explored unknown knowledge of CNNs via active annotations and active question-answering. In contrast, given an interpretable CNN, people can directly identify the object parts (filters) that are used for decisions during the inference procedure. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_28",
"@cite_6",
"@cite_23",
"@cite_31",
"@cite_25",
"@cite_20"
],
"mid": [
"2282821441",
"2951399320",
"",
"2583689529",
"",
"2749641708",
"2962981568",
"2616247523"
],
"abstract": [
"Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.",
"In this work, we propose CLass-Enhanced Attentive Response (CLEAR): an approach to visualize and understand the decisions made by deep neural networks (DNNs) given a specific input. CLEAR facilitates the visualization of attentive regions and levels of interest of DNNs during the decision-making process. It also enables the visualization of the most dominant classes associated with these attentive regions of interest. As such, CLEAR can mitigate some of the shortcomings of heatmap-based methods associated with decision ambiguity, and allows for better insights into the decision-making process of DNNs. Quantitative and qualitative experiments across three different datasets demonstrate the efficacy of CLEAR for gaining a better understanding of the inner workings of DNNs during the decision-making process.",
"",
"Predictive models deployed in the real world may assign incorrect labels to instances with high confidence. Such errors or unknown unknowns are rooted in model incompleteness, and typically arise because of the mismatch between training data and the cases encountered at test time. As the models are blind to such errors, input from an oracle is needed to identify these failures. In this paper, we formulate and address the problem of informed discovery of unknown unknowns of any given predictive model where unknown unknowns occur due to systematic biases in the training data. We propose a model-agnostic methodology which uses feedback from an oracle to both identify unknown unknowns and to intelligently guide the discovery. We employ a two-phase approach which first organizes the data into multiple partitions based on the feature similarity of instances and the confidence scores assigned by the predictive model, and then utilizes an explore-exploit strategy for discovering unknown unknowns across these partitions. We demonstrate the efficacy of our framework by varying the underlying causes of unknown unknowns across various applications. To the best of our knowledge, this paper presents the first algorithmic approach to the problem of discovering unknown unknowns of predictive models.",
"",
"The necessity of depth in efficient neural network learning has led to a family of designs referred to as very deep networks (e.g., GoogLeNet has 22 layers). As the depth increases even further, the need for appropriate tools to explore the space of hidden representations becomes paramount. For instance, beyond the gain in generalization, one may be interested in checking the change in class compositions as additional layers are added. Classical PCA or eigen-spectrum based global approaches do not model the complex inter-class relationships. In this work, we propose a novel decomposition referred to as multiresolution matrix factorization that models hierarchical and compositional structure in symmetric matrices. This new decomposition efficiently infers semantic relationships among deep representations of multiple classes, even when they are not explicitly trained to do so. We show that the proposed factorization is a valuable tool in understanding the landscape of hidden representations, in adapting existing architectures for new tasks and also for designing new architectures using interpretable, human-relatable, class-by-class relationships that we hope the network will learn.",
"As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions. In recent years, a number of image saliency methods have been developed to summarize where highly complex neural networks “look” in an image for evidence for their predictions. However, these techniques are limited by their heuristic nature and architectural constraints. In this paper, we make two main contributions: First, we propose a general framework for learning different kinds of explanations for any black box algorithm. Second, we specialise the framework to find the part of an image most responsible for a classifier decision. Unlike previous works, our method is model-agnostic and testable because it is grounded in explicit and interpretable image perturbations.",
"We propose a technique for producing \"visual explanations\" for decisions from a large class of CNN-based models, making them more transparent. Our approach - Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept, flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, GradCAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. VQA) or reinforcement learning, without any architectural changes or re-training. We combine GradCAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes (showing that seemingly unreasonable predictions have reasonable explanations), (b) are robust to adversarial images, (c) outperform previous methods on weakly-supervised localization, (d) are more faithful to the underlying model and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, our visualizations show that even non-attention based models can localize inputs. Finally, we conduct human studies to measure if GradCAM explanations help users establish trust in predictions from deep networks and show that GradCAM helps untrained users successfully discern a \"stronger\" deep network from a \"weaker\" one. Our code is available at this https URL A demo and a video of the demo can be found at this http URL and youtu.be COjUB9Izk6E."
]
} |
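The Grad-CAM abstract above describes weighting a conv layer's activation maps by the globally average-pooled gradients of the target class score, then applying a ReLU. A minimal array-level sketch of that computation, on synthetic activations and gradients rather than a real network:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer.

    activations: (K, H, W) feature maps A_k.
    gradients:   (K, H, W) gradients of the target class score w.r.t. A_k.
    Returns an (H, W) map normalized to [0, 1].
    """
    # Channel importance weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))             # (K,)
    # Weighted sum of activation maps over channels, then ReLU.
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1]
    return cam
```

Upsampling the coarse map to the input resolution and overlaying it on the image (as the paper does) is omitted here.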
1710.00980 | 2763717489 | In mobile millimeter wave (mmWave) systems, energy is a scarce resource due to the large losses in the channel and high energy usage by analog-to-digital converters (ADC), which scales with bandwidth. In this paper, we consider a communication architecture that integrates the sub-6 GHz and mmWave technologies in 5G cellular systems. In order to mitigate the energy scarcity in mmWave systems, we investigate the rate-optimal and energy-efficient physical layer resource allocation jointly across the sub-6 GHz and mmWave interfaces. First, we formulate an optimization problem in which the objective is to maximize the achievable sum rate under power constraints at the transmitter and receiver. Our formulation explicitly takes into account the energy consumption in integrated-circuit components, and assigns the optimal power and bandwidth across the interfaces. We consider the settings with no channel state information and partial channel state information at the transmitter and under high and low SNR scenarios. Second, we investigate the energy efficiency (EE) defined as the ratio between the amount of data transmitted and the corresponding incurred cost in terms of power. We use fractional programming and Dinkelbach's algorithm to solve the EE optimization problem. Our results prove that despite the availability of huge bandwidths at the mmWave interface, it may be optimal (in terms of achievable sum rate and energy efficiency) to utilize it partially. Moreover, depending on the sub-6 GHz and mmWave channel conditions and total power budget, it may be optimal to activate only one of the interfaces. | Energy-efficient transceiver architectures, such as low-resolution ADCs and hybrid analog-digital combining, have attracted significant interest. The limits of communication over the additive white Gaussian noise channel with low-resolution (1-3 bit) ADCs at the receiver are studied in @cite_22 .
The bounds on the capacity of the MIMO channel with 1-bit ADCs in the high and low SNR regimes are derived in @cite_30 and @cite_0 , respectively. The joint optimization of the ADC resolution and the number of antennas in a MIMO channel is studied in @cite_17 . While @cite_5 provides efficient hybrid precoding and combining algorithms for sparse mmWave channels that perform close to the fully digital solution, @cite_4 combines efficient channel estimation with the hybrid precoding and combining algorithm of @cite_5 . | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_22",
"@cite_0",
"@cite_5",
"@cite_17"
],
"mid": [
"2001991593",
"",
"2098725264",
"2141058125",
"2053521124",
"2402821993"
],
"abstract": [
"Millimeter wave (mmWave) is a viable technology for future cellular systems. With bandwidths on the order of a gigahertz, high-resolution analog-to-digital converters (ADCs) become a power consumption bottleneck. One solution is to employ very low resolution one-bit ADCs. This paper analyzes the flat fading multiple-input multiple-output (MIMO) channel with one-bit ADC. Bounds on the high signal-to-noise ratio (SNR) capacity are derived for the single-input multiple-output (SIMO) channel and the general MIMO channel. The results show how the number of paths, number of transmit antennas, and number of receive antennas impact the capacity at high SNR.",
"",
"As communication systems scale up in speed and bandwidth, the cost and power consumption of high-precision (e.g., 8-12 bits) analog-to-digital conversion (ADC) becomes the limiting factor in modern transceiver architectures based on digital signal processing. In this work, we explore the impact of lowering the precision of the ADC on the performance of the communication link. Specifically, we evaluate the communication limits imposed by low-precision ADC (e.g., 1-3 bits) for transmission over the real discrete-time additive white Gaussian noise (AWGN) channel, under an average power constraint on the input. For an ADC with K quantization bins (i.e., a precision of log2 K bits), we show that the input distribution need not have any more than K+1 mass points to achieve the channel capacity. For 2-bin (1-bit) symmetric quantization, this result is tightened to show that binary antipodal signaling is optimum for any signal-to-noise ratio (SNR). For multi-bit quantization, a dual formulation of the channel capacity problem is used to obtain tight upper bounds on the capacity. The cutting-plane algorithm is employed to compute the capacity numerically, and the results obtained are used to make the following encouraging observations: (a) up to a moderately high SNR of 20 dB, 2-3 bit quantization results in only 10-20% reduction of spectral efficiency compared to unquantized observations, (b) standard equiprobable pulse amplitude modulated input with quantizer thresholds set to implement maximum likelihood hard decisions is asymptotically optimum at high SNR, and works well at low to moderate SNRs as well.",
"We study the performance of multi-input multi-output (MIMO) channels with coarsely quantized outputs in the low signal-to-noise ratio (SNR) regime, where the channel is perfectly known at the receiver. This analysis is of interest in the context of ultra-wideband (UWB) communications from two aspects. First, the available power is spread over such a large frequency band that the power spectral density is extremely low and thus the SNR is low. Second, the analog-to-digital converters (ADCs) for such high bandwidth signals should be low-resolution, in order to reduce their cost and power consumption. In this paper we consider the extreme case of only 1-bit ADC for each receive signal component. We compute the mutual information up to second order in the SNR and study the impact of quantization. We show that, up to first order in SNR, the mutual information of the 1-bit quantized system degrades only by a factor of 2/π compared to the system with infinite resolution, independent of the actual MIMO channel realization. With channel state information (CSI) only at the receiver, we show that QPSK is, up to the second order, the best among all distributions with independent components. We also elaborate on the ergodic capacity under this scheme in a Rayleigh flat-fading environment.",
"Millimeter wave (mmWave) signals experience orders-of-magnitude more pathloss than the microwave signals currently used in most wireless applications and all cellular systems. MmWave systems must therefore leverage large antenna arrays, made possible by the decrease in wavelength, to combat pathloss with beamforming gain. Beamforming with multiple data streams, known as precoding, can be used to further improve mmWave spectral efficiency. Both beamforming and precoding are done digitally at baseband in traditional multi-antenna systems. The high cost and power consumption of mixed-signal devices in mmWave systems, however, make analog processing in the RF domain more attractive. This hardware limitation restricts the feasible set of precoders and combiners that can be applied by practical mmWave transceivers. In this paper, we consider transmit precoding and receiver combining in mmWave systems with large antenna arrays. We exploit the spatial structure of mmWave channels to formulate the precoding combining problem as a sparse reconstruction problem. Using the principle of basis pursuit, we develop algorithms that accurately approximate optimal unconstrained precoders and combiners such that they can be implemented in low-cost RF hardware. We present numerical results on the performance of the proposed algorithms and show that they allow mmWave systems to approach their unconstrained performance limits, even when transceiver hardware constraints are considered.",
""
]
} |
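The surveyed works study achievable rates under coarse ADCs whose power consumption grows with both bandwidth and resolution. A recurring finding (see the abstract quoted for @cite_15 below) is that under a receiver power budget there is an optimal finite bandwidth. A toy numpy sketch of that trade-off using the common additive-quantization-noise approximation; every constant here (P_tx, N0, P_rx, c_adc) is an invented placeholder, not a value from any cited paper:

```python
import numpy as np

# Illustrative constants (placeholders): transmit power (W), noise PSD (W/Hz),
# receiver power budget (W), ADC energy cost per conversion step (J).
P_tx, N0 = 1.0, 1e-9
P_rx, c_adc = 0.2, 1e-12

def rate(W):
    """Achievable rate at bandwidth W when the ADC resolution b is whatever
    the receiver budget allows: P_adc = c_adc * W * 2**b => b = log2(P_rx / (c_adc * W)).
    Uses the additive-quantization-noise model with distortion rho ~ (pi*sqrt(3)/2) * 2**(-2b)."""
    b = np.log2(P_rx / (c_adc * W))
    rho = (np.pi * np.sqrt(3) / 2) * 2.0 ** (-2 * b)   # quantization distortion factor
    alpha = 1 - rho
    snr = P_tx / (N0 * W)
    return W * np.log2(1 + alpha * snr / (1 + rho * snr))

W_grid = np.logspace(6, 11, 2000)        # 1 MHz .. 100 GHz
W_star = W_grid[int(np.argmax(rate(W_grid)))]
```

Widening the bandwidth first raises the rate, but eventually forces so few ADC bits that quantization noise dominates, so the rate peaks at an interior bandwidth and then falls.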
1710.00980 | 2763717489 | In mobile millimeter wave (mmWave) systems, energy is a scarce resource due to the large losses in the channel and high energy usage by analog-to-digital converters (ADC), which scales with bandwidth. In this paper, we consider a communication architecture that integrates the sub-6 GHz and mmWave technologies in 5G cellular systems. In order to mitigate the energy scarcity in mmWave systems, we investigate the rate-optimal and energy-efficient physical layer resource allocation jointly across the sub-6 GHz and mmWave interfaces. First, we formulate an optimization problem in which the objective is to maximize the achievable sum rate under power constraints at the transmitter and receiver. Our formulation explicitly takes into account the energy consumption in integrated-circuit components, and assigns the optimal power and bandwidth across the interfaces. We consider the settings with no channel state information and partial channel state information at the transmitter and under high and low SNR scenarios. Second, we investigate the energy efficiency (EE) defined as the ratio between the amount of data transmitted and the corresponding incurred cost in terms of power. We use fractional programming and Dinkelbach's algorithm to solve the EE optimization problem. Our results prove that despite the availability of huge bandwidths at the mmWave interface, it may be optimal (in terms of achievable sum rate and energy efficiency) to utilize it partially. Moreover, depending on the sub-6 GHz and mmWave channel conditions and total power budget, it may be optimal to activate only one of the interfaces. | Although there has been an extensive amount of work on optimizing the mmWave receiver architecture (e.g., in terms of ADCs), the effect of bandwidth on mmWave performance has not been fully investigated. To the best of our knowledge, only the authors in @cite_15 have studied the effect of bandwidth on the performance of power-constrained mmWave receivers.
Compared with @cite_15 , we consider a setting in which the transmitter and receiver are power constrained. In this case, an optimal power and bandwidth allocation is derived to maximize the achievable sum rate and energy efficiency. In addition, the authors in @cite_15 treat the number of ADC quantization bits as an optimization parameter, while we assume that the ADC architecture is fixed and optimize the transmit power and bandwidth for a MIMO architecture. We derive closed-form expressions for the optimal power and bandwidth allocation across the sub-6 GHz and mmWave interfaces. To the best of our knowledge, no previous work investigates the joint effect of bandwidth and transmit power on the performance of integrated sub-6 GHz mmWave systems. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1892643793"
],
"abstract": [
"The wide bandwidth and large number of antennas used in millimeter wave systems put a heavy burden on the power consumption at the receiver. In this paper, using an additive quantization noise model, the effect of analog-digital conversion (ADC) resolution and bandwidth on the achievable rate is investigated for a multi-antenna system under a receiver power constraint. Two receiver architectures, analog and digital combining, are compared in terms of performance. Results demonstrate that: (i) For both analog and digital combining, there is a maximum bandwidth beyond which the achievable rate decreases; (ii) Depending on the operating regime of the system, analog combiner may have higher rate but digital combining uses less bandwidth when only ADC power consumption is considered, (iii) digital combining may have higher rate when power consumption of all the components in the receiver front-end are taken into account."
]
} |
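The abstract above states that the EE problem is solved with fractional programming and Dinkelbach's algorithm. A minimal sketch of that iteration on a toy scalar problem; the link constants B, N0, and P_c below are made-up placeholders, not values from the paper:

```python
import numpy as np

def dinkelbach(f, g, grid, tol=1e-6, max_iter=100):
    """Maximize the ratio f(x)/g(x) over a finite grid (g > 0).

    Dinkelbach's iteration repeatedly solves the parametric problem
    max_x f(x) - lam * g(x) and updates lam to the new ratio; the
    parametric optimum reaches zero exactly at the optimal ratio."""
    lam = 0.0
    x = grid[0]
    for _ in range(max_iter):
        vals = f(grid) - lam * g(grid)
        x = grid[int(np.argmax(vals))]
        if f(x) - lam * g(x) < tol:   # parametric optimum ~ 0: converged
            break
        lam = f(x) / g(x)
    return x, lam

# Toy energy-efficiency problem (placeholder constants): bits-per-Joule of
# an AWGN link whose consumed power is transmit power plus circuit power P_c.
B, N0, P_c = 1e6, 1e-9, 0.1
rate = lambda p: B * np.log2(1 + p / (N0 * B))   # achievable rate, bits/s
power = lambda p: p + P_c                        # total consumed power, W
p_grid = np.linspace(1e-4, 1.0, 10000)
p_star, ee_star = dinkelbach(rate, power, p_grid)
```

Note that the EE-optimal transmit power lands well below the maximum available power, echoing the abstract's point that it can be optimal to use resources only partially.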
1710.00980 | 2763717489 | In mobile millimeter wave (mmWave) systems, energy is a scarce resource due to the large losses in the channel and high energy usage by analog-to-digital converters (ADC), which scales with bandwidth. In this paper, we consider a communication architecture that integrates the sub-6 GHz and mmWave technologies in 5G cellular systems. In order to mitigate the energy scarcity in mmWave systems, we investigate the rate-optimal and energy-efficient physical layer resource allocation jointly across the sub-6 GHz and mmWave interfaces. First, we formulate an optimization problem in which the objective is to maximize the achievable sum rate under power constraints at the transmitter and receiver. Our formulation explicitly takes into account the energy consumption in integrated-circuit components, and assigns the optimal power and bandwidth across the interfaces. We consider the settings with no channel state information and partial channel state information at the transmitter and under high and low SNR scenarios. Second, we investigate the energy efficiency (EE) defined as the ratio between the amount of data transmitted and the corresponding incurred cost in terms of power. We use fractional programming and Dinkelbach's algorithm to solve the EE optimization problem. Our results prove that despite the availability of huge bandwidths at the mmWave interface, it may be optimal (in terms of achievable sum rate and energy efficiency) to utilize it partially. Moreover, depending on the sub-6 GHz and mmWave channel conditions and total power budget, it may be optimal to activate only one of the interfaces. | Beyond the classical mmWave communications and beamforming methods @cite_8 @cite_16 @cite_25 @cite_29 , recently, there have been proposals on leveraging out-of-band information in order to enhance the mmWave performance. 
The authors in @cite_9 propose a transform method to translate the spatial correlation matrix at the sub-6 GHz band into the correlation matrix of the mmWave channel. The authors in @cite_26 consider the @math GHz indoor WiFi network, and investigate the correlation between the estimated angle-of-arrival (AoA) at the sub-6 GHz band and the mmWave AoA in order to reduce the beam-steering overhead. The authors in @cite_24 propose a compressed beam selection method that is based on out-of-band spatial information obtained in the sub-6 GHz band. Our work is distinguished from the above-cited works as we investigate the optimal physical layer resource allocation across the sub-6 GHz and mmWave interfaces. In our proposed hybrid sub-6 GHz mmWave architecture, both interfaces can be simultaneously used for data transfer. In @cite_19 , we investigated the problem of optimal load division and scheduling across the sub-6 GHz and mmWave interfaces. | {
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_9",
"@cite_29",
"@cite_24",
"@cite_19",
"@cite_16",
"@cite_25"
],
"mid": [
"1504983838",
"2116334496",
"2598074348",
"2088212435",
"2593150731",
"2734743212",
"2113257905",
"2104074482"
],
"abstract": [
"Millimeter-wave communication achieves multi-Gbps data rates via highly directional beamforming to overcome pathloss and provide the desired SNR. Unfortunately, establishing communication with sufficiently narrow beamwidth to obtain the necessary link budget is a high overhead procedure in which the search space scales with device mobility and the product of the sender-receiver beam resolution. In this paper, we design, implement, and experimentally evaluate Blind Beam Steering (BBS) a novel architecture and algorithm that removes in-band overhead for directional mm-Wave link establishment. Our system architecture couples mm-Wave and legacy 2.4 5 GHz bands using out-of-band direction inference to establish (overhead-free) multi-Gbps mm-Wave communication. Further, BBS evaluates direction estimates retrieved from passively overheard 2.4 5 GHz frames to assure highest mm-Wave link quality on unobstructed direct paths. By removing in-band overhead, we leverage mm-Wave's very high throughput capabilities, beam-width scalability and provide robustness to mobility. We demonstrate that BBS achieves 97.8 accuracy estimating direction between pairing nodes using at least 5 detection band antennas. Further, BBS successfully detects unobstructed direct path conditions with an accuracy of 96.5 and reduces the IEEE 802.11ad beamforming training overhead by 81 .",
"The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices.",
"Channel estimation and beam training can be a source of significant overhead in establishing millimeter wave (mmWave) communication links, especially in high mobility applications like connected vehicles. In this paper, we highlight the opportunities and challenges associated with leveraging channel state information acquired at a lower frequency as a form of side information on a higher frequency channel. We focus on the relationship between spatial correlation matrices of sub-6 GHz and mmWave channels. We provide a transform that can be used to relate the spatial correlation matrix derived at one frequency to another much different frequency. We derive an expression for the excess mean squared error and use it to evaluate the performance experienced by using the transformed correlation in mmWave channel estimation.",
"Massive MIMO systems are well-suited for mm-Wave communications, as large arrays can be built with reasonable form factors, and the high array gains enable reasonable coverage even for outdoor communications. One of the main obstacles for using such systems in frequency-division duplex mode, namely, the high overhead for the feedback of channel state information (CSI) to the transmitter, can be mitigated by the recently proposed joint spatial division and multiplexing (JSDM) algorithm. In this paper, we analyze the performance of this algorithm in some realistic propagation channels that take into account the partial overlap of the angular spectra from different users, as well as the sparsity of mm-Wave channels. We formulate the problem of user grouping for two different objectives, namely, maximizing spatial multiplexing and maximizing total received power in a graph-theoretic framework. As the resulting problems are numerically difficult, we proposed (sub optimum) greedy algorithms as efficient solution methods. Numerical examples show that the different algorithms may be superior in different settings. We furthermore develop a new, “degenerate” version of JSDM that only requires average CSI at the transmitter and thus greatly reduces the computational burden. Evaluations in propagation channels obtained from ray tracing results, as well as in measured outdoor channels, show that this low-complexity version performs surprisingly well in mm-Wave channels.",
"Millimeter wave (mmWave) communication is one feasible solution for high data-rate applications like vehicular-to-everything communication and next generation cellular communication. Configuring mmWave links, which can be done through channel estimation or beam-selection, however, is a source of significant overhead. In this paper, we propose to use spatial information extracted at sub-6 GHz to help establish the mmWave link. First, we review the prior work on frequency dependent channel behavior and outline a simulation strategy to generate multi-band frequency dependent channels. Second, assuming: (i) narrowband channels and a fully digital architecture at sub-6 GHz; and (ii) wideband frequency selective channels, OFDM signaling, and an analog architecture at mmWave, we outline strategies to incorporate sub-6 GHz spatial information in mmWave compressed beam selection. We formulate compressed beam-selection as a weighted sparse signal recovery problem, and obtain the weighting information from sub-6 GHz channels. In addition, we outline a structured precoder combiner design to tailor the training to out-of-band information. We also extend the proposed out-of-band aided compressed beam-selection approach to leverage information from all active OFDM subcarriers. The simulation results for achievable rate show that out-of-band aided beam-selection can reduce the training overhead of in-band only beam-selection by 4x.",
"We propose a hybrid architecture that integrates RF (i.e., sub-6 GHz) and millimeter wave (mmWave) technologies for 5G cellular systems. In particular, communications in the mmWave band faces significant challenges due to variable channels, intermittent connectivity, and high energy usage. On the other hand, speeds for electronic processing of data is of the same order as typical rates for mmWave interfaces which makes the use of complex algorithms for tracking channel variations and adjusting resources accordingly impractical. Our proposed architecture integrates the RF and mmWave interfaces for beamforming and data transfer, and exploits the spatio-temporal correlations between the interfaces. Based on extensive experimentation in indoor and outdoor settings, we demonstrate that an integrated RF mmWave signaling and channel estimation scheme can remedy the problem of high energy usage and delay associated with mmWave beamforming. In addition, cooperation between two interfaces at the higher layers effectively addresses the high delays caused by highly intermittent mmWave connectivity. We design a scheduler that fully exploits the mmWave bandwidth, while the RF link acts as a fallback mechanism to prevent high delay. To this end, we formulate an optimal scheduling problem over the RF and mmWave interfaces where the goal is to maximize the delay-constrained throughput of the mmWave interface. We prove using subadditivity analysis that the optimal scheduling policy is based on a single threshold that can be easily adopted despite high link variations.",
"This paper presents propagation measurements in the presence of human activity for a 60 GHz channel. Series of 40-min-long measurements of the channel impulse response have been recorded with a sampling period of 1.6 ms, for a total duration of about 20 h. During measurements, the human activity (between zero and 15 persons) was observed with a video camera. The obstruction phenomenon due to the human bodies is characterized in duration and amplitude from the propagation characteristics (attenuation, coherence bandwidth) by means of an appropriate method. The results highlight and quantify the problems due to the human activity for high data rate communication systems. When the direct path is shadowed by a person, the attenuation generally increases by more than 20 dB, for a median duration of about 100 ms for an activity of one to five persons and 300 ms for 11-15 persons. Globally, the channel is \"unavailable\" for about 1% or 2% of the time in the presence of one to five persons. This channel characterization makes it possible to model the temporal variations of the 60 GHz channels. The results also give orientations for the design of high data rate communications systems and networks architectures at 60 GHz.",
"The ever growing traffic explosion in mobile communications has recently drawn increased attention to the large amount of underutilized spectrum in the millimeter-wave frequency bands as a potentially viable solution for achieving tens to hundreds of times more capacity compared to current 4G cellular networks. Historically, mmWave bands were ruled out for cellular usage mainly due to concerns regarding short-range and non-line-of-sight coverage issues. In this article, we present recent results from channel measurement campaigns and the development of advanced algorithms and a prototype, which clearly demonstrate that the mmWave band may indeed be a worthy candidate for next generation (5G) cellular systems. The results of channel measurements carried out in both the United States and Korea are summarized along with the actual free space propagation measurements in an anechoic chamber. Then a novel hybrid beamforming scheme and its link- and system-level simulation results are presented. Finally, recent results from our mmWave prototyping efforts along with indoor and outdoor test results are described to assert the feasibility of mmWave bands for cellular usage."
]
} |
1710.00962 | 2763695663 | Facial landmarks constitute the most compressed representation of faces and are known to preserve information such as pose, gender and facial structure present in the faces. Several works exist that attempt to perform high-level face-related analysis tasks based on landmarks. In contrast, in this work, an attempt is made to tackle the inverse problem of synthesizing faces from their respective landmarks. The primary aim of this work is to demonstrate that information preserved by landmarks (gender in particular) can be further accentuated by leveraging generative models to synthesize corresponding faces. Though the problem is particularly challenging due to its ill-posed nature, we believe that successful synthesis will enable several applications such as boosting performance of high-level face related tasks using landmark points and performing dataset augmentation. To this end, a novel face-synthesis method known as Gender Preserving Generative Adversarial Network (GP-GAN) that is guided by adversarial loss, perceptual loss and a gender preserving loss is presented. Further, we propose a novel generator sub-network UDeNet for GP-GAN that leverages advantages of U-Net and DenseNet architectures. Extensive experiments and comparison with recent methods are performed to verify the effectiveness of the proposed method. | In contrast to landmark detection methods @cite_18 @cite_17 @cite_6 @cite_16 , we focus on the inverse problem of synthesizing or generating faces from landmark keypoints, which is a relatively unexplored problem. To this end, recently popular generative models are explored in this work. Among these methods, we specifically study Generative Adversarial Network (GAN) @cite_12 @cite_32 @cite_41 @cite_4 and Variational Auto-encoder (VAE) @cite_24 @cite_31 . |
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_41",
"@cite_32",
"@cite_6",
"@cite_24",
"@cite_31",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2521028896",
"2605195953",
"2605287558",
"2258960064",
"1909320841",
"",
"2101866605",
"",
"1976948919"
],
"abstract": [
"",
"We introduce the \"Energy-based Generative Adversarial Network\" model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions. Similar to the probabilistic GANs, a generator is seen as being trained to produce contrastive samples with minimal energies, while the discriminator is trained to assign high energies to these generated samples. Viewing the discriminator as an energy function allows to use a wide variety of architectures and loss functionals in addition to the usual binary classifier with logistic output. Among them, we show one instantiation of EBGAN framework as using an auto-encoder architecture, with the energy being the reconstruction error, in place of the discriminator. We show that this form of EBGAN exhibits more stable behavior than regular GANs during training. We also show that a single-scale architecture can be trained to generate high-resolution images.",
"We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain @math to a target domain @math in the absence of paired examples. Our goal is to learn a mapping @math such that the distribution of images from @math is indistinguishable from the distribution @math using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping @math and introduce a cycle consistency loss to push @math (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"We present an algorithm for extracting key-point descriptors using deep convolutional neural networks (CNN). Unlike many existing deep CNNs, our model computes local features around a given point in an image. We also present a face alignment algorithm based on regression using these local descriptors. The proposed method called Local Deep Descriptor Regression (LDDR) is able to localize face landmarks of varying sizes, poses and occlusions with high accuracy. Deep Descriptors presented in this paper are able to uniquely and efficiently describe every pixel in the image and therefore can potentially replace traditional descriptors such as SIFT and HOG. Extensive evaluations on five publicly available unconstrained face alignment datasets show that our deep descriptor network is able to capture strong local features around a given landmark and performs significantly better than many competitive and state-of-the-art face alignment algorithms.",
"We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent approximate posterior distributions, and that acts as a stochastic encoder of the data. We develop stochastic back-propagation -- rules for back-propagation through stochastic variables -- and use this to develop an algorithm that allows for joint optimisation of the parameters of both the generative and recognition model. We demonstrate on several real-world data sets that the model generates realistic samples, provides accurate imputations of missing data and is a useful tool for high-dimensional data visualisation.",
"",
"We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that, unlike the holistic texture based features used in the discriminative AAM approaches, the response map can be represented by a small set of parameters and these parameters can be very efficiently used for reconstructing unseen response maps. Furthermore, we show that by adopting very simple off-the-shelf regression techniques, it is possible to learn robust functions from response maps to the shape parameters updates. The experiments, conducted on Multi-PIE, XM2VTS and LFPW database, show that the proposed DRMF method outperforms state-of-the-art algorithms for the task of generic face fitting. Moreover, the DRMF method is computationally very efficient and is real-time capable. The current MATLAB implementation takes 1 second per image. To facilitate future comparisons, we release the MATLAB code and the pre-trained models for research purposes.",
"",
"We propose a new approach for estimation of the positions of facial key points with three-level carefully designed convolutional networks. At each level, the outputs of multiple networks are fused for robust and accurate estimation. Thanks to the deep structures of convolutional networks, global high-level features are extracted over the whole face region at the initialization stage, which help to locate high accuracy key points. There are two folds of advantage for this. First, the texture context information over the entire face is utilized to locate each key point. Second, since the networks are trained to predict all the key points simultaneously, the geometric constraints among key points are implicitly encoded. The method therefore can avoid local minimum caused by ambiguity and data corruption in difficult image samples due to occlusions, large pose variations, and extreme lightings. The networks at the following two levels are trained to locally refine initial predictions and their inputs are limited to small regions around the initial predictions. Several network structures critical for accurate and robust facial point detection are investigated. Extensive experiments show that our approach outperforms state-of-the-art methods in both detection accuracy and reliability."
]
} |
1710.00604 | 2963105445 | In order to enable microaerial vehicles (MAVs) to assist in complex, unknown, unstructured environments, they must be able to navigate with guaranteed safety, even when faced with a cluttered environment they have no prior knowledge of. While trajectory-optimization-based local planners have been shown to perform well in these cases, prior work either does not address how to deal with local minima in the optimization problem or solves it by using an optimistic global planner. We present a conservative trajectory-optimization-based local planner, coupled with a local exploration strategy that selects intermediate goals. We perform extensive simulations to show that this system performs better than the standard approach of using an optimistic global planner and also outperforms doing a single exploration step when the local planner is stuck. The method is validated through experiments in a variety of highly cluttered environments including a dense forest. These experiments show the complete system running in real time fully onboard an MAV, mapping and replanning at 4 Hz. | Richter et al. presented dynamics-aware path planning for MAVs as solving an unconstrained QP through a visibility graph generated by an RRT @cite_17 , which remains a popular method for global planning @cite_23 , but is debatably too slow to replan in real-time. Our previous work @cite_19 combines unconstrained polynomial spline optimization with gradient-based minimization of collision costs from CHOMP @cite_3 , but is prone to local minima. Usenko et al. utilize a similar concept, but use a B-spline representation instead, and use a circular buffer-based Octomap to overcome the issue of needing a fixed map size @cite_11 . Dong et al. also use the same general problem structure as CHOMP, but represent trajectories as samples drawn from a Gaussian Process (GP) and optimize the trajectory using factor graphs and probabilistic inference @cite_21 . 
While all these methods are able to avoid obstacles and replan in real time, none offer convincing ways to overcome the problem of getting stuck in a local minimum and being unable to find a feasible solution. |
"cite_N": [
"@cite_21",
"@cite_17",
"@cite_3",
"@cite_19",
"@cite_23",
"@cite_11"
],
"mid": [
"2412669390",
"2482392012",
"2099893201",
"2564322318",
"2214613866",
"2604943047"
],
"abstract": [
"With the increased use of high degree-of-freedom robots that must perform tasks in real-time, there is a need for fast algorithms for motion planning. In this work, we view motion planning from a probabilistic perspective. We consider smooth continuous-time trajectories as samples from a Gaussian process (GP) and formulate the planning problem as probabilistic inference. We use factor graphs and numerical optimization to perform inference quickly, and we show how GP interpolation can further increase the speed of the algorithm. Our framework also allows us to incrementally update the solution of the planning problem to contend with changing conditions. We benchmark our algorithm against several recent trajectory optimization algorithms on planning problems in multiple environments. Our evaluation reveals that our approach is several times faster than previous algorithms while retaining robustness. Finally, we demonstrate the incremental version of our algorithm on replanning problems, and show that it often can find successful solutions in a fraction of the time required to replan from scratch.",
"We explore the challenges of planning trajectories for quadrotors through cluttered indoor environments. We extend the existing work on polynomial trajectory generation by presenting a method of jointly optimizing polynomial path segments in an unconstrained quadratic program that is numerically stable for high-order polynomials and large numbers of segments, and is easily formulated for efficient sparse computation. We also present a technique for automatically selecting the amount of time allocated to each segment, and hence the quadrotor speeds along the path, as a function of a single parameter determining aggressiveness, subject to actuator constraints. The use of polynomial trajectories, coupled with the differentially flat representation of the quadrotor, eliminates the need for computationally intensive sampling and simulation in the high dimensional state space of the vehicle during motion planning. Our approach generates high-quality trajectories much faster than purely sampling-based optimal kinodynamic planning methods, but sacrifices the guarantee of asymptotic convergence to the global optimum that those methods provide. We demonstrate the performance of our algorithm by efficiently generating trajectories through challenging indoor spaces and successfully traversing them at speeds up to 8 m/s. A demonstration of our algorithm and flight performance is available at: http://groups.csail.mit.edu/rrg/quad_polynomial_trajectory_planning.",
"Existing high-dimensional motion planning algorithms are simultaneously overpowered and underpowered. In domains sparsely populated by obstacles, the heuristics used by sampling-based planners to navigate “narrow passages” can be needlessly complex; furthermore, additional post-processing is required to remove the jerky or extraneous motions from the paths that such planners generate. In this paper, we present CHOMP, a novel method for continuous path refinement that uses covariant gradient techniques to improve the quality of sampled trajectories. Our optimization technique both optimizes higher-order dynamics and is able to converge over a wider range of input paths relative to previous path optimization strategies. In particular, we relax the collision-free feasibility prerequisite on input paths required by those strategies. As a result, CHOMP can be used as a standalone motion planner in many real-world planning queries. We demonstrate the effectiveness of our proposed method in manipulation planning for a 6-DOF robotic arm as well as in trajectory generation for a walking quadruped robot.",
"Multirotor unmanned aerial vehicles (UAVs) are rapidly gaining popularity for many applications. However, safe operation in partially unknown, unstructured environments remains an open question. In this paper, we present a continuous-time trajectory optimization method for real-time collision avoidance on multirotor UAVs. We then propose a system where this motion planning method is used as a local replanner, that runs at a high rate to continuously recompute safe trajectories as the robot gains information about its environment. We validate our approach by comparing against existing methods and demonstrate the complete system avoiding obstacles on a multirotor UAV platform.",
"In this work, we present an MAV system that is able to relocalize itself, create consistent maps and plan paths in full 3D in previously unknown environments. This is solely based on vision and IMU measurements with all components running onboard and in real-time. We use visual-inertial odometry to keep the MAV airborne safely locally, as well as for exploration of the environment based on high-level input by an operator. A globally consistent map is constructed in the background, which is then used to correct for drift of the visual odometry algorithm. This map serves as an input to our proposed global planner, which finds dynamic 3D paths to any previously visited place in the map, without the use of teach and repeat algorithms. In contrast to previous work, all components are executed onboard and in real-time without any prior knowledge of the environment.",
"In this paper, we present a real-time approach to local trajectory replanning for microaerial vehicles (MAVs). Current trajectory generation methods for multicopters achieve high success rates in cluttered environments, but assume that the environment is static and require prior knowledge of the map. In the presented study, we use the results of such planners and extend them with a local replanning algorithm that can handle unmodeled (possibly dynamic) obstacles while keeping the MAV close to the global trajectory. To ensure that the proposed approach is real-time capable, we maintain information about the environment around the MAV in an occupancy grid stored in a three-dimensional circular buffer, which moves together with a drone, and represent the trajectories by using uniform B-splines. This representation ensures that the trajectory is sufficiently smooth and simultaneously allows for efficient optimization."
]
} |
1710.00604 | 2963105445 | In order to enable microaerial vehicles (MAVs) to assist in complex, unknown, unstructured environments, they must be able to navigate with guaranteed safety, even when faced with a cluttered environment they have no prior knowledge of. While trajectory-optimization-based local planners have been shown to perform well in these cases, prior work either does not address how to deal with local minima in the optimization problem or solves it by using an optimistic global planner. We present a conservative trajectory-optimization-based local planner, coupled with a local exploration strategy that selects intermediate goals. We perform extensive simulations to show that this system performs better than the standard approach of using an optimistic global planner and also outperforms doing a single exploration step when the local planner is stuck. The method is validated through experiments in a variety of highly cluttered environments including a dense forest. These experiments show the complete system running in real time fully onboard an MAV, mapping and replanning at 4 Hz. | Pivtoraiko et al. use graph search with motion primitives to replan online @cite_1 . However, they use an optimistic local planner: unknown space is considered traversable, and while this helps escape local minima, it is fundamentally unsafe. Chen et al. plan online by building a sparse graph by inflating unoccupied corridors within an Octomap, then optimize an unconstrained QP to get a polynomial path @cite_24 . However, they only use 2D sensing and treat unknown space as free, again leading to potentially unsafe paths in very cluttered environments. |
"cite_N": [
"@cite_24",
"@cite_1"
],
"mid": [
"2414314951",
"2069647917"
],
"abstract": [
"We present an online method for generating collision-free trajectories for autonomous quadrotor flight through cluttered environments. We consider the real-world scenario that the quadrotor aerial robot is equipped with limited sensing and operates in initially unknown environments. During flight, an octree-based environment representation is incrementally built using onboard sensors. Utilizing efficient operations in the octree data structure, we are able to generate free-space flight corridors consisting of large overlapping 3-D grids in an online fashion. A novel optimization-based method then generates smooth trajectories that both are bounded entirely within the safe flight corridor and satisfy higher order dynamical constraints. Our method computes valid trajectories within fractions of a second on a moderately fast computer, thus permitting online re-generation of trajectories for reaction to new obstacles. We build a complete quadrotor testbed with onboard sensing, state estimation, mapping, and control, and integrate the proposed method to show online navigation through complex unknown environments.",
"This paper describes an approach to motion generation for quadrotor micro-UAVs navigating cluttered and partially known environments. We pursue a graph search method that, despite the high dimensionality of the problem, the complex dynamics of the system and the continuously changing environment model is capable of generating dynamically feasible motions in real-time. This is enabled by leveraging the differential flatness property of the system and by developing a structured search space based on state lattice motion primitives. We suggest a greedy algorithm to generate these primitives off-line automatically, given the robot's motion model. The process samples the reachability of the system and reduces it to a set of representative, canonical motions that are compatible with the state lattice structure, which guarantees that any incremental replanning algorithm is able to produce smooth dynamically feasible motion plans while reusing previous computation between replans. Simulated and physical experimental results demonstrate real-time replanning due to the inevitable and frequent world model updates during micro-UAV motion in partially known environments."
]
} |
1710.00604 | 2963105445 | In order to enable microaerial vehicles (MAVs) to assist in complex, unknown, unstructured environments, they must be able to navigate with guaranteed safety, even when faced with a cluttered environment they have no prior knowledge of. While trajectory-optimization-based local planners have been shown to perform well in these cases, prior work either does not address how to deal with local minima in the optimization problem or solves it by using an optimistic global planner. We present a conservative trajectory-optimization-based local planner, coupled with a local exploration strategy that selects intermediate goals. We perform extensive simulations to show that this system performs better than the standard approach of using an optimistic global planner and also outperforms doing a single exploration step when the local planner is stuck. The method is validated through experiments in a variety of highly cluttered environments including a dense forest. These experiments show the complete system running in real time fully onboard an MAV, mapping and replanning at 4 Hz. | The goal of exploration literature is not only to stay safe and avoid collisions, but to maximize the amount of information about the environment. There are many different approaches, such as greedily tracking the closest unexplored frontier @cite_16 or simulating gas-like particles throughout the environment to find the sparsest area of dispersion to explore @cite_6 . | {
"cite_N": [
"@cite_16",
"@cite_6"
],
"mid": [
"1942294243",
"1996985406"
],
"abstract": [
"Cameras are a natural fit for micro aerial vehicles (MAVs) due to their low weight, low power consumption, and two-dimensional field of view. However, computationally-intensive algorithms are required to infer the 3D structure of the environment from 2D image data. This requirement is made more difficult with the MAV's limited payload which only allows for one CPU board. Hence, we have to design efficient algorithms for state estimation, mapping, planning, and exploration. We implement a set of algorithms on two different vision-based MAV systems such that these algorithms enable the MAVs to map and explore unknown environments. By using both self-built and off-the-shelf systems, we show that our algorithms can be used on different platforms. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main sensor, we maintain a tiled octree-based 3D occupancy map. The MAV uses this map for local navigation and frontier-based exploration. In addition, we use a wall-following algorithm as an alternative exploration algorithm in open areas where frontier-based exploration under-performs. During the exploration, data is transmitted to the ground station which runs large-scale visual SLAM. We estimate the MAV's state with inertial data from an IMU together with metric velocity measurements from a custom-built optical flow sensor and pose estimates from visual odometry. We verify our approaches with experimental results, which to the best of our knowledge, demonstrate our MAVs to be the first vision-based MAVs to autonomously explore both indoor and outdoor environments.",
"In this paper, we propose a stochastic differential equation-based exploration algorithm to enable exploration in three-dimensional indoor environments with a payload constrained micro-aerial vehicle (MAV). We are able to address computation, memory, and sensor limitations by considering only the known occupied space in the current map. We determine regions for further exploration based on the evolution of a stochastic differential equation that simulates the expansion of a system of particles with Newtonian dynamics. The regions of most significant particle expansion correlate to unexplored space. After identifying and processing these regions, the autonomous MAV navigates to these locations to enable fully autonomous exploration. The performance of the approach is demonstrated through numerical simulations and experimental results in single and multi-floor indoor experiments."
]
} |
1710.00604 | 2963105445 | In order to enable microaerial vehicles (MAVs) to assist in complex, unknown, unstructured environments, they must be able to navigate with guaranteed safety, even when faced with a cluttered environment they have no prior knowledge of. While trajectory-optimization-based local planners have been shown to perform well in these cases, prior work either does not address how to deal with local minima in the optimization problem or solves it by using an optimistic global planner. We present a conservative trajectory-optimization-based local planner, coupled with a local exploration strategy that selects intermediate goals. We perform extensive simulations to show that this system performs better than the standard approach of using an optimistic global planner and also outperforms doing a single exploration step when the local planner is stuck. The method is validated through experiments in a variety of highly cluttered environments including a dense forest. These experiments show the complete system running in real time fully onboard an MAV, mapping and replanning at 4 Hz. | Rather than tracking frontiers, some methods instead aim to maximize information gain. Charrow et al. optimize this gain over a state lattice with motion primitives as connecting edges, and then improve the plan with trajectory optimization @cite_10 . Bircher et al. instead build an RRT tree in the unexplored space, and execute a straight-line plan to the first vertex of the most promising branch of the tree, maximizing the number of unknown voxels falling into the sensor frustum @cite_14 . Papachristos et al. extend Bircher's method by also optimizing the intermediate paths to maximize localization quality @cite_20 . Similarly, Davis et al. optimize paths between next-best views to maximize coverage by introducing a coverage term to their iLQG formulation @cite_5 . | {
"cite_N": [
"@cite_5",
"@cite_14",
"@cite_10",
"@cite_20"
],
"mid": [
"2295470962",
"2409009991",
"2283534560",
"2739036405"
],
"abstract": [
"We introduce a new problem of continuous, coverage-aware trajectory optimization under localization and sensing uncertainty. In this problem, the goal is to plan a path from a start state to a goal state that maximizes the coverage of a user-specified region while minimizing the control costs of the robot and the probability of collision with the environment. We present a principled method for quantifying the coverage sensing uncertainty of the robot. We use this sensing uncertainty along with the uncertainty in robot localization to develop C-OPT, a coverage-optimization algorithm which optimizes trajectories over belief-space to find locally optimal coverage paths. We highlight the applicability of our approach in multiple simulated scenarios inspired by surveillance, UAV crop analysis, and search-and-rescue tasks. We also present a case study on a physical, differential-drive robot. We also provide quantitative and qualitative analysis of the paths generated by our approach.",
"This paper presents a novel path planning algorithm for the autonomous exploration of unknown space using aerial robotic platforms. The proposed planner employs a receding horizon “next-best-view” scheme: In an online computed random tree it finds the best branch, the quality of which is determined by the amount of unmapped space that can be explored. Only the first edge of this branch is executed at every planning step, while repetition of this procedure leads to complete exploration results. The proposed planner is capable of running online, onboard a robot with limited resources. Its high performance is evaluated in detailed simulation studies as well as in a challenging real world experiment using a rotorcraft micro aerial vehicle. Analysis on the computational complexity of the algorithm is provided and its good scaling properties enable the handling of large scale and complex problem setups.",
"We propose an information-theoretic planning approach that enables mobile robots to autonomously construct dense 3D maps in a computationally efficient manner. Inspired by prior work, we accomplish this task by formulating an information-theoretic objective function based on CauchySchwarz quadratic mutual information (CSQMI) that guides robots to obtain measurements in uncertain regions of the map. We then contribute a two stage approach for active mapping. First, we generate a candidate set of trajectories using a combination of global planning and generation of local motion primitives. From this set, we choose a trajectory that maximizes the information-theoretic objective. Second, we employ a gradientbased trajectory optimization technique to locally refine the chosen trajectory such that the CSQMI objective is maximized while satisfying the robot’s motion constraints. We evaluated our approach through a series of simulations and experiments on a ground robot and an aerial robot mapping unknown 3D environments. Real-world experiments suggest our approach reduces the time to explore an environment by 70 compared to a closest frontier exploration strategy and 57 compared to an information-based strategy that uses global planning, while simulations demonstrate the approach extends to aerial robots with higher-dimensional state.",
"This paper presents a novel path planning algorithm for autonomous, uncertainty-aware exploration and mapping of unknown environments using aerial robots. The proposed planner follows a two-step, receding horizon, belief space-based approach. At first, in an online computed tree the algorithm finds the branch that optimizes the amount of space expected to be explored. The first viewpoint configuration of this branch is selected, but the path towards it is decided through a second planning step. Within that, a new tree is sampled, admissible branches arriving at the reference viewpoint are found and the robot belief about its state and the tracked landmarks of the environment is propagated. The branch that minimizes the expected localization and mapping uncertainty is selected, the corresponding path is executed by the robot and the whole process is iteratively repeated. The proposed planner is capable of running online onboard a small aerial robot and its performance is evaluated using experimental studies in a challenging environment."
]
} |
1710.00668 | 2763619719 | We study the Steiner Tree problem, in which a set of terminal vertices needs to be connected in the cheapest possible way in an edge-weighted graph. This problem has been extensively studied from the viewpoint of approximation and also parameterization. In particular, on one hand Steiner Tree is known to be APX-hard, and W[2]-hard on the other, if parameterized by the number of non-terminals (Steiner vertices) in the optimum solution. In contrast to this we give an efficient parameterized approximation scheme (EPAS), which circumvents both hardness results. Moreover, our methods imply the existence of a polynomial size approximate kernelization scheme (PSAKS) for the assumed parameter. We further study the parameterized approximability of other variants of Steiner Tree, such as Directed Steiner Tree and Steiner Forest. For neither of these an EPAS is likely to exist for the studied parameter: for Steiner Forest an easy observation shows that the problem is APX-hard, even if the input graph contains no Steiner vertices. For Directed Steiner Tree we prove that computing a constant approximation for this parameter is W[1]-hard. Nevertheless, we show that an EPAS exists for Unweighted Directed Steiner Tree. Also we prove that there is an EPAS and a PSAKS for Steiner Forest if in addition to the number of Steiner vertices, the number of connected components of an optimal solution is considered to be a parameter. | For the problem it is a long standing open problem whether a polylogarithmic approximation can be computed in polynomial time. It is known that an @math -approximation can be computed in polynomial time @cite_8 , and an @math -approximation in quasi-polynomial time @cite_8 . consider the problem, which is the directed variant of (i.e. a generalization of ). 
They give a dichotomy result, proving that the problem parameterized by @math is FPT whenever the terminal pairs induce a graph that is a caterpillar with a constant number of additional edges, and otherwise the problem is W[1]-hard. Among the W[1]-hard cases is the problem (for which the hardness was originally established by ), in which all terminals need to be strongly connected. For this problem a @math -approximation is obtainable @cite_20 when parameterizing by @math , and a recent result shows that this is best possible @cite_27 under the Gap Exponential Time Hypothesis. | {
"cite_N": [
"@cite_27",
"@cite_20",
"@cite_8"
],
"mid": [
"2532977679",
"2949062889",
""
],
"abstract": [
"Given a directed graph @math and a list @math of terminal pairs, the Directed Steiner Network problem asks for a minimum-cost subgraph of @math that contains a directed @math path for every @math . The special case Directed Steiner Tree (when we ask for paths from a root @math to terminals @math ) is known to be fixed-parameter tractable parameterized by the number of terminals, while the special case Strongly Connected Steiner Subgraph (when we ask for a path from every @math to every other @math ) is known to be W[1]-hard. We systematically explore the complexity landscape of directed Steiner problems to fully understand which other special cases are FPT or W[1]-hard. Formally, if @math is a class of directed graphs, then we look at the special case of Directed Steiner Network where the list @math of requests form a directed graph that is a member of @math . Our main result is a complete characterization of the classes @math resulting in fixed-parameter tractable special cases: we show that if every pattern in @math has the combinatorial property of being \"transitively equivalent to a bounded-length caterpillar with a bounded number of extra edges,\" then the problem is FPT, and it is W[1]-hard for every recursively enumerable @math not having this property. This complete dichotomy unifies and generalizes the known results showing that Directed Steiner Tree is FPT [Dreyfus and Wagner, Networks 1971], @math -Root Steiner Tree is FPT for constant @math [Such 'y, WG 2016], Strongly Connected Steiner Subgraph is W[1]-hard [, SIAM J. Discrete Math. 2011], and Directed Steiner Network is solvable in polynomial-time for constant number of terminals [Feldman and Ruhl, SIAM J. Comput. 2006], and moreover reveals a large continent of tractable cases that were not known before.",
"A Fixed-Parameter Tractable ( ) @math -approximation algorithm for a minimization (resp. maximization) parameterized problem @math is an FPT algorithm that, given an instance @math computes a solution of cost at most @math (resp. @math ) if a solution of cost at most (resp. at least) @math exists; otherwise the output can be arbitrary. For well-known intractable problems such as the W[1]-hard Clique and W[2]-hard Set Cover problems, the natural question is whether we can get any -approximation. It is widely believed that both Clique and Set-Cover admit no FPT @math -approximation algorithm, for any increasing function @math . Assuming standard conjectures such as the Exponential Time Hypothesis (ETH) eth-paturi and the Projection Games Conjecture (PGC) r3 , we make the first progress towards proving this conjecture by showing that 1. Under the ETH and PGC, there exist constants @math such that the Set Cover problem does not admit an FPT approximation algorithm with ratio @math in @math time, where @math is the size of the universe and @math is the number of sets. 2. Unless @math , for every @math there exists a constant @math such that Clique has no FPT cost approximation with ratio @math in @math time, where @math is the number of vertices in the graph. In the second part of the paper we consider various W[1]-hard problems such as , , Directed Steiner Network and . For all these problem we give polynomial time @math -approximation algorithms for some small function @math (the largest approximation ratio we give is @math ).",
""
]
} |
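The abstract quoted in this row notes that Directed Steiner Tree is fixed-parameter tractable when parameterized by the number of terminals [Dreyfus and Wagner, Networks 1971]. As an editorial illustration of that FPT result, the following is a minimal sketch of the classic Dreyfus–Wagner dynamic program for the undirected Steiner Tree problem; the graph encoding, function names, and test graphs are my own assumptions, not taken from the cited papers.

```python
import heapq

def dijkstra(n, adj, src):
    """Single-source shortest paths on a non-negatively weighted graph."""
    dist = [float("inf")] * n
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def steiner_tree_cost(n, edges, terminals):
    """Dreyfus-Wagner DP: dp[S][v] = min cost of a tree spanning terminal
    subset S plus vertex v.  Runs in O(3^k n + 2^k n^2) time for k
    terminals, i.e. FPT in the number of terminals."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    d = [dijkstra(n, adj, s) for s in range(n)]   # metric closure
    k, full = len(terminals), (1 << len(terminals)) - 1
    INF = float("inf")
    dp = [[INF] * n for _ in range(full + 1)]
    for i, t in enumerate(terminals):             # base case: one terminal
        for v in range(n):
            dp[1 << i][v] = d[t][v]
    for mask in range(1, full + 1):
        if mask & (mask - 1) == 0:
            continue                              # singletons done above
        sub = (mask - 1) & mask
        while sub:                                # merge two subtrees at v
            for v in range(n):
                c = dp[sub][v] + dp[mask ^ sub][v]
                if c < dp[mask][v]:
                    dp[mask][v] = c
            sub = (sub - 1) & mask
        merged = dp[mask]                         # re-root along shortest paths
        dp[mask] = [min(merged[u] + d[u][v] for u in range(n))
                    for v in range(n)]
    return min(dp[full])
```

A single relaxation pass over the metric closure suffices in the re-rooting step because the shortest-path distances already satisfy the triangle inequality.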
1710.00668 | 2763619719 | We study the Steiner Tree problem, in which a set of terminal vertices needs to be connected in the cheapest possible way in an edge-weighted graph. This problem has been extensively studied from the viewpoint of approximation and also parameterization. In particular, on one hand Steiner Tree is known to be APX-hard, and W[2]-hard on the other, if parameterized by the number of non-terminals (Steiner vertices) in the optimum solution. In contrast to this we give an efficient parameterized approximation scheme (EPAS), which circumvents both hardness results. Moreover, our methods imply the existence of a polynomial size approximate kernelization scheme (PSAKS) for the assumed parameter. We further study the parameterized approximability of other variants of Steiner Tree, such as Directed Steiner Tree and Steiner Forest. For neither of these an EPAS is likely to exist for the studied parameter: for Steiner Forest an easy observation shows that the problem is APX-hard, even if the input graph contains no Steiner vertices. For Directed Steiner Tree we prove that computing a constant approximation for this parameter is W[1]-hard. Nevertheless, we show that an EPAS exists for Unweighted Directed Steiner Tree. Also we prove that there is an EPAS and a PSAKS for Steiner Forest if in addition to the number of Steiner vertices, the number of connected components of an optimal solution is considered to be a parameter. | In the same paper, also consider the problem, which is the directed variant of on input graphs, i.e., directed graphs in which for every edge @math the reverse edge @math exists as well and has the same cost. These graphs model inputs that lie between the undirected and directed settings. From thm:ST,thm:PSAKS-SF , it is not hard to see that the problem (i.e. on bidirected inputs) has both an and a for our parameter @math , by reducing the problem to the undirected setting. 
Since the for parameter @math follows from the for parameter @math given by , it is interesting to note that for parameter @math , provide both a and a parameterized approximation scheme for the problem whenever the optimum solution is planar. This is achieved by generalizing the Theorem to this setting. As this is a generalization of , it is natural to ask whether corresponding algorithms also exist for our parameter @math in the more general setting considered in @cite_27 . | {
"cite_N": [
"@cite_27"
],
"mid": [
"2532977679"
],
"abstract": [
"Given a directed graph @math and a list @math of terminal pairs, the Directed Steiner Network problem asks for a minimum-cost subgraph of @math that contains a directed @math path for every @math . The special case Directed Steiner Tree (when we ask for paths from a root @math to terminals @math ) is known to be fixed-parameter tractable parameterized by the number of terminals, while the special case Strongly Connected Steiner Subgraph (when we ask for a path from every @math to every other @math ) is known to be W[1]-hard. We systematically explore the complexity landscape of directed Steiner problems to fully understand which other special cases are FPT or W[1]-hard. Formally, if @math is a class of directed graphs, then we look at the special case of Directed Steiner Network where the list @math of requests form a directed graph that is a member of @math . Our main result is a complete characterization of the classes @math resulting in fixed-parameter tractable special cases: we show that if every pattern in @math has the combinatorial property of being \"transitively equivalent to a bounded-length caterpillar with a bounded number of extra edges,\" then the problem is FPT, and it is W[1]-hard for every recursively enumerable @math not having this property. This complete dichotomy unifies and generalizes the known results showing that Directed Steiner Tree is FPT [Dreyfus and Wagner, Networks 1971], @math -Root Steiner Tree is FPT for constant @math [Such 'y, WG 2016], Strongly Connected Steiner Subgraph is W[1]-hard [, SIAM J. Discrete Math. 2011], and Directed Steiner Network is solvable in polynomial-time for constant number of terminals [Feldman and Ruhl, SIAM J. Comput. 2006], and moreover reveals a large continent of tractable cases that were not known before."
]
} |
1710.00517 | 2757961851 | One of the solutions of depth imaging of moving scene is to project a static pattern on the object and use just a single image for reconstruction. However, if the motion of the object is too fast with respect to the exposure time of the image sensor, patterns on the captured image are blurred and reconstruction fails. In this paper, we impose multiple projection patterns into each single captured image to realize temporal super resolution of the depth image sequences. With our method, multiple patterns are projected onto the object with higher fps than possible with a camera. In this case, the observed pattern varies depending on the depth and motion of the object, so we can extract temporal information of the scene from each single image. The decoding process is realized using a learning-based approach where no geometric calibration is needed. Experiments confirm the effectiveness of our method where sequential shapes are reconstructed from a single image. Both quantitative evaluations and comparisons with recent techniques were also conducted. | Temporally coded light blinking faster than the sensor rate is also effective to increase the temporal information in a video with a limited frame rate @cite_5 . In this case we can use ordinary imaging devices, but from the viewpoint of the sampling scheme of the redundant spatio-temporal array, it is far from optimal since all pixels are sampled simultaneously. In other words, the effect of homogeneous blinking light has similarity to the technique of coded exposure @cite_9 so it is difficult to recover motion picture from a single input image as with Hitomi's method @cite_28 using pixel-wise individual exposure coding. Also blinking illumination is not versatile in daily-use cameras. | {
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_9"
],
"mid": [
"2151364185",
"2135155393",
"2161008509"
],
"abstract": [
"Cameras face a fundamental tradeoff between the spatial and temporal resolution - digital still cameras can capture images with high spatial resolution, but most high-speed video cameras suffer from low spatial resolution. It is hard to overcome this tradeoff without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing and reconstructing the space-time volume in order to overcome this tradeoff. Our approach has two important distinctions compared to previous works: (1) we achieve sparse representation of videos by learning an over-complete dictionary on video patches, and (2) we adhere to practical constraints on sampling scheme which is imposed by architectures of present image sensor devices. Consequently, our sampling scheme can be implemented on image sensors by making a straightforward modification to the control unit. To demonstrate the power of our approach, we have implemented a prototype imaging system with per-pixel coded exposure control using a liquid crystal on silicon (LCoS) device. Using both simulations and experiments on a wide range of scenes, we show that our method can effectively reconstruct a video from a single image maintaining high spatial resolution.",
"We show that, via temporal modulation, one can observe and capture a high-speed periodic video well beyond the abilities of a low-frame-rate camera. By strobing the exposure with unique sequences within the integration time of each frame, we take coded projections of dynamic events. From a sequence of such frames, we reconstruct a high-speed video of the high-frequency periodic process. Strobing is used in entertainment, medical imaging, and industrial inspection to generate lower beat frequencies. But this is limited to scenes with a detectable single dominant frequency and requires high-intensity lighting. In this paper, we address the problem of sub-Nyquist sampling of periodic signals and show designs to capture and reconstruct such signals. The key result is that for such signals, the Nyquist rate constraint can be imposed on the strobe rate rather than the sensor rate. The technique is based on intentional aliasing of the frequency components of the periodic signal while the reconstruction algorithm exploits recent advances in sparse representations and compressive sensing. We exploit the sparsity of periodic signals in the Fourier domain to develop reconstruction algorithms that are inspired by compressive sensing.",
"In a conventional single-exposure photograph, moving objects or moving cameras cause motion blur. The exposure time defines a temporal box filter that smears the moving object across the image by convolution. This box filter destroys important high-frequency spatial details so that deblurring via deconvolution becomes an ill-posed problem.Rather than leaving the shutter open for the entire exposure duration, we \"flutter\" the camera's shutter open and closed during the chosen exposure time with a binary pseudo-random sequence. The flutter changes the box filter to a broad-band filter that preserves high-frequency spatial details in the blurred image and the corresponding deconvolution becomes a well-posed problem. We demonstrate that manually-specified point spread functions are sufficient for several challenging cases of motion-blur removal including extremely large motions, textured backgrounds and partial occluders."
]
} |
1710.00517 | 2757961851 | One of the solutions of depth imaging of moving scene is to project a static pattern on the object and use just a single image for reconstruction. However, if the motion of the object is too fast with respect to the exposure time of the image sensor, patterns on the captured image are blurred and reconstruction fails. In this paper, we impose multiple projection patterns into each single captured image to realize temporal super resolution of the depth image sequences. With our method, multiple patterns are projected onto the object with higher fps than possible with a camera. In this case, the observed pattern varies depending on the depth and motion of the object, so we can extract temporal information of the scene from each single image. The decoding process is realized using a learning-based approach where no geometric calibration is needed. Experiments confirm the effectiveness of our method where sequential shapes are reconstructed from a single image. Both quantitative evaluations and comparisons with recent techniques were also conducted. | Contrary to the existing work listed above, our method of temporal super-resolution of 3D shapes is not only efficient in terms of sampling scheme and minimum cost for ordinary imaging devices, but is also natural in a shape-measuring context, because the method of projecting artificial light onto the object is not eccentric for active depth measurement @cite_27 @cite_17 . The proposed pattern of projected light is encoded spatially and temporally to maximize the exploitation of the motion information of the moving shape. | {
"cite_N": [
"@cite_27",
"@cite_17"
],
"mid": [
"2129731399",
"2048273466"
],
"abstract": [
"3D scanning of moving objects has many applications, for example, marker-less motion capture, analysis on fluid dynamics, object explosion and so on. One of the approach to acquire accurate shape is a projector-camera system, especially the methods that reconstructs a shape by using a single image with static pattern is suitable for capturing fast moving object. In this paper, we propose a method that uses a grid pattern consisting of sets of parallel lines. The pattern is spatially encoded by a periodic color pattern. While informations are sparse in the camera image, the proposed method extracts the dense (pixel-wise) phase informations from the sparse pattern. As the result, continuous regions in the camera images can be extracted by analyzing the phase. Since there remain one DOF for each region, we propose the linear solution to eliminate the DOF by using geometric informations of the devices, i.e. epipolar constraint. In addition, solution space is finite because projected pattern consists of parallel lines with same intervals, the linear equation can be efficiently solved by integer least square method. In this paper, the formulations for both single and multiple projectors are presented. We evaluated the accuracy of correspondences and showed the comparison with respect to the number of projectors by simulation. Finally, the dense 3D reconstruction of moving objects are presented in the experiments.",
"In this paper we present a new “one-shot” method to reconstruct the shape of dynamic 3D objects and scenes based on active illumination. In common with other related prior-art methods, a static grid pattern is projected onto the scene, a video sequence of the illuminated scene is captured, a shape estimate is produced independently for each video frame, and the one-shot property is realized at the expense of space resolution. The main challenge in grid-based one-shot methods is to engineer the pattern and algorithms so that the correspondence between pattern grid points and their images can be established very fast and without uncertainty. We present an efficient one-shot method which exploits simple geometric constraints to solve the correspondence problem. We also introduce De Bruijn spaced grids, a novel grid pattern, and show with strong empirical data that the resulting scheme is much more robust compared to those based on uniform spaced grids."
]
} |
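The last abstract above mentions "De Bruijn spaced grids" for one-shot structured light: if the stripe code is a De Bruijn sequence, every local window of symbols is unique, so a small camera neighborhood identifies its absolute stripe position. As an editorial sketch, here is the standard FKM (Lyndon-word) construction of a cyclic De Bruijn sequence; the function name and the stripe-color interpretation are illustrative assumptions, not the authors' exact pattern.

```python
def de_bruijn(k, n):
    """Cyclic De Bruijn sequence B(k, n) over the alphabet {0..k-1}:
    every length-n word occurs exactly once as a (cyclic) window.
    Standard FKM construction via Lyndon words."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

# In a structured-light pattern, each symbol would index a stripe color;
# observing any n consecutive stripes then identifies their position.
```

For example, `de_bruijn(2, 3)` has length 2**3 = 8 and all eight cyclic 3-windows are distinct.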
1710.00517 | 2757961851 | One of the solutions of depth imaging of moving scene is to project a static pattern on the object and use just a single image for reconstruction. However, if the motion of the object is too fast with respect to the exposure time of the image sensor, patterns on the captured image are blurred and reconstruction fails. In this paper, we impose multiple projection patterns into each single captured image to realize temporal super resolution of the depth image sequences. With our method, multiple patterns are projected onto the object with higher fps than possible with a camera. In this case, the observed pattern varies depending on the depth and motion of the object, so we can extract temporal information of the scene from each single image. The decoding process is realized using a learning-based approach where no geometric calibration is needed. Experiments confirm the effectiveness of our method where sequential shapes are reconstructed from a single image. Both quantitative evaluations and comparisons with recent techniques were also conducted. | There are several papers that use multiple patterns to reconstruct a moving object @cite_2 @cite_13 ; however, because they capture each pattern in an individual frame and do not superimpose multiple pattern exposures into a single frame, it is difficult for them to reconstruct motion faster than the camera fps or to achieve temporal super-resolution. | {
"cite_N": [
"@cite_13",
"@cite_2"
],
"mid": [
"2131563332",
"124515947"
],
"abstract": [
"We present a novel 3D scanning system combining stereo and active illumination based on phase-shift for robust and accurate scene reconstruction. Stereo overcomes the traditional phase discontinuity problem and allows for the reconstruction of complex scenes containing multiple objects. Due to the sequential recording of three patterns, motion will introduce artifacts in the reconstruction. We develop a closed-form expression for the motion error in order to apply motion compensation on a pixel level. The resulting scanning system can capture accurate depth maps of complex dynamic scenes at 17 fps and can cope with both rigid and deformable objects.",
"Single-shot structured light methods allow 3D reconstruction of dynamic scenes. However, such methods lose spatial resolution and perform poorly around depth discontinuities. Previous single-shot methods project the same pattern repeatedly; thereby spatial resolution is reduced even if the scene is static or has slowly moving parts. We present a structured light system using a sequence of shifted stripe patterns that is decodable both spatially and temporally. By default, our method allows single-shot 3D reconstruction with any of our projected patterns by using spatial windows. Moreover, the sequence is designed so as to progressively improve the reconstruction quality around depth discontinuities by using temporal windows. Our method enables motion-aware reconstruction for each pixel: The best spatio-temporal window is automatically selected depending on the scene structure, motion, and the number of available images. This significantly reduces the number of pixels around discontinuities where depth cannot be recovered in traditional approaches. Our decoding scheme extends the adaptive window matching commonly used in stereo by incorporating temporal windows with 1D spatial windows. We demonstrate the advantages of our approach for a variety of scenarios including thin structures, dynamic scenes, and scenes containing both static and dynamic regions."
]
} |
1710.00239 | 2759531375 | Physics-based motion planning is a challenging task, since it requires the computation of the robot motions while allowing possible interactions with (some of) the obstacles in the environment. Kinodynamic motion planners equipped with a dynamic engine acting as state propagator are usually used for that purpose. The difficulties arise in the setting of the adequate forces for the interactions and because these interactions may change the pose of the manipulatable obstacles, thus either facilitating or preventing the finding of a solution path. The use of knowledge can alleviate the stated difficulties. This paper proposes the use of an enhanced state propagator composed of a dynamic engine and a low-level geometric reasoning process that is used to determine how to interact with the objects, i.e. from where and with which forces. The proposal, called κ-PMP can be used with any kinodynamic planner, thus giving rise to e.g. κ-RRT. The approach also includes a preprocessing step that infers from a semantic abstract knowledge described in terms of an ontology the manipulation knowledge required by the reasoning process. The proposed approach has been validated with several examples involving an holonomic mobile robot, a robot with differential constraints and a serial manipulator, and benchmarked using several state-of-the art kinodynamic planners. The results showed a significant difference in the power consumption with respect to simple physics-based planning, an improvement in the success rate and in the quality of the solution paths. | The simplest form of motion planning is a geometric problem devoted to compute a collision-free path from a start to a goal state in the configuration space while satisfying some geometric constraints like joint limits and collision avoidance. 
Sampling-based motion planners such as RRTs and PRMs @cite_25 are able to solve problems in high-dimensional configuration spaces by connecting collision-free samples into a graph or a tree-like structure that captures the connectivity of the free configuration space, or of the part of the free space relevant to the query to be solved. In some cases the kinematic and dynamic constraints of the robot must be taken into account while planning, due to the difficulty that may arise in following a geometric path. This need gave rise to kinodynamic motion planners. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2128990851"
],
"abstract": [
"A new motion planning method for robots in static workspaces is presented. This method proceeds in two phases: a learning phase and a query phase. In the learning phase, a probabilistic roadmap is constructed and stored as a graph whose nodes correspond to collision-free configurations and whose edges correspond to feasible paths between these configurations. These paths are computed using a simple and fast local planner. In the query phase, any given start and goal configurations of the robot are connected to two nodes of the roadmap; the roadmap is then searched for a path joining these two nodes. The method is general and easy to implement. It can be applied to virtually any type of holonomic robot. It requires selecting certain parameters (e.g., the duration of the learning phase) whose values depend on the scene, that is the robot and its workspace. But these values turn out to be relatively easy to choose, Increased efficiency can also be achieved by tailoring some components of the method (e.g., the local planner) to the considered robots. In this paper the method is applied to planar articulated robots with many degrees of freedom. Experimental results show that path planning can be done in a fraction of a second on a contemporary workstation ( spl ap 150 MIPS), after learning for relatively short periods of time (a few dozen seconds)."
]
} |
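The PRM abstract quoted above describes a two-phase method: a learning phase that builds a roadmap of collision-free configurations joined by a simple, fast local planner, and a query phase that searches the resulting graph. The learning phase can be sketched roughly as follows for a 2D point robot in the unit square; the disc obstacle, connection radius, and all names (`prm_roadmap`, `collides`, `local_path_free`) are illustrative assumptions.

```python
import math
import random

def prm_roadmap(num_samples, radius, collides, seed=0):
    """Learning phase of a basic PRM: sample collision-free configurations
    and link pairs closer than `radius` whose straight-line local path
    is collision-free."""
    rng = random.Random(seed)
    nodes = []
    while len(nodes) < num_samples:          # rejection-sample free configs
        q = (rng.random(), rng.random())
        if not collides(q):
            nodes.append(q)
    edges = []
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if (dist(nodes[i], nodes[j]) <= radius
                    and local_path_free(nodes[i], nodes[j], collides)):
                edges.append((i, j))
    return nodes, edges

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def local_path_free(a, b, collides, steps=20):
    """Trivial local planner: check evenly spaced points on the segment."""
    return not any(
        collides(((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1]))
        for t in (s / steps for s in range(steps + 1)))
```

A query phase would then connect the start and goal configurations to nearby roadmap nodes and run a graph search (e.g. Dijkstra) over the edges.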
1710.00239 | 2759531375 | Physics-based motion planning is a challenging task, since it requires the computation of the robot motions while allowing possible interactions with (some of) the obstacles in the environment. Kinodynamic motion planners equipped with a dynamic engine acting as state propagator are usually used for that purpose. The difficulties arise in the setting of the adequate forces for the interactions and because these interactions may change the pose of the manipulatable obstacles, thus either facilitating or preventing the finding of a solution path. The use of knowledge can alleviate the stated difficulties. This paper proposes the use of an enhanced state propagator composed of a dynamic engine and a low-level geometric reasoning process that is used to determine how to interact with the objects, i.e. from where and with which forces. The proposal, called κ-PMP can be used with any kinodynamic planner, thus giving rise to e.g. κ-RRT. The approach also includes a preprocessing step that infers from a semantic abstract knowledge described in terms of an ontology the manipulation knowledge required by the reasoning process. The proposed approach has been validated with several examples involving an holonomic mobile robot, a robot with differential constraints and a serial manipulator, and benchmarked using several state-of-the art kinodynamic planners. The results showed a significant difference in the power consumption with respect to simple physics-based planning, an improvement in the success rate and in the quality of the solution paths. | Sampling-based motion planners (particularly those using tree-like structures) have the ability to efficiently plan in the presence of kinodynamic constraints @cite_26 @cite_28 . These planners can be divided into three main categories: Planners that sample the states, such as RRTs and Expansive-Spaces Tree planners (EST) @cite_21 @cite_19 . 
The RRT grows a tree rooted at the start state by iteratively selecting a random sample @math and expanding the tree from the node that is nearest to @math by applying a randomly sampled control. The EST builds a tree-like roadmap by selecting a node with a probability inversely proportional to the density of the node neighborhood and extending it by applying a randomly sampled control. | {
"cite_N": [
"@cite_28",
"@cite_19",
"@cite_26",
"@cite_21"
],
"mid": [
"",
"2171266831",
"2031310371",
"2036016432"
],
"abstract": [
"",
"We introduce the notion of expansiveness to characterize a family of robot configuration spaces whose connectivity can be effectively captured by a roadmap of randomly-sampled milestones. The analysis of expansive configuration spaces has inspired us to develop a new randomized planning algorithm. This algorithm tries to sample only the portion of the configuration space that is relevant to the current query, avoiding the cost of precomputing a roadmap for the entire configuration space. Thus, it is well-suited for problems where a single query is submitted for a given environment. The algorithm has been implemented and successfully applied to complex assembly maintainability problems from the automotive industry.",
"Sampling demonstrated to be the algorithmic key to efficiently solve many high dimensional motion planning problems. Information on the configuration space is acquired by generating samples and edges between them, which are stored in a suitable data structure. Following this paradigm, many different algorithmic techniques have been proposed, and some of them are now widely accepted as part of the standard literature in the field. The paper reviews some of the most influential proposals and ideas, providing indications on their practical and theoretical implications.",
"This paper presents a novel randomized motion planner for robots that must achieve a specified goal under kinematic and or dynamic motion constraints while avoiding collision with moving obstacles with known trajectories. The planner encodes the motion constraints on the robot with a control system and samples the robot's state × time space by picking control inputs at random and integrating its equations of motion. The result is a probabilistic roadmap of sampled state ×time points, called milestones, connected by short admissible trajectories. The planner does not precompute the roadmap; instead, for each planning query, it generates a new roadmap to connect an initial and a goal state×time point. The paper presents a detailed analysis of the planner's convergence rate. It shows that, if the state×time space satisfies a geometric property called expansiveness, then a slightly idealized version of our implemented planner is guaranteed to find a trajectory when one exists, with probability quickly converg..."
]
} |
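The RRT/EST expansion loop described in this row (sample a random state, select a tree node near it, propagate a randomly sampled control) can be sketched as follows. This is a minimal illustrative sketch, not any specific library's implementation: `propagate` stands in for whatever dynamics or physics step a real kinodynamic planner would use, and all names are assumptions.

```python
import random

def kinodynamic_rrt(start, sample_state, sample_control, propagate,
                    distance, is_goal, max_iters=1000):
    """Minimal kinodynamic RRT: grow a tree rooted at `start` by applying
    randomly sampled controls from the node nearest a random sample."""
    nodes = [start]
    parents = {0: None}
    for _ in range(max_iters):
        x_rand = sample_state()
        # select the existing node nearest to the random sample
        i_near = min(range(len(nodes)), key=lambda i: distance(nodes[i], x_rand))
        u = sample_control()                  # randomly sampled control
        x_new = propagate(nodes[i_near], u)   # dynamics/physics step
        nodes.append(x_new)
        parents[len(nodes) - 1] = i_near
        if is_goal(x_new):
            # backtrack through the tree to recover the solution path
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return list(reversed(path))
    return None
```

With a trivial 1-D single-integrator `propagate` (x + u), the planner returns a sequence of states from the start to the goal region.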
1710.00239 | 2759531375 | Physics-based motion planning is a challenging task, since it requires the computation of the robot motions while allowing possible interactions with (some of) the obstacles in the environment. Kinodynamic motion planners equipped with a dynamic engine acting as state propagator are usually used for that purpose. The difficulties arise in the setting of the adequate forces for the interactions and because these interactions may change the pose of the manipulatable obstacles, thus either facilitating or preventing the finding of a solution path. The use of knowledge can alleviate the stated difficulties. This paper proposes the use of an enhanced state propagator composed of a dynamic engine and a low-level geometric reasoning process that is used to determine how to interact with the objects, i.e. from where and with which forces. The proposal, called κ-PMP can be used with any kinodynamic planner, thus giving rise to e.g. κ-RRT. The approach also includes a preprocessing step that infers from a semantic abstract knowledge described in terms of an ontology the manipulation knowledge required by the reasoning process. The proposed approach has been validated with several examples involving an holonomic mobile robot, a robot with differential constraints and a serial manipulator, and benchmarked using several state-of-the art kinodynamic planners. The results showed a significant difference in the power consumption with respect to simple physics-based planning, an improvement in the success rate and in the quality of the solution paths. | Hybrid planners such as Synergistic Combination of Layers of Planning (SyCLoP) @cite_12 and the Linear Temporal Logic (LTL) motion planner @cite_22 @cite_13 . The SyCLoP planner splits the planning problem into a discrete (high-level) layer and a continuous (low-level) layer of planning. 
The former is based on a decomposition of the workspace, whereas the latter consists of a sampling-based motion planner such as EST or RRT that is guided by the discrete layer. LTL is an extension of the SyCLoP planner in which the discrete layer encodes a complex motion planning task using an abstract graph computed from a decomposition of the workspace and an automaton that represents a linear temporal logic formula describing the task. | {
"cite_N": [
"@cite_13",
"@cite_22",
"@cite_12"
],
"mid": [
"2233446961",
"1970927851",
"2156123119"
],
"abstract": [
"Enabling robots to accomplish sophisticated tasks requires enhancing their capability to plan at multiple levels of discrete and continuous abstractions. Toward this goal, the proposed approach couples the ability of sampling-based motion planning to handle the complexity arising from high-dimensional robotic systems, nonlinear dynamics, and collision avoidance with the ability of discrete planning to handle discrete specifications. The approach makes it possible to specify tasks via Linear Temporal Logic (LTL) and automatically computes collision-free and dynamically-feasible motions that enable the robot to carry out assigned tasks. While discrete planning guides sampling-based motion planning, the latter feeds back information to further refine the guide and advance the search. Sampling is also used in the discrete space to shorten the length of the discrete plans and to expand the search toward new discrete states. Experiments with high-dimensional dynamical robot models performing various LTL tasks show significant computational speedups over related work.",
"This article describes an approach for solving motion planning problems for mobile robots involving temporal goals. Traditional motion planning for mobile robotic systems involves the construction of a motion plan that takes the system from an initial state to a set of goal states while avoiding collisions with obstacles at all times. The motion plan is also required to respect the dynamics of the system that are typically described by a set of differential equations. A wide variety of techniques have been proposed over the last two decades to solve such problems [1], [2].",
"To efficiently solve challenges related to motion-planning problems with dynamics, this paper proposes treating motion planning not just as a search problem in a continuous space but as a search problem in a hybrid space consisting of discrete and continuous components. A multilayered framework is presented which combines discrete search and sampling-based motion planning. This framework is called synergistic combination of layers of planning ( SyCLoP) hereafter. Discrete search uses a workspace decomposition to compute leads, i.e., sequences of regions in the neighborhood that guide sampling-based motion planning during the state-space exploration. In return, information gathered by motion planning, such as progress made, is fed back to the discrete search. This combination allows SyCLoP to identify new directions to lead the exploration toward the goal, making it possible to efficiently find solutions, even when other planners get stuck. Simulation experiments with dynamical models of ground and flying vehicles demonstrate that the combination of discrete search and motion planning in SyCLoP offers significant advantages. In fact, speedups of up to two orders of magnitude were obtained for all the sampling-based motion planners used as the continuous layer of SyCLoP."
]
} |
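A schematic sketch of the discrete (high-level) layer described for SyCLoP: compute a "lead", i.e. a sequence of workspace regions from the start region to the goal region, which then guides the continuous sampling-based layer. Real SyCLoP biases this search with coverage and progress estimates fed back from the continuous layer; the plain BFS below is an illustrative simplification, and all names are assumptions.

```python
from collections import deque

def compute_lead(adjacency, start_region, goal_region):
    """BFS over a workspace-decomposition graph: returns a sequence of
    regions (a 'lead') for the continuous sampling layer to follow."""
    frontier = deque([start_region])
    came_from = {start_region: None}
    while frontier:
        r = frontier.popleft()
        if r == goal_region:
            # backtrack to recover the region sequence
            lead = []
            while r is not None:
                lead.append(r)
                r = came_from[r]
            return list(reversed(lead))
        for nxt in adjacency[r]:
            if nxt not in came_from:
                came_from[nxt] = r
                frontier.append(nxt)
    return None  # goal region unreachable in the decomposition graph
```

In the full framework, sampling in the continuous state space is then concentrated inside the regions of the current lead, and the lead is recomputed as exploration information accumulates.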
1710.00239 | 2759531375 | Physics-based motion planning is a challenging task, since it requires the computation of the robot motions while allowing possible interactions with (some of) the obstacles in the environment. Kinodynamic motion planners equipped with a dynamic engine acting as state propagator are usually used for that purpose. The difficulties arise in the setting of the adequate forces for the interactions and because these interactions may change the pose of the manipulatable obstacles, thus either facilitating or preventing the finding of a solution path. The use of knowledge can alleviate the stated difficulties. This paper proposes the use of an enhanced state propagator composed of a dynamic engine and a low-level geometric reasoning process that is used to determine how to interact with the objects, i.e. from where and with which forces. The proposal, called κ-PMP can be used with any kinodynamic planner, thus giving rise to e.g. κ-RRT. The approach also includes a preprocessing step that infers from a semantic abstract knowledge described in terms of an ontology the manipulation knowledge required by the reasoning process. The proposed approach has been validated with several examples involving an holonomic mobile robot, a robot with differential constraints and a serial manipulator, and benchmarked using several state-of-the art kinodynamic planners. The results showed a significant difference in the power consumption with respect to simple physics-based planning, an improvement in the success rate and in the quality of the solution paths. | In all the above-stated planners, the control sampling range is usually set at the start and remains the same throughout the planning process; at each state, controls are randomly sampled from the given range, which results in the robot motion. 
Besides sampling-based algorithms, there are other recently proposed approaches for kinodynamic motion planning, such as Covariant Hamiltonian Optimization for Motion Planning (CHOMP @cite_23 ). These approaches mainly focus on optimization objectives (such as smoothness) but can be used as stand-alone motion planners for computing collision-free trajectories. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2161819990"
],
"abstract": [
"In this paper, we present CHOMP (covariant Hamiltonian optimization for motion planning), a method for trajectory optimization invariant to reparametrization. CHOMP uses functional gradient techniques to iteratively improve the quality of an initial trajectory, optimizing a functional that trades off between a smoothness and an obstacle avoidance component. CHOMP can be used to locally optimize feasible trajectories, as well as to solve motion planning queries, converging to low-cost trajectories even when initialized with infeasible ones. It uses Hamiltonian Monte Carlo to alleviate the problem of convergence to high-cost local minima (and for probabilistic completeness), and is capable of respecting hard constraints along the trajectory. We present extensive experiments with CHOMP on manipulation and locomotion tasks, using seven-degree-of-freedom manipulators and a rough-terrain quadruped robot."
]
} |
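The CHOMP abstract describes functional-gradient trajectory optimization trading off smoothness against obstacle cost. Below is a heavily simplified, non-covariant sketch of one such gradient step on a discretized 1-D trajectory; the real method preconditions the gradient with the inverse smoothness metric and uses Hamiltonian Monte Carlo, and all names here are assumptions.

```python
def chomp_step(xi, obstacle_grad, eta=0.1):
    """One simplified (non-covariant) gradient step on a discretized
    trajectory xi: smoothness via finite-difference acceleration plus a
    user-supplied obstacle-cost gradient; endpoints stay fixed."""
    n = len(xi)
    new = xi[:]
    for i in range(1, n - 1):
        # gradient of 0.5 * sum (xi[i+1] - xi[i])**2 w.r.t. xi[i]
        smooth = -(xi[i - 1] - 2.0 * xi[i] + xi[i + 1])
        new[i] = xi[i] - eta * (smooth + obstacle_grad(xi[i]))
    return new
```

Each step pulls interior waypoints toward the mean of their neighbors (smoothness) while the supplied obstacle gradient pushes them out of high-cost regions; the start and goal waypoints are held fixed, matching the fixed-endpoint trajectories the paper optimizes.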
1710.00239 | 2759531375 | Physics-based motion planning is a challenging task, since it requires the computation of the robot motions while allowing possible interactions with (some of) the obstacles in the environment. Kinodynamic motion planners equipped with a dynamic engine acting as state propagator are usually used for that purpose. The difficulties arise in the setting of the adequate forces for the interactions and because these interactions may change the pose of the manipulatable obstacles, thus either facilitating or preventing the finding of a solution path. The use of knowledge can alleviate the stated difficulties. This paper proposes the use of an enhanced state propagator composed of a dynamic engine and a low-level geometric reasoning process that is used to determine how to interact with the objects, i.e. from where and with which forces. The proposal, called κ-PMP can be used with any kinodynamic planner, thus giving rise to e.g. κ-RRT. The approach also includes a preprocessing step that infers from a semantic abstract knowledge described in terms of an ontology the manipulation knowledge required by the reasoning process. The proposed approach has been validated with several examples involving an holonomic mobile robot, a robot with differential constraints and a serial manipulator, and benchmarked using several state-of-the art kinodynamic planners. The results showed a significant difference in the power consumption with respect to simple physics-based planning, an improvement in the success rate and in the quality of the solution paths. | The complexity of physics-based motion planning is very high due to the high-dimensional state space, the large search space, and the highly constrained solution set. 
A few physics-based motion planning approaches have been proposed that address the above-mentioned issues, such as the Behavioral Kinodynamic Balanced Growth Trees (BK-BGT) and the Behavioral Kinodynamic Rapidly-Exploring Random Trees (BK-RRT) proposed in @cite_32 , which reduce the search space using a nondeterministic tactic modeled as a finite state machine, along with skills used to control the sampling. The propagation step is performed using PhysX @cite_24 . A hybrid approach proposed in @cite_17 equips the physics-based motion planner with knowledge (in the form of ontologies) about the robot's manipulation world. It uses a knowledge-based reasoning process to reduce the robot's search space and guide the motion planner by defining the way objects can be manipulated, with RRT and KPIECE as kinodynamic motion planners and ODE as the state propagator. This approach has also been used in task planning @cite_0 @cite_11 , where the physics-based reasoning process determines the feasibility of a plan by evaluating the dynamic cost of each subaction in the task plan. | {
"cite_N": [
"@cite_32",
"@cite_17",
"@cite_24",
"@cite_0",
"@cite_11"
],
"mid": [
"21025882",
"2952805835",
"",
"1904505005",
"2396890510"
],
"abstract": [
"Motion planning for mobile agents, such as robots, acting in the physical world is a challenging task, which traditionally concerns safe obstacle avoidance. We are interested in physics-based planning beyond collision-free navigation goals, in which the agent also needs to achieve its goals, including purposefully manipulate non-actuated bodies, in environments that contain multiple physically interacting bodies with varying degrees of controllability. Physics-based planning is computationally hard due to the large number of continuous motion actions and to the difficulty in accurately modeling the rich interactions of such controlled, manipulatable, and uncontrolled, potentially adversarial, bodies. We contribute an efficient physics-based planning algorithm that uses the agent's high-level behaviors to reduce its motion action space. We first discuss the general physics-based planning problem. We then introduce Tactics and Skills as a model for infusing goal-driven, higher level behaviors into a randomized motion planner. We present a physics-based state and transition model that employs rigid body simulations to approximate real-world interbody-dynamics. We introduce and compare two variations of our tactics-driven, physics-based planning algorithm, namely Behavioral Kinodynamic Balanced Growth Trees and Behavioral Kinodynamic Rapidly-Exploring Random Trees. We tested our physics-based planners in a variety of rich domains and show results in simulated domains where the agent manipulates an object in a dynamic non-adversarial and adversarial environment, namely in a robot minigolf and robot soccer domain, respectively.",
"Robotic manipulation involves actions where contacts occur between the robot and the objects. In this scope, the availability of physics-based engines allows motion planners to comprise dynamics between rigid bodies, which is necessary for planning this type of actions. However, physics-based motion planning is computationally intensive due to the high dimensionality of the state space and the need to work with a low integration step to find accurate solutions. On the other hand, manipulation actions change the environment and conditions further actions and motions. To cope with this issue, the representation of manipulation actions using ontologies enables a semantic-based inference process that alleviates the computational cost of motion planning. This paper proposes a manipulation planning framework where physics-based motion planning is enhanced with ontological knowledge representation and reasoning. The proposal has been implemented and is illustrated and validated with a simple example. Its use in grasping tasks in cluttered environments is currently under development.",
"",
"For everyday manipulation tasks, the combination of task and motion planning is required regarding the need of providing the set of possible subtasks which have to be done and how to perform them. Since many alternative plans may exist, the determination of their feasibility and the identification of the best one is a great challenge in robotics. To address this, this paper proposes: a) a version of GraphPlan (one of the best current approaches to task planning) that has been modified to use ontological knowledge and to allow the retrieval of all possible plans; and b) a physics-based reasoning process that determines the feasibility of the resulting plans and an associated cost that allows to select the best one among them. The proposed framework has been implemented and is illustrated through an example.",
"To cope with the growing complexity of manipulation tasks, the way to combine and access information from high- and low-planning levels has recently emerged as an interesting challenge in robotics. To tackle this, the present paper first represents the manipulation problem, involving knowledge about the world and the planning phase, in the form of an ontology. It also addresses a high-level and a low-level reasoning processes, this latter related with physics-based issues, aiming to appraise manipulation actions and prune the task planning phase from dispensable actions. In addition, a procedure is contributed to run these two-level reasoning processes simultaneously in order to make task planning more efficient. Eventually, the proposed planning approach is implemented and simulated through an example."
]
} |
1710.00239 | 2759531375 | Physics-based motion planning is a challenging task, since it requires the computation of the robot motions while allowing possible interactions with (some of) the obstacles in the environment. Kinodynamic motion planners equipped with a dynamic engine acting as state propagator are usually used for that purpose. The difficulties arise in the setting of the adequate forces for the interactions and because these interactions may change the pose of the manipulatable obstacles, thus either facilitating or preventing the finding of a solution path. The use of knowledge can alleviate the stated difficulties. This paper proposes the use of an enhanced state propagator composed of a dynamic engine and a low-level geometric reasoning process that is used to determine how to interact with the objects, i.e. from where and with which forces. The proposal, called κ-PMP can be used with any kinodynamic planner, thus giving rise to e.g. κ-RRT. The approach also includes a preprocessing step that infers from a semantic abstract knowledge described in terms of an ontology the manipulation knowledge required by the reasoning process. The proposed approach has been validated with several examples involving an holonomic mobile robot, a robot with differential constraints and a serial manipulator, and benchmarked using several state-of-the art kinodynamic planners. The results showed a significant difference in the power consumption with respect to simple physics-based planning, an improvement in the success rate and in the quality of the solution paths. | Some other approaches address problems related to physics-based motion planning, such as physics-based grasping and rearrangement planning @cite_35 @cite_34 . These approaches evaluate the dynamic interactions by executing straight-line trajectories under the quasi-static assumption. 
Moreover, some approaches (such as @cite_29 @cite_2 ) have studied rearrangement planning in conjunction with physics-based motion planning, but none of them addressed the issue of robust control selection for power-efficient solutions. | {
"cite_N": [
"@cite_35",
"@cite_29",
"@cite_34",
"@cite_2"
],
"mid": [
"143499627",
"1608892862",
"",
"2540258482"
],
"abstract": [
"Humans use a remarkable set of strategies to manipulate objects in clutter. We pick up, push, slide, and sweep with our hands and arms to rearrange clutter surrounding our primary task. But our robots treat the world like the Tower of Hanoi — moving with pick-and-place actions and fearful to interact with it with anything but rigid grasps. This produces inefficient plans and is often inapplicable with heavy, large, or otherwise ungraspable objects. We introduce a framework for planning in clutter that uses a library of actions inspired by human strategies. The action library is derived analytically from the mechanics of pushing and is provably conservative. The framework reduces the problem to one of combinatorial search, and demonstrates planning times on the order of seconds. With the extra functionality, our planner succeeds where traditional grasp planners fail, and works under high uncertainty by utilizing the funneling effect of pushing. We demonstrate our results with experiments in simulation and on HERB, a robotic platform developed at the Personal Robotics Lab at Carnegie Mellon University.",
"In this work we present a fast kinodynamic RRT-planner that uses dynamic nonprehensile actions to rearrange cluttered environments. In contrast to many previous works, the presented planner is not restricted to quasi-static interactions and monotonicity. Instead the results of dynamic robot actions are predicted using a black box physics model. Given a general set of primitive actions and a physics model, the planner randomly explores the configuration space of the environment to find a sequence of actions that transform the environment into some goal configuration.",
"",
"In this paper, we address the problem of navigation among movable obstacles (NAMO): a practical extension to navigation for humanoids and other dexterous mobile robots. The robot is permitted to reconfigure the environment by moving obstacles and clearing free space for a path. Simpler problems have been shown to be P-SPACE hard. For real-world scenarios with large numbers of movable obstacles, complete motion planning techniques are largely intractable. This paper presents a resolution complete planner for a subclass of NAMO problems. Our planner takes advantage of the navigational structure through state-space decomposition and heuristic search. The planning complexity is reduced to the difficulty of the specific navigation task, rather than the dimensionality of the multi-object domain. We demonstrate real-time results for spaces that contain large numbers of movable obstacles. We also present a practical framework for single-agent search that can be used in algorithmic reasoning about this domain."
]
} |
1710.00274 | 2761977277 | We are developing a system for human-robot communication that enables people to communicate with robots in a natural way and is focused on solving problems in a shared space. Our strategy for developing this system is fundamentally data-driven: we use data from multiple input sources and train key components with various machine learning techniques. We developed a web application that is collecting data on how two humans communicate to accomplish a task, as well as a mobile laboratory that is instrumented to collect data on how two humans communicate to accomplish a task in a physically shared space. The data from these systems will be used to train and fine-tune the second stage of our system, in which the robot will be simulated through software. A physical robot will be used in the final stage of our project. We describe these instruments, a test-suite and performance metrics designed to evaluate and automate the data gathering process as well as evaluate an initial data set. | Research on human-robot interaction (HRI) has long focused on both language and gesture @cite_8 @cite_12 @cite_0 . This research has looked at table-top manipulation of objects, often blocks @cite_6 @cite_14 @cite_13 @cite_5 @cite_1 @cite_2 . Until very recently, much of this research has focused on grounding references and spatial attributes through formulaic approaches using a limited vocabulary and a limited set of gestures @cite_14 . Recently, there has been increased interest in data-driven approaches based on large data sets @cite_14 @cite_16 @cite_3 @cite_13 @cite_0 . Most of this work has focused on data collection through simulation and online games @cite_14 @cite_16 @cite_13 . In particular, one line of work used annotated sequences of actions from a simulated block game to train neural networks to identify the commands to move a block from one location to another @cite_14 . 
This work was expanded on by using reinforcement learning to take the language and image data from @cite_14 and directly plan actions @cite_9 . | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_12"
],
"mid": [
"2467589492",
"2592523922",
"2611884151",
"2736923367",
"2403891327",
"2516310909",
"",
"",
"2738320194",
"2182004781",
"2550322026",
""
],
"abstract": [
"",
"Abstract Recently, the concept of human-robot collaboration has raised many research interests. Instead of robots replacing human workers in workplaces, human-robot collaboration allows human workers and robots working together in a shared manufacturing environment. Human-robot collaboration can release human workers from heavy tasks with assistive robots if effective communication channels between humans and robots are established. Although the communication channels between human workers and robots are still limited, gesture recognition has been effectively applied as the interface between humans and computers for long time. Covering some of the most important technologies and algorithms of gesture recognition, this paper is intended to provide an overview of the gesture recognition research and explore the possibility to apply gesture recognition in human-robot collaborative manufacturing. In this paper, an overall model of gesture recognition for human-robot collaboration is also proposed. There are four essential technical components in the model of gesture recognition for human-robot collaboration: sensor technologies, gesture identification, gesture tracking and gesture classification. Reviewed approaches are classified according to the four essential technical components. Statistical analysis is also presented after technical analysis. Towards the end of this paper, future research trends are outlined.",
"We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.",
"Fetching items is an important problem for a social robot. It requires a robot to interpret a person's language and gesture and use these noisy observations to infer what item to deliver. If the robot could ask questions, it would help the robot be faster and more accurate in its task. Existing approaches either do not ask questions, or rely on fixed question-asking policies. To address this problem, we propose a model that makes assumptions about cooperation between agents to perform richer signal extraction from observations. This work defines a mathematical framework for an item-fetching domain that allows a robot to increase the speed and accuracy of its ability to interpret a person's requests by reasoning about its own uncertainty as well as processing implicit information (implicatures). We formalize the item-delivery domain as a Partially Observable Markov Decision Process (POMDP), and approximately solve this POMDP in real time. Our model improves speed and accuracy of fetching tasks by asking relevant clarifying questions only when necessary. To measure our model's improvements, we conducted a real world user study with 16 participants. Our method achieved greater accuracy and a faster interaction time compared to state-of-the-art baselines. Our model is 2.17 seconds faster (25 faster) than a state-of-the-art baseline, while being 2.1 more accurate.",
"As robots become more ubiquitous, it is increasingly important for untrained users to be able to interact with them intuitively. In this work, we investigate how people refer to objects in the world during relatively unstructured communication with robots. We collect a corpus of deictic interactions from users describing objects, which we use to train language and gesture models that allow our robot to determine what objects are being indicated. We introduce a temporal extension to state-of-the-art hierarchical matching pursuit features to support gesture understanding, and demonstrate that combining multiple communication modalities more effectively capture user intent than relying on a single type of input. Finally, we present initial interactions with a robot that uses the learned models to follow commands.",
"Recent studies in human–robot interaction (HRI) have investigated ways to harness the power of the crowd for the purpose of creating robot interaction logic through games and teleoperation interfaces. Sensor networks capable of observing human–human interactions in the real world provide a potentially valuable and scalable source of interaction data that can be used for designing robot behavior. To that end, we present here a fully automated method for reproducing observed real-world social interactions with a robot. The proposed method includes techniques for characterizing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements and identifying spatial formations using established HRI proxemics models. Behavior logic is learned based on discretized actions captured from the sensor data stream, using a naive Bayesian classifier. Finally, we propose techniques for reproducing robot speech and locomotion behaviors in a robust way, despite the natural variation of human behaviors and the large amount of sensor noise present in speech recognition. We show our technique in use, training a robot to play the role of a shop clerk in a simple camera shop scenario, and we demonstrate through a comparison experiment that our techniques successfully enabled the generation of socially appropriate speech and locomotion behavior. Notably, the performance of our technique in terms of correct behavior selection was found to be higher than the success rate of speech recognition, indicating its robustness to sensor noise.",
"",
"",
"It is natural for humans to work with abstract plans which are often an intuitive and concise way to represent a task. However, high level task descriptions contain symbols and concepts which need to be grounded within the environment if the plan is to be executed by an autonomous robot. The problem of learning the mapping between abstract plan symbols and their physical instances in the environment is known as the problem of physical symbol grounding. In this paper, we propose a framework for Grounding and Learning Instances through Demonstration and Eye tracking (GLIDE). We associate traces of task demonstration to a sequence of fixations which we call fixation programs and exploit their properties in order to perform physical symbol grounding. We formulate the problem as a probabilistic generative model and present an algorithm for computationally feasible inference over the proposed model. A key aspect of our work is that we estimate fixation locations within the environment which enables the appearance of symbol instances to be learnt. Instance learning is a crucial ability when the robot does not have any knowledge about the model or the appearance of the symbols referred to in the plan instructions. We have conducted human experiments and demonstrate that GLIDE successfully grounds plan symbols and learns the appearance of their instances, thus enabling robots to autonomously execute tasks in initially unknown environments.",
"Robots require a broad range of interaction skills in order to work effectively alongside humans. They must have the ability to detect and recognize the actions and intentions of a person, produce functionally valid and situationally appropriate actions, and engage in social interactions through physical cues and dialog. However, social interactions with one of today’s robots will quickly become one-sided and repetitive, even after just a few minutes due to its shallow depth of knowledge and experience. This problem exposes weaknesses in the underlying traditional approaches that aim to pre-code responses to a limited number of inputs. We propose the use of crowdsourcing as a tool for the development of social robots that allow for rich, diverse and natural human-robot interaction. To enable crowdsourcing at a massive scale, we describe a newly implemented system that uses online virtual agents to collect data and then leverages the resulting corpus to train our robot behavior system for use on a real world task.",
"As humans and robots collaborate together on spatial tasks, they must communicate clearly about the objects they are referencing. Communication is clearer when language is unambiguous which implies the use of spatial references and explicit perspectives. In this work, we contribute two studies to understand how people instruct a partner to identify and pick up objects on a table. We investigate spatial features and perspectives in human spatial references and compare word usage when instructing robots vs. instructing other humans. We then focus our analysis on the clarity of instructions with respect to perspective taking and spatial references. We find that only about 42 of instructions contain perspective-independent spatial references. There is a strong correlation between participants' accuracy in executing instructions and the perspectives that the instructions are given in, as well between accuracy and the number of spatial relations that were required for the instruction. We conclude that sentence complexity (in terms of spatial relations and perspective taking) impacts understanding, and we provide suggestions for automatic generation of spatial references.",
""
]
} |
1710.00274 | 2761977277 | We are developing a system for human-robot communication that enables people to communicate with robots in a natural way and is focused on solving problems in a shared space. Our strategy for developing this system is fundamentally data-driven: we use data from multiple input sources and train key components with various machine learning techniques. We developed a web application that is collecting data on how two humans communicate to accomplish a task, as well as a mobile laboratory that is instrumented to collect data on how two humans communicate to accomplish a task in a physically shared space. The data from these systems will be used to train and fine-tune the second stage of our system, in which the robot will be simulated through software. A physical robot will be used in the final stage of our project. We describe these instruments, a test-suite and performance metrics designed to evaluate and automate the data gathering process as well as evaluate an initial data set. | The work by @cite_3 uses sensor data from real-world human-human interaction to train a robot for HRI. In our research, we hope to initially use both simulated and real-world sensor data from interactions, combine this data, and then expand to data gained from interactions between our algorithm and a human in both simulated and real-world environments. Rather than using human descriptions of fixed actions and scenarios @cite_14 @cite_13 , our initial data contains interactions between two humans. However, unlike @cite_3 , we use a human-as-a-robot approach, which limits the abilities of the person in the "robot" role so as to make that role analogous to the physical robot of our eventual system. | {
"cite_N": [
"@cite_14",
"@cite_13",
"@cite_3"
],
"mid": [
"2467589492",
"2550322026",
"2516310909"
],
"abstract": [
"",
"As humans and robots collaborate together on spatial tasks, they must communicate clearly about the objects they are referencing. Communication is clearer when language is unambiguous which implies the use of spatial references and explicit perspectives. In this work, we contribute two studies to understand how people instruct a partner to identify and pick up objects on a table. We investigate spatial features and perspectives in human spatial references and compare word usage when instructing robots vs. instructing other humans. We then focus our analysis on the clarity of instructions with respect to perspective taking and spatial references. We find that only about 42 of instructions contain perspective-independent spatial references. There is a strong correlation between participants' accuracy in executing instructions and the perspectives that the instructions are given in, as well between accuracy and the number of spatial relations that were required for the instruction. We conclude that sentence complexity (in terms of spatial relations and perspective taking) impacts understanding, and we provide suggestions for automatic generation of spatial references.",
"Recent studies in human–robot interaction (HRI) have investigated ways to harness the power of the crowd for the purpose of creating robot interaction logic through games and teleoperation interfaces. Sensor networks capable of observing human–human interactions in the real world provide a potentially valuable and scalable source of interaction data that can be used for designing robot behavior. To that end, we present here a fully automated method for reproducing observed real-world social interactions with a robot. The proposed method includes techniques for characterizing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements and identifying spatial formations using established HRI proxemics models. Behavior logic is learned based on discretized actions captured from the sensor data stream, using a naive Bayesian classifier. Finally, we propose techniques for reproducing robot speech and locomotion behaviors in a robust way, despite the natural variation of human behaviors and the large amount of sensor noise present in speech recognition. We show our technique in use, training a robot to play the role of a shop clerk in a simple camera shop scenario, and we demonstrate through a comparison experiment that our techniques successfully enabled the generation of socially appropriate speech and locomotion behavior. Notably, the performance of our technique in terms of correct behavior selection was found to be higher than the success rate of speech recognition, indicating its robustness to sensor noise."
]
} |
1710.00132 | 2762736764 | For intelligent robotics applications, extending 3D mapping to 3D semantic mapping enables robots to, not only localize themselves with respect to the scene's geometrical features but also simultaneously understand the higher level meaning of the scene contexts. Most previous methods focus on geometric 3D reconstruction and scene understanding independently notwithstanding the fact that joint estimation can boost the accuracy of the semantic mapping. In this paper, a dense RGB-D semantic mapping system with a Pixel-Voxel network is proposed, which can perform dense 3D mapping while simultaneously recognizing and semantically labelling each point in the 3D map. The proposed Pixel-Voxel network obtains global context information by using PixelNet to exploit the RGB image and meanwhile, preserves accurate local shape information by using VoxelNet to exploit the corresponding 3D point cloud. Unlike the existing architecture that fuses score maps from different models with equal weights, we proposed a Softmax weighted fusion stack that adaptively learns the varying contributions of PixelNet and VoxelNet, and fuses the score maps of the two models according to their respective confidence levels. The proposed Pixel-Voxel network achieves the state-of-the-art semantic segmentation performance on the SUN RGB-D benchmark dataset. The runtime of the proposed system can be boosted to 11-12Hz, enabling near to real-time performance using an i7 8-cores PC with Titan X GPU. | To the best of our knowledge, online dense 3D semantic mapping can be grouped into three main sub-categories: semantic mapping based on 3D template matching @cite_25 @cite_13 , 2D/2.5D semantic segmentation @cite_19 @cite_16 @cite_8 @cite_26 @cite_1 , and RGB-D data association from multiple viewpoints @cite_21 @cite_24 @cite_0 . | {
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_24",
"@cite_19",
"@cite_0",
"@cite_16",
"@cite_13",
"@cite_25"
],
"mid": [
"2952280228",
"2523049145",
"2604365427",
"2602764693",
"2951620021",
"2033979122",
"",
"2167687475",
"2415351208",
"2097696373"
],
"abstract": [
"Given the recent advances in depth prediction from Convolutional Neural Networks (CNNs), this paper investigates how predicted depth maps from a deep neural network can be deployed for accurate and dense monocular reconstruction. We propose a method where CNN-predicted dense depth maps are naturally fused together with depth measurements obtained from direct monocular SLAM. Our fusion scheme privileges depth prediction in image locations where monocular SLAM approaches tend to fail, e.g. along low-textured regions, and vice-versa. We demonstrate the use of depth prediction for estimating the absolute scale of the reconstruction, hence overcoming one of the major limitations of monocular SLAM. Finally, we propose a framework to efficiently fuse semantic labels, obtained from a single frame, with dense SLAM, yielding semantically coherent scene reconstruction from a single view. Evaluation results on two benchmark datasets show the robustness and accuracy of our approach.",
"Ever more robust, accurate and detailed mapping using visual sensing has proven to be an enabling factor for mobile robots across a wide variety of applications. For the next level of robot intelligence and intuitive user interaction, maps need to extend beyond geometry and appearance — they need to contain semantics. We address this challenge by combining Convolutional Neural Networks (CNNs) and a state-of-the-art dense Simultaneous Localization and Mapping (SLAM) system, ElasticFusion, which provides long-term dense correspondences between frames of indoor RGB-D video even during loopy scanning trajectories. These correspondences allow the CNN's semantic predictions from multiple view points to be probabilistically fused into a map. This not only produces a useful semantic 3D map, but we also show on the NYUv2 dataset that fusing multiple predictions leads to an improvement even in the 2D semantic labelling over baseline single frame predictions. We also show that for a smaller reconstruction dataset with larger variation in prediction viewpoint, the improvement over single frame segmentation increases. Our system is efficient enough to allow real-time interactive use at frame-rates of ≈25Hz.",
"3D scene understanding is important for robots to interact with the 3D world in a meaningful way. Most previous works on 3D scene understanding focus on recognizing geometrical or semantic properties of the scene independently. In this work, we introduce Data Associated Recurrent Neural Networks (DA-RNNs), a novel framework for joint 3D scene mapping and semantic labeling. DA-RNNs use a new recurrent neural network architecture for semantic labeling on RGB-D videos. The output of the network is integrated with mapping techniques such as KinectFusion in order to inject semantic information into the reconstructed 3D scene. Experiments conducted on a real world dataset and a synthetic dataset with RGB-D videos demonstrate the ability of our method in semantic 3D scene mapping.",
"This paper addresses the problem of simultaneous 3D reconstruction and material recognition and segmentation. Enabling robots to recognise different materials (concrete, met al etc.) in a scene is important for many tasks, e.g. robotic interventions in nuclear decommissioning. Previous work on 3D semantic reconstruction has predominantly focused on recognition of everyday domestic objects (tables, chairs etc.), whereas previous work on material recognition has largely been confined to single 2D images without any 3D reconstruction. Meanwhile, most 3D semantic reconstruction methods rely on computationally expensive post-processing, using Fully-Connected Conditional Random Fields (CRFs), to achieve consistent segmentations. In contrast, we propose a deep learning method which performs 3D reconstruction while simultaneously recognising different types of materials and labeling them at the pixel level. Unlike previous methods, we propose a fully end-to-end approach, which does not require hand-crafted features or CRF post-processing. Instead, we use only learned features, and the CRF segmentation constraints are incorporated inside the fully end-to-end learned system. We present the results of experiments, in which we trained our system to perform real-time 3D semantic reconstruction for 23 different materials in a real-world application. The run-time performance of the system can be boosted to around 10Hz, using a conventional GPU, which is enough to achieve realtime semantic reconstruction using a 30fps RGB-D camera. To the best of our knowledge, this work is the first real-time end-to-end system for simultaneous 3D reconstruction and material recognition.",
"Visual scene understanding is an important capability that enables robots to purposefully act in their environment. In this paper, we propose a novel approach to object-class segmentation from multiple RGB-D views using deep learning. We train a deep neural network to predict object-class semantics that is consistent from several view points in a semi-supervised way. At test time, the semantics predictions of our network can be fused more consistently in semantic keyframe maps than predictions of a network trained on individual views. We base our network architecture on a recent single-view deep learning approach to RGB and depth fusion for semantic object-class segmentation and enhance it with multi-scale loss minimization. We obtain the camera trajectory using RGB-D SLAM and warp the predictions of RGB-D images into ground-truth annotated frames in order to enforce multi-view consistency during training. At test time, predictions from multiple views are fused into keyframes. We propose and analyze several methods for enforcing multi-view consistency during training and testing. We evaluate the benefit of multi-view consistency training and demonstrate that pooling of deep features and fusion over multiple views outperforms single-view baselines on the NYUDv2 benchmark for semantic segmentation. Our end-to-end trained network achieves state-of-the-art performance on the NYUDv2 dataset in single-view segmentation as well as multi-view semantic fusion.",
"Dense semantic segmentation of 3D point clouds is a challenging task. Many approaches deal with 2D semantic segmentation and can obtain impressive results. With the availability of cheap RGB-D sensors the field of indoor semantic segmentation has seen a lot of progress. Still it remains unclear how to deal with 3D semantic segmentation in the best way. We propose a novel 2D-3D label transfer based on Bayesian updates and dense pairwise 3D Conditional Random Fields. This approach allows us to use 2D semantic segmentations to create a consistent 3D semantic reconstruction of indoor scenes. To this end, we also propose a fast 2D semantic segmentation approach based on Randomized Decision Forests. Furthermore, we show that it is not needed to obtain a semantic segmentation for every frame in a sequence in order to create accurate semantic 3D reconstructions. We evaluate our approach on both NYU Depth datasets and show that we can obtain a significant speed-up compared to other methods.",
"",
"Our abilities in scene understanding, which allow us to perceive the 3D structure of our surroundings and intuitively recognise the objects we see, are things that we largely take for granted, but for robots, the task of understanding large scenes quickly remains extremely challenging. Recently, scene understanding approaches based on 3D reconstruction and semantic segmentation have become popular, but existing methods either do not scale, fail outdoors, provide only sparse reconstructions or are rather slow. In this paper, we build on a recent hash-based technique for large-scale fusion and an efficient mean-field inference algorithm for densely-connected CRFs to present what to our knowledge is the first system that can perform dense, large-scale, outdoor semantic reconstruction of a scene in (near) real time. We also present a ‘semantic fusion’ approach that allows us to handle dynamic objects more effectively than previous approaches. We demonstrate the effectiveness of our approach on the KITTI dataset, and provide qualitative and quantitative results showing high-quality dense reconstruction and labelling of a number of scenes.",
"While the main trend of 3D object recognition has been to infer object detection from single views of the scene — i.e., 2.5D data — this work explores the direction on performing object recognition on 3D data that is reconstructed from multiple viewpoints, under the conjecture that such data can improve the robustness of an object recognition system. To achieve this goal, we propose a framework which is able (i) to carry out incremental real-time segmentation of a 3D scene while being reconstructed via Simultaneous Localization And Mapping (SLAM), and (ii) to simultaneously and incrementally carry out 3D object recognition and pose estimation on the reconstructed and segmented 3D representations. Experimental results demonstrate the advantages of our approach with respect to traditional single view-based object recognition and pose estimation approaches, as well as its usefulness in robotic perception and augmented reality applications.",
"We present the major advantages of a new 'object oriented' 3D SLAM paradigm, which takes full advantage in the loop of prior knowledge that many scenes consist of repeated, domain-specific objects and structures. As a hand-held depth camera browses a cluttered scene, real-time 3D object recognition and tracking provides 6DoF camera-object constraints which feed into an explicit graph of objects, continually refined by efficient pose-graph optimisation. This offers the descriptive and predictive power of SLAM systems which perform dense surface reconstruction, but with a huge representation compression. The object graph enables predictions for accurate ICP-based camera to model tracking at each live frame, and efficient active search for new objects in currently undescribed image regions. We demonstrate real-time incremental SLAM in large, cluttered environments, including loop closure, relocalisation and the detection of moved objects, and of course the generation of an object level scene description with the potential to enable interaction."
]
} |
1710.00132 | 2762736764 | For intelligent robotics applications, extending 3D mapping to 3D semantic mapping enables robots to, not only localize themselves with respect to the scene's geometrical features but also simultaneously understand the higher level meaning of the scene contexts. Most previous methods focus on geometric 3D reconstruction and scene understanding independently notwithstanding the fact that joint estimation can boost the accuracy of the semantic mapping. In this paper, a dense RGB-D semantic mapping system with a Pixel-Voxel network is proposed, which can perform dense 3D mapping while simultaneously recognizing and semantically labelling each point in the 3D map. The proposed Pixel-Voxel network obtains global context information by using PixelNet to exploit the RGB image and meanwhile, preserves accurate local shape information by using VoxelNet to exploit the corresponding 3D point cloud. Unlike the existing architecture that fuses score maps from different models with equal weights, we proposed a Softmax weighted fusion stack that adaptively learns the varying contributions of PixelNet and VoxelNet, and fuses the score maps of the two models according to their respective confidence levels. The proposed Pixel-Voxel network achieves the state-of-the-art semantic segmentation performance on the SUN RGB-D benchmark dataset. The runtime of the proposed system can be boosted to 11-12Hz, enabling near to real-time performance using an i7 8-cores PC with Titan X GPU. | The first kind of method, such as SLAM++ @cite_25 , can only recognise known 3D objects in a pre-defined database. It is therefore limited to situations where many repeated and identical objects are present for semantic mapping. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2097696373"
],
"abstract": [
"We present the major advantages of a new 'object oriented' 3D SLAM paradigm, which takes full advantage in the loop of prior knowledge that many scenes consist of repeated, domain-specific objects and structures. As a hand-held depth camera browses a cluttered scene, real-time 3D object recognition and tracking provides 6DoF camera-object constraints which feed into an explicit graph of objects, continually refined by efficient pose-graph optimisation. This offers the descriptive and predictive power of SLAM systems which perform dense surface reconstruction, but with a huge representation compression. The object graph enables predictions for accurate ICP-based camera to model tracking at each live frame, and efficient active search for new objects in currently undescribed image regions. We demonstrate real-time incremental SLAM in large, cluttered environments, including loop closure, relocalisation and the detection of moved objects, and of course the generation of an object level scene description with the potential to enable interaction."
]
} |
1710.00132 | 2762736764 | For intelligent robotics applications, extending 3D mapping to 3D semantic mapping enables robots to, not only localize themselves with respect to the scene's geometrical features but also simultaneously understand the higher level meaning of the scene contexts. Most previous methods focus on geometric 3D reconstruction and scene understanding independently notwithstanding the fact that joint estimation can boost the accuracy of the semantic mapping. In this paper, a dense RGB-D semantic mapping system with a Pixel-Voxel network is proposed, which can perform dense 3D mapping while simultaneously recognizing and semantically labelling each point in the 3D map. The proposed Pixel-Voxel network obtains global context information by using PixelNet to exploit the RGB image and meanwhile, preserves accurate local shape information by using VoxelNet to exploit the corresponding 3D point cloud. Unlike the existing architecture that fuses score maps from different models with equal weights, we proposed a Softmax weighted fusion stack that adaptively learns the varying contributions of PixelNet and VoxelNet, and fuses the score maps of the two models according to their respective confidence levels. The proposed Pixel-Voxel network achieves the state-of-the-art semantic segmentation performance on the SUN RGB-D benchmark dataset. The runtime of the proposed system can be boosted to 11-12Hz, enabling near to real-time performance using an i7 8-cores PC with Titan X GPU. | For the second kind of method, both @cite_19 and @cite_16 adopt hand-crafted features with Random Decision Forests to perform per-pixel label prediction on the incoming RGB videos. Then all the semantically labelled images are associated together using visual odometry to generate the semantic map. 
Because of the state-of-the-art performance of CNN-based scene understanding, SemanticFusion @cite_8 integrates deconvolution neural networks @cite_2 with ElasticFusion @cite_3 into a real-time capable ( @math ) semantic mapping system. All three of these methods require fully connected CRF @cite_12 optimization as an offline post-processing step, i.e., the best-performing semantic mapping is not an online system. Zhao @cite_1 proposed the first system to perform simultaneous 3D mapping and pixel-wise material recognition. It integrates CRF-RNN @cite_14 with RGB-D SLAM @cite_23 , and no post-processing optimization is required. Keisuke @cite_26 proposed a real-time dense monocular CNN-SLAM, which can perform depth prediction and semantic segmentation simultaneously from a single image using a deep neural network. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_16",
"@cite_12"
],
"mid": [
"2204696980",
"2952280228",
"2523049145",
"2602764693",
"2250172176",
"2033979122",
"2069479606",
"1745334888",
"2167687475",
""
],
"abstract": [
"In image labeling, local representations for image units are usually generated from their surrounding image patches, thus long-range contextual information is not effectively encoded. In this paper, we introduce recurrent neural networks (RNNs) to address this issue. Specifically, directed acyclic graph RNNs (DAG-RNNs) are proposed to process DAG-structured images, which enables the network to model long-range semantic dependencies among image units. Our DAG-RNNs are capable of tremendously enhancing the discriminative power of local representations, which significantly benefits the local classification. Meanwhile, we propose a novel class weighting function that attends to rare classes, which phenomenally boosts the recognition accuracy for non-frequent classes. Integrating with convolution and deconvolution layers, our DAG-RNNs achieve new state-of-the-art results on the challenging SiftFlow, CamVid and Barcelona benchmarks.",
"Given the recent advances in depth prediction from Convolutional Neural Networks (CNNs), this paper investigates how predicted depth maps from a deep neural network can be deployed for accurate and dense monocular reconstruction. We propose a method where CNN-predicted dense depth maps are naturally fused together with depth measurements obtained from direct monocular SLAM. Our fusion scheme privileges depth prediction in image locations where monocular SLAM approaches tend to fail, e.g. along low-textured regions, and vice-versa. We demonstrate the use of depth prediction for estimating the absolute scale of the reconstruction, hence overcoming one of the major limitations of monocular SLAM. Finally, we propose a framework to efficiently fuse semantic labels, obtained from a single frame, with dense SLAM, yielding semantically coherent scene reconstruction from a single view. Evaluation results on two benchmark datasets show the robustness and accuracy of our approach.",
"Ever more robust, accurate and detailed mapping using visual sensing has proven to be an enabling factor for mobile robots across a wide variety of applications. For the next level of robot intelligence and intuitive user interaction, maps need to extend beyond geometry and appearance — they need to contain semantics. We address this challenge by combining Convolutional Neural Networks (CNNs) and a state-of-the-art dense Simultaneous Localization and Mapping (SLAM) system, ElasticFusion, which provides long-term dense correspondences between frames of indoor RGB-D video even during loopy scanning trajectories. These correspondences allow the CNN's semantic predictions from multiple view points to be probabilistically fused into a map. This not only produces a useful semantic 3D map, but we also show on the NYUv2 dataset that fusing multiple predictions leads to an improvement even in the 2D semantic labelling over baseline single frame predictions. We also show that for a smaller reconstruction dataset with larger variation in prediction viewpoint, the improvement over single frame segmentation increases. Our system is efficient enough to allow real-time interactive use at frame-rates of ≈25Hz.",
"This paper addresses the problem of simultaneous 3D reconstruction and material recognition and segmentation. Enabling robots to recognise different materials (concrete, met al etc.) in a scene is important for many tasks, e.g. robotic interventions in nuclear decommissioning. Previous work on 3D semantic reconstruction has predominantly focused on recognition of everyday domestic objects (tables, chairs etc.), whereas previous work on material recognition has largely been confined to single 2D images without any 3D reconstruction. Meanwhile, most 3D semantic reconstruction methods rely on computationally expensive post-processing, using Fully-Connected Conditional Random Fields (CRFs), to achieve consistent segmentations. In contrast, we propose a deep learning method which performs 3D reconstruction while simultaneously recognising different types of materials and labeling them at the pixel level. Unlike previous methods, we propose a fully end-to-end approach, which does not require hand-crafted features or CRF post-processing. Instead, we use only learned features, and the CRF segmentation constraints are incorporated inside the fully end-to-end learned system. We present the results of experiments, in which we trained our system to perform real-time 3D semantic reconstruction for 23 different materials in a real-world application. The run-time performance of the system can be boosted to around 10Hz, using a conventional GPU, which is enough to achieve realtime semantic reconstruction using a 30fps RGB-D camera. To the best of our knowledge, this work is the first real-time end-to-end system for simultaneous 3D reconstruction and material recognition.",
"We present a novel approach to real-time dense visual SLAM. Our system is capable of capturing comprehensive dense globally consistent surfel-based maps of room scale environments explored using an RGB-D camera in an incremental online fashion, without pose graph optimisation or any postprocessing steps. This is accomplished by using dense frame-tomodel camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimisations as often as possible to stay close to the mode of the map distribution, while utilising global loop closure to recover from arbitrary drift and maintain global consistency.",
"Dense semantic segmentation of 3D point clouds is a challenging task. Many approaches deal with 2D semantic segmentation and can obtain impressive results. With the availability of cheap RGB-D sensors the field of indoor semantic segmentation has seen a lot of progress. Still it remains unclear how to deal with 3D semantic segmentation in the best way. We propose a novel 2D-3D label transfer based on Bayesian updates and dense pairwise 3D Conditional Random Fields. This approach allows us to use 2D semantic segmentations to create a consistent 3D semantic reconstruction of indoor scenes. To this end, we also propose a fast 2D semantic segmentation approach based on Randomized Decision Forests. Furthermore, we show that it is not needed to obtain a semantic segmentation for every frame in a sequence in order to create accurate semantic 3D reconstructions. We evaluate our approach on both NYU Depth datasets and show that we can obtain a significant speed-up compared to other methods.",
"In this paper, we present a novel mapping system that robustly generates highly accurate 3-D maps using an RGB-D camera. Our approach requires no further sensors or odometry. With the availability of low-cost and light-weight RGB-D sensors such as the Microsoft Kinect, our approach applies to small domestic robots such as vacuum cleaners, as well as flying robots such as quadrocopters. Furthermore, our system can also be used for free-hand reconstruction of detailed 3-D models. In addition to the system itself, we present a thorough experimental evaluation on a publicly available benchmark dataset. We analyze and discuss the influence of several parameters such as the choice of the feature descriptor, the number of visual features, and validation methods. The results of the experiments demonstrate that our system can robustly deal with challenging scenarios such as fast camera motions and feature-poor environments while being fast enough for online operation. Our system is fully available as open source and has already been widely adopted by the robotics community.",
"We propose a novel semantic segmentation algorithm by learning a deep deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixelwise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks; by integrating the deep deconvolution network and proposal-wise prediction, our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance on the PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5%) among the methods trained without using the Microsoft COCO dataset through an ensemble with the fully convolutional network.",
"Our abilities in scene understanding, which allow us to perceive the 3D structure of our surroundings and intuitively recognise the objects we see, are things that we largely take for granted, but for robots, the task of understanding large scenes quickly remains extremely challenging. Recently, scene understanding approaches based on 3D reconstruction and semantic segmentation have become popular, but existing methods either do not scale, fail outdoors, provide only sparse reconstructions or are rather slow. In this paper, we build on a recent hash-based technique for large-scale fusion and an efficient mean-field inference algorithm for densely-connected CRFs to present what to our knowledge is the first system that can perform dense, large-scale, outdoor semantic reconstruction of a scene in (near) real time. We also present a ‘semantic fusion’ approach that allows us to handle dynamic objects more effectively than previous approaches. We demonstrate the effectiveness of our approach on the KITTI dataset, and provide qualitative and quantitative results showing high-quality dense reconstruction and labelling of a number of scenes.",
""
]
} |