aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1610.07126 | 2949447882 | In this paper, we address the multi-view subspace clustering problem. Our method utilizes the circulant algebra for tensor, which is constructed by stacking the subspace representation matrices of different views and then rotating, to capture the low rank tensor subspace so that the refinement of the view-specific subspaces can be achieved, as well as the high order correlations underlying multi-view data can be explored. By introducing a recently proposed tensor factorization, namely tensor-Singular Value Decomposition (t-SVD) kilmer13 , we can impose a new type of low-rank tensor constraint on the rotated tensor to capture the complementary information from multiple views. Different from traditional unfolding based tensor norm, this low-rank tensor constraint has optimality properties similar to that of matrix rank derived from SVD, so the complementary information among views can be explored more efficiently and thoroughly. The established model, called t-SVD based Multi-view Subspace Clustering (t-SVD-MSC), falls into the applicable scope of augmented Lagrangian method, and its minimization problem can be efficiently solved with theoretical convergence guarantee and relatively low computational complexity. Extensive experimental testing on eight challenging image dataset shows that the proposed method has achieved highly competent objective performance compared to several state-of-the-art multi-view clustering methods. | Multi-view clustering methods have been extensively studied in recent years, we roughly divide them into three categories in accordance with @cite_12 : 1) graph-based approaches, 2) co-training or co-regularized approaches, 3) subspace learning algorithms. | {
"cite_N": [
"@cite_12"
],
"mid": [
"1670132599"
],
"abstract": [
"In recent years, a great many methods of learning from multi-view data by considering the diversity of different views have been proposed. These views may be obtained from multiple sources or different feature subsets. In trying to organize and highlight similarities and differences between the variety of multi-view learning approaches, we review a number of representative multi-view learning algorithms in different areas and classify them into three groups: 1) co-training, 2) multiple kernel learning, and 3) subspace learning. Notably, co-training style algorithms train alternately to maximize the mutual agreement on two distinct views of the data; multiple kernel learning algorithms exploit kernels that naturally correspond to different views and combine kernels either linearly or non-linearly to improve learning performance; and subspace learning algorithms aim to obtain a latent subspace shared by multiple views by assuming that the input views are generated from this latent subspace. Though there is significant variance in the approaches to integrating multiple views to improve learning performance, they mainly exploit either the consensus principle or the complementary principle to ensure the success of multi-view learning. Since accessing multiple views is the fundament of multi-view learning, with the exception of study on learning a model from multiple views, it is also valuable to study how to construct multiple views and how to evaluate these views. Overall, by exploring the consistency and complementary properties of different views, multi-view learning is rendered more effective, more promising, and has better generalization ability than single-view learning."
]
} |
1610.07126 | 2949447882 | In this paper, we address the multi-view subspace clustering problem. Our method utilizes the circulant algebra for tensor, which is constructed by stacking the subspace representation matrices of different views and then rotating, to capture the low rank tensor subspace so that the refinement of the view-specific subspaces can be achieved, as well as the high order correlations underlying multi-view data can be explored. By introducing a recently proposed tensor factorization, namely tensor-Singular Value Decomposition (t-SVD) kilmer13 , we can impose a new type of low-rank tensor constraint on the rotated tensor to capture the complementary information from multiple views. Different from traditional unfolding based tensor norm, this low-rank tensor constraint has optimality properties similar to that of matrix rank derived from SVD, so the complementary information among views can be explored more efficiently and thoroughly. The established model, called t-SVD based Multi-view Subspace Clustering (t-SVD-MSC), falls into the applicable scope of augmented Lagrangian method, and its minimization problem can be efficiently solved with theoretical convergence guarantee and relatively low computational complexity. Extensive experimental testing on eight challenging image dataset shows that the proposed method has achieved highly competent objective performance compared to several state-of-the-art multi-view clustering methods. | The first stream is the graph-based approaches @cite_3 @cite_8 @cite_32 @cite_42 @cite_33 which exploit the relationship among different views by using multiple graph fusion strategy. @cite_3 constructed a bipartite graph underlying the minimizing-disagreement criterion to connect the two-view feature, and then solved standard spectral clustering problem on the bipartite graph. 
The method in @cite_42 learns a latent graph transition probability matrix via low-rank and sparse decomposition to handle the noise from different views. Given graphs constructed separately from single-view data, @cite_33 built cross-view tensor product graphs to explore higher-order information. Moreover, graph-based algorithms are closely related to the Multiple Kernel Learning (MKL) technique, in which views are treated as given kernel matrices. The aim is to learn the weighted combination of these kernels and the partitioning simultaneously @cite_48 . | {
"cite_N": [
"@cite_33",
"@cite_8",
"@cite_48",
"@cite_42",
"@cite_32",
"@cite_3"
],
"mid": [
"2316282451",
"2136294701",
"2169529055",
"201974436",
"2113573459",
""
],
"abstract": [
"Multi-view clustering takes diversity of multiple views (representations) into consideration. Multiple views may be obtained from various sources or dierent feature subsets and often provide complementary information to each other. In this paper, we propose a novel graph-based approach to integrate multiple representations to improve clustering performance. While original graphs have been widely used in many existing multi-view clustering approaches, the key idea of our approach is to integrate multiple views by exploring higher order information. In particular, given graphs constructed separately from single view data, we build cross-view tensor product graphs (TPGs), each of which is a Kronecker product of a pair of single-view graphs. Since each cross-view TPG captures higher order relationships of data under two dierent views, it is no surprise that we obtain more reliable similarities. We linearly combine multiple cross-view TPGs to integrate higher order information. Ecient graph diusion process on the fusion TPG helps to reveal the underlying cluster structure and boosts the clustering performance. Empirical study shows that the proposed approach outperforms state-of-the-art methods on benchmark datasets.",
"We consider spectral clustering and transductive inference for data with multiple views. A typical example is the web, which can be described by either the hyperlinks between web pages or the words occurring in web pages. When each view is represented as a graph, one may convexly combine the weight matrices or the discrete Laplacians for each graph, and then proceed with existing clustering or classification techniques. Such a solution might sound natural, but its underlying principle is not clear. Unlike this kind of methodology, we develop multiview spectral clustering via generalizing the normalized cut from a single view to multiple views. We further build multiview transductive inference on the basis of multiview spectral clustering. Our framework leads to a mixture of Markov chains defined on every graph. The experimental evaluation on real-world web classification demonstrates promising results that validate our method.",
"Exploiting multiple representations, or views, for the same set of instances within a clustering framework is a popular practice for boosting clustering accuracy. However, some of the available sources may be misleading (due to noise, errors in measurement etc.) in revealing the true structure of the data, thus, their inclusion in the clustering process may have negative influence. This aspect seems to be overlooked in the multi-view literature where all representations are equally considered. In this work, views are expressed in terms of given kernel matrices and a weighted combination of the kernels is learned in parallel to the partitioning. Weights assigned to kernels are indicative of the quality of the corresponding views' information. Additionally, the combination scheme incorporates a parameter that controls the admissible sparsity of the weights to avoid extremes and tailor them to the data. Two efficient iterative algorithms are proposed that alternate between updating the view weights and recomputing the clusters to optimize the intra-cluster variance from different perspectives. The conducted experiments reveal the effectiveness of our methodology compared to other multi-view methods.",
"Multi-view clustering, which seeks a partition of the data in multiple views that often provide complementary information to each other, has received considerable attention in recent years. In real life clustering problems, the data in each view may have considerable noise. However, existing clustering methods blindly combine the information from multi-view data with possibly considerable noise, which often degrades their performance. In this paper, we propose a novel Markov chain method for Robust Multi-view Spectral Clustering (RMSC). Our method has a flavor of lowrank and sparse decomposition, where we firstly construct a transition probability matrix from each single view, and then use these matrices to recover a shared low-rank transition probability matrix as a crucial input to the standard Markov chain method for clustering. The optimization problem of RMSC has a low-rank constraint on the transition probability matrix, and simultaneously a probabilistic simplex constraint on each of its rows. To solve this challenging optimization problem, we propose an optimization procedure based on the Augmented Lagrangian Multiplier scheme. Experimental results on various real world datasets show that the proposed method has superior performance over several state-of-the-art methods for multi-view clustering.",
"In graph-based learning models, entities are often represented as vertices in an undirected graph with weighted edges describing the relationships between entities. In many real-world applications, however, entities are often associated with relations of different types and or from different sources, which can be well captured by multiple undirected graphs over the same set of vertices. How to exploit such multiple sources of information to make better inferences on entities remains an interesting open problem. In this paper, we focus on the problem of clustering the vertices based on multiple graphs in both unsupervised and semi-supervised settings. As one of our contributions, we propose Linked Matrix Factorization (LMF) as a novel way of fusing information from multiple graph sources. In LMF, each graph is approximated by matrix factorization with a graph-specific factor and a factor common to all graphs, where the common factor provides features for all vertices. Experiments on SIAM journal data show that (1) we can improve the clustering accuracy through fusing multiple sources of information with several models, and (2) LMF yields superior or competitive results compared to other graph-based clustering methods.",
""
]
} |
1610.07126 | 2949447882 | In this paper, we address the multi-view subspace clustering problem. Our method utilizes the circulant algebra for tensor, which is constructed by stacking the subspace representation matrices of different views and then rotating, to capture the low rank tensor subspace so that the refinement of the view-specific subspaces can be achieved, as well as the high order correlations underlying multi-view data can be explored. By introducing a recently proposed tensor factorization, namely tensor-Singular Value Decomposition (t-SVD) kilmer13 , we can impose a new type of low-rank tensor constraint on the rotated tensor to capture the complementary information from multiple views. Different from traditional unfolding based tensor norm, this low-rank tensor constraint has optimality properties similar to that of matrix rank derived from SVD, so the complementary information among views can be explored more efficiently and thoroughly. The established model, called t-SVD based Multi-view Subspace Clustering (t-SVD-MSC), falls into the applicable scope of augmented Lagrangian method, and its minimization problem can be efficiently solved with theoretical convergence guarantee and relatively low computational complexity. Extensive experimental testing on eight challenging image dataset shows that the proposed method has achieved highly competent objective performance compared to several state-of-the-art multi-view clustering methods. | Subspace learning approaches are built on the assumption that all the views are generated from a latent subspace. Its goal is to capture shared latent subspace first and then conduct clustering. The representative methods in this stream are proposed in @cite_26 @cite_5 , which applied canonical correlation analysis (CCA) and kernel CCA to project the multi-view high-dimensional data onto a low-dimensional subspace, respectively. 
By replacing the squared loss used in CCA with robust losses, @cite_46 provided a convex reformulation of multi-view subspace learning that enforces conditional independence between views. Inspired by deep representation learning, @cite_35 proposed a DNN-based model combining CCA and autoencoder-based terms to exploit the deep information from two views. Since these CCA-based methods are limited to handling only two-view features, tensor CCA @cite_52 generalized CCA to handle data with an arbitrary number of views by analyzing the covariance tensor of the different views. | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_52",
"@cite_5",
"@cite_46"
],
"mid": [
"",
"2142674578",
"1672347394",
"2123576058",
"2158703881"
],
"abstract": [
"",
"Clustering data in high dimensions is believed to be a hard problem in general. A number of efficient clustering algorithms developed in recent years address this problem by projecting the data into a lower-dimensional subspace, e.g. via Principal Components Analysis (PCA) or random projections, before clustering. Here, we consider constructing such projections using multiple views of the data, via Canonical Correlation Analysis (CCA). Under the assumption that the views are un-correlated given the cluster label, we show that the separation conditions required for the algorithm to be successful are significantly weaker than prior results in the literature. We provide results for mixtures of Gaussians and mixtures of log concave distributions. We also provide empirical support from audio-visual speaker clustering (where we desire the clusters to correspond to speaker ID) and from hierarchical Wikipedia document clustering (where one view is the words in the document and the other is the link structure).",
"Canonical correlation analysis (CCA) has proven an effective tool for two-view dimension reduction due to its profound theoretical foundation and success in practical applications. In respect of multi-view learning, however, it is limited by its capability of only handling data represented by two-view features, while in many real-world applications, the number of views is frequently many more. Although the ad hoc way of simultaneously exploring all possible pairs of features can numerically deal with multi-view data, it ignores the high order statistics (correlation information) which can only be discovered by simultaneously exploring all features. Therefore, in this work, we develop tensor CCA (TCCA) which straightforwardly yet naturally generalizes CCA to handle the data of an arbitrary number of views by analyzing the covariance tensor of the different views. TCCA aims to directly maximize the canonical correlation of multiple (more than two) views. Crucially, we prove that the main problem of multi-view canonical correlation maximization is equivalent to finding the best rank- @math approximation of the data covariance tensor, which can be solved efficiently using the well-known alternating least squares (ALS) algorithm. As a consequence, the high order correlation information contained in the different views is explored and thus a more reliable common subspace shared by all features can be obtained. In addition, a non-linear extension of TCCA is presented. Experiments on various challenge tasks, including large scale biometric structure prediction, internet advertisement classification, and web image annotation, demonstrate the effectiveness of the proposed method",
"We present a new method for spectral clustering with paired data based on kernel canonical correlation analysis, called correlational spectral clustering. Paired data are common in real world data sources, such as images with text captions. Traditional spectral clustering algorithms either assume that data can be represented by a single similarity measure, or by co-occurrence matrices that are then used in biclustering. In contrast, the proposed method uses separate similarity measures for each data representation, and allows for projection of previously unseen data that are only observed in one representation (e.g. images but not text). We show that this algorithm generalizes traditional spectral clustering algorithms and show consistent empirical improvement over spectral clustering on a variety of datasets of images with associated text.",
"Subspace learning seeks a low dimensional representation of data that enables accurate reconstruction. However, in many applications, data is obtained from multiple sources rather than a single source (e.g. an object might be viewed by cameras at different angles, or a document might consist of text and images). The conditional independence of separate sources imposes constraints on their shared latent representation, which, if respected, can improve the quality of a learned low dimensional representation. In this paper, we present a convex formulation of multi-view subspace learning that enforces conditional independence while reducing dimensionality. For this formulation, we develop an efficient algorithm that recovers an optimal data reconstruction by exploiting an implicit convex regularizer, then recovers the corresponding latent representation and reconstruction model, jointly and optimally. Experiments illustrate that the proposed method produces high quality results."
]
} |
1610.07126 | 2949447882 | In this paper, we address the multi-view subspace clustering problem. Our method utilizes the circulant algebra for tensor, which is constructed by stacking the subspace representation matrices of different views and then rotating, to capture the low rank tensor subspace so that the refinement of the view-specific subspaces can be achieved, as well as the high order correlations underlying multi-view data can be explored. By introducing a recently proposed tensor factorization, namely tensor-Singular Value Decomposition (t-SVD) kilmer13 , we can impose a new type of low-rank tensor constraint on the rotated tensor to capture the complementary information from multiple views. Different from traditional unfolding based tensor norm, this low-rank tensor constraint has optimality properties similar to that of matrix rank derived from SVD, so the complementary information among views can be explored more efficiently and thoroughly. The established model, called t-SVD based Multi-view Subspace Clustering (t-SVD-MSC), falls into the applicable scope of augmented Lagrangian method, and its minimization problem can be efficiently solved with theoretical convergence guarantee and relatively low computational complexity. Extensive experimental testing on eight challenging image dataset shows that the proposed method has achieved highly competent objective performance compared to several state-of-the-art multi-view clustering methods. | Besides CCA, the recent proposed subspace clustering methods @cite_28 @cite_25 resorted to explore the relationship between samples with self-representation ( e.g., sparse subspace clustering (SSC) @cite_23 and low-rank representation (LRR) @cite_21 ) in multi-view setting. Our approach is closely related to @cite_25 , which extended the LRR based subspace clustering to multi-view by employing the rank-sum of different mode unfoldings to constrain the subspace coefficient tensor. 
However, such a tensor constraint lacks a clear physical meaning for general tensors, so it cannot thoroughly explore the complementary information among different views. In contrast, the high-order constraint in our model is built upon a new tensor decomposition scheme @cite_16 @cite_2 , referred to as t-SVD, which has been applied to various tasks such as image reconstruction and tensor completion @cite_53 @cite_41 @cite_20 . Therefore, the proposed model possesses good theoretical properties and a clear physical meaning for handling the subspace representation tensor. The detailed motivation will be presented in Section . | {
"cite_N": [
"@cite_28",
"@cite_41",
"@cite_53",
"@cite_21",
"@cite_23",
"@cite_2",
"@cite_16",
"@cite_25",
"@cite_20"
],
"mid": [
"",
"2030927653",
"2080843093",
"1997201895",
"1993962865",
"2043571470",
"1992426838",
"2197707282",
"2435918055"
],
"abstract": [
"",
"In this paper we propose novel methods for completion (from limited samples) and de-noising of multilinear (tensor) data and as an application consider 3-D and 4- D (color) video data completion and de-noising. We exploit the recently proposed tensor-Singular Value Decomposition (t-SVD)[11]. Based on t-SVD, the notion of multilinear rank and a related tensor nuclear norm was proposed in [11] to characterize informational and structural complexity of multilinear data. We first show that videos with linear camera motion can be represented more efficiently using t-SVD compared to the approaches based on vectorizing or flattening of the tensors. Since efficiency in representation implies efficiency in recovery, we outline a tensor nuclear norm penalized algorithm for video completion from missing entries. Application of the proposed algorithm for video recovery from missing entries is shown to yield a superior performance over existing methods. We also consider the problem of tensor robust Principal Component Analysis (PCA) for de-noising 3-D video data from sparse random corruptions. We show superior performance of our method compared to the matrix robust PCA adapted to this setting as proposed in [4].",
"The development of energy selective, photon counting X-ray detectors allows for a wide range of new possibilities in the area of computed tomographic image formation. Under the assumption of perfect energy resolution, here we propose a tensor-based iterative algorithm that simultaneously reconstructs the X-ray attenuation distribution for each energy. We use a multilinear image model rather than a more standard stacked vector representation in order to develop novel tensor-based regularizers. In particular, we model the multispectral unknown as a three-way tensor where the first two dimensions are space and the third dimension is energy. This approach allows for the design of tensor nuclear norm regularizers, which like its 2D counterpart, is a convex function of the multispectral unknown. The solution to the resulting convex optimization problem is obtained using an alternating direction method of multipliers approach. Simulation results show that the generalized tensor nuclear norm can be used as a standalone regularization technique for the energy selective (spectral) computed tomography problem and when combined with total variation regularization it enhances the regularization capabilities especially at low energy images where the effects of noise are most prominent.",
"In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.",
"Many real-world problems deal with collections of high-dimensional data, such as images, videos, text, and web documents, DNA microarray data, and more. Often, such high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of the subspaces and the distribution of the data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm is efficient and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal directly with data nuisances, such as noise, sparse outlying entries, and missing entries, by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering.",
"Abstract Operations with tensors, or multiway arrays, have become increasingly prevalent in recent years. Traditionally, tensors are represented or decomposed as a sum of rank-1 outer products using either the CANDECOMP PARAFAC (CP) or the Tucker models, or some variation thereof. Such decompositions are motivated by specific applications where the goal is to find an approximate such representation for a given multiway array. The specifics of the approximate representation (such as how many terms to use in the sum, orthogonality constraints, etc.) depend on the application. In this paper, we explore an alternate representation of tensors which shows promise with respect to the tensor approximation problem. Reminiscent of matrix factorizations, we present a new factorization of a tensor as a product of tensors. To derive the new factorization, we define a closed multiplication operation between tensors. A major motivation for considering this new type of tensor multiplication is to devise new types of factorizations for tensors which can then be used in applications. Specifically, this new multiplication allows us to introduce concepts such as tensor transpose, inverse, and identity, which lead to the notion of an orthogonal tensor. The multiplication also gives rise to a linear operator, and the null space of the resulting operator is identified. We extend the concept of outer products of vectors to outer products of matrices. All derivations are presented for third-order tensors. However, they can be easily extended to the order-p ( p > 3 ) case. We conclude with an application in image deblurring.",
"Recent work by Kilmer and Martin [Linear Algebra Appl., 435 (2011), pp. 641--658] and Braman [Linear Algebra Appl., 433 (2010), pp. 1241--1253] provides a setting in which the familiar tools of linear algebra can be extended to better understand third-order tensors. Continuing along this vein, this paper investigates further implications including (1) a bilinear operator on the matrices which is nearly an inner product and which leads to definitions for length of matrices, angle between two matrices, and orthogonality of matrices, and (2) the use of t-linear combinations to characterize the range and kernel of a mapping defined by a third-order tensor and the t-product and the quantification of the dimensions of those sets. These theoretical results lead to the study of orthogonal projections as well as an effective Gram--Schmidt process for producing an orthogonal basis of matrices. The theoretical framework also leads us to consider the notion of tensor polynomials and their relation to tensor eigentupl...",
"In this paper, we explore the problem of multiview subspace clustering. We introduce a low-rank tensor constraint to explore the complementary information from multiple views and, accordingly, establish a novel method called Low-rank Tensor constrained Multiview Subspace Clustering (LT-MSC). Our method regards the subspace representation matrices of different views as a tensor, which captures dexterously the high order correlations underlying multiview data. Then the tensor is equipped with a low-rank constraint, which models elegantly the cross information among different views, reduces effectually the redundancy of the learned subspace representations, and improves the accuracy of clustering as well. The inference process of the affinity matrix for clustering is formulated as a tensor nuclear norm minimization problem, constrained with an additional L2,1-norm regularizer and some linear equalities. The minimization problem is convex and thus can be solved efficiently by an Augmented Lagrangian Alternating Direction Minimization (AL-ADM) method. Extensive experimental results on four benchmark datasets show the effectiveness of our proposed LT-MSC method.",
"This paper studies the Tensor Robust Principal Component (TRPCA) problem which extends the known Robust PCA ( 2011) to the tensor case. Our model is based on a new tensor Singular Value Decomposition (t-SVD) (Kilmer and Martin 2011) and its induced tensor tubal rank and tensor nuclear norm. Consider that we have a 3-way tensor @math such that @math , where @math has low tubal rank and @math is sparse. Is that possible to recover both components? In this work, we prove that under certain suitable assumptions, we can recover both the low-rank and the sparse components exactly by simply solving a convex program whose objective is a weighted combination of the tensor nuclear norm and the @math -norm, i.e., @math , where @math . Interestingly, TRPCA involves RPCA as a special case when @math and thus it is a simple and elegant tensor extension of RPCA. Also numerical experiments verify our theory and the application for the image denoising demonstrates the effectiveness of our method."
]
} |
1610.07004 | 2951400550 | Understanding political phenomena requires measuring the political preferences of society. We introduce a model based on mixtures of spatial voting models that infers the underlying distribution of political preferences of voters with only voting records of the population and political positions of candidates in an election. Beyond offering a cost-effective alternative to surveys, this method projects the political preferences of voters and candidates into a shared latent preference space. This projection allows us to directly compare the preferences of the two groups, which is desirable for political science but difficult with traditional survey methods. After validating the aggregated-level inferences of this model against results of related work and on simple prediction tasks, we apply the model to better understand the phenomenon of political polarization in the Texas, New York, and Ohio electorates. Taken at face value, inferences drawn from our model indicate that the electorates in these states may be less bimodal than the distribution of candidates, but that the electorates are comparatively more extreme in their variance. We conclude with a discussion of limitations of our method and potential future directions for research. | There has been recent work in quantitative political science that is closely related to our work. For instance, researchers recently developed a technique for estimating the preferences of the electorate and elected officials from Twitter data using a probabilistic generative network model related to the spatial voting model we use @cite_26 . Some political scientists have used ideal point models, which are closely related to spatial voting models, to infer distributions of voter preferences from fine-grained voter data @cite_9 @cite_15 . Unlike our work, these previous works using voting data relied on individual-level voting data, which is difficult to obtain. 
Other political scientists have developed meta-analysis-like methods for aggregating survey results to improve accuracy and representativeness @cite_18 , but this work still suffers from the limitation of low coverage of survey data due to collection difficulties. Thus, the methods can only consider a coarser level of geographical granularity. | {
"cite_N": [
"@cite_9",
"@cite_26",
"@cite_18",
"@cite_15"
],
"mid": [
"2160554276",
"2161834943",
"2147533464",
"2020731668"
],
"abstract": [
"This paper presents a method for inferring the distribution of voter ideal points on a single dimension from individual-level binary choice data. The statistical model and estimation technique draw heavily on the psychometric literature on test taking and, in particular, on the work of Bock and Aitkin (1981) and are similar to several recent methods of estimating legislative ideal points (Londregan 2000; Bailey 2001). I present Monte Carlo results validating the method. The method is then applied to determining the partisan and ideological basis of support for presidential candidates in 1992 and to U.S. mass and congressional partisan realignment on abortion policy since 1973.",
"Parties, candidates, and voters are becoming increasingly engaged in political conversations through the micro-blogging platform Twitter. In this paper I show that the structure of the social networks in which they are embedded has the potential to become a source of information about policy positions. Under the assumption that social networks are homophilic (, 2001), that is, the propensity of users to cluster along partisan lines, I develop a Bayesian Spatial Following model that scales Twitter users along a common ideological dimension based on who they follow. I apply this network-based method to estimate ideal points for Twitter users in the US, the UK, Spain, Italy, and the Netherlands. The resulting positions of the party accounts on Twitter are highly correlated with offline measures based on their voting records and their manifestos. Similarly, this method is able to successfully classify individuals who state their political orientation publicly, and a sample of users from the state of Ohio whose Twitter accounts are matched with their voter registration history. To illustrate the potential contribution of these estimates, I examine the extent to which online behavior is polarized along ideological lines. Using the 2012 US presidential election campaign as a case study, I find that public exchanges on Twitter take place predominantly among users with similar viewpoints.",
"Little is known about the American public’s policy preferences at the level of Congressional districts, state legislative districts, and local municipalities. In this article, we overcome the limited sample sizes that have hindered previous research by jointly scaling the policy preferences of 275,000 Americans based on their responses to policy questions. We combine this large dataset of Americans’ policy preferences with recent advances in opinion estimation to estimate the preferences of every state, congressional district, state legislative district, and large city. We show that our estimates outperform previous measures of citizens’ policy preferences. These new estimates enable scholars to examine representation at a variety of geographic levels. We demonstrate the utility of these estimates through applications of our measures to examine representation in state legislatures and city governments.",
"Despite the centrality of the median voter prediction in political economy models, overwhelming empirical evidence shows that legislators regularly take positions that diverge significantly from the preferences of the median voter in their districts. However, all these empirical studies to date lack the necessary data to directly measure the preferences of the median voter. We utilize a unique data set consisting of individual-level voting data that allows us to construct direct measures of voter preferences. We find that legislators are most constrained by the preferences of the median voter in homogeneous districts."
]
} |
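The spatial voting idea running through this row (voters and candidates embedded in a shared latent preference space, with voters preferring nearer candidates) can be illustrated with a small sketch. This is a toy softmax-over-negative-squared-distance choice model under assumed names (`vote_probability`, `temp`), not the paper's actual likelihood or inference procedure.

```python
import numpy as np

def vote_probability(voter, candidates, temp=1.0):
    """Spatial-voting choice probabilities: a voter at position `voter`
    prefers nearer candidates in the shared preference space.
    Softmax over negative squared distances (illustrative only).
    voter: (d,) position; candidates: (k, d) positions."""
    voter = np.atleast_1d(voter)
    d2 = ((candidates - voter) ** 2).sum(axis=1)  # squared distances to candidates
    logits = -d2 / temp
    e = np.exp(logits - logits.max())             # stable softmax
    return e / e.sum()
```

For example, a voter at 0 on a one-dimensional scale should favor a candidate at -0.5 over one at 2.0.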
1610.07004 | 2951400550 | Understanding political phenomena requires measuring the political preferences of society. We introduce a model based on mixtures of spatial voting models that infers the underlying distribution of political preferences of voters with only voting records of the population and political positions of candidates in an election. Beyond offering a cost-effective alternative to surveys, this method projects the political preferences of voters and candidates into a shared latent preference space. This projection allows us to directly compare the preferences of the two groups, which is desirable for political science but difficult with traditional survey methods. After validating the aggregated-level inferences of this model against results of related work and on simple prediction tasks, we apply the model to better understand the phenomenon of political polarization in the Texas, New York, and Ohio electorates. Taken at face value, inferences drawn from our model indicate that the electorates in these states may be less bimodal than the distribution of candidates, but that the electorates are comparatively more extreme in their variance. We conclude with a discussion of limitations of our method and potential future directions for research. | Within the computer science field, our work falls closest to a growing line of research dedicated to developing novel machine learning models for computational social science. Machine learning researchers in this area have not yet addressed the exact problem we study in our work, to the best of our knowledge. However, they have been interested in similar problems and related classes of models (e.g. @cite_13 @cite_3 @cite_1 @cite_22 ). More tangentially, a large body of work in computer science has been dedicated to drawing inferences from public observational data. 
Some researchers have suggested using social media data to better understand public opinion @cite_21 , while others have developed models based on inconsistent user behavior to infer their implicit preferences @cite_2 . | {
"cite_N": [
"@cite_22",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_13"
],
"mid": [
"2107107106",
"2069516633",
"",
"2073020428",
"1574648663",
""
],
"abstract": [
"Consider data consisting of pairwise measurements, such as presence or absence of links between pairs of objects. These data arise, for instance, in the analysis of protein interactions and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing pairwise measurements with probabilistic models requires special assumptions, since the usual independence or exchangeability assumptions no longer hold. Here we introduce a class of variance allocation models for pairwise measurements: mixed membership stochastic blockmodels. These models combine global parameters that instantiate dense patches of connectivity (blockmodel) with local parameters that instantiate node-specific variability in the connections (mixed membership). We develop a general variational inference algorithm for fast approximate posterior inference. We demonstrate the advantages of mixed membership stochastic blockmodels with applications to social networks and protein interaction networks.",
"On 3 November 1948, the day after Harry Truman won the United States presidential elections, the Chicago Tribune published one of the most famous erroneous headlines in newspaper history: “Dewey Defeats Truman” ( 1 , 2 ). The headline was informed by telephone surveys, which had inadvertently undersampled Truman supporters ( 1 ). Rather than permanently discrediting the practice of polling, this event led to the development of more sophisticated techniques and higher standards that produce the more accurate and statistically rigorous polls conducted today ( 3 ).",
"",
"We present a new solution to the ``ecological inference'' problem, of learning individual-level associations from aggregate data. This problem has a long history and has attracted much attention, debate, claims that it is unsolvable, and purported solutions. Unlike other ecological inference techniques, our method makes use of unlabeled individual-level data by embedding the distribution over these predictors into a vector in Hilbert space. Our approach relies on recent learning theory results for distribution regression, using kernel embeddings of distributions. Our novel approach to distribution regression exploits the connection between Gaussian process regression and kernel ridge regression, giving us a coherent, Bayesian approach to learning and inference and a convenient way to include prior information in the form of a spatial covariance function. Our approach is highly scalable as it relies on FastFood, a randomized explicit feature representation for kernel embeddings. We apply our approach to the challenging political science problem of modeling the voting behavior of demographic groups based on aggregate voting data. We consider the 2012 US Presidential election, and ask: what was the probability that members of various demographic groups supported Barack Obama, and how did this vary spatially across the country? Our results match standard survey-based exit polling data for the small number of states for which it is available, and serve to fill in the large gaps in this data, at a much higher degree of granularity.",
"We propose a novel parameterized family of Mixed Membership Mallows Models (M4) to account for variability in pairwise comparisons generated by a heterogeneous population of noisy and inconsistent users. M4 models individual preferences as a user-specific probabilistic mixture of shared latent Mallows components. Our key algorithmic insight for estimation is to establish a statistical connection between M4 and topic models by viewing pairwise comparisons as words, and users as documents. This key insight leads us to explore Mallows components with a separable structure and leverage recent advances in separable topic discovery. While separability appears to be overly restrictive, we nevertheless show that it is an inevitable outcome of a relatively small number of latent Mallows components in a world of large number of items. We then develop an algorithm based on robust extreme-point identification of convex polygons to learn the reference rankings, which is provably consistent with polynomial sample complexity guarantees. We demonstrate that our new model is empirically competitive with the current state-of-the-art approaches in predicting real-world preferences.",
""
]
} |
1610.07238 | 2952702088 | In visual tracking, part-based trackers are attractive since they are robust against occlusion and deformation. However, a part represented by a rectangular patch does not account for the shape of the target, while a superpixel does thanks to its boundary evidence. Nevertheless, tracking superpixels is difficult due to their lack of discriminative power. Therefore, to enable superpixels to be tracked discriminatively as object parts, we propose to enhance them with keypoints. By combining properties of these two features, we build a novel element designated as a Superpixel-Keypoints structure (SPiKeS). Being discriminative, these new object parts can be located efficiently by a simple nearest neighbor matching process. Then, in a tracking process, each match votes for the target's center to give its location. In addition, the interesting properties of our new feature allows the development of an efficient model update for more robust tracking. According to experimental results, our SPiKeS-based tracker proves to be robust in many challenging scenarios by performing favorably against the state-of-the-art. | Our localization process is inspired by @cite_20 @cite_31 . Their approach assigns to each matching keypoint a vote for the center of the target, allowing keypoints to locate the target independently from each other. Hierarchical clustering then converges to a consensus of votes such that outliers are removed. Finally, the selected votes estimate the position as a simple center of mass. Furthermore, @cite_0 proposed to weight the votes according to the reliability of keypoints. We do the same, but instead of voting with keypoints, we vote with SPiKeS. | {
"cite_N": [
"@cite_0",
"@cite_31",
"@cite_20"
],
"mid": [
"1980475298",
"",
"1993221692"
],
"abstract": [
"We present a novel part-based method for model-free tracking. In our model, key points are considered as elementary predictors, collaborating to localize the target. In order to differentiate reliable features from outliers and bad predictors, we define the notion of feature saliency including three factors: the persistence, the spatial consistency, and the predictive power of local features. Saliency information is learned during tracking to be used in several algorithmic steps: local predictions, global localization, feature removal, etc. By exploiting saliency information and key point structural properties, the proposed algorithm is able to track accurately generic objects, facing several difficulties such as occlusions, presence of distractors, and abrupt motion. The proposed tracker demonstrated a high robustness on challenging public datasets, outperforming significantly five recent state-of-the-art trackers.",
"",
"We propose a novel keypoint-based method for long-term model-free object tracking in a combined matching-and-tracking framework. In order to localise the object in every frame, each keypoint casts votes for the object center. As erroneous keypoints are hard to avoid, we employ a novel consensus-based scheme for outlier detection in the voting behaviour. To make this approach computationally feasible, we propose not to employ an accumulator space for votes, but rather to cluster votes directly in the image space. By transforming votes based on the current keypoint constellation, we account for changes of the object in scale and rotation. In contrast to competing approaches, we refrain from updating the appearance information, thus avoiding the danger of making errors. The use of fast keypoint detectors and binary descriptors allows for our implementation to run in real-time. We demonstrate experimentally on a diverse dataset that is as large as 60 sequences that our method outperforms the state-of-the-art when high accuracy is required and visualise these results by employing a variant of success plots."
]
} |
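The vote-based localization summarized in this row's related-work text (each matched keypoint casts a weighted vote for the target center, and a consensus step rejects outlier votes before averaging) can be sketched as follows. The hierarchical-clustering consensus of the cited trackers is replaced here by a simple median-distance filter, and all names (`vote_for_center`, `inlier_radius`) are illustrative assumptions.

```python
import numpy as np

def vote_for_center(keypoints, offsets, weights, inlier_radius=10.0):
    """Each matched keypoint votes for the target center via its stored
    offset vector. Votes far from the median vote are discarded (a crude
    stand-in for clustering-based consensus), and the remaining votes are
    combined as a reliability-weighted center of mass.
    keypoints, offsets: (N, 2) arrays; weights: (N,) reliabilities."""
    votes = keypoints + offsets                     # (N, 2) candidate centers
    median = np.median(votes, axis=0)
    dist = np.linalg.norm(votes - median, axis=1)
    keep = dist <= inlier_radius                    # consensus filter
    w = weights[keep]
    return (votes[keep] * w[:, None]).sum(axis=0) / w.sum()
```

With three consistent voters and one outlier, the outlier vote is filtered out and the estimate lands on the consensus center.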
1610.06941 | 2543373437 | Studying metabolic networks is vital for many areas such as novel drugs and bio-fuels. For biologists, a key challenge is that many reactions are impractical or expensive to be found through experiments. Our task is to recover the missing reactions. By exploiting the problem structure, we model reaction recovery as a hyperlink prediction problem, where each reaction is regarded as a hyperlink connecting its participating vertices (metabolites). Different from the traditional link prediction problem where two nodes form a link, a hyperlink can involve an arbitrary number of nodes. Since the cardinality of a hyperlink is variable, existing classifiers based on a fixed number of input features become infeasible. Traditional methods, such as common neighbors and Katz index, are not applicable either, since they are restricted to pairwise similarities. In this paper, we propose a novel hyperlink prediction algorithm, called Matrix Boosting (MATBoost). MATBoost conducts inference jointly in the incidence space and adjacency space by performing an iterative completion-matching optimization. We carry out extensive experiments to show that MATBoost achieves state-of-the-art performance. For a metabolic network with 1805 metabolites and 2583 reactions, our algorithm can successfully recover nearly 200 reactions out of 400 missing reactions. | Although hyperlinks are common in the real world and can be used to model multi-way relationships, there is still limited research on hyperlink prediction. @cite_23 proposed a supervised HPLSF framework to predict hyperlinks in social networks. To deal with the variable number of features, HPLSF uses their entropy score as a fixed-length feature for training a classification model. To our best knowledge, this is the only algorithm that is specifically designed for hyperlink prediction in arbitrary-cardinality hypernetworks. | {
"cite_N": [
"@cite_23"
],
"mid": [
"588318799"
],
"abstract": [
"Predicting the existence of links between pairwise objects in networks is a key problem in the study of social networks. However, relationships among objects are often more complex than simple pairwise relations. By restricting attention to dyads, it is possible that information valuable for many learning tasks can be lost. The hypernetwork relaxes the assumption that only two nodes can participate in a link, permitting instead an arbitrary number of nodes to participate in so-called hyperlinks or hyperedges, which is a more natural representation for complex, multi-party relations. However, the hyperlink prediction problem has yet to be studied. In this paper, we propose HPLSF (Hyperlink Prediction using Latent Social Features), a hyperlink prediction algorithm for hypernetworks. By exploiting the homophily property of social networks, HPLSF explores social features for hyperlink prediction. To handle the problem that social features are not always observable, a latent social feature learning scheme is developed. To cope with the arbitrary cardinality hyperlink issue in hypernetworks, we design a feature-embedding scheme to map the a priori arbitrarily-sized feature set associated with each hyperlink into a uniformly-sized auxiliary space. To address the fact that observed features and latent features may not be independent, we generalize a structural SVM to learn using both observed features and latent features. In experiments, we evaluate the proposed HPLSF framework on three large-scale hypernetwork datasets. Our results on the three diverse datasets demonstrate the effectiveness of the HPLSF algorithm. Although developed in the context of social networks, HPLSF is a general methodology and applies to arbitrary hypernetworks."
]
} |
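The fixed-length embedding trick described in this row (HPLSF maps the variable-size feature set of an arbitrary-cardinality hyperlink to a uniform-length vector via an entropy score) can be sketched along these lines. The per-dimension histogram entropy below is an illustrative stand-in for the paper's exact scheme, and the names (`hyperlink_feature`, `bins`) are assumptions.

```python
import numpy as np

def hyperlink_feature(node_features, bins=8):
    """Map the variable-size set of node feature vectors of one candidate
    hyperlink to a fixed-length vector, in the spirit of HPLSF's
    entropy-based embedding (illustrative, not the paper's exact scheme).
    node_features: (k, d) array, where k is the hyperlink cardinality
    (variable) and d the per-node feature dimension (fixed)."""
    X = np.asarray(node_features, dtype=float)
    feats = []
    for j in range(X.shape[1]):
        hist, _ = np.histogram(X[:, j], bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        feats.append(-(p * np.log(p)).sum())  # Shannon entropy of dimension j
    return np.array(feats)                    # length d, regardless of k
```

Hyperlinks of different cardinality thus yield feature vectors of identical length, which a standard classifier can consume.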
1610.06447 | 2538817717 | This paper presents a unified framework for smooth convex regularization of discrete optimal transport problems. In this context, the regularized optimal transport turns out to be equivalent to a matrix nearness problem with respect to Bregman divergences. Our framework thus naturally generalizes a previously proposed regularization based on the Boltzmann-Shannon entropy related to the Kullback-Leibler divergence, and solved with the Sinkhorn-Knopp algorithm. We call the regularized optimal transport distance the rot mover's distance in reference to the classical earth mover's distance. We develop two generic schemes that we respectively call the alternate scaling algorithm and the non-negative alternate scaling algorithm, to compute efficiently the regularized optimal plans depending on whether the domain of the regularizer lies within the non-negative orthant or not. These schemes are based on Dykstra's algorithm with alternate Bregman projections, and further exploit the Newton-Raphson method when applied to separable divergences. We enhance the separable case with a sparse extension to deal with high data dimensions. We also instantiate our proposed framework and discuss the inherent specificities for well-known regularizers and statistical divergences in the machine learning and information geometry communities. Finally, we demonstrate the merits of our methods with experiments using synthetic data to illustrate the effect of different regularizers and penalties on the solutions, as well as real-world data for a pattern recognition application to audio scene classification. | @cite_31 revisited the entropic regularization in a geometrical framework with iterative information projections. They showed that computing a Sinkhorn distance in dual form actually amounts to the minimization of a Kullback-Leibler divergence: Precisely, this amounts to computing the Kullback-Leibler projection of @math onto the transport polytope @math . 
In this context, the Sinkhorn-Knopp algorithm turns out to be a special instance of Bregman projection onto the intersection of convex sets via alternate projections. Specifically, we see @math as the intersection of the non-negative orthant with two affine subspaces containing all matrices with rows and columns summing to @math and @math respectively, and we alternate projections onto these two subspaces according to the Kullback-Leibler divergence until convergence. | {
"cite_N": [
"@cite_31"
],
"mid": [
"2036996178"
],
"abstract": [
"This article details a general numerical framework to approximate solutions to linear programs related to optimal transport. The general idea is to introduce an entropic regularization of the initial linear program. This regularized problem corresponds to a Kullback-Leibler Bregman divergence projection of a vector (representing some initial joint distribution) on the polytope of constraints. We show that for many problems related to optimal transport, the set of linear constraints can be split in an intersection of a few simple constraints, for which the projections can be computed in closed form. This allows us to make use of iterative Bregman projections (when there are only equality constraints) or more generally Bregman-Dykstra iterations (when inequality constraints are involved). We illustrate the usefulness of this approach to several variational problems related to optimal transport: barycenters for the optimal transport metric, tomographic reconstruction, multi-marginal optimal transport and in particular its application to Brenier's relaxed solutions of incompressible Euler equations, partial unbalanced optimal transport and optimal transport with capacity constraints."
]
} |
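The alternating-projection view of Sinkhorn-Knopp described in this row (alternately enforcing the row-sum and column-sum constraints, each step being a KL/Bregman projection onto one affine set) can be sketched as follows. Variable names (`C` for the cost, `r`, `c` for the marginals, `gamma` for the entropic regularization) are assumptions; this is a minimal sketch, not the paper's full framework.

```python
import numpy as np

def sinkhorn(C, r, c, gamma=0.1, n_iter=500):
    """Entropic optimal transport via Sinkhorn-Knopp, viewed as alternating
    KL (Bregman) projections of K = exp(-C/gamma) onto the two affine sets
    of non-negative matrices whose rows (resp. columns) sum to r (resp. c).
    Returns the regularized transport plan diag(u) K diag(v)."""
    K = np.exp(-C / gamma)            # Gibbs kernel
    u = np.ones_like(r)
    for _ in range(n_iter):
        v = c / (K.T @ u)             # KL projection onto column-sum constraint
        u = r / (K @ v)               # KL projection onto row-sum constraint
    return u[:, None] * K * v[None, :]
```

At convergence the plan's marginals match `r` and `c`, which is the intersection of the two constraint sets described above.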
1610.06449 | 2533058588 | This paper presents a novel fixation prediction and saliency modeling framework based on inter-image similarities and ensemble of Extreme Learning Machines (ELM). The proposed framework is inspired by two observations, (1) the contextual information of a scene along with low-level visual cues modulates attention, (2) the influence of scene memorability on eye movement patterns caused by the resemblance of a scene to a former visual experience. Motivated by such observations, we develop a framework that estimates the saliency of a given image using an ensemble of extreme learners, each trained on an image similar to the input image. That is, after retrieving a set of similar images for a given image, a saliency predictor is learnt from each of the images in the retrieved image set using an ELM, resulting in an ensemble. The saliency of the given image is then measured in terms of the mean of predicted saliency value by the ensembles members. | Learning-based techniques are a large group of methods that establish a relation between a feature space and human fixations. For example, @cite_44 uses a nonlinear transformation to associate image patches with human eye movement statistics. In @cite_60 , a linear SVM classifier is used to establish a relation between three channels of low- (intensity, color, etc.), mid- (horizon line) and high-level (faces and people) features and human eye movements in order to produce a saliency map. In a similar vein, @cite_1 employs multiple-instance learning. By learning a classifier, @cite_23 @cite_21 estimate the optimal weights for fusing several conspicuity maps from observers' eye movement data. These approaches often learn a probabilistic classifier to determine the probability of a feature being salient. Then, they employ the estimated saliency probability in order to build a saliency map. | {
"cite_N": [
"@cite_60",
"@cite_21",
"@cite_1",
"@cite_44",
"@cite_23"
],
"mid": [
"1510835000",
"1996095832",
"1976977741",
"2006201641",
"2151900481"
],
"abstract": [
"For many applications in graphics, design, and human computer interaction, it is essential to understand where humans look in a scene. Where eye tracking devices are not a viable option, models of saliency can be used to predict fixation locations. Most saliency approaches are based on bottom-up computation that does not consider top-down image semantics and often does not match actual eye movements. To address this problem, we collected eye tracking data of 15 viewers on 1003 images and use this database as training and testing examples to learn a model of saliency based on low, middle and high-level image features. This large database of eye tracking data is publicly available with this paper.",
"To predict where subjects look under natural viewing conditions, biologically inspired saliency models decompose visual input into a set of feature maps across spatial scales. The output of these feature maps are summed to yield the final saliency map. We studied the integration of bottom-up feature maps across multiple spatial scales by using eye movement data from four recent eye tracking datasets. We use AdaBoost as the central computational module that takes into account feature selection, thresholding, weight assignment, and integration in a principled and nonlinear learning framework. By combining the output of feature maps via a series of nonlinear classifiers, the new model consistently predicts eye movements better than any of its competitors.",
"Saliency detection has been a hot topic in recent years. Its popularity is mainly because of its theoretical meaning for explaining human attention and applicable aims in segmentation, recognition, etc. Nevertheless, traditional algorithms are mostly based on unsupervised techniques, which have limited learning ability. The obtained saliency map is also inconsistent with many properties of human behavior. In order to overcome the challenges of inability and inconsistency, this paper presents a framework based on multiple-instance learning. Low-, mid-, and high-level features are incorporated in the detection procedure, and the learning ability enables it robust to noise. Experiments on a data set containing 1000 images demonstrate the effectiveness of the proposed framework. Its applicability is shown in the context of a seam carving application.",
"The human visual system is foveated, that is, outside the central visual field resolution and acuity drop rapidly. Nonetheless much of a visual scene is perceived after only a few saccadic eye movements, suggesting an effective strategy for selecting saccade targets. It has been known for some time that local image structure at saccade targets influences the selection process. However, the question of what the most relevant visual features are is still under debate. Here we show that center-surround patterns emerge as the optimal solution for predicting saccade targets from their local image structure. The resulting model, a one-layer feed-forward network, is surprisingly simple compared to previously suggested models which assume much more complex computations such as multi-scale processing and multiple feature channels. Nevertheless, our model is equally predictive. Furthermore, our findings are consistent with neurophysiological hardware in the superior colliculus. Bottom-up visual saliency may thus not be computed cortically as has been thought previously. Keywords: visual saliency, eye movements, receptive field analysis, classification images, kernel methods, support vector machines, natural scenes. Citation: Kienzle, W., Franz, M. O., Scholkopf, B., & Wichmann, F. A. (2009). Center-surround patterns emerge as optimal predictors for human saccade targets. Journal of Vision, 9(5):7, 1–15, http://journalofvision.org/9/5/7, doi:10.1167/9.5.7.",
"Inspired by the primate visual system, computational saliency models decompose visual input into a set of feature maps across spatial scales in a number of pre-specified channels. The outputs of these feature maps are summed to yield the final saliency map. Here we use a least square technique to learn the weights associated with these maps from subjects freely fixating natural scenes drawn from four recent eye-tracking data sets. Depending on the data set, the weights can be quite different, with the face and orientation channels usually more important than color and intensity channels. Inter-subject differences are negligible. We also model a bias toward fixating at the center of images and consider both time-varying and constant factors that contribute to this bias. To compensate for the inadequacy of the standard method to judge performance (area under the ROC curve), we use two other metrics to comprehensively assess performance. Although our model retains the basic structure of the standard saliency model, it outperforms several state-of-the-art saliency algorithms. Furthermore, the simple structure makes the results applicable to numerous studies in psychophysics and physiology and leads to an extremely easy implementation for real-world applications."
]
} |
1610.06449 | 2533058588 | This paper presents a novel fixation prediction and saliency modeling framework based on inter-image similarities and ensemble of Extreme Learning Machines (ELM). The proposed framework is inspired by two observations, (1) the contextual information of a scene along with low-level visual cues modulates attention, (2) the influence of scene memorability on eye movement patterns caused by the resemblance of a scene to a former visual experience. Motivated by such observations, we develop a framework that estimates the saliency of a given image using an ensemble of extreme learners, each trained on an image similar to the input image. That is, after retrieving a set of similar images for a given image, a saliency predictor is learnt from each of the images in the retrieved image set using an ELM, resulting in an ensemble. The saliency of the given image is then measured in terms of the mean of predicted saliency value by the ensembles members. | Ensembles of Deep Networks (eDN) @cite_39 adopts the neural filters learned during image classification task by deep neural networks and learns a classifier to perform fixation prediction. eDN can be considered an extension to @cite_60 in which the features are obtained from layers of a deep neural network. For each layer of the deep neural network, eDN first learns the optimal blend of the neural responses of all the previous layers and the current layer by a guided hyperparameter search. Then, it concatenates the optimal blend of all the layers to form a feature vector for learning a linear SVM classifier. | {
"cite_N": [
"@cite_60",
"@cite_39"
],
"mid": [
"1510835000",
"2078903912"
],
"abstract": [
"For many applications in graphics, design, and human computer interaction, it is essential to understand where humans look in a scene. Where eye tracking devices are not a viable option, models of saliency can be used to predict fixation locations. Most saliency approaches are based on bottom-up computation that does not consider top-down image semantics and often does not match actual eye movements. To address this problem, we collected eye tracking data of 15 viewers on 1003 images and use this database as training and testing examples to learn a model of saliency based on low, middle and high-level image features. This large database of eye tracking data is publicly available with this paper.",
"Saliency prediction typically relies on hand-crafted (multiscale) features that are combined in different ways to form a \"master\" saliency map, which encodes local image conspicuity. Recent improvements to the state of the art on standard benchmarks such as MIT1003 have been achieved mostly by incrementally adding more and more hand-tuned features (such as car or face detectors) to existing models. In contrast, we here follow an entirely automatic data-driven approach that performs a large-scale search for optimal features. We identify those instances of a richly-parameterized bio-inspired model family (hierarchical neuromorphic networks) that successfully predict image saliency. Because of the high dimensionality of this parameter space, we use automated hyperparameter optimization to efficiently guide the search. The optimal blend of such multilayer features combined with a simple linear classifier achieves excellent performance on several image saliency benchmarks. Our models outperform the state of the art on MIT1003, on which features and classifiers are learned. Without additional training, these models generalize well to two other image saliency data sets, Toronto and NUSEF, despite their different image content. Finally, our algorithm scores best of all the 23 models evaluated to date on the MIT300 saliency challenge, which uses a hidden test set to facilitate an unbiased comparison."
]
} |
1610.06449 | 2533058588 | This paper presents a novel fixation prediction and saliency modeling framework based on inter-image similarities and ensemble of Extreme Learning Machines (ELM). The proposed framework is inspired by two observations, (1) the contextual information of a scene along with low-level visual cues modulates attention, (2) the influence of scene memorability on eye movement patterns caused by the resemblance of a scene to a former visual experience. Motivated by such observations, we develop a framework that estimates the saliency of a given image using an ensemble of extreme learners, each trained on an image similar to the input image. That is, after retrieving a set of similar images for a given image, a saliency predictor is learnt from each of the images in the retrieved image set using an ELM, resulting in an ensemble. The saliency of the given image is then measured in terms of the mean of predicted saliency value by the ensembles members. | Deep Gaze I @cite_64 utilizes CNNs for the fixation prediction task by treating saliency prediction as a point process. Although this model is justified differently from @cite_39 and @cite_60 , in practice it boils down to the same framework. Nonetheless, the objective function to be minimized is slightly different due to the explicit incorporation of the center-bias factor and the imposed sparsity constraint in the framework. SalNet @cite_89 is another technique that employs a CNN-based architecture, where the last layer is a deconvolution. The first convolution layers are initialized from VGG16 @cite_56 and the deconvolution is learnt by fine-tuning the architecture for fixation prediction. | {
"cite_N": [
"@cite_64",
"@cite_60",
"@cite_89",
"@cite_56",
"@cite_39"
],
"mid": [
"2964145162",
"1510835000",
"2952932416",
"1686810756",
"2078903912"
],
"abstract": [
"",
"For many applications in graphics, design, and human computer interaction, it is essential to understand where humans look in a scene. Where eye tracking devices are not a viable option, models of saliency can be used to predict fixation locations. Most saliency approaches are based on bottom-up computation that does not consider top-down image semantics and often does not match actual eye movements. To address this problem, we collected eye tracking data of 15 viewers on 1003 images and use this database as training and testing examples to learn a model of saliency based on low, middle and high-level image features. This large database of eye tracking data is publicly available with this paper.",
"The prediction of salient areas in images has been traditionally addressed with hand-crafted features based on neuroscience principles. This paper, however, addresses the problem with a completely data-driven approach by training a convolutional neural network (convnet). The learning process is formulated as a minimization of a loss function that measures the Euclidean distance of the predicted saliency map with the provided ground truth. The recent publication of large datasets of saliency prediction has provided enough data to train end-to-end architectures that are both fast and accurate. Two designs are proposed: a shallow convnet trained from scratch, and a another deeper solution whose first three layers are adapted from another network trained for classification. To the authors knowledge, these are the first end-to-end CNNs trained and tested for the purpose of saliency prediction.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"Saliency prediction typically relies on hand-crafted (multiscale) features that are combined in different ways to form a \"master\" saliency map, which encodes local image conspicuity. Recent improvements to the state of the art on standard benchmarks such as MIT1003 have been achieved mostly by incrementally adding more and more hand-tuned features (such as car or face detectors) to existing models. In contrast, we here follow an entirely automatic data-driven approach that performs a large-scale search for optimal features. We identify those instances of a richly-parameterized bio-inspired model family (hierarchical neuromorphic networks) that successfully predict image saliency. Because of the high dimensionality of this parameter space, we use automated hyperparameter optimization to efficiently guide the search. The optimal blend of such multilayer features combined with a simple linear classifier achieves excellent performance on several image saliency benchmarks. Our models outperform the state of the art on MIT1003, on which features and classifiers are learned. Without additional training, these models generalize well to two other image saliency data sets, Toronto and NUSEF, despite their different image content. Finally, our algorithm scores best of all the 23 models evaluated to date on the MIT300 saliency challenge, which uses a hidden test set to facilitate an unbiased comparison."
]
} |
1610.06449 | 2533058588 | This paper presents a novel fixation prediction and saliency modeling framework based on inter-image similarities and ensemble of Extreme Learning Machines (ELM). The proposed framework is inspired by two observations, (1) the contextual information of a scene along with low-level visual cues modulates attention, (2) the influence of scene memorability on eye movement patterns caused by the resemblance of a scene to a former visual experience. Motivated by such observations, we develop a framework that estimates the saliency of a given image using an ensemble of extreme learners, each trained on an image similar to the input image. That is, after retrieving a set of similar images for a given image, a saliency predictor is learnt from each of the images in the retrieved image set using an ELM, resulting in an ensemble. The saliency of the given image is then measured in terms of the mean of predicted saliency value by the ensembles members. | Multiresolution CNN (Mr-CNN) @cite_76 designs a deep CNN-based technique to discriminate image patches centered on fixations from non-fixated image patches at multiple resolutions. It hence trains a convolutional neural network at each scale, which results in three parallel networks. The outputs of these networks are connected together through a common classification layer in order to learn the best resolution combination. | {
"cite_N": [
"@cite_76"
],
"mid": [
"1948843088"
],
"abstract": [
"It is believed that eye movements in free-viewing of natural scenes are directed by both bottom-up visual saliency and top-down visual factors. In this paper, we propose a novel computational framework to simultaneously learn these two types of visual features from raw image data using a multiresolution convolutional neural network (Mr-CNN) for predicting eye fixations. The Mr-CNN is directly trained from image regions centered on fixation and non-fixation locations over multiple resolutions, using raw image pixels as inputs and eye fixation attributes as labels. Diverse top-down visual features can be learned in higher layers. Meanwhile bottom-up visual saliency can also be inferred via combining information over multiple resolutions. Finally, optimal integration of bottom-up and top-down cues can be learned in the last logistic regression layer to predict eye fixations. The proposed approach achieves state-of-the-art results over four publically available benchmark datasets, demonstrating the superiority of our work."
]
} |
1610.06449 | 2533058588 | This paper presents a novel fixation prediction and saliency modeling framework based on inter-image similarities and ensemble of Extreme Learning Machines (ELM). The proposed framework is inspired by two observations, (1) the contextual information of a scene along with low-level visual cues modulates attention, (2) the influence of scene memorability on eye movement patterns caused by the resemblance of a scene to a former visual experience. Motivated by such observations, we develop a framework that estimates the saliency of a given image using an ensemble of extreme learners, each trained on an image similar to the input image. That is, after retrieving a set of similar images for a given image, a saliency predictor is learnt from each of the images in the retrieved image set using an ELM, resulting in an ensemble. The saliency of the given image is then measured in terms of the mean of predicted saliency value by the ensembles members. | SALICON @cite_79 develops a model by fine-tuning the convolutional neural network, trained on ImageNet, using saliency evaluation metrics as objective functions. It feeds an image into a CNN architecture at two resolutions, coarse and fine. Then, the response of the last convolution layer is obtained for each scale. These responses are then concatenated together and are fed into a linear integration scheme, optimizing the Kullback-Leibler divergence between the network output and the ground-truth fixation maps in a regression setup. The error is back-propagated to the convolution layers for fine-tuning the network. | {
"cite_N": [
"@cite_79"
],
"mid": [
"2212216676"
],
"abstract": [
"Saliency in Context (SALICON) is an ongoing effort that aims at understanding and predicting visual attention. Conventional saliency models typically rely on low-level image statistics to predict human fixations. While these models perform significantly better than chance, there is still a large gap between model prediction and human behavior. This gap is largely due to the limited capability of models in predicting eye fixations with strong semantic content, the so-called semantic gap. This paper presents a focused study to narrow the semantic gap with an architecture based on Deep Neural Network (DNN). It leverages the representational power of high-level semantics encoded in DNNs pretrained for object recognition. Two key components are fine-tuning the DNNs fully convolutionally with an objective function based on the saliency evaluation metrics, and integrating information at different image scales. We compare our method with 14 saliency models on 6 public eye tracking benchmark datasets. Results demonstrate that our DNNs can automatically learn features particularly for saliency prediction that surpass by a big margin the state-of-the-art. In addition, our model ranks top to date under all seven metrics on the MIT300 challenge set."
]
} |
1610.06449 | 2533058588 | This paper presents a novel fixation prediction and saliency modeling framework based on inter-image similarities and ensemble of Extreme Learning Machines (ELM). The proposed framework is inspired by two observations, (1) the contextual information of a scene along with low-level visual cues modulates attention, (2) the influence of scene memorability on eye movement patterns caused by the resemblance of a scene to a former visual experience. Motivated by such observations, we develop a framework that estimates the saliency of a given image using an ensemble of extreme learners, each trained on an image similar to the input image. That is, after retrieving a set of similar images for a given image, a saliency predictor is learnt from each of the images in the retrieved image set using an ELM, resulting in an ensemble. The saliency of the given image is then measured in terms of the mean of predicted saliency value by the ensembles members. | The proposed method can be considered a learning-based approach. While many of the learning-based techniques are essentially solving a classification problem, the proposed model has a regression ideology in mind. It is thus closer to the recent deep learning approaches that treat the problem as estimation of a probability map in terms of a regression problem @cite_89 @cite_79 @cite_97 . Nonetheless, it exploits an ensemble of extreme learning machines. | {
"cite_N": [
"@cite_79",
"@cite_97",
"@cite_89"
],
"mid": [
"2212216676",
"2442293398",
"2952932416"
],
"abstract": [
"Saliency in Context (SALICON) is an ongoing effort that aims at understanding and predicting visual attention. Conventional saliency models typically rely on low-level image statistics to predict human fixations. While these models perform significantly better than chance, there is still a large gap between model prediction and human behavior. This gap is largely due to the limited capability of models in predicting eye fixations with strong semantic content, the so-called semantic gap. This paper presents a focused study to narrow the semantic gap with an architecture based on Deep Neural Network (DNN). It leverages the representational power of high-level semantics encoded in DNNs pretrained for object recognition. Two key components are fine-tuning the DNNs fully convolutionally with an objective function based on the saliency evaluation metrics, and integrating information at different image scales. We compare our method with 14 saliency models on 6 public eye tracking benchmark datasets. Results demonstrate that our DNNs can automatically learn features particularly for saliency prediction that surpass by a big margin the state-of-the-art. In addition, our model ranks top to date under all seven metrics on the MIT300 challenge set.",
"Most saliency estimation methods aim to explicitly model low-level conspicuity cues such as edges or blobs and may additionally incorporate top-down cues using face or text detection. Data-driven methods for training saliency models using eye-fixation data are increasingly popular, particularly with the introduction of large-scale datasets and deep architectures. However, current methods in this latter paradigm use loss functions designed for classification or regression tasks whereas saliency estimation is evaluated on topographical maps. In this work, we introduce a new saliency map model which formulates a map as a generalized Bernoulli distribution. We then train a deep architecture to predict such maps using novel loss functions which pair the softmax activation function with measures designed to compute distances between probability distributions. We show in extensive experiments the effectiveness of such loss functions over standard ones on four public benchmark datasets, and demonstrate improved performance over state-of-the-art saliency methods.",
"The prediction of salient areas in images has been traditionally addressed with hand-crafted features based on neuroscience principles. This paper, however, addresses the problem with a completely data-driven approach by training a convolutional neural network (convnet). The learning process is formulated as a minimization of a loss function that measures the Euclidean distance of the predicted saliency map with the provided ground truth. The recent publication of large datasets of saliency prediction has provided enough data to train end-to-end architectures that are both fast and accurate. Two designs are proposed: a shallow convnet trained from scratch, and a another deeper solution whose first three layers are adapted from another network trained for classification. To the authors knowledge, these are the first end-to-end CNNs trained and tested for the purpose of saliency prediction."
]
} |
1610.06510 | 2535151237 | We explore the use of segments learnt using Byte Pair Encoding (referred to as BPE units) as basic units for statistical machine translation between related languages and compare it with orthographic syllables, which are currently the best performing basic units for this translation task. BPE identifies the most frequent character sequences as basic units, while orthographic syllables are linguistically motivated pseudo-syllables. We show that BPE units outperform orthographic syllables as units of translation, showing up to 11 increase in BLEU scores. In addition, BPE can be applied to any writing system, while orthographic syllables can be used only for languages whose writing systems use vowel representations. We show that BPE units outperform word and morpheme level units for translation involving languages like Urdu, Japanese whose writing systems do not use vowels (either completely or partially). Across many language pairs, spanning multiple language families and types of writing systems, we show that translation with BPE segments outperforms orthographic syllables, especially for morphologically rich languages. | The first approach involves transliteration into the target language. This can be done by transliterating the untranslated words in a post-processing step @cite_33 @cite_32 , a technique generally used for handling named entities in SMT. However, transliteration candidates cannot be scored and tuned along with other features used in the SMT system. This limitation can be overcome by integrating the transliteration module into the decoder @cite_4 , so both translation and transliteration candidates can be evaluated and scored simultaneously. This also allows choices between transliteration and translation to be made. | {
"cite_N": [
"@cite_32",
"@cite_33",
"@cite_4"
],
"mid": [
"",
"2117717100",
"2139604620"
],
"abstract": [
"",
"We propose several techniques for improving statistical machine translation between closely-related languages with scarce resources. We use character-level translation trained on n-gram-character-aligned bitexts and tuned using word-level BLEU, which we further augment with character-based transliteration at the word level and combine with a word-level translation model. The evaluation on Macedonian-Bulgarian movie subtitles shows an improvement of 2.84 BLEU points over a phrase-based word-level baseline.",
"We present a novel approach to integrate transliteration into Hindi-to-Urdu statistical machine translation. We propose two probabilistic models, based on conditional and joint probability formulations, that are novel solutions to the problem. Our models consider both transliteration and translation when translating a particular Hindi word given the context whereas in previous work transliteration is only used for translating OOV (out-of-vocabulary) words. We use transliteration as a tool for disambiguation of Hindi homonyms which can be both translated or transliterated or transliterated differently based on different contexts. We obtain final BLEU scores of 19.35 (conditional probability model) and 19.00 (joint probability model) as compared to 14.30 for a baseline phrase-based system and 16.25 for a system which transliterates OOV words in the baseline system. This indicates that transliteration is useful for more than only translating OOV words for language pairs like Hindi-Urdu."
]
} |
1610.06510 | 2535151237 | We explore the use of segments learnt using Byte Pair Encoding (referred to as BPE units) as basic units for statistical machine translation between related languages and compare it with orthographic syllables, which are currently the best performing basic units for this translation task. BPE identifies the most frequent character sequences as basic units, while orthographic syllables are linguistically motivated pseudo-syllables. We show that BPE units outperform orthographic syllables as units of translation, showing up to 11 increase in BLEU scores. In addition, BPE can be applied to any writing system, while orthographic syllables can be used only for languages whose writing systems use vowel representations. We show that BPE units outperform word and morpheme level units for translation involving languages like Urdu, Japanese whose writing systems do not use vowels (either completely or partially). Across many language pairs, spanning multiple language families and types of writing systems, we show that translation with BPE segments outperforms orthographic syllables, especially for morphologically rich languages. | Recently, subword level models have also generated interest for neural machine translation (NMT) systems. The motivation is the need to limit the vocabulary size in encoder-decoder architectures @cite_8 . It is in this context that Byte Pair Encoding, a data compression method @cite_14 , was adapted to learn subword units for NMT @cite_17 . Other subword units for NMT have also been proposed: characters @cite_6 , Huffman encoding based units @cite_0 , and wordpieces @cite_2 @cite_27 . Our hypothesis is that such subword units learnt from corpora are particularly well suited for translation between related languages. In this paper, we test this hypothesis by using BPE to learn subword units. | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_6",
"@cite_0",
"@cite_27",
"@cite_2",
"@cite_17"
],
"mid": [
"46679369",
"2949888546",
"2311921240",
"",
"2525778437",
"2121879602",
"1816313093"
],
"abstract": [
"",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.",
"The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder-decoder with a subword-level encoder and a character-level decoder on four language pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru.",
"",
"Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference. Also, most NMT systems have difficulty with rare words. These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Google's Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers using attention and residual connections. To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To accelerate the final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units (\"wordpieces\") for both input and output. This method provides a good balance between the flexibility of \"character\"-delimited models and the efficiency of \"word\"-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and uses a coverage penalty, which encourages generation of an output sentence that is most likely to cover all the words in the source sentence. On the WMT'14 English-to-French and English-to-German benchmarks, GNMT achieves competitive results to state-of-the-art. Using a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60 compared to Google's phrase-based production system.",
"This paper describes challenges and solutions for building a successful voice search system as applied to Japanese and Korean at Google. We describe the techniques used to deal with an infinite vocabulary, how modeling completely in the written domain for language model and dictionary can avoid some system complexity, and how we built dictionaries, language and acoustic models in this framework. We show how to deal with the difficulty of scoring results for multiple script languages because of ambiguities. The development of voice search for these languages led to a significant simplification of the original process to build a system for any new language which in in parts became our default process for internationalization of voice search.",
"Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem. Previous work addresses the translation of out-of-vocabulary words by backing off to a dictionary. In this paper, we introduce a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units. This is based on the intuition that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations). We discuss the suitability of different word segmentation techniques, including simple character n-gram models and a segmentation based on the byte pair encoding compression algorithm, and empirically show that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English-German and English-Russian by 1.1 and 1.3 BLEU, respectively."
]
} |
1610.06402 | 2535465574 | The long-term memory of most connectionist systems lies entirely in the weights of the system. Since the number of weights is typically fixed, this bounds the total amount of knowledge that can be learned and stored. Though this is not normally a problem for a neural network designed for a specific task, such a bound is undesirable for a system that continually learns over an open range of domains. To address this, we describe a lifelong learning system that leverages a fast, though non-differentiable, content-addressable memory which can be exploited to encode both a long history of sequential episodic knowledge and semantic knowledge over many episodes for an unbounded number of domains. This opens the door for investigation into transfer learning, and leveraging prior knowledge that has been learned over a lifetime of experiences to new domains. | Other methods have been proposed to address catastrophic interference. For example, Complementary Learning Systems @cite_9 and Learning without Forgetting @cite_24 both interleave training on remembered earlier data with new data. We draw inspiration from both of these systems, and from work on Progressive Networks @cite_13 , which freezes the weights of networks trained on earlier domains. Unlike the others, Progressive Networks allow a network to expand its capacity. Unlike our system, Progressive Networks do not attempt to store the semantic knowledge of earlier systems in a content-addressable memory, and have the problem that their network grows quadratically in the number of domains. We hypothesize that storing semantic knowledge in a content-addressable memory will help address this by allowing fast lookup of relevant "program vectors", potentially yielding linear storage and logarithmic program lookup. | {
"cite_N": [
"@cite_24",
"@cite_9",
"@cite_13"
],
"mid": [
"2949808626",
"2159345153",
"2426267443"
],
"abstract": [
"When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new task performance.",
"This paper reviews the fate of the central ideas behind the complementary learning systems (CLS) framework as originally articulated in McClelland, McNaughton, and O’Reilly (1995). This framework explains why the brain requires two differentially specialized learning and memory systems, and it nicely specifies their central properties (i.e., the hippocampus as a sparse, pattern-separated system for rapidly learning episodic memories, and the neocortex as a distributed, overlapping system for gradually integrating across episodes to extract latent semantic structure). We review the application of the CLS framework to a range of important topics, including the following: the basic neural processes of hippocampal memory encoding and recall, conjunctive encoding, human recognition memory, consolidation of initial hippocampal learning in cortex, dynamic modulation of encoding versus recall, and the synergistic interactions between hippocampus and neocortex. Overall, the CLS framework remains a vital theoretical force in the field, with the empirical data over the past 15 years generally confirming its key principles.",
"Methods and systems for performing a sequence of machine learning tasks. One system includes a sequence of deep neural networks (DNNs), including: a first DNN corresponding to a first machine learning task, wherein the first DNN comprises a first plurality of indexed layers, and each layer in the first plurality of indexed layers is configured to receive a respective layer input and process the layer input to generate a respective layer output; and one or more subsequent DNNs corresponding to one or more respective machine learning tasks, wherein each subsequent DNN comprises a respective plurality of indexed layers, and each layer in a respective plurality of indexed layers with index greater than one receives input from a preceding layer of the respective subsequent DNN, and one or more preceding layers of respective preceding DNNs, wherein a preceding layer is a layer whose index is one less than the current index."
]
} |
1610.06468 | 2951557894 | This paper explores a simple question: How would we provide a high-quality search experience on Mars, where the fundamental physical limit is speed-of-light propagation delays on the order of tens of minutes? On Earth, users are accustomed to nearly instantaneous response times from search engines. Is it possible to overcome orders-of-magnitude longer latency to provide a tolerable user experience on Mars? In this paper, we formulate the searching from Mars problem as a tradeoff between "effort" (waiting for responses from Earth) and "data transfer" (pre-fetching or caching data on Mars). The contribution of our work is articulating this design space and presenting two case studies that explore the effectiveness of baseline techniques, using publicly available data from the TREC Total Recall and Sessions Tracks. We intend for this research problem to be aspirational and inspirational - even if one is not convinced by the premise of Mars colonization, there are Earth-based scenarios such as searching from a rural village in India that share similar constraints, thus making the problem worthy of exploration and attention from researchers. | The problem of searching from Mars is intended to be aspirational as well as inspirational. Even if one remains unconvinced about interplanetary colonization in the short term, our work remains relevant in the same sense that zombie apocalypse preparations advocated by the Centers for Disease Control are instructive. http://www.cdc.gov/phpr/zombies.htm Like that effort, theoretical considerations about unlikely scenarios can lead to insights with more immediate impact. In fact, search from Mars can be thought of as a specific instantiation of what @cite_13 call ``slow search'', which aims to relax latency requirements for a potentially higher-quality search experience. Slow search explores latencies on the order of minutes to hours, which is similar to speed of light propagation delay to Mars. 
There is substantial precedent for our work, as we discuss below. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2169669347"
],
"abstract": [
"Significant time and effort has been devoted to reducing the time between query receipt and search engine response, and for good reason. Research suggests that even slightly higher retrieval latency by Web search engines can lead to dramatic decreases in users' perceptions of result quality and engagement with the search results. While users have come to expect rapid responses from search engines, recent advances in our understanding of how people find information suggest that there are scenarios where a search engine could take significantly longer than a fraction of a second to return relevant content. This raises the important question: What would search look like if search engines were not constrained by existing expectations for speed? In this paper, we explore slow search, a class of search where traditional speed requirements are relaxed in favor of a high quality search experience. Via large-scale log analysis and user surveys, we examine how individuals value time when searching. We confirm that speed is important, but also show that there are many search situations where result quality is more important. This highlights intriguing opportunities for search systems to support new search experiences with high quality result content that takes time to identify. Slow search has the potential to change the search experience as we know it."
]
} |
1610.06468 | 2951557894 | This paper explores a simple question: How would we provide a high-quality search experience on Mars, where the fundamental physical limit is speed-of-light propagation delays on the order of tens of minutes? On Earth, users are accustomed to nearly instantaneous response times from search engines. Is it possible to overcome orders-of-magnitude longer latency to provide a tolerable user experience on Mars? In this paper, we formulate the searching from Mars problem as a tradeoff between "effort" (waiting for responses from Earth) and "data transfer" (pre-fetching or caching data on Mars). The contribution of our work is articulating this design space and presenting two case studies that explore the effectiveness of baseline techniques, using publicly available data from the TREC Total Recall and Sessions Tracks. We intend for this research problem to be aspirational and inspirational - even if one is not convinced by the premise of Mars colonization, there are Earth-based scenarios such as searching from a rural village in India that share similar constraints, thus making the problem worthy of exploration and attention from researchers. | Technologies developed for search on Mars have potential applications closer to home in improving search from remote areas on Earth such as Easter Island, where only satellite Internet is available, and the Canadian Arctic, where Internet access remains prohibitively slow and expensive. Our work builds on previous efforts to enhance Internet access in developing regions such as rural India, where connectivity is poor and intermittent. @cite_21 explored web search over email, an interaction model that is not unlike searching from Mars. @cite_7 specifically tackle the problem of search over intermittent connections, attempting to optimize the amount of interaction that a single round of downloading can enable. 
Intermittent connections can be modeled as high latency, which makes the problem quite similar to ours---and indeed these systems use some of the query expansion and pre-fetching techniques we explore here. | {
"cite_N": [
"@cite_21",
"@cite_7"
],
"mid": [
"81009096",
"2137729515"
],
"abstract": [
"The Internet has the potential to deliver information to communities around the world that have no other information resources. High telephone and ISP fees in combination with lowbandwidth connections make it unaffordable for many people to browse the Web online. We are developing the TEK system to enable users to search the Web using only email. TEK stands for \"Time Equals Knowledge,\" since the user exchanges time (waiting for email) for knowledge. The system contains three components: 1) the client, which provides a graphical interface for the end user, 2) the server, which performs the searches from MIT, and 3) a reliable email-based communication protocol between the client and the server. The TEK search engine differs from others in that it is designed to return low-bandwidth results, which are achieved by special filtering, analysis, and compression on the server side. We believe that TEK will bring Web resources to people who otherwise would not be able to afford them.",
"The majority of people in rural developing regions do not have access to the World Wide Web. Traditional network connectivity technologies have proven to be prohibitively expensive in these areas. The emergence of new long-range wireless technologies provide hope for connecting these rural regions to the Internet. However, the network connectivity provided by these new solutions are by nature intermittent due to high network usage rates, frequent power-cuts and the use of delay tolerant links. Typical applications, especially interactive applications like web search, do not tolerate intermittent connectivity. In this paper, we present the design and implementation of RuralCafe, a system intended to support efficient web search over intermittent networks. RuralCafe enables users to perform web search asynchronously and find what they are looking for in one round of intermittency as opposed to multiple rounds of search downloads. RuralCafe does this by providing an expanded search query interface which allows a user to specify additional query terms to maximize the utility of the results returned by a search query. Given knowledge of the limited available network resources, RuralCafe performs optimizations to prefetch pages to best satisfy a search query based on a user's search preferences. In addition, RuralCafe does not require modifications to the web browser, and can provide single round search results tailored to various types of networks and economic constraints. We have implemented and evaluated the effectiveness of RuralCafe using queries from logs made to a large search engine, queries made by users in an intermittent setting, and live queries from a small testbed deployment. We have also deployed a prototype of RuralCafe in Kerala, India."
]
} |
1610.06468 | 2951557894 | This paper explores a simple question: How would we provide a high-quality search experience on Mars, where the fundamental physical limit is speed-of-light propagation delays on the order of tens of minutes? On Earth, users are accustomed to nearly instantaneous response times from search engines. Is it possible to overcome orders-of-magnitude longer latency to provide a tolerable user experience on Mars? In this paper, we formulate the searching from Mars problem as a tradeoff between "effort" (waiting for responses from Earth) and "data transfer" (pre-fetching or caching data on Mars). The contribution of our work is articulating this design space and presenting two case studies that explore the effectiveness of baseline techniques, using publicly available data from the TREC Total Recall and Sessions Tracks. We intend for this research problem to be aspirational and inspirational - even if one is not convinced by the premise of Mars colonization, there are Earth-based scenarios such as searching from a rural village in India that share similar constraints, thus making the problem worthy of exploration and attention from researchers. | In this work, we assume that a functional interplanetary Internet already exists, and that the only problem we need to overcome is latency at the application layer. This is not an unrealistic assumption as other researchers have been exploring high-latency network links in the context of what is known as delay-tolerant networking (see, for example, IETF RFC 4838 https://tools.ietf.org/html/rfc4838 ) and NASA has already begun experimental deployments on the International Space Station. http://www.nasa.gov/mission_pages/station/research/experiments/730.html Once again, there are many similarities between building interplanetary connectivity and enhancing connectivity in developing regions. 
Examples of the latter include DakNet @cite_0 , which deploys wifi access points on buses to provide intermittent connectivity to users along their routes, and the work of @cite_15 , which ferries data using mechanical backhaul (i.e., sneakernet)---which isn't very different from our proposal to put a cache of the web on a Mars-bound rocket (more details later). | {
"cite_N": [
"@cite_0",
"@cite_15"
],
"mid": [
"2151342861",
"2161147255"
],
"abstract": [
"DakNet provides extraordinarily low-cost digital communication, letting remote villages leapfrog past the expense of traditional connectivity solutions and begin development of a full-coverage broadband wireless infrastructure. What is the basis for a progressive, market-driven migration from e-governance to universal broadband connectivity that local users will pay for? DakNet, an ad hoc network that uses wireless technology to provide asynchronous digital connectivity, is evidence that the marriage of wireless and asynchronous service may indeed be the beginning of a road to universal broadband connectivity. DakNet has been successfully deployed in remote parts of both India and Cambodia at a cost two orders of magnitude less than that of traditional landline solutions.",
"Rural kiosks in developing countries provide a variety of services such as birth, marriage, and death certificates, electricity bill collection, land records, email services, and consulting on medical and agricultural problems. Fundamental to a kiosk's operation is its connection to the Internet. Network connectivity today is primarily provided by dialup telephone, although Very Small Aperture Terminals (VSAT) or long-distance wireless links are also being deployed. These solutions tend to be both expensive and failure prone. Instead, we propose the use of buses and cars as \"mechanical backhaul\" devices to carry data to and from a village and an internet gateway. Building on the pioneering lead of Daknet [15], and extending the Delay Tolerant Networking Research Group architecture [24], we describe a comprehensive solution, encompassing naming, addressing, forwarding, routing, identity management, application support, and security. We believe that this architecture not only meets the top-level goals of low cost and robustness, but also exposes fundamental architectural principles necessary for any such design. We also describe our experiences in implementing a prototype of this architecture."
]
} |
1610.06475 | 2535547924 | We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields. | A remarkable early stereo SLAM system was the work of @cite_10 . Based on Conditionally Independent Divide and Conquer EKF-SLAM it was able to operate in larger environments than other approaches at that time. Most importantly, it was the first stereo SLAM exploiting both close and far points (i.e. points whose depth cannot be reliably estimated due to little disparity in the stereo camera), using an inverse depth parametrization @cite_4 for the latter. They empirically showed that points can be reliably triangulated if their depth is less than @math times the stereo baseline. In this work we follow this strategy of treating close and far points in a different way, as explained in Section . | {
"cite_N": [
"@cite_10",
"@cite_4"
],
"mid": [
"2101648351",
"2118428504"
],
"abstract": [
"In this paper, we describe a system that can carry out simultaneous localization and mapping (SLAM) in large indoor and outdoor environments using a stereo pair moving with 6 DOF as the only sensor. Unlike current visual SLAM systems that use either bearing-only monocular information or 3-D stereo information, our system accommodates both monocular and stereo. Textured point features are extracted from the images and stored as 3-D points if seen in both images with sufficient disparity, or stored as inverse depth points otherwise. This allows the system to map both near and far features: the first provide distance and orientation, and the second provide orientation information. Unlike other vision-only SLAM systems, stereo does not suffer from ldquoscale driftrdquo because of unobservability problems, and thus, no other information such as gyroscopes or accelerometers is required in our system. Our SLAM algorithm generates sequences of conditionally independent local maps that can share information related to the camera motion and common features being tracked. The system computes the full map using the novel conditionally independent divide and conquer algorithm, which allows constant time operation most of the time, with linear time updates to compute the full map. To demonstrate the robustness and scalability of our system, we show experimental results in indoor and outdoor urban environments of 210 m and 140 m loop trajectories, with the stereo camera being carried in hand by a person walking at normal walking speeds of 4--5 km h.",
"We present a new parametrization for point features within monocular simultaneous localization and mapping (SLAM) that permits efficient and accurate representation of uncertainty during undelayed initialization and beyond, all within the standard extended Kalman filter (EKF). The key concept is direct parametrization of the inverse depth of features relative to the camera locations from which they were first viewed, which produces measurement equations with a high degree of linearity. Importantly, our parametrization can cope with features over a huge range of depths, even those that are so far from the camera that they present little parallax during motion---maintaining sufficient representative uncertainty that these points retain the opportunity to \"come in'' smoothly from infinity if the camera makes larger movements. Feature initialization is undelayed in the sense that even distant features are immediately used to improve camera motion estimates, acting initially as bearing references but not permanently labeled as such. The inverse depth parametrization remains well behaved for features at all stages of SLAM processing, but has the drawback in computational terms that each point is represented by a 6-D state vector as opposed to the standard three of a Euclidean XYZ representation. We show that once the depth estimate of a feature is sufficiently accurate, its representation can safely be converted to the Euclidean XYZ form, and propose a linearity index that allows automatic detection and conversion to maintain maximum efficiency---only low parallax features need be maintained in inverse depth form for long periods. We present a real-time implementation at 30 Hz, where the parametrization is validated in a fully automatic 3-D SLAM system featuring a handheld single camera with no additional sensing. 
Experiments show robust operation in challenging indoor and outdoor environments with very large ranges of scene depth, varied motion, and also real-time 360° loop closing."
]
} |
1610.06475 | 2535547924 | We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields. | The recent Stereo LSD-SLAM of @cite_3 is a semi-dense direct approach that minimizes photometric error in image regions with high gradient. Not relying on features, the method is expected to be more robust to motion blur or poorly-textured environments. However as a direct method its performance can be severely degraded by unmodeled effects like rolling shutter or non-lambertian reflectance. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2218842719"
],
"abstract": [
"We propose a novel Large-Scale Direct SLAM algorithm for stereo cameras (Stereo LSD-SLAM) that runs in real-time at high frame rate on standard CPUs. In contrast to sparse interest-point based methods, our approach aligns images directly based on the photoconsistency of all high-contrast pixels, including corners, edges and high texture areas. It concurrently estimates the depth at these pixels from two types of stereo cues: Static stereo through the fixed-baseline stereo camera setup as well as temporal multi-view stereo exploiting the camera motion. By incorporating both disparity sources, our algorithm can even estimate depth of pixels that are under-constrained when only using fixed-baseline stereo. Using a fixed baseline, on the other hand, avoids scale-drift that typically occurs in pure monocular SLAM.We furthermore propose a robust approach to enforce illumination invariance, capable of handling aggressive brightness changes between frames - greatly improving the performance in realistic settings. In experiments, we demonstrate state-of-the-art results on stereo SLAM benchmarks such as Kitti or challenging datasets from the EuRoC Challenge 3 for micro aerial vehicles."
]
} |
1610.06475 | 2535547924 | We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields. | One of the earliest and most famed RGB-D SLAM systems was the KinectFusion of @cite_15 . This method fused all depth data from the sensor into a volumetric dense model that is used to track the camera pose using ICP. This system was limited to small workspaces due to its volumetric representation and the lack of loop closing. Kintinuous by @cite_6 was able to operate in large environments by using a rolling cyclical buffer and included loop closing using place recognition and pose graph optimization. | {
"cite_N": [
"@cite_15",
"@cite_6"
],
"mid": [
"1987648924",
"2143769815"
],
"abstract": [
"We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR), in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.",
"We present a new simultaneous localization and mapping SLAM system capable of producing high-quality globally consistent surface reconstructions over hundreds of meters in real time with only a low-cost commodity RGB-D sensor. By using a fused volumetric surface reconstruction we achieve a much higher quality map over what would be achieved using raw RGB-D point clouds. In this paper we highlight three key techniques associated with applying a volumetric fusion-based mapping system to the SLAM problem in real time. First, the use of a GPU-based 3D cyclical buffer trick to efficiently extend dense every-frame volumetric fusion of depth maps to function over an unbounded spatial region. Second, overcoming camera pose estimation limitations in a wide variety of environments by combining both dense geometric and photometric camera pose constraints. Third, efficiently updating the dense map according to place recognition and subsequent loop closure constraints by the use of an 'as-rigid-as-possible' space deformation. We present results on a wide variety of aspects of the system and show through evaluation on de facto standard RGB-D benchmarks that our system performs strongly in terms of trajectory estimation, map quality and computational performance in comparison to other state-of-the-art systems."
]
} |
1610.06475 | 2535547924 | We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields. | Probably the first popular open-source system was the RGB-D SLAM of @cite_13 . This is a feature-based system, whose front-end computes frame-to-frame motion by feature matching and ICP. The back-end performs pose-graph optimization with loop closure constraints from a heuristic search. Similarly the back-end of DVO-SLAM by @cite_9 optimizes a pose-graph where keyframe-to-keyframe constraints are computed from a visual odometry that minimizes both photometric and depth error. DVO-SLAM also searches for loop candidates in a heuristic fashion over all previous frames, instead of relying on place recognition. | {
"cite_N": [
"@cite_9",
"@cite_13"
],
"mid": [
"2064451896",
"2069479606"
],
"abstract": [
"In this paper, we propose a dense visual SLAM method for RGB-D cameras that minimizes both the photometric and the depth error over all pixels. In contrast to sparse, feature-based methods, this allows us to better exploit the available information in the image data which leads to higher pose accuracy. Furthermore, we propose an entropy-based similarity measure for keyframe selection and loop closure detection. From all successful matches, we build up a graph that we optimize using the g2o framework. We evaluated our approach extensively on publicly available benchmark datasets, and found that it performs well in scenes with low texture as well as low structure. In direct comparison to several state-of-the-art methods, our approach yields a significantly lower trajectory error. We release our software as open-source.",
"In this paper, we present a novel mapping system that robustly generates highly accurate 3-D maps using an RGB-D camera. Our approach requires no further sensors or odometry. With the availability of low-cost and light-weight RGB-D sensors such as the Microsoft Kinect, our approach applies to small domestic robots such as vacuum cleaners, as well as flying robots such as quadrocopters. Furthermore, our system can also be used for free-hand reconstruction of detailed 3-D models. In addition to the system itself, we present a thorough experimental evaluation on a publicly available benchmark dataset. We analyze and discuss the influence of several parameters such as the choice of the feature descriptor, the number of visual features, and validation methods. The results of the experiments demonstrate that our system can robustly deal with challenging scenarios such as fast camera motions and feature-poor environments while being fast enough for online operation. Our system is fully available as open source and has already been widely adopted by the robotics community."
]
} |
1610.06475 | 2535547924 | We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields. | The recent ElasticFusion of @cite_20 builds a surfel-based map of the environment. This is a map-centric approach that forgets poses and performs loop closing by applying a non-rigid deformation to the map, instead of a standard pose-graph optimization. The detailed reconstruction and localization accuracy of this system is impressive, but the current implementation is limited to room-size maps as the complexity scales with the number of surfels in the map. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2527142681"
],
"abstract": [
"We present a novel approach to real-time dense visual simultaneous localisation and mapping. Our system is capable of capturing comprehensive dense globally consistent surfel-based maps of room scale environments and beyond explored using an RGB-D camera in an incremental online fashion, without pose graph optimization or any post-processing steps. This is accomplished by using dense frame-to-model camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimizations as often as possible to stay close to the mode of the map distribution, while utilizing global loop closure to recover from arbitrary drift and maintain global consistency. In the spirit of improving map quality as well as tracking accuracy and robustness, we furthermore explore a novel approach to real-time discrete light source detection. This technique is capable of detecting numerous light sources in indoo..."
]
} |
1610.06475 | 2535547924 | We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields. | As proposed by @cite_8 , our ORB-SLAM2 uses depth information to synthesize a stereo coordinate for extracted features on the image. In this way, our system is agnostic to whether the input is stereo or RGB-D. Differently from all the above methods, our back-end is based on bundle adjustment and builds a globally consistent sparse reconstruction. Therefore, our method is lightweight and works with standard CPUs. Our goal is long-term and globally consistent localization instead of building the most detailed dense reconstruction. However, from the highly accurate keyframe poses, one could fuse depth maps and obtain an accurate reconstruction on-the-fly in a local area, or post-process the depth maps from all keyframes after a full BA to obtain an accurate 3D model of the whole scene. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2154280780"
],
"abstract": [
"We present a novel and general optimisation framework for visual SLAM, which scales for both local, highly accurate reconstruction and large-scale motion with long loop closures. We take a two-level approach that combines accurate pose-point constraints in the primary region of interest with a stabilising periphery of pose-pose soft constraints. Our algorithm automatically builds a suitable connected graph of keyposes and constraints, dynamically selects inner and outer window membership and optimises both simultaneously. We demonstrate in extensive simulation experiments that our method approaches the accuracy of offline bundle adjustment while maintaining constant-time operation, even in the hard case of very loopy monocular camera motion. Furthermore, we present a set of real experiments for various types of visual sensor and motion, including large scale SLAM with both monocular and stereo cameras, loopy local browsing with either monocular or RGB-D cameras, and dense RGB-D object model building."
]
} |
1610.06592 | 2543088081 | The defect set of minimizers of the modified Ericksen energy for nematic liquid crystals consists locally of a finite union of isolated points and Holder continuous curves with finitely many crossings. | We note that @cite_20 and @cite_7 also address the existence and regularity of minimizing pairs @math , yet without making use of the harmonic map formulation. The question of existence and regularity for the Ericksen model with general material constants was also addressed in @cite_9 . See also @cite_15 for a generalization of the techniques in @cite_10 and @cite_21 to the context of energy minimizing maps into more general Lipschitz targets. | {
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_21",
"@cite_15",
"@cite_10",
"@cite_20"
],
"mid": [
"2063041321",
"2056172813",
"1968731434",
"2739191920",
"2065357863",
"2058578329"
],
"abstract": [
"We prove regularity of minimizers of the functional F(s, u) = ∫ ( |∇s|² + s² |∇u|² + ψ(s) ) dx recently suggested by Ericksen [10] for the statics of nematic liquid crystals. We show that, given locally minimizing pairs (s, u), s has a continuous representative, and s, u are smooth outside the set {s = 0}. The proof relies upon higher integrability estimates, monotonicity, and decay lemmas.",
"We consider a new mathematical model for nematic liquid crystal proposed by J. Ericksen using Landau’s order parameter approach. In this model, the static configuration of liquid crystal can be described by a map minimizing certain degenerate variational integral. Here we prove that minimizers exist and are Holder continuous.",
"Energy minimizing static configurations of nematic liquid crystals with variable degree of orientation can be described by a map which minimizes certain degenerate variational integrals. This mathematical model was recently posed by J. L. Erickson. In this paper, we shall prove the existence, local uniqueness, and global regularity of minimizers. We shall also estimate the size of possible defects.",
"",
"This is a summary of some recent mathematical advances in the theory of defects in nematic liquid crystals. It also includes some new results concerning disclinations (line defects) which have not been published elsewhere.",
"We prove existence of minimizers of the functional ∫ ( |∇s|² + s² |∇u|² + ψ(s) ) dx recently suggested by Ericksen [8] for the statics of nematic liquid crystals. A set of necessary conditions for the minimizers and a monotonicity formula are also found."
]
} |
1610.06592 | 2543088081 | The defect set of minimizers of the modified Ericksen energy for nematic liquid crystals consists locally of a finite union of isolated points and Holder continuous curves with finitely many crossings. | In addition, using a dimension reduction argument based on the monotonicity of Almgren frequency, the following Hausdorff dimension estimates were proved in @cite_10 , @cite_21 for nontrivial minimizing maps: @math for @math and @math for @math . By @cite_26 , the first estimate is sharp, as occurs in this case. On the other hand, the second estimate was improved in @cite_8 . By proving that there are no homogeneous, energy minimizing maps depending only on two variables in this case, it was shown that there cannot be any such tangent maps either. Hence, by the dimension reduction argument in @cite_10 , @cite_21 , the defect set must consist of isolated points in the case @math . | {
"cite_N": [
"@cite_21",
"@cite_10",
"@cite_26",
"@cite_8"
],
"mid": [
"1968731434",
"2065357863",
"2060284453",
"2005414009"
],
"abstract": [
"Energy minimizing static configurations of nematic liquid crystals with variable degree of orientation can be described by a map which minimizes certain degenerate variational integrals. This mathematical model was recently posed by J. L. Erickson. In this paper, we shall prove the existence, local uniqueness, and global regularity of minimizers. We shall also estimate the size of possible defects.",
"This is a summary of some recent mathematical advances in the theory of defects in nematic liquid crystals. It also includes some new results concerning disclinations (line defects) which have not been published elsewhere.",
"In this paper we represent the degree of microscopic order through a scalar-valued function, the degree of orientation, which vanishes where the fluid becomes isotropic",
""
]
} |
1610.06592 | 2543088081 | The defect set of minimizers of the modified Ericksen energy for nematic liquid crystals consists locally of a finite union of isolated points and Holder continuous curves with finitely many crossings. | The existence, uniqueness and regularity theory for @math -valued energy minimizing maps carries over verbatim to the case of @math -valued energy minimizing maps. However, an important difference is that simple closed geodesics, i.e. great circles of length @math , are not contractible in @math , unlike those in @math . While @math -valued maps minimizing the Dirichlet or Oseen-Frank energies do not differ from their @math -valued counterparts in terms of the size of their singular sets or their asymptotic behavior near them (cf. @cite_18 and @cite_19 ), the nontrivial topology of @math does have an effect in the context of Ericksen's variable degree of orientation model. In particular, one observes line defects for @math -valued energy minimizing maps in the case @math , @math . See Remark for an example. In particular, the Hausdorff dimension estimate @math is optimal for @math -valued energy minimizing maps, in contrast to the case of @math -valued ones. | {
"cite_N": [
"@cite_19",
"@cite_18"
],
"mid": [
"2084946996",
"2035427049"
],
"abstract": [
"We establish the existence and partial regularity for solutions of some boundary-value problems for the static theory of liquid crystals. Some related problems involving magnetic or electric fields are also discussed.",
"Two problems concerning maps ϕ with point singularities from a domain Ω ⊂ ℝ³ to S² are solved. The first is to determine the minimum energy of ϕ when the location and topological degree of the singularities are prescribed. In the second problem Ω is the unit ball and ϕ = g is given on ∂Ω; we show that the only cases in which g(x/|x|) minimizes the energy are g = const or g(x) = ±Rx with R a rotation. Extensions of these problems are also solved, e.g. points are replaced by “holes,” ℝ³, S² is replaced by ℝ^N, S^(N−1) or by ℝ^N, ℝP^(N−1), the latter being appropriate for the theory of liquid crystals."
]
} |
1610.06592 | 2543088081 | The defect set of minimizers of the modified Ericksen energy for nematic liquid crystals consists locally of a finite union of isolated points and Holder continuous curves with finitely many crossings. | See also @cite_24 for an extensive discussion on the use of @math in modeling uniaxial nematics, relations with the Landau-de Gennes theory, issues of suitable function spaces, orientability of line fields, and boundary conditions. For another result on line defects in liquid crystals, see @cite_23 , which considers the vanishing elastic constant limit in a Landau-de Gennes model, in the spirit of the Ginzburg-Landau theory. | {
"cite_N": [
"@cite_24",
"@cite_23"
],
"mid": [
"1972001314",
"1601239706"
],
"abstract": [
"Uniaxial nematic liquid crystals are modelled in the Oseen–Frank theory through a unit vector field n. This theory has the apparent drawback that it does not respect the head-to-tail symmetry in which n should be equivalent to −n. This symmetry is preserved in the constrained Landau–de Gennes theory that works with the tensor Q = s (n ⊗ n − (1/3) Id). We study the differences and the overlaps between the two theories. These depend on the regularity class used as well as on the topology of the underlying domain. We show that for simply-connected domains and in the natural energy class W^{1,2} the two theories coincide, but otherwise there can be differences between the two theories, which we identify. In the case of planar domains with holes and various boundary conditions, for the simplest form of the energy functional, we completely characterise the instances in which the predictions of the constrained Landau–de Gennes theory differ from those of the Oseen–Frank theory.",
"We consider the Landau-de Gennes variational model for nematic liquid crystals, in three-dimensional domains. More precisely, we study the asymptotic behaviour of minimizers as the elastic constant tends to zero, under the assumption that minimizers are uniformly bounded and their energy blows up as the logarithm of the elastic constant. We show that there exists a closed set S_line of finite length, such that minimizers converge to a locally harmonic map away from S_line. Moreover, S_line restricted to the interior of the domain is a locally finite union of straight line segments. We provide sufficient conditions, depending on the domain and the boundary data, under which our main results apply. We also discuss some examples."
]
} |
1610.06754 | 2543481186 | In this work, we argue that current state-of-the-art methods of aircraft localization such as multilateration are insufficient, in particular for modern crowdsourced air traffic networks with random, unplanned deployment geometry. We propose an alternative, a grid-based localization approach using the k-Nearest Neighbor algorithm, to deal with the identified shortcomings. Our proposal does not require any changes to the existing air traffic protocols and transmitters, and is easily implemented using only low-cost, commercial-off-the-shelf hardware. Using an algebraic multilateration algorithm for comparison, we evaluate our approach using real-world flight data collected with our collaborative sensor network OpenSky. We quantify its effectiveness in terms of aircraft location accuracy, surveillance coverage, and the verification of false position data. Our results show that the grid-based approach can increase the effective air traffic surveillance coverage compared to multilateration by a factor of up to 2.5. As it does not suffer from dilution of precision, it is much more robust in noisy environments and performs better in pre-existing, unplanned receiver deployments. We further find that the mean aircraft location accuracy can be increased by up to 41 in comparison with multilateration while also being able to pinpoint the origin of potential spoofing attacks conducted from the ground. | Indoor and outdoor localization problems have been studied extensively in the literature, often in the scope of sensor networks and radar applications. @cite_14 give an overview of the techniques used in wireless indoor positioning, including the different algorithms (k-Nearest Neighbor, lateration, least squares and Bayesian among others) and primitives such as received signal strength (RSS), TDoA, time of arrival (ToA) and angle of arrival (AoA). 
RSS-based methods are the most popular within any type of wireless network, as they are often readily supported out of the box and do not require additional hardware such as high precision clocks or antenna arrays. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2100989187"
],
"abstract": [
"Wireless indoor positioning systems have become very popular in recent years. These systems have been successfully used in many applications such as asset tracking and inventory management. This paper provides an overview of the existing wireless indoor positioning solutions and attempts to classify different techniques and systems. Three typical location estimation schemes of triangulation, scene analysis, and proximity are analyzed. We also discuss location fingerprinting in detail since it is used in most current system or solutions. We then examine a set of properties by which location systems are evaluated, and apply this evaluation method to survey a number of existing systems. Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented."
]
} |
1610.06754 | 2543481186 | In this work, we argue that current state-of-the-art methods of aircraft localization such as multilateration are insufficient, in particular for modern crowdsourced air traffic networks with random, unplanned deployment geometry. We propose an alternative, a grid-based localization approach using the k-Nearest Neighbor algorithm, to deal with the identified shortcomings. Our proposal does not require any changes to the existing air traffic protocols and transmitters, and is easily implemented using only low-cost, commercial-off-the-shelf hardware. Using an algebraic multilateration algorithm for comparison, we evaluate our approach using real-world flight data collected with our collaborative sensor network OpenSky. We quantify its effectiveness in terms of aircraft location accuracy, surveillance coverage, and the verification of false position data. Our results show that the grid-based approach can increase the effective air traffic surveillance coverage compared to multilateration by a factor of up to 2.5. As it does not suffer from dilution of precision, it is much more robust in noisy environments and performs better in pre-existing, unplanned receiver deployments. We further find that the mean aircraft location accuracy can be increased by up to 41 in comparison with multilateration while also being able to pinpoint the origin of potential spoofing attacks conducted from the ground. | Overall, the main (distributed) localization approach used within aviation is MLAT @cite_11 , which we discuss in detail in the next section and which provides the baseline for our evaluations. To the best of our knowledge the combination of the k-NN algorithm with TDoA as a primitive has not been studied. In the following, we argue that this combination is beneficial in particular for crowdsourced networks with random, imperfect system geometry in which the performance of MLAT suffers strongly. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2118756516"
],
"abstract": [
"Air traffic is continuously increasing worldwide, with both manned and unmanned aircraft looking to coexist in the same airspace in the future. Next generation air traffic management systems are crucial in successfully handling this growth and improving the safety of billions of future passengers. The Automatic Dependent Surveillance Broadcast (ADS-B) system is a core part of this future. Unlike traditional radar systems, this technology empowers aircraft to automatically broadcast their locations and intents, providing enhanced situational awareness. This article discusses important issues with the current state of ADS-B as it is being rolled out. We report from our OpenSky sensor network in Central Europe, which is able to capture about 30 percent of the European commercial air traffic. We analyze the 1090 MHz communication channel to understand the current state and its behavior under the increasing traffic load. Furthermore, the article considers important security challenges faced by ADS-B. Our insights are intended to help identify open research issues, furthering new interest and developments in this field."
]
} |
1610.06703 | 2541200886 | The explosive growth of Web 2.0, which was characterized by the creation of online social networks, has reignited the study of factors that could help us understand the growth and dynamism of these networks. Various generative network models have been proposed, including the Barabasi-Albert and Watts-Strogatz models. In this study, we revisit the problem from a perspective that seeks to compare results obtained from these generative models with those from real networks. To this end, we consider the dating network Skout Inc. An analysis is performed on the topological characteristics of the network that could explain the creation of new network links. Afterwards, the results are contrasted with those obtained from the Barabasi-Albert and Watts-Strogatz generative models. We conclude that a key factor that could explain the creation of links originates in its cluster structure, where link recommendations are more precise in Watts-Strogatz segmented networks than in Barabasi-Albert hierarchical networks. This result reinforces the need to establish more and better network segmentation algorithms that are capable of clustering large networks precisely and efficiently. | Regarding network segmentation methods, there has been recent interest in the exploration of spectral clustering methods for link prediction. @cite_1 explored the use of segmented networks for link prediction, proving in synthetic networks that this approach is feasible. The study is close in aim to our article but it does not consider an evaluation regarding evolving graph models. In addition the comparison is constrained only to the top-1 recommended node. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1966879929"
],
"abstract": [
"Link prediction in protein-protein interaction networks (PPINs) is an important task in biology, since the vast majority of biological functions involve such protein interactions. Link prediction is also important for online social networks (OSNs), which provide predictions about who is a friend of whom. Many link prediction methods for PPINs/OSNs are local-based and do not exploit all network structure, which limits prediction accuracy. On the other hand, there are global approaches to detect the overall path structure in a network, being computationally prohibitive for huge-size PPINs/OSNs. In this paper, we enhance a previously proposed multi-way spectral clustering method by introducing new ways to capture node proximity in both PPINs/OSNs. Our new enhanced method uses information obtained from the top few eigenvectors of the normalized Laplacian matrix. As a result, it produces a less noisy matrix, which is smaller and more compact than the original one. In this way, we are able to provide faster and more accurate link predictions. Moreover, our new spectral clustering model is based on the well-known Bray-Curtis coefficient to measure proximity between two nodes. Compared to traditional clustering algorithms, such as k-means and DBSCAN, which assume globular (convex) regions in Euclidean space, our approach is more flexible in capturing the non-connected components of a social graph and a wider range of cluster geometries. We perform an extensive experimental comparison of the proposed method against existing link prediction algorithms and k-means algorithm, using two synthetic data sets, three real social networks and three real human protein data sets. Our experimental results show that our SpectralLink algorithm outperforms the local approaches, the k-means algorithm and another spectral clustering method in terms of effectiveness, whereas it is more efficient than the global approaches."
]
} |
1610.06098 | 2950332050 | This paper considers recovering @math -dimensional vectors @math , and @math from their circular convolutions @math . The vector @math is assumed to be @math -sparse in a known basis that is spread out in the Fourier domain, and each input @math is a member of a known @math -dimensional random subspace. We prove that whenever @math , the problem can be solved effectively by using only the nuclear-norm minimization as the convex relaxation, as long as the inputs are sufficiently diverse and obey @math . By "diverse inputs", we mean that the @math 's belong to different, generic subspaces. To our knowledge, this is the first theoretical result on blind deconvolution where the subspace to which @math belongs is not fixed, but needs to be determined. We discuss the result in the context of multipath channel estimation in wireless communications. Both the fading coefficients, and the delays in the channel impulse response @math are unknown. The encoder codes the @math -dimensional message vectors randomly and then transmits coded messages @math 's over a fixed channel one after the other. The decoder then discovers all of the messages and the channel response when the number of samples taken for each received message are roughly greater than @math , and the number of messages is roughly at least @math . | The lifting strategy to linearize the bilinear blind deconvolution problem was proposed in @cite_10 , and it was rigorously shown that two convolved vectors in @math can be separated blindly if their @math , and @math dimensional subspaces are known and one of the subspaces is generic and the other is incoherent in the Fourier domain. It is further shown using the dual certificate approach in the low-rank matrix recovery literature @cite_28 @cite_6 @cite_13 that both the vectors can be deconvolved exactly when @math . 
This paper extends the single input blind deconvolution result to multiple diverse inputs, where we observe the convolutions of @math vectors with known subspaces with a fixed vector only known to be sparse in some known basis. | {
"cite_N": [
"@cite_28",
"@cite_10",
"@cite_13",
"@cite_6"
],
"mid": [
"2611328865",
"2140867429",
"2120872934",
""
],
"abstract": [
"We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys @math for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.",
"We consider the problem of recovering two unknown vectors, w and x, of length L from their circular convolution. We make the structural assumption that the two vectors are members of known subspaces, one with dimension N and the other with dimension K. Although the observed convolution is nonlinear in both w and x, it is linear in the rank-1 matrix formed by their outer product wx*. This observation allows us to recast the deconvolution problem as low-rank matrix recovery problem from linear measurements, whose natural convex relaxation is a nuclear norm minimization program. We prove the effectiveness of this relaxation by showing that, for “generic” signals, the program can deconvolve w and x exactly when the maximum of N and K is almost on the order of L. That is, we show that if x is drawn from a random subspace of dimension N, and w is a vector in a subspace of dimension K whose basis vectors are spread out in the frequency domain, then nuclear norm minimization recovers wx* without error. We discuss this result in the context of blind channel estimation in communications. If we have a message of length N, which we code using a random L x N coding matrix, and the encoded message travels through an unknown linear time-invariant channel of maximum length K, then the receiver can recover both the channel response and the message when L ≳ N + K, to within constant and log factors.",
"This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low-rank matrix. These results improve on prior work by Candes and Recht (2009), Candes and Tao (2009), and (2009). The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory.",
""
]
} |
1610.06098 | 2950332050 | This paper considers recovering @math -dimensional vectors @math , and @math from their circular convolutions @math . The vector @math is assumed to be @math -sparse in a known basis that is spread out in the Fourier domain, and each input @math is a member of a known @math -dimensional random subspace. We prove that whenever @math , the problem can be solved effectively by using only the nuclear-norm minimization as the convex relaxation, as long as the inputs are sufficiently diverse and obey @math . By "diverse inputs", we mean that the @math 's belong to different, generic subspaces. To our knowledge, this is the first theoretical result on blind deconvolution where the subspace to which @math belongs is not fixed, but needs to be determined. We discuss the result in the context of multipath channel estimation in wireless communications. Both the fading coefficients, and the delays in the channel impulse response @math are unknown. The encoder codes the @math -dimensional message vectors randomly and then transmits coded messages @math 's over a fixed channel one after the other. The decoder then discovers all of the messages and the channel response when the number of samples taken for each received message are roughly greater than @math , and the number of messages is roughly at least @math . | A natural question that arises is whether multiple ( @math ) inputs @math 's are necessary in our problem to identify @math in . The answer is no in this specific case as even in the single input case @math , under the same random subspace assumption on @math , and replacing the nuclear norm in with the standard @math norm (sum of absolute entries) will separate @math , and @math ; however, the sample complexity @math will be suboptimal, and of the order of @math to within log factors. 
In the general single input case, under no random subspace assumption on @math , it is shown in @cite_23 that @math , and @math are not identifiable from @math . | {
"cite_N": [
"@cite_23"
],
"mid": [
"2014814922"
],
"abstract": [
"Identifiability is a key concern in ill-posed blind deconvolution problems arising in wireless communications and image processing. The single channel version of the problem is the most challenging and there have been efforts to use sparse models for regularizing the problem. Identifiability of the sparse blind deconvolution problem is analyzed and it is established that a simple sparsity assumption in the canonical basis is insufficient for unique recovery; a surprising negative result. The proof technique involves lifting the deconvolution problem into a rank one matrix recovery problem and analyzing the rank two nullspace of the resultant linear operator. A DoF (degrees of freedom) wise tight parametrized subset of this rank two null-space is constructed to establish the results."
]
} |
1610.06098 | 2950332050 | This paper considers recovering @math -dimensional vectors @math , and @math from their circular convolutions @math . The vector @math is assumed to be @math -sparse in a known basis that is spread out in the Fourier domain, and each input @math is a member of a known @math -dimensional random subspace. We prove that whenever @math , the problem can be solved effectively by using only the nuclear-norm minimization as the convex relaxation, as long as the inputs are sufficiently diverse and obey @math . By "diverse inputs", we mean that the @math 's belong to different, generic subspaces. To our knowledge, this is the first theoretical result on blind deconvolution where the subspace to which @math belongs is not fixed, but needs to be determined. We discuss the result in the context of multipath channel estimation in wireless communications. Both the fading coefficients, and the delays in the channel impulse response @math are unknown. The encoder codes the @math -dimensional message vectors randomly and then transmits coded messages @math 's over a fixed channel one after the other. The decoder then discovers all of the messages and the channel response when the number of samples taken for each received message are roughly greater than @math , and the number of messages is roughly at least @math . | Another relevant result is blind deconvolution plus demixing @cite_9 , where one observes the sum of @math different convolved pairs of @math -dimensional vectors lying in @math , and @math dimensional known subspaces, one of which is generic and the other is incoherent in the Fourier domain. Each generic basis is chosen independently of the others. The blind deconvolution plus the demixing problem is again cast as a rank- @math matrix recovery problem. The algorithm is successful when @math . | {
"cite_N": [
"@cite_9"
],
"mid": [
"2962818938"
],
"abstract": [
"Suppose that we have @math sensors and each one intends to send a function @math (e.g., a signal or an image) to a receiver common to all @math sensors. During transmission, each @math gets convolved with a function @math . The receiver records the function @math , given by the sum of all these convolved signals. When and under which conditions is it possible to recover the individual signals @math and the blurring functions @math from just one received signal @math ? This challenging problem, which intertwines blind deconvolution with blind demixing, appears in a variety of applications, such as audio processing, image processing, neuroscience, spectroscopy, and astronomy. It is also expected to play a central role in connection with the future Internet-of-Things. We will prove that under reasonable and practical assumptions, it is possible to solve this, otherwise, highly ill-posed problem and recover the @math transmitted functions @math and the impulse responses @math in a robust, reliable, and efficient manner, from just one single received function @math by solving a semidefinite program. We derive explicit bounds on the number of measurements needed for successful recovery and prove that our method is robust in the presence of noise. Our theory is actually suboptimal, since numerical experiments demonstrate that, quite remarkably, recovery is still possible if the number of measurements is close to the number of degrees of freedom."
]
} |
1610.06098 | 2950332050 | This paper considers recovering @math -dimensional vectors @math , and @math from their circular convolutions @math . The vector @math is assumed to be @math -sparse in a known basis that is spread out in the Fourier domain, and each input @math is a member of a known @math -dimensional random subspace. We prove that whenever @math , the problem can be solved effectively by using only the nuclear-norm minimization as the convex relaxation, as long as the inputs are sufficiently diverse and obey @math . By "diverse inputs", we mean that the @math 's belong to different, generic subspaces. To our knowledge, this is the first theoretical result on blind deconvolution where the subspace to which @math belongs is not fixed, but needs to be determined. We discuss the result in the context of multipath channel estimation in wireless communications. Both the fading coefficients, and the delays in the channel impulse response @math are unknown. The encoder codes the @math -dimensional message vectors randomly and then transmits coded messages @math 's over a fixed channel one after the other. The decoder then discovers all of the messages and the channel response when the number of samples taken for each received message are roughly greater than @math , and the number of messages is roughly at least @math . | An important recent article from the same group settles the recovery guarantee for a regularized gradient descent algorithm for blind deconvolution, in the single-input case and with the scaling @math @cite_29 . This result, however, makes the assumption of a fixed subspace for the sparse impulse response. Note that gradient descent algorithms are expected to have much more favorable runtimes than semidefinite programming, when their basin of attraction can be established to be wide enough, as in @cite_29 . | {
"cite_N": [
"@cite_29"
],
"mid": [
"2430435693"
],
"abstract": [
"We study the question of reconstructing two signals @math and @math from their convolution @math . This problem, known as blind deconvolution , pervades many areas of science and technology, including astronomy, medical imaging, optics, and wireless communications. A key challenge of this intricate non-convex optimization problem is that it might exhibit many local minima. We present an efficient numerical algorithm that is guaranteed to recover the exact solution, when the number of measurements is (up to log-factors) slightly larger than the information-theoretical minimum, and under reasonable conditions on @math and @math . The proposed regularized gradient descent algorithm converges at a geometric rate and is provably robust in the presence of noise. To the best of our knowledge, our algorithm is the first blind deconvolution algorithm that is numerically efficient, robust against noise, and comes with rigorous recovery guarantees under certain subspace conditions. Moreover, numerical experiments do not only provide empirical verification of our theory, but they also demonstrate that our method yields excellent performance even in situations beyond our theoretical framework."
]
} |
1610.06098 | 2950332050 | This paper considers recovering @math -dimensional vectors @math , and @math from their circular convolutions @math . The vector @math is assumed to be @math -sparse in a known basis that is spread out in the Fourier domain, and each input @math is a member of a known @math -dimensional random subspace. We prove that whenever @math , the problem can be solved effectively by using only the nuclear-norm minimization as the convex relaxation, as long as the inputs are sufficiently diverse and obey @math . By "diverse inputs", we mean that the @math 's belong to different, generic subspaces. To our knowledge, this is the first theoretical result on blind deconvolution where the subspace to which @math belongs is not fixed, but needs to be determined. We discuss the result in the context of multipath channel estimation in wireless communications. Both the fading coefficients, and the delays in the channel impulse response @math are unknown. The encoder codes the @math -dimensional message vectors randomly and then transmits coded messages @math 's over a fixed channel one after the other. The decoder then discovers all of the messages and the channel response when the number of samples taken for each received message are roughly greater than @math , and the number of messages is roughly at least @math . | The multichannel blind deconvolution was first modeled as a rank-1 recovery problem in @cite_18 and the experimental results show the successful joint recovery of Gaussian channel responses with known support that are fed with a single Gaussian noise source. Other interesting works include @cite_22 @cite_25 , where a least squares method is proposed. The approach is deterministic in the sense that the input statistics are not assumed to be known though the channel subspaces are known. Some of the results with various assumptions on input statistics can be found in @cite_17 . 
Owing to the importance of the blind deconvolution problem, an expansive literature is available and the discussion here cannot possibly cover all the related material; an interested reader might start with some of the nice survey articles @cite_24 @cite_26 @cite_0 and the references therein. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_24",
"@cite_0",
"@cite_25",
"@cite_17"
],
"mid": [
"2012163634",
"2043864462",
"",
"2161804069",
"",
"2100432131",
"2137358528"
],
"abstract": [
"We introduce a new algorithm for multichannel blind deconvolution. Given the outputs of K linear time-invariant channels driven by a common source, we wish to recover their impulse responses without knowledge of the source signal. Abstractly, this problem amounts to finding a solution to an overdetermined system of quadratic equations. We show how we can recast the problem as solving a system of underdetermined linear equations with a rank constraint. Recent results in the area of low rank recovery have shown that there are effective convex relaxations to problems of this type that are also scalable computationally, allowing us to recover 100s of channel responses after a moderate observation time. We illustrate the effectiveness of our methodology with a numerical simulation of a \"passive noise imaging\" experiment.",
"Since (1991) demonstrated the feasibility of identifying possibly nonminimum phase channels using second-order statistics, considerable research activity, both in algorithm development and fundamental analysis, has been seen in the area of blind identification of multiple FIR channels. Many of the recently developed approaches invoke, either explicitly or implicitly, the algebraic structure of the data model, while some others resort to the use of cyclic correlation spectral fitting techniques. The objective of this paper is to establish insightful connections among these studies and present recent developments of blind channel equalization. We also unify various representative algorithms into a common theoretical framework.",
"",
"Blind deconvolution is the recovery of a sharp version of a blurred image when the blur kernel is unknown. Recent algorithms have afforded dramatic progress, yet many aspects of the problem remain challenging and hard to understand. The goal of this paper is to analyze and evaluate recent blind deconvolution algorithms both theoretically and experimentally. We explain the previously reported failure of the naive MAP approach by demonstrating that it mostly favors no-blur explanations. We show that, using reasonable image priors, a naive simulations MAP estimation of both latent image and blur kernel is guaranteed to fail even with infinitely large images sampled from the prior. On the other hand, we show that since the kernel size is often smaller than the image size, a MAP estimation of the kernel alone is well constrained and is guaranteed to succeed to recover the true blur. The plethora of recent deconvolution techniques makes an experimental evaluation on ground-truth data important. As a first step toward this experimental evaluation, we have collected blur data with ground truth and compared recent algorithms under equal settings. Additionally, our data demonstrate that the shift-invariant blur assumption made by most algorithms is often violated.",
"",
"A new algorithm is proposed for the deconvolution of an unknown, possibly colored, Gaussian or nonstationary signal that is observed through two or more unknown channels described by rational system transfer functions. More specifically, not only the root (pole and zero) locations but also the orders of the channel transfer functions are unknown. It is assumed that the channel orders may be overestimated. The proposed algorithm estimates the orders and root locations of the channel transfer functions, therefore it can also be used in multichannel system identification problems. The input signal is allowed to be nonstationary and the channel transfer functions may be nonminimum phase as well as noncausal, hence the proposed algorithm is particularly suitable for applications such as dereverberation of speech signals recorded through multiple microphones. Several experimental results indicate improvement compared to the existing methods in the literature.",
"A new blind channel identification and equalization method is proposed that exploits the cyclostationarity of oversampled communication signals to achieve identification and equalization of possibly nonminimum phase (multipath) channels without using training signals. Unlike most adaptive blind equalization methods for which the convergence properties are often problematic, the channel estimation algorithm proposed here is asymptotically exact. Moreover, since it is based on second-order statistics, the new approach may achieve equalization with fewer symbols than most techniques based only on higher-order statistics. Simulations have demonstrated promising performance of the proposed algorithm for the blind equalization of a three-ray multipath channel."
]
} |
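As a side note on the least-squares line of work cited above, the classical multichannel trick can be stated in two lines: if two channels are driven by the same source, the cross-relation h2 ⊛ y1 = h1 ⊛ y2 holds without ever referencing the source. A toy check (hypothetical illustration, not the cited algorithms; circular convolution is used for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
s = rng.standard_normal(n)    # common (unknown) source
h1 = rng.standard_normal(n)   # channel 1
h2 = rng.standard_normal(n)   # channel 2

def cconv(u, v):
    """Circular convolution via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)))

y1, y2 = cconv(h1, s), cconv(h2, s)   # the two observed outputs

# Cross-relation: h2 * y1 = h2 * h1 * s = h1 * h2 * s = h1 * y2,
# a linear constraint on (h1, h2) that never involves the source s.
assert np.allclose(cconv(h2, y1), cconv(h1, y2))
```

Least-squares methods of the kind cited above exploit exactly such source-free linear constraints on the unknown channels.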
1610.06098 | 2950332050 | This paper considers recovering @math -dimensional vectors @math , and @math from their circular convolutions @math . The vector @math is assumed to be @math -sparse in a known basis that is spread out in the Fourier domain, and each input @math is a member of a known @math -dimensional random subspace. We prove that whenever @math , the problem can be solved effectively by using only the nuclear-norm minimization as the convex relaxation, as long as the inputs are sufficiently diverse and obey @math . By "diverse inputs", we mean that the @math 's belong to different, generic subspaces. To our knowledge, this is the first theoretical result on blind deconvolution where the subspace to which @math belongs is not fixed, but needs to be determined. We discuss the result in the context of multipath channel estimation in wireless communications. Both the fading coefficients, and the delays in the channel impulse response @math are unknown. The encoder codes the @math -dimensional message vectors randomly and then transmits coded messages @math 's over a fixed channel one after the other. The decoder then discovers all of the messages and the channel response when the number of samples taken for each received message are roughly greater than @math , and the number of messages is roughly at least @math . | It is also worth mentioning here a related line of research in the phase recovery problem from phaseless measurements @cite_12 @cite_1 , which happen to be quadratic in the unknowns. As in bilinear problems, it is also possible to lift the quadratic phase recovery problem to a higher dimensional space, and solve for a positive-definite matrix with minimal rank that satisfies the measurement constraints. | {
"cite_N": [
"@cite_1",
"@cite_12"
],
"mid": [
"1558303534",
"2078397124"
],
"abstract": [
"This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging, and many other applications. Our approach, called PhaseLift, combines multiple structured illuminations together with ideas from convex programming to recover the phase from intensity measurements, typically from the modulus of the diffracted wave. We demonstrate empirically that a complex-valued object can be recovered from the knowledge of the magnitude of just a few diffracted patterns by solving a simple convex optimization problem inspired by the recent literature on matrix completion. More importantly, we also demonstrate that our noise-aware algorithms are stable in the sense that the reconstruction degrades gracefully as the signal-to-noise ratio decreases. Finally, we introduce some theory showing that one can design very simple structured illumination patterns such that three diffracted figures uniquely determine the phase of the object we wish to...",
"Suppose we wish to recover a signal @math from @math intensity measurements of the form @math ; that is, from data in which phase information is missing. We prove that if the vectors are sampled independently and uniformly at random on the unit sphere, then the signal x can be recovered exactly (up to a global phase factor) by solving a convenient semidefinite program, a trace-norm minimization problem; this holds with large probability provided that m is on the order of @math , and without any assumption about the signal whatsoever. This novel result demonstrates that in some instances, the combinatorial phase retrieval problem can be solved by convex programming techniques. Finally, we also prove that our methodology is robust vis-a-vis additive noise. © 2012 Wiley Periodicals, Inc."
]
} |
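The parallel drawn above between bilinear and quadratic problems is easy to verify: a phaseless measurement |&lt;a, x&gt;|^2 is linear in the lifted matrix X = x x^*. A minimal numerical sketch (illustration only, not the PhaseLift solver; names are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # unknown signal
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # measurement vector

phaseless = np.abs(np.vdot(a, x)) ** 2       # quadratic in x

# Lift: |<a, x>|^2 = a^* (x x^*) a = trace((a a^*) X) with X = x x^*,
# i.e. a LINEAR functional of the rank-one positive semidefinite matrix X.
X = np.outer(x, x.conj())
A = np.outer(a, a.conj())
linear_in_X = np.real(np.trace(A @ X))

assert np.isclose(phaseless, linear_in_X)
```

Recovering x then amounts to finding a rank-one (or minimum-trace) positive semidefinite X consistent with the measurements, which is the semidefinite program mentioned in the abstracts above.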
1610.05854 | 2536967496 | Semantic segmentation is challenging as it requires both object-level information and pixel-level accuracy. Recently, FCN-based systems gained great improvement in this area. Unlike classification networks, combining features of different layers plays an important role in these dense prediction models, as these features contain information of different levels. A number of models have been proposed to show how to use these features. However, what is the best architecture to make use of features of different layers is still a question. In this paper, we propose a module, called mixed context network, and show that our presented system outperforms most existing semantic segmentation systems by making use of this module. | Many systems have been proposed to handle scale variability in semantic segmentation. One of the most common approaches is extracting score maps from multiple rescaled versions of the original image by making use of parallel CNN branches @cite_12 . @cite_13 uses skip connections to combine the predictions of fine layers and coarse layers. @cite_1 gains improvements on detection and segmentation tasks through the Hypercolumns representation, which can be generated by skip connections. @cite_5 attacks this problem by using atrous spatial pyramid pooling (ASPP), which is constructed from multiple parallel atrous convolutional layers with different sampling rates. @cite_10 studies the influence of both long and short skip connections, and finds that both of them are helpful to FCN-based methods. | {
"cite_N": [
"@cite_10",
"@cite_1",
"@cite_5",
"@cite_13",
"@cite_12"
],
"mid": [
"2517954747",
"1948751323",
"2412782625",
"2952632681",
"2158865742"
],
"abstract": [
"In this paper, we study the influence of both long and short skip connections on Fully Convolutional Networks (FCN) for biomedical image segmentation. In standard FCNs, only long skip connections are used to skip features from the contracting path to the expanding path in order to recover spatial information lost during downsampling. We extend FCNs by adding short skip connections, that are similar to the ones introduced in residual networks, in order to build very deep FCNs (of hundreds of layers). A review of the gradient flow confirms that for a very deep FCN it is beneficial to have both long and short skip connections. Finally, we show that a very deep FCN can achieve near-to-state-of-the-art results on the EM dataset without any further post-processing.",
"Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline.",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms average- and max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014."
]
} |
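The effect of the atrous (dilated) convolutions used in ASPP can be sketched in one dimension: a rate-r filter reads inputs r samples apart, so the same three weights cover a wider field of view without adding parameters. (Toy illustration only, not code from the cited systems; the helper name is made up.)

```python
import numpy as np

def atrous_conv1d(signal, kernel, rate):
    """'Valid' 1-D atrous convolution: taps are `rate` samples apart."""
    k = len(kernel)
    span = (k - 1) * rate + 1                 # effective receptive field
    out = [sum(kernel[j] * signal[i + j * rate] for j in range(k))
           for i in range(len(signal) - span + 1)]
    return np.array(out), span

x = np.arange(10, dtype=float)
y1, span1 = atrous_conv1d(x, [1.0, 1.0, 1.0], rate=1)  # ordinary 3-tap filter
y2, span2 = atrous_conv1d(x, [1.0, 1.0, 1.0], rate=2)  # same weights, wider view

assert (span1, span2) == (3, 5)               # field of view grows, params don't
assert np.allclose(y1, x[:-2] + x[1:-1] + x[2:])
```

Running several such filters in parallel at different rates, then fusing the outputs, is the essence of the ASPP module discussed above.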
1610.05465 | 2952366876 | The automatic analysis of the surgical process, from videos recorded during surgeries, could be very useful to surgeons, both for training and for acquiring new techniques. The training process could be optimized by automatically providing some targeted recommendations or warnings, similar to the expert surgeon's guidance. In this paper, we propose to reuse videos recorded and stored during cataract surgeries to perform the analysis. The proposed system makes it possible to automatically recognize, in real time, what the surgeon is doing: what surgical phase or, more precisely, what surgical step he or she is performing. This recognition relies on the inference of a multilevel statistical model which uses 1) the conditional relations between levels of description (steps and phases) and 2) the temporal relations among steps and among phases. The model accepts two types of inputs: 1) the presence of surgical tools, manually provided by the surgeons, or 2) motion in videos, automatically analyzed through the Content-Based Video Retrieval (CBVR) paradigm. Different data-driven statistical models are evaluated in this paper. For this project, a dataset of 30 cataract surgery videos was collected at Brest University hospital. The system was evaluated in terms of area under the ROC curve. Promising results were obtained using either the presence of surgical tools ( @math = 0.983) or motion analysis ( @math = 0.759). The generality of the method allows it to be adapted to any kind of surgery. The proposed solution could be used in a computer-assisted surgery tool to support surgeons during the surgery. | In terms of methodology, several methods reuse data recorded during a video-monitored surgery for the automated analysis of surgical processes @cite_20 @cite_8 . In particular, some of these methods rely on content-based video retrieval (CBVR), whose goal is to find similar videos or sub-videos inside a dataset @cite_9 @cite_16 @cite_6 @cite_24 .
However, those methods do not model the temporal sequencing of the surgical process. Different kinds of models have been used to model this process, such as Dynamic Time Warping (DTW) averaging @cite_1 @cite_20 , which builds an average surgery. This method, however, does not allow on-line computation because it requires the entire video to be known (past, present, and also future information). On the other hand, Hidden Markov Models (HMMs) @cite_1 and their derivatives, such as Conditional Random Fields (CRFs), do allow on-line computation. CRFs seem to provide better results than HMMs in the context of automatic surgical video analysis @cite_7 @cite_21 . A Hierarchical Hidden Markov Model (HHMM), a hierarchical generalization of HMMs, was also used by @cite_23 to perform phase recognition while taking into account inter-phase and intra-phase dependencies. | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_24",
"@cite_23",
"@cite_16",
"@cite_20"
],
"mid": [
"2064767749",
"",
"2041179244",
"593558681",
"2107466310",
"1998162403",
"2408936021",
"2951144181",
"2018421200",
""
],
"abstract": [
"In ophthalmology, it is now common practice to record every surgical procedure and to archive the resulting videos for documentation purposes. In this paper, we present a solution to automatically segment and categorize surgical tasks in real-time during the surgery, using the video recording. The goal would be to communicate information to the surgeon in due time, such as recommendations to the less experienced surgeons. The proposed solution relies on the content-based video retrieval paradigm: it reuses previously archived videos to automatically analyze the current surgery, by analogy reasoning. Each video is segmented, in real-time, into an alternating sequence of idle phases, during which no clinically-relevant motions are visible, and action phases. As soon as an idle phase is detected, the previous action phase is categorized and the next action phase is predicted. A conditional random field is used for categorization and prediction. The proposed system was applied to the automatic segmentation and categorization of cataract surgery tasks. A dataset of 186 surgeries, performed by ten different surgeons, was manually annotated: ten possibly overlapping surgical tasks were delimited in each surgery. Using the content of action phases and the duration of idle phases as sources of evidence, an average recognition performance of @math was achieved.",
"",
"This paper describes a practical and reliable solution approach to achieve automated retrieval of surgical instruments used in laparoscopic surgery. The central goal is to locate particular video frames containing intended information which can be used for analysis and diagnosis. In this paper, a practical system is proposed where the users need not manually search the candidate frames in the entire video. Instead, users can give any query object (in image format) and the frames containing the object will be retrieved. Given an object image, the method extracts features like color and shape of objects in each frame of the laparoscopic video and compare with the input image feature to retrieve the frames containing the desired instrument. The system can recognize the instrument in 91 cases but does not give any false alarm. Experimental results are presented to show the feasibility of the proposed application.",
"Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot’s kinematic data. Although recent work [1,2] shows that the robot’s video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data.",
"In this paper, we contribute to the development of context-aware operating rooms by introducing a novel approach to modeling and monitoring the workflow of surgical interventions. We first propose a new representation of interventions in terms of multidimensional time-series formed by synchronized signals acquired over time. We then introduce methods based on Dynamic Time Warping and Hidden Markov Models to analyze and process this data. This results in workflow models combining low-level signals with high-level information such as predefined phases, which can be used to detect actions and trigger an event. Two methods are presented to train these models, using either fully or partially labeled training surgeries. Results are given based on tool usage recordings from sixteen laparoscopic cholecystectomies performed by several surgeons.",
"This paper introduces a new algorithm for recognizing surgical tasks in real-time in a video stream. The goal is to communicate information to the surgeon in due time during a video-monitored surgery. The proposed algorithm is applied to cataract surgery, which is the most common eye surgery. To compensate for eye motion and zoom level variations, cataract surgery videos are first normalized. Then, the motion content of short video subsequences is characterized with spatiotemporal polynomials: a multiscale motion characterization based on adaptive spatiotemporal polynomials is presented. The proposed solution is particularly suited to characterize deformable moving objects with fuzzy borders, which are typically found in surgical videos. Given a target surgical task, the system is trained to identify which spatiotemporal polynomials are usually extracted from videos when and only when this task is being performed. These key spatiotemporal polynomials are then searched in new videos to recognize the target surgical task. For improved performances, the system jointly adapts the spatiotemporal polynomial basis and identifies the key spatiotemporal polynomials using the multiple-instance learning paradigm. The proposed system runs in real-time and outperforms the previous solution from our group, both for surgical task recognition ( @math on average, as opposed to @math previously) and for the joint segmentation and recognition of surgical tasks ( @math on average, as opposed to @math previously).",
"Purpose Over the last decade, the demand for content management of video recordings of surgical procedures has greatly increased. Although a few research methods have been published toward this direction, the related literature is still in its infancy. In this paper, we address the problem of shot detection in endoscopic surgery videos, a fundamental step in content-based video analysis.",
"Surgical workflow recognition has numerous potential medical applications, such as the automatic indexing of surgical video databases and the optimization of real-time operating room scheduling, among others. As a result, phase recognition has been studied in the context of several kinds of surgeries, such as cataract, neurological, and laparoscopic surgeries. In the literature, two types of features are typically used to perform this task: visual features and tool usage signals. However, the visual features used are mostly handcrafted. Furthermore, the tool usage signals are usually collected via a manual annotation process or by using additional equipment. In this paper, we propose a novel method for phase recognition that uses a convolutional neural network (CNN) to automatically learn features from cholecystectomy videos and that relies uniquely on visual information. In previous studies, it has been shown that the tool signals can provide valuable information in performing the phase recognition task. Thus, we present a novel CNN architecture, called EndoNet, that is designed to carry out the phase recognition and tool presence detection tasks in a multi-task manner. To the best of our knowledge, this is the first work proposing to use a CNN for multiple recognition tasks on laparoscopic videos. Extensive experimental comparisons to other methods show that EndoNet yields state-of-the-art results for both tasks.",
"Nowadays, many surgeries, including eye surgeries, are video-monitored. We present in this paper an automatic video analysis system able to recognize surgical tasks in real-time. The proposed system relies on the Content-Based Video Retrieval (CBVR) paradigm. It characterizes short subsequences in the video stream and searches for video subsequences with similar structures in a video archive. Fixed-length feature vectors are built for each subsequence: the feature vectors are unchanged by variations in duration and temporal structure among the target surgical tasks. Therefore, it is possible to perform fast nearest neighbor searches in the video archive. The retrieved video subsequences are used to recognize the current surgical task by analogy reasoning. The system can be trained to recognize any surgical task using weak annotations only. It was applied to a dataset of 23 epiretinal membrane surgeries and a dataset of 100 cataract surgeries. Three surgical tasks were annotated in the first dataset. Nine surgical tasks were annotated in the second dataset. To assess its generality, the system was also applied to a dataset of 1,707 movie clips in which 12 human actions were annotated. High task recognition scores were measured in all three datasets. Real-time task recognition will be used in future works to communicate with surgeons (trainees in particular) or with surgical devices.",
""
]
} |
1610.05465 | 2952366876 | The automatic analysis of the surgical process, from videos recorded during surgeries, could be very useful to surgeons, both for training and for acquiring new techniques. The training process could be optimized by automatically providing some targeted recommendations or warnings, similar to the expert surgeon's guidance. In this paper, we propose to reuse videos recorded and stored during cataract surgeries to perform the analysis. The proposed system allows to automatically recognize, in real time, what the surgeon is doing: what surgical phase or, more precisely, what surgical step he or she is performing. This recognition relies on the inference of a multilevel statistical model which uses 1) the conditional relations between levels of description (steps and phases) and 2) the temporal relations among steps and among phases. The model accepts two types of inputs: 1) the presence of surgical tools, manually provided by the surgeons, or 2) motion in videos, automatically analyzed through the Content Based Video retrieval (CBVR) paradigm. Different data-driven statistical models are evaluated in this paper. For this project, a dataset of 30 cataract surgery videos was collected at Brest University hospital. The system was evaluated in terms of area under the ROC curve. Promising results were obtained using either the presence of surgical tools ( @math = 0.983) or motion analysis ( @math = 0.759). The generality of the method allows to adapt it to any kinds of surgeries. The proposed solution could be used in a computer assisted surgery tool to support surgeons during the surgery. | The method presented in this paper extends a previous solution from our group, presented at a conference @cite_25 . That system performs an on-line analysis of a cataract surgery video at two different levels of description. 
It uses high-level phase recognition to help low-level step recognition, but it also uses information from step recognition to refine the recognition of phases. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2468278636"
],
"abstract": [
"Data recorded and stored during video-monitored surgeries are a relevant source of information for surgeons, especially during their training period. But today, this data is virtually unexploited. In this paper, we propose to reuse videos recorded during cataract surgeries to automatically analyze the surgical process with the real-time constraint, with the aim to assist the surgeon during the surgery. We propose to automatically recognize, in real-time, what the surgeon is doing: what surgical phase or, more precisely, what surgical step he or she is performing. This recognition relies on the inference of a multilevel statistical model which uses 1) the conditional relations between levels of description (steps and phases) and 2) the temporal relations among steps and among phases. The model accepts two types of inputs: 1) the presence of surgical instruments, manually provided by the surgeons, or 2) motion in videos, automatically analyzed through the CBVR paradigm. A dataset of 30 cataract surgery videos was collected at Brest University hospital. The system was evaluated in terms of mean area under the ROC curve. Promising results were obtained using either motion analysis (Az = 0.759) or the presence of surgical instruments (Az = 0.983)."
]
} |
1610.05820 | 2949777041 | We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies. | Attacks on statistical and machine learning models. In @cite_8 , knowledge of the parameters of SVM and HMM models is used to infer general statistical information about the training dataset, for example, whether records of a particular race were used during training. By contrast, our inference attacks work in a black-box setting, without any knowledge of the model's parameters, and infer information about specific records in the training dataset, as opposed to general statistics. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2962835266"
],
"abstract": [
"Machine-learning (ML) enables computers to learn how to recognise patterns, make unintended decisions, or react to a dynamic environment. The effectiveness of trained machines varies because of more suitable ML algorithms or because superior training sets. Although ML algorithms are known and publicly released, training sets may not be reasonably ascertainable and, indeed, may be guarded as trade secrets. In this paper we focus our attention on ML classifiers and on the statistical information that can be unconsciously or maliciously revealed from them. We show that it is possible to infer unexpected but useful information from ML classifiers. In particular, we build a novel meta-classifier and train it to hack other classifiers, obtaining meaningful information about their training sets. Such information leakage can be exploited, for example, by a vendor to build more effective classifiers or to simply acquire trade secrets from a competitor's apparatus, potentially violating its intellectual property rights."
]
} |
1610.05820 | 2949777041 | We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies. | Other attacks on machine learning include @cite_24 , where the adversary exploits changes in the outputs of a collaborative recommender system to infer inputs that caused these changes. These attacks exploit temporal behavior specific to the recommender systems based on collaborative filtering. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2095272373"
],
"abstract": [
"David Craig and colleagues recently reported methods allowing detection of individual genotypes from summary data of high-density SNP arrays. Eran Halperin and colleagues now report analyses of the statistical power of these methods, employing likelihood ratio statistics to provide an upper-bound to the limits of detection."
]
} |
1610.05820 | 2949777041 | We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies. | In general, model inversion cannot tell whether a particular record was used as part of the model's training dataset. Given a record and a model, model inversion works exactly the same way when the record was used to train the model and when it was not used. In the case of pharmacogenetics @cite_23 , model inversion produces almost identical results for members and non-members. Due to the overfitting of the model, the results are a little (4%) more accurate for the members, but this accuracy can only be measured in retrospect, if the adversary already knows the ground truth (i.e., which records are indeed members of the model's training dataset). By contrast, our goal is to construct a decision procedure that distinguishes members from non-members. | {
"cite_N": [
"@cite_23"
],
"mid": [
"1473189865"
],
"abstract": [
"We initiate the study of privacy in pharmacogenetics, wherein machine learning models are used to guide medical treatments based on a patient's genotype and background. Performing an in-depth case study on privacy in personalized warfarin dosing, we show that suggested models carry privacy risks, in particular because attackers can perform what we call model inversion: an attacker, given the model and some demographic information about a patient, can predict the patient's genetic markers. As differential privacy (DP) is an oft-proposed solution for medical settings such as this, we evaluate its effectiveness for building private versions of pharmacogenetic models. We show that DP mechanisms prevent our model inversion attacks when the privacy budget is carefully selected. We go on to analyze the impact on utility by performing simulated clinical trials with DP dosing models. We find that for privacy budgets effective at preventing attacks, patients would be exposed to increased risk of stroke, bleeding events, and mortality. We conclude that current DP mechanisms do not simultaneously improve genomic privacy while retaining desirable clinical efficacy, highlighting the need for new mechanisms that should be evaluated in situ using the general methodology introduced by our work."
]
} |
1610.05820 | 2949777041 | We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies. | Model extraction. Model extraction attacks @cite_6 aim to extract the parameters of a model trained on private data. The attacker's goal is to construct a model whose predictive performance on validation data is similar to the target model. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2461943168"
],
"abstract": [
"Machine learning (ML) models may be deemed confidential due to their sensitive training data, commercial value, or use in security applications. Increasingly often, confidential ML models are being deployed with publicly accessible query interfaces. ML-as-a-service (\"predictive analytics\") systems are an example: Some allow users to train models on potentially sensitive data and charge others for access on a pay-per-query basis. The tension between model confidentiality and public access motivates our investigation of model extraction attacks. In such attacks, an adversary with black-box access, but no prior knowledge of an ML model's parameters or training data, aims to duplicate the functionality of (i.e., \"steal\") the model. Unlike in classical learning theory settings, ML-as-a-service offerings may accept partial feature vectors as inputs and include confidence values with predictions. Given these practices, we show simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees. We demonstrate these attacks against the online services of BigML and Amazon Machine Learning. We further show that the natural countermeasure of omitting confidence values from model outputs still admits potentially harmful model extraction attacks. Our results highlight the need for careful ML model deployment and new model extraction countermeasures."
]
} |
1610.05820 | 2949777041 | We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies. | Model extraction can be a stepping stone for inferring information about the model's training dataset. In @cite_6 , this is illustrated for a specific type of models called kernel logistic regression (KLR) @cite_27 . In KLR models, the kernel function includes a tiny fraction of the training data (so called ``import points'') directly into the model. Since import points are parameters of the model, extracting them results in the leakage of that particular part of the data. This result is very specific to KLR and does not extend to other types of models since they do not explicitly store training data in their parameters. | {
"cite_N": [
"@cite_27",
"@cite_6"
],
"mid": [
"2043182541",
"2461943168"
],
"abstract": [
"The support vector machine (SVM) is known for its good performance in two-class classification, but its extension to multiclass classification is still an ongoing research issue. In this article, we propose a new approach for classification, called the import vector machine (IVM), which is built on kernel logistic regression (KLR). We show that the IVM not only performs as well as the SVM in two-class classification, but also can naturally be generalized to the multiclass case. Furthermore, the IVM provides an estimate of the underlying probability. Similar to the support points of the SVM, the IVM model uses only a fraction of the training data to index kernel basis functions, typically a much smaller fraction than the SVM. This gives the IVM a potential computational advantage over the SVM.",
"Machine learning (ML) models may be deemed confidential due to their sensitive training data, commercial value, or use in security applications. Increasingly often, confidential ML models are being deployed with publicly accessible query interfaces. ML-as-a-service (\"predictive analytics\") systems are an example: Some allow users to train models on potentially sensitive data and charge others for access on a pay-per-query basis. The tension between model confidentiality and public access motivates our investigation of model extraction attacks. In such attacks, an adversary with black-box access, but no prior knowledge of an ML model's parameters or training data, aims to duplicate the functionality of (i.e., \"steal\") the model. Unlike in classical learning theory settings, ML-as-a-service offerings may accept partial feature vectors as inputs and include confidence values with predictions. Given these practices, we show simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees. We demonstrate these attacks against the online services of BigML and Amazon Machine Learning. We further show that the natural countermeasure of omitting confidence values from model outputs still admits potentially harmful model extraction attacks. Our results highlight the need for careful ML model deployment and new model extraction countermeasures."
]
} |
1610.05820 | 2949777041 | We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies. | Privacy-preserving machine learning. Existing literature on privacy protection in machine learning focuses mostly on how to learn without direct access to the training data. Secure multiparty computation (SMC) has been used for learning decision trees @cite_13 , linear regression functions @cite_20 , Naive Bayes classifiers @cite_29 , and k-means clustering @cite_14 . The goal is to limit information leakage during training. The training algorithm is the same as in the non-privacy-preserving case, thus the resulting models are as vulnerable to inference attacks as any conventionally trained model. This also holds for the models trained by computing on encrypted data @cite_28 @cite_30 @cite_5 . | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_28",
"@cite_29",
"@cite_5",
"@cite_13",
"@cite_20"
],
"mid": [
"",
"2009733253",
"2077905990",
"",
"1826232489",
"2612279584",
"2112380340"
],
"abstract": [
"",
"Advances in computer networking and database technologies have enabled the collection and storage of vast quantities of data. Data mining can extract valuable knowledge from this data, and organizations have realized that they can often obtain better results by pooling their data together. However, the collected data may contain sensitive or private information about the organizations or their customers, and privacy concerns are exacerbated if data is shared between multiple organizations.Distributed data mining is concerned with the computation of models from data that is distributed among multiple participants. Privacy-preserving distributed data mining seeks to allow for the cooperative computation of such models without the cooperating parties revealing any of their individual data items. Our paper makes two contributions in privacy-preserving data mining. First, we introduce the concept of arbitrarily partitioned data, which is a generalization of both horizontally and vertically partitioned data. Second, we provide an efficient privacy-preserving protocol for k-means clustering in the setting of arbitrarily partitioned data.",
"Abstract Increasingly, confidential medical records are being stored in data centers hosted by hospitals or large companies. As sophisticated algorithms for predictive analysis on medical data continue to be developed, it is likely that, in the future, more and more computation will be done on private patient data. While encryption provides a tool for assuring the privacy of medical information, it limits the functionality for operating on such data. Conventional encryption methods used today provide only very restricted possibilities or none at all to operate on encrypted data without decrypting it first. Homomorphic encryption provides a tool for handling such computations on encrypted data, without decrypting the data, and without even needing the decryption key. In this paper, we discuss possible application scenarios for homomorphic encryption in order to ensure privacy of sensitive medical data. We describe how to privately conduct predictive analysis tasks on encrypted data using homomorphic encryption. As a proof of concept, we present a working implementation of a prediction service running in the cloud (hosted on Microsoft’s Windows Azure), which takes as input private encrypted health data, and returns the probability for suffering cardiovascular disease in encrypted form. Since the cloud service uses homomorphic encryption, it makes this prediction while handling only encrypted data, learning nothing about the submitted confidential medical data.",
"",
"The problem we address is the following: how can a user employ a predictive model that is held by a third party, without compromising private information. For example, a hospital may wish to use a cloud service to predict the readmission risk of a patient. However, due to regulations, the patient's medical files cannot be revealed. The goal is to make an inference using the model, without jeopardizing the accuracy of the prediction or the privacy of the data. To achieve high accuracy, we use neural networks, which have been shown to outperform other learning models for many tasks. To achieve the privacy requirements, we use homomorphic encryption in the following protocol: the data owner encrypts the data and sends the ciphertexts to the third party to obtain a prediction from a trained model. The model operates on these ciphertexts and sends back the encrypted prediction. In this protocol, not only the data remains private, even the values predicted are available only to the data owner. Using homomorphic encryption and modifications to the activation functions and training algorithms of neural networks, we show that it is protocol is possible and may be feasible. This method paves the way to build a secure cloud-based neural network prediction services without invading users' privacy.",
"In this paper we introduce the concept of privacy preserving data mining. In our model, two parties owning confidential databases wish to run a data mining algorithm on the union of their databases, without revealing any unnecessary information. This problem has many practical and important applications, such as in medical research with confidential patient records. Data mining algorithms are usually complex, especially as the size of the input is measured in megabytes, if not gigabytes. A generic secure multi-party computation solution, based on evaluation of a circuit computing the algorithm on the entire input, is therefore of no practical use. We focus on the problem of decision tree learning and use ID3, a popular and widely used algorithm for this problem. We present a solution that is considerably more efficient than generic solutions. It demands very few rounds of communication and reasonable bandwidth. In our solution, each party performs by itself a computation of the same order as computing the ID3 algorithm for its own database. The results are then combined using efficient cryptographic protocols, whose overhead is only logarithmic in the number of transactions in the databases. We feel that our result is a substantial contribution, demonstrating that secure multi-party computation can be made practical, even for complex problems and large inputs.",
"This paper addresses the important tradeoff between privacy and learnability, when designing algorithms for learning from private databases. We focus on privacy-preserving logistic regression. First we apply an idea of [6] to design a privacy-preserving logistic regression algorithm. This involves bounding the sensitivity of regularized logistic regression, and perturbing the learned classifier with noise proportional to the sensitivity. We then provide a privacy-preserving regularized logistic regression algorithm based on a new privacy-preserving technique: solving a perturbed optimization problem. We prove that our algorithm preserves privacy in the model due to [6]. We provide learning guarantees for both algorithms, which are tighter for our new algorithm, in cases in which one would typically apply logistic regression. Experiments demonstrate improved learning performance of our method, versus the sensitivity method. Our privacy-preserving technique does not depend on the sensitivity of the function, and extends easily to a class of convex loss functions. Our work also reveals an interesting connection between regularization and privacy."
]
} |
1610.05820 | 2949777041 | We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies. | Differential privacy @cite_26 has been applied to linear and logistic regression @cite_15 @cite_37 , support vector machines @cite_35 , risk minimization @cite_3 @cite_17 @cite_32 , deep learning @cite_4 @cite_31 , learning an unknown probability distribution over a discrete population from random samples @cite_18 , and releasing hyper-parameters and classifier accuracy @cite_21 . By definition, differentially private models limit the success probability of membership inference attacks based solely on the model, which includes the attacks described in this paper. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_26",
"@cite_4",
"@cite_31",
"@cite_18",
"@cite_21",
"@cite_32",
"@cite_3",
"@cite_15",
"@cite_17"
],
"mid": [
"2951114885",
"",
"",
"2053637704",
"",
"2186645727",
"2949274245",
"",
"2119874464",
"2162379889",
""
],
"abstract": [
"We study statistical risk minimization problems under a privacy model in which the data is kept confidential even from the learner. In this local privacy framework, we establish sharp upper and lower bounds on the convergence rates of statistical estimation procedures. As a consequence, we exhibit a precise tradeoff between the amount of privacy the data preserves and the utility, as measured by convergence rate, of any statistical estimator or learning procedure.",
"",
"",
"Deep learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of deep learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of deep learning techniques is directly proportional to the amount of data available for training. Massive data collection required for deep learning presents obvious privacy issues. Users' personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it, nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. Many data owners--for example, medical institutions that may want to apply deep learning methods to clinical records--are prevented by privacy and confidentiality concerns from sharing the data and thus benefitting from large-scale deep learning. In this paper, we design, implement, and evaluate a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets. We exploit the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously. Our system lets participants train independently on their own datasets and selectively share small subsets of their models' key parameters during training. This offers an attractive point in the utility privacy tradeoff space: participants preserve the privacy of their respective data while still benefitting from other participants' models and thus boosting their learning accuracy beyond what is achievable solely on their own inputs. 
We demonstrate the accuracy of our privacy-preserving deep learning on benchmark datasets.",
"",
"We investigate the problem of learning an unknown probability distribution over a discrete population from random samples. Our goal is to design efficient algorithms that simultaneously achieve low error in total variation norm while guaranteeing Differential Privacy to the individuals of the population. We describe a general approach that yields near sample-optimal and computationally efficient differentially private estimators for a wide range of well-studied and natural distribution families. Our theoretical results show that for a wide variety of structured distributions there exist private estimation algorithms that are nearly as efficient—both in terms of sample size and running time—as their non-private counterparts. We complement our theoretical guarantees with an experimental evaluation. Our experiments illustrate the speed and accuracy of our private estimators on both synthetic mixture models and a large public data set.",
"Bayesian optimization is a powerful tool for fine-tuning the hyper-parameters of a wide variety of machine learning models. The success of machine learning has led practitioners in diverse real-world settings to learn classifiers for practical problems. As machine learning becomes commonplace, Bayesian optimization becomes an attractive method for practitioners to automate the process of classifier hyper-parameter tuning. A key observation is that the data used for tuning models in these settings is often sensitive. Certain data such as genetic predisposition, personal email statistics, and car accident history, if not properly private, may be at risk of being inferred from Bayesian optimization outputs. To address this, we introduce methods for releasing the best hyper-parameters and classifier accuracy privately. Leveraging the strong theoretical guarantees of differential privacy and known Bayesian optimization convergence bounds, we prove that under a GP assumption these private quantities are also near-optimal. Finally, even if this assumption is not satisfied, we can use different smoothness guarantees to protect privacy.",
"",
"Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy definition due to (2006). First we apply the output perturbation ideas of (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.",
"ε-differential privacy is the state-of-the-art model for releasing sensitive information while protecting privacy. Numerous methods have been proposed to enforce ε-differential privacy in various analytical tasks, e.g., regression analysis. Existing solutions for regression analysis, however, are either limited to non-standard types of regression or unable to produce accurate regression results. Motivated by this, we propose the Functional Mechanism, a differentially private method designed for a large class of optimization-based analyses. The main idea is to enforce ε-differential privacy by perturbing the objective function of the optimization problem, rather than its results. As case studies, we apply the functional mechanism to address two most widely used regression models, namely, linear regression and logistic regression. Both theoretical analysis and thorough experimental evaluations show that the functional mechanism is highly effective and efficient, and it significantly outperforms existing solutions.",
""
]
} |
1610.05819 | 2534446092 | The idea of representation has been used in various fields of study from data analysis to political science. In this paper, we define representativeness and describe a method to isolate data points that can represent the entire data set. Also, we show how the minimum set of representative data points can be generated. We use data from GLOBE (a project to study the effects on Land Change based on a set of parameters that include temperature, forest cover, human population, atmospheric parameters and many other variables) to test & validate the algorithm. Principal Component Analysis (PCA) is used to reduce the dimensions of the multivariate data set, so that the representative points can be generated efficiently and its Representativeness has been compared against Random Sampling of points from the data set. | Clustering techniques are a set of methods to group data points that are similar together @cite_15 . These characteristics of the groups are defined by a pattern of values in their variables. Clustering is an unsupervised learning method. It does not require a training data set to create a model. The groups in which the data points are to be classified need not be known at the start. Hence, clustering can be used for exploratory data analysis to identify patterns in the data. Clustering is a three stage process Extract features from the given set of points. Perform similarity measurement between data points. Create groups based on the similarity measurement. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1992419399"
],
"abstract": [
"Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters). The clustering problem has been addressed in many contexts and by researchers in many disciplines; this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis. However, clustering is a difficult problem combinatorially, and differences in assumptions and contexts in different communities has made the transfer of useful generic concepts and methodologies slow to occur. This paper presents an overviewof pattern clustering methods from a statistical pattern recognition perspective, with a goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners. We present a taxonomy of clustering techniques, and identify cross-cutting themes and recent advances. We also describe some important applications of clustering algorithms such as image segmentation, object recognition, and information retrieval."
]
} |
1610.05819 | 2534446092 | The idea of representation has been used in various fields of study from data analysis to political science. In this paper, we define representativeness and describe a method to isolate data points that can represent the entire data set. Also, we show how the minimum set of representative data points can be generated. We use data from GLOBE (a project to study the effects on Land Change based on a set of parameters that include temperature, forest cover, human population, atmospheric parameters and many other variables) to test & validate the algorithm. Principal Component Analysis (PCA) is used to reduce the dimensions of the multivariate data set, so that the representative points can be generated efficiently and its Representativeness has been compared against Random Sampling of points from the data set. | Clustering techniques are of different types and are mainly divided into two categories. Hierarchical - These techniques create groups of points which are similar to each other. Once a group is formed, the next level is created by combining groups that are similar. In this way, a hierarchy of groups is created, with all groups merged at the topmost level of the hierarchy. The structure is called a dendrogram @cite_15 . Partitional - These techniques create a single partition of the dataset, as compared to a dendrogram, which may have a high computation time; the problem occurs when the size of the dataset is large. Partitional techniques try to optimize a certain function based on which the partition is made. Calculating the optimal set of values for the function could again be computationally expensive, so an approximation is calculated by executing the algorithm multiple times on the same dataset until the function reaches a state that is close to optimal. For example, using squared error as the function to create partitions @cite_15 , the algorithm is executed until the squared error is reduced below a certain pre-determined threshold. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1992419399"
],
"abstract": [
"Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters). The clustering problem has been addressed in many contexts and by researchers in many disciplines; this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis. However, clustering is a difficult problem combinatorially, and differences in assumptions and contexts in different communities has made the transfer of useful generic concepts and methodologies slow to occur. This paper presents an overviewof pattern clustering methods from a statistical pattern recognition perspective, with a goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners. We present a taxonomy of clustering techniques, and identify cross-cutting themes and recent advances. We also describe some important applications of clustering algorithms such as image segmentation, object recognition, and information retrieval."
]
} |
1610.05819 | 2534446092 | The idea of representation has been used in various fields of study from data analysis to political science. In this paper, we define representativeness and describe a method to isolate data points that can represent the entire data set. Also, we show how the minimum set of representative data points can be generated. We use data from GLOBE (a project to study the effects on Land Change based on a set of parameters that include temperature, forest cover, human population, atmospheric parameters and many other variables) to test & validate the algorithm. Principal Component Analysis (PCA) is used to reduce the dimensions of the multivariate data set, so that the representative points can be generated efficiently and its Representativeness has been compared against Random Sampling of points from the data set. | The k-means clustering algorithm is a widely used algorithm @cite_4 . It is a centroid- or partition-based clustering technique that clusters all the data points into @math clusters. The algorithm starts by selecting an arbitrary set of centroids @math . It then assigns each point to the closest centroid @math . Once the points are clustered, it calculates the center of mass for each cluster to get a new set of centroids, and the previous steps are then repeated for the new centroids. After each iteration the set of centroids moves closer to the final set, until an iteration no longer changes the set of centroids chosen, i.e., the center of mass for each cluster remains constant. The algorithm stops computing after this point. The worst-case time complexity is @math @cite_4 , where @math is the number of data points, @math is the number of clusters, and the points are in a @math -dimensional space. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2132958733"
],
"abstract": [
"The k-means algorithm is a well-known method for partitioning n points that lie in the d-dimensional space into k clusters. Its main features are simplicity and speed in practice. Theoretically, however, the best known upper bound on its running time (i.e. O(nkd)) is, in general, exponential in the number of points (when kd=Ω(n log n)). Recently, Arthur and Vassilvitskii [2] showed a super-polynomial worst-case analysis, improving the best known lower bound from Ω(n) to 2Ω(√n) with a construction in d=Ω(√n) dimensions. In [2] they also conjectured the existence of super-polynomial lower bounds for any d≥ 2. Our contribution is twofold: we prove this conjecture and we improve the lower bound, by presenting a simple construction in the plane that leads to the exponential lower bound 2Ω(n)."
]
} |
1610.05819 | 2534446092 | The idea of representation has been used in various fields of study from data analysis to political science. In this paper, we define representativeness and describe a method to isolate data points that can represent the entire data set. Also, we show how the minimum set of representative data points can be generated. We use data from GLOBE (a project to study the effects on Land Change based on a set of parameters that include temperature, forest cover, human population, atmospheric parameters and many other variables) to test & validate the algorithm. Principal Component Analysis (PCA) is used to reduce the dimensions of the multivariate data set, so that the representative points can be generated efficiently and its Representativeness has been compared against Random Sampling of points from the data set. | [Nearest Neighbor Clustering] This is a hierarchical clustering technique. In this clustering method, the nearest neighbor to each data point is found and the point is assigned to that cluster. A Voronoi decomposition of the data points is performed @cite_9 . There is a threshold or quality function @math to bound the distance that is considered between a point and a cluster. Thus all the points are put into @math clusters, where @math is user-defined. The clustering is implemented using a graph-based structure: whenever the point closest to the current point is found, an edge is created between them, thus linking them in the same cluster @cite_15 . It is also called the agglomerative single-link clustering technique and has a time complexity of @math @cite_14 . | {
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_15"
],
"mid": [
"2171813245",
"1532325895",
"1992419399"
],
"abstract": [
"Clustering is often formulated as a discrete optimization problem. The objective is to find, among all partitions of the data set, the best one according to some quality measure. However, in the statistical setting where we assume that the finite data set has been sampled from some underlying space, the goal is not to find the best partition of the given sample, but to approximate the true partition of the underlying space. We argue that the discrete optimization approach usually does not achieve this goal, and instead can lead to inconsistency. We construct examples which provably have this behavior. As in the case of supervised learning, the cure is to restrict the size of the function classes under consideration. For appropriate \"small\" function classes we can prove very general consistency theorems for clustering optimization schemes. As one particular algorithm for clustering with a restricted function space we introduce \"nearest neighbor clustering\". Similar to the k-nearest neighbor classifier in supervised learning, this algorithm can be seen as a general baseline algorithm to minimize arbitrary clustering objective functions. We prove that it is statistically consistent for all commonly used clustering objective functions.",
"Class-tested and coherent, this groundbreaking new textbook teaches web-era information retrieval, including web search and the related areas of text classification and text clustering from basic concepts. Written from a computer science perspective by three leading experts in the field, it gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced undergraduates and graduate students in computer science. Based on feedback from extensive classroom experience, the book has been carefully structured in order to make teaching more natural and effective. Although originally designed as the primary text for a graduate or advanced undergraduate course in information retrieval, the book will also create a buzz for researchers and professionals alike.",
"Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters). The clustering problem has been addressed in many contexts and by researchers in many disciplines; this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis. However, clustering is a difficult problem combinatorially, and differences in assumptions and contexts in different communities has made the transfer of useful generic concepts and methodologies slow to occur. This paper presents an overviewof pattern clustering methods from a statistical pattern recognition perspective, with a goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners. We present a taxonomy of clustering techniques, and identify cross-cutting themes and recent advances. We also describe some important applications of clustering algorithms such as image segmentation, object recognition, and information retrieval."
]
} |
1610.05819 | 2534446092 | The idea of representation has been used in various fields of study from data analysis to political science. In this paper, we define representativeness and describe a method to isolate data points that can represent the entire data set. Also, we show how the minimum set of representative data points can be generated. We use data from GLOBE (a project to study the effects on Land Change based on a set of parameters that include temperature, forest cover, human population, atmospheric parameters and many other variables) to test & validate the algorithm. Principal Component Analysis (PCA) is used to reduce the dimensions of the multivariate data set, so that the representative points can be generated efficiently and its Representativeness has been compared against Random Sampling of points from the data set. | Consider a data set where each point has a large number of variables. These variables may have different scales of values, and different densities and variances. There are a number of possible problems with high-dimensional data @cite_11 : Processing high-dimensional data (especially when the number of data points is large) is expensive. Even though the number of dimensions is high, the data could often be classified or clustered using a smaller subset of variables. As the number of dimensions increases, the values for some variables may become sparse. This is known as the curse of dimensionality @cite_11 . The curse of dimensionality states that the number of sample points required to approximate a function increases exponentially as the number of variables (dimensions) increases. | {
"cite_N": [
"@cite_11"
],
"mid": [
"1602102683"
],
"abstract": [
"The problem of dimension reduction is introduced as a way to overcome the curse of the dimensionality when dealing with vector data in high-dimensional spaces and as a modelling tool for such data. It is defined as the search for a low-dimensional manifold that embeds the high-dimensional data. A classification of dimension reduction problems is proposed. A survey of several techniques for dimension reduction is given, including principal component analysis, projection pursuit and projection pursuit regression, principal curves and methods based on topologically continuous maps, such as Kohonen’s maps or the generalised topographic mapping. Neural network implementations for several of these techniques are also reviewed, such as the projection pursuit learning network and the BCM neuron with an objective function. Several appendices complement the mathematical treatment of the main text."
]
} |
1610.05819 | 2534446092 | The idea of representation has been used in various fields of study from data analysis to political science. In this paper, we define representativeness and describe a method to isolate data points that can represent the entire data set. Also, we show how the minimum set of representative data points can be generated. We use data from GLOBE (a project to study the effects on Land Change based on a set of parameters that include temperature, forest cover, human population, atmospheric parameters and many other variables) to test & validate the algorithm. Principal Component Analysis (PCA) is used to reduce the dimensions of the multivariate data set, so that the representative points can be generated efficiently and its Representativeness has been compared against Random Sampling of points from the data set. | A dimension reduction technique is a transformation which reduces the number of dimensions required to represent a sample. The reduced set of dimensions may be a subset of the original set of dimensions (for example, selected using information gain) or could be a completely new set of dimensions. Some of the standard dimension reduction techniques that can be used to transform a high-dimensional data set are Principal Component Analysis (PCA) and Self-Organizing Maps (SOM) @cite_11 . Neural networks with GIS have also been used for constructing a Land Transformation Model, which tries to forecast how land usage changes @cite_5 . Self-Organizing Maps have been used to perform environmental assessment of regions, grouping based on environmental conditions, and finding out which areas might deteriorate in the future @cite_0 . Once a SOM is trained, the nodes from the weight vector can be used as centroids representing their respective clusters. As the nodes may not be actual data points, the point closest to each node is used as a representative point. Training a SOM may require updating the weight vector over several iterations of the data set. The time complexity of a SOM is @math @cite_12 . | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_12",
"@cite_11"
],
"mid": [
"2092763973",
"2135822449",
"2165948731",
"1602102683"
],
"abstract": [
"A new method has been developed to perform environmental assessment at regional scale. This involves a combination of a self-organizing map (SOM) neural network and principal component analysis (PCA). The method is capable of clustering ecosystems in terms of environmental conditions and suggesting relative cumulative environmental impacts of multiple factors across a large region. Using data on land-cover, population, roads, streams, air pollution, and topography of the Mid-Atlantic region, the method was able to indicate areas that are in relatively poor environmental condition or vulnerable to future deterioration. Combining the strengths of SOM with those of PCA, the method offers an easy and useful way to perform a regional environmental assessment. Compared with traditional clustering and ranking approaches, the described method has considerable advantages, such as providing a valuable means for visualizing complex multidimensional environmental data at multiple scales and offering a single assessment or ranking needed for a regional environmental assessment while still facilitating the opportunity for more detailed analyses.",
"The Land Transformation Model (LTM), which couples geographic information systems (GIS) with artificial neural networks (ANNs) to forecast land use changes, is presented here. A variety of social, political, and environmental factors contribute to the model’s predictor variables of land use change. This paper presents a version of the LTMparameterized for Michigan’s Grand Traverse Bay Watershed and explores how factors such as roads, highways, residential streets, rivers, Great Lakes coastlines, recreational facilities, inland lakes, agricultural density, and quality of views can influence urbanization patterns in this coastal watershed. ANNs are used to learn the patterns of development in the region and test the predictive capacity of the model, while GIS is used to develop the spatial, predictor drivers and perform spatial analysis on the results. The predictive ability of the model improved at larger scales when assessed using a moving scalable window metric. Finally, the individual contribution of each predictor variable was examined and shown to vary across spatial scales. At the smallest scales, quality views were the strongest predictor variable. We interpreted the multi-scale influences of land use change, illustrating the relative influences of site (e.g. quality of views, residential streets) and situation (e.g. highways and county roads) variables at different scales. # 2002 Elsevier Science Ltd. All rights reserved.",
"Kohonen's self-organizing map (SOM) is a popular neural network architecture for solving problems in the field of explorative data analysis, clustering, and data visualization. One of the major drawbacks of the SOM algorithm is the difficulty for nonexpert users to interpret the information contained in a trained SOM. In this paper, this problem is addressed by introducing an enhanced version of the Clusot algorithm. This algorithm consists of two main steps: 1) the computation of the Clusot surface utilizing the information contained in a trained SOM and 2) the automatic detection of clusters in this surface. In the Clusot surface, clusters present in the underlying SOM are indicated by the local maxima of the surface. For SOMs with 2-D topology, the Clusot surface can, therefore, be considered as a convenient visualization technique. Yet, the presented approach is not restricted to a certain type of 2-D SOM topology and it is also applicable for SOMs having an n-dimensional grid topology.",
"The problem of dimension reduction is introduced as a way to overcome the curse of the dimensionality when dealing with vector data in high-dimensional spaces and as a modelling tool for such data. It is defined as the search for a low-dimensional manifold that embeds the high-dimensional data. A classification of dimension reduction problems is proposed. A survey of several techniques for dimension reduction is given, including principal component analysis, projection pursuit and projection pursuit regression, principal curves and methods based on topologically continuous maps, such as Kohonen’s maps or the generalised topographic mapping. Neural network implementations for several of these techniques are also reviewed, such as the projection pursuit learning network and the BCM neuron with an objective function. Several appendices complement the mathematical treatment of the main text."
]
} |
1610.05819 | 2534446092 | The idea of representation has been used in various fields of study from data analysis to political science. In this paper, we define representativeness and describe a method to isolate data points that can represent the entire data set. Also, we show how the minimum set of representative data points can be generated. We use data from GLOBE (a project to study the effects on Land Change based on a set of parameters that include temperature, forest cover, human population, atmospheric parameters and many other variables) to test & validate the algorithm. Principal Component Analysis (PCA) is used to reduce the dimensions of the multivariate data set, so that the representative points can be generated efficiently and its Representativeness has been compared against Random Sampling of points from the data set. | Hoffman et al. @cite_6 use Multivariate Spatio-Temporal Clustering (MSTC) to calculate the representativeness of sampling networks. MSTC can be performed using a combination of PCA and k-means clustering @cite_6 . The data set considered is high-dimensional and is assumed to contain a lot of redundant information. Hence the method involves reducing the number of dimensions using PCA at the beginning and then performing standard k-means clustering. Hoffman et al. also provide a set of improvements for performing PCA and k-means clustering. The time required to perform k-means clustering is reduced by decreasing the number of distance computations between the centroids and the other points, based on the clusters created and the new distances computed. The time complexity of PCA computation is reduced by parallelizing it. The summation of all Euclidean distances from points to their nearest sample locations or centroids is used to measure the representativeness of the sample set: the higher the sum, the lower the representativeness. | {
"cite_N": [
"@cite_6"
],
"mid": [
"204960955"
],
"abstract": [
"The authors have applied multivariate cluster analysis to a variety of environmental science domains, including ecological regionalization; environmental monitoring network design; analysis of satellite-, airborne-, and ground-based remote sensing, and climate model-model and model-measurement intercomparison. The clustering methodology employs a k-means statistical clustering algorithm that has been implemented in a highly scalable, parallel high performance computing (HPC) application. Because of its efficiency and use of HPC platforms, the clustering code may be applied as a data mining tool to analyze and compare very large data sets of high dimensionality, such as very long or high frequency resolution time series measurements or model output. The method was originally applied across geographic space and called Multivariate Ge- ographic Clustering (MGC). Now applied across space and through time, the environmental data mining method is called Multivariate Spatio-Temporal Clustering (MSTC). Described here are the clustering algorithm, recent code improvements that significantly reduce the time-to-solution, and a new parallel principal components analysis (PCA) tool that can analyze very large data sets. Finally, a sampling of the authors' applications of MGC and MSTC to problems in the environ- mental sciences are presented."
]
} |
1610.05883 | 2950625307 | Recent advances of 3D acquisition devices have enabled large-scale acquisition of 3D scene data. Such data, if completely and well annotated, can serve as useful ingredients for a wide spectrum of computer vision and graphics works such as data-driven modeling and scene understanding, object detection and recognition. However, annotating a vast amount of 3D scene data remains challenging due to the lack of an effective tool and or the complexity of 3D scenes (e.g. clutter, varying illumination conditions). This paper aims to build a robust annotation tool that effectively and conveniently enables the segmentation and annotation of massive 3D data. Our tool works by coupling 2D and 3D information via an interactive framework, through which users can provide high-level semantic annotation for objects. We have experimented our tool and found that a typical indoor scene could be well segmented and annotated in less than 30 minutes by using the tool, as opposed to a few hours if done manually. Along with the tool, we created a dataset of over a hundred 3D scenes associated with complete annotations using our tool. The tool and dataset are available at www.scenenn.net. | A common approach for scene segmentation is to perform the segmentation on RGB-D images and use object classifiers for labeling the segmentation results. Examples of this approach can be found in @cite_20 , @cite_1 . The spatial relationships between objects can also be exploited to infer the scene labels. For example, @cite_35 used object layout rules for scene labeling. The spatial relationship between objects was modeled by a conditional random field (CRF) in @cite_25 @cite_43 and directed graph in @cite_27 . | {
"cite_N": [
"@cite_35",
"@cite_1",
"@cite_43",
"@cite_27",
"@cite_25",
"@cite_20"
],
"mid": [
"2065476635",
"2067912884",
"",
"2166012320",
"2152571752",
"2066813062"
],
"abstract": [
"3D volumetric reasoning is important for truly understanding a scene. Humans are able to both segment each object in an image, and perceive a rich 3D interpretation of the scene, e.g., the space an object occupies, which objects support other objects, and which objects would, if moved, cause other objects to fall. We propose a new approach for parsing RGB-D images using 3D block units for volumetric reasoning. The algorithm fits image segments with 3D blocks, and iteratively evaluates the scene based on block interaction properties. We produce a 3D representation of the scene based on jointly optimizing over segmentations, block fitting, supporting relations, and object stability. Our algorithm incorporates the intuition that a good 3D representation of the scene is the one that fits the data well, and is a stable, self-supporting (i.e., one that does not topple) arrangement of objects. We experiment on several datasets including controlled and real indoor scenarios. Results show that our stability-reasoning framework improves RGB-D segmentation and scene volumetric representation.",
"We address the problems of contour detection, bottom-up grouping and semantic segmentation using RGB-D data. We focus on the challenging setting of cluttered indoor scenes, and evaluate our approach on the recently introduced NYU-Depth V2 (NYUD2) dataset [27]. We propose algorithms for object boundary detection and hierarchical segmentation that generalize the gPb-ucm approach of [2] by making effective use of depth information. We show that our system can label each contour with its type (depth, normal or albedo). We also propose a generic method for long-range amodal completion of surfaces and show its effectiveness in grouping. We then turn to the problem of semantic segmentation and propose a simple approach that classifies super pixels into the 40 dominant object categories in NYUD2. We use both generic and class-specific features to encode the appearance and geometry of objects. We also show how our approach can be used for scene classification, and how this contextual information in turn improves object recognition. In all of these tasks, we report significant improvements over the state-of-the-art.",
"",
"RGBD images with high quality annotations in the form of geometric (i.e., segmentation) and structural (i.e., how do the segments are mutually related in 3D) information provide valuable priors to a large number of scene and image manipulation applications. While it is now simple to acquire RGBD images, annotating them, automatically or manually, remains challenging especially in cluttered noisy environments. We present SmartAnnotator, an interactive system to facilitate annotating RGBD images. The system performs the tedious tasks of grouping pixels, creating potential abstracted cuboids, inferring object interactions in 3D, and comes up with various hypotheses. The user simply has to flip through a list of suggestions for segment labels, finalize a selection, and the system updates the remaining hypotheses. As objects are finalized, the process speeds up with fewer ambiguities to resolve. Further, as more scenes are annotated, the system makes better suggestions based on structural and geometric priors learns from the previous annotation sessions. We test our system on a large number of database scenes and report significant improvements over naive low-level annotation tools.",
"In this paper, we tackle the problem of indoor scene understanding using RGBD data. Towards this goal, we propose a holistic approach that exploits 2D segmentation, 3D geometry, as well as contextual relations between scenes and objects. Specifically, we extend the CPMC [3] framework to 3D in order to generate candidate cuboids, and develop a conditional random field to integrate information from different sources to classify the cuboids. With this formulation, scene classification and 3D object recognition are coupled and can be jointly solved through probabilistic inference. We test the effectiveness of our approach on the challenging NYU v2 dataset. The experimental results demonstrate that through effective evidence integration and holistic reasoning, our approach achieves substantial improvement over the state-of-the-art.",
"Scene labeling research has mostly focused on outdoor scenes, leaving the harder case of indoor scenes poorly understood. Microsoft Kinect dramatically changed the landscape, showing great potentials for RGB-D perception (color+depth). Our main objective is to empirically understand the promises and challenges of scene labeling with RGB-D. We use the NYU Depth Dataset as collected and analyzed by Silberman and Fergus [30]. For RGB-D features, we adapt the framework of kernel descriptors that converts local similarities (kernels) to patch descriptors. For contextual modeling, we combine two lines of approaches, one using a superpixel MRF, and the other using a segmentation tree. We find that (1) kernel descriptors are very effective in capturing appearance (RGB) and shape (D) similarities; (2) both superpixel MRF and segmentation tree are useful in modeling context; and (3) the key to labeling accuracy is the ability to efficiently train and test with large-scale data. We improve labeling accuracy on the NYU Dataset from 56.6% to 76.1%. We also apply our approach to image-only scene labeling and improve the accuracy on the Stanford Background Dataset from 79.4% to 82.9%."
]
} |
1610.05883 | 2950625307 | Recent advances of 3D acquisition devices have enabled large-scale acquisition of 3D scene data. Such data, if completely and well annotated, can serve as useful ingredients for a wide spectrum of computer vision and graphics works such as data-driven modeling and scene understanding, object detection and recognition. However, annotating a vast amount of 3D scene data remains challenging due to the lack of an effective tool and or the complexity of 3D scenes (e.g. clutter, varying illumination conditions). This paper aims to build a robust annotation tool that effectively and conveniently enables the segmentation and annotation of massive 3D data. Our tool works by coupling 2D and 3D information via an interactive framework, through which users can provide high-level semantic annotation for objects. We have experimented our tool and found that a typical indoor scene could be well segmented and annotated in less than 30 minutes by using the tool, as opposed to a few hours if done manually. Along with the tool, we created a dataset of over a hundred 3D scenes associated with complete annotations using our tool. The tool and dataset are available at www.scenenn.net. | Compared with 2D labels, 3D labels are often desired as they provide a more comprehensive understanding of the real world. 3D labels can be propagated by back-projecting 2D labels from image domain to 3D space. For example, @cite_19 used the labels provided in the ImageNet @cite_0 to infer 3D labels. In @cite_6 , 2D labels were obtained by drawing polygons. | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_6"
],
"mid": [
"2108598243",
"2149542945",
"1985238052"
],
"abstract": [
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"Recent years have witnessed a growing interest in understanding the semantics of point clouds in a wide variety of applications. However, point cloud labeling remains an open problem, due to the difficulty in acquiring sufficient 3D point labels towards training effective classifiers. In this paper, we overcome this challenge by utilizing the existing massive 2D semantic labeled datasets from decade-long community efforts, such as ImageNet and LabelMe, and a novel "cross-domain" label propagation approach. Our proposed method consists of two major novel components, Exemplar SVM based label propagation, which effectively addresses the cross-domain issue, and a graphical model based contextual refinement incorporating 3D constraints. Most importantly, the entire process does not require any training data from the target scenes, also with good scalability towards large scale applications. We evaluate our approach on the well-known Cornell Point Cloud Dataset, achieving much greater efficiency and comparable accuracy even without any 3D training data. Our approach shows further major gains in accuracy when the training data from the target scenes is used, outperforming state-of-the-art approaches with far better efficiency.",
"Existing scene understanding datasets contain only a limited set of views of a place, and they lack representations of complete 3D spaces. In this paper, we introduce SUN3D, a large-scale RGB-D video database with camera pose and object labels, capturing the full 3D extent of many places. The tasks that go into constructing such a dataset are difficult in isolation -- hand-labeling videos is painstaking, and structure from motion (SfM) is unreliable for large spaces. But if we combine them together, we make the dataset construction task much easier. First, we introduce an intuitive labeling tool that uses a partial reconstruction to propagate labels from one frame to another. Then we use the object labels to fix errors in the reconstruction. For this, we introduce a generalization of bundle adjustment that incorporates object-to-object correspondences. This algorithm works by constraining points for the same object from different frames to lie inside a fixed-size bounding box, parameterized by its rotation and translation. The SUN3D database, the source code for the generalized bundle adjustment, and the web-based 3D annotation tool are all available at http://sun3d.cs.princeton.edu."
]
} |
1610.05883 | 2950625307 | Recent advances of 3D acquisition devices have enabled large-scale acquisition of 3D scene data. Such data, if completely and well annotated, can serve as useful ingredients for a wide spectrum of computer vision and graphics works such as data-driven modeling and scene understanding, object detection and recognition. However, annotating a vast amount of 3D scene data remains challenging due to the lack of an effective tool and or the complexity of 3D scenes (e.g. clutter, varying illumination conditions). This paper aims to build a robust annotation tool that effectively and conveniently enables the segmentation and annotation of massive 3D data. Our tool works by coupling 2D and 3D information via an interactive framework, through which users can provide high-level semantic annotation for objects. We have experimented our tool and found that a typical indoor scene could be well segmented and annotated in less than 30 minutes by using the tool, as opposed to a few hours if done manually. Along with the tool, we created a dataset of over a hundred 3D scenes associated with complete annotations using our tool. The tool and dataset are available at www.scenenn.net. | Labeling directly on images is time consuming. Typically, a few thousands of images need to be handled. It is possible to perform matching among the images to propagate the annotations from one image to another, e.g. @cite_6 , but this process is less reliable. | {
"cite_N": [
"@cite_6"
],
"mid": [
"1985238052"
],
"abstract": [
"Existing scene understanding datasets contain only a limited set of views of a place, and they lack representations of complete 3D spaces. In this paper, we introduce SUN3D, a large-scale RGB-D video database with camera pose and object labels, capturing the full 3D extent of many places. The tasks that go into constructing such a dataset are difficult in isolation -- hand-labeling videos is painstaking, and structure from motion (SfM) is unreliable for large spaces. But if we combine them together, we make the dataset construction task much easier. First, we introduce an intuitive labeling tool that uses a partial reconstruction to propagate labels from one frame to another. Then we use the object labels to fix errors in the reconstruction. For this, we introduce a generalization of bundle adjustment that incorporates object-to-object correspondences. This algorithm works by constraining points for the same object from different frames to lie inside a fixed-size bounding box, parameterized by its rotation and translation. The SUN3D database, the source code for the generalized bundle adjustment, and the web-based 3D annotation tool are all available at http://sun3d.cs.princeton.edu."
]
} |
1610.05883 | 2950625307 | Recent advances of 3D acquisition devices have enabled large-scale acquisition of 3D scene data. Such data, if completely and well annotated, can serve as useful ingredients for a wide spectrum of computer vision and graphics works such as data-driven modeling and scene understanding, object detection and recognition. However, annotating a vast amount of 3D scene data remains challenging due to the lack of an effective tool and or the complexity of 3D scenes (e.g. clutter, varying illumination conditions). This paper aims to build a robust annotation tool that effectively and conveniently enables the segmentation and annotation of massive 3D data. Our tool works by coupling 2D and 3D information via an interactive framework, through which users can provide high-level semantic annotation for objects. We have experimented our tool and found that a typical indoor scene could be well segmented and annotated in less than 30 minutes by using the tool, as opposed to a few hours if done manually. Along with the tool, we created a dataset of over a hundred 3D scenes associated with complete annotations using our tool. The tool and dataset are available at www.scenenn.net. | 3D object templates can be used to segment 3D scenes. The templates can be organized in holistic models, e.g., @cite_39 , @cite_5 , @cite_30 , @cite_41 , or part-based models, e.g. @cite_22 . The segmentation can be performed on 3D point clouds, e.g. @cite_39 , @cite_30 , @cite_22 , or 3D patches, e.g. @cite_41 , @cite_5 , @cite_23 . | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_41",
"@cite_39",
"@cite_23",
"@cite_5"
],
"mid": [
"2097374608",
"2077263423",
"2049351243",
"1990345222",
"2178922201",
"2097696373"
],
"abstract": [
"We present an algorithm for recognition and reconstruction of scanned 3D indoor scenes. 3D indoor reconstruction is particularly challenging due to object interferences, occlusions and overlapping which yield incomplete yet very complex scene arrangements. Since it is hard to assemble scanned segments into complete models, traditional methods for object recognition and reconstruction would be inefficient. We present a search-classify approach which interleaves segmentation and classification in an iterative manner. Using a robust classifier we traverse the scene and gradually propagate classification information. We reinforce classification by a template fitting step which yields a scene reconstruction. We deform-to-fit templates to classified objects to resolve classification ambiguities. The resulting reconstruction is an approximation which captures the general scene arrangement. Our results demonstrate successful classification and reconstruction of cluttered indoor scenes, captured in just few minutes.",
"We present a novel solution to automatic semantic modeling of indoor scenes from a sparse set of low-quality RGB-D images. Such data presents challenges due to noise, low resolution, occlusion and missing depth information. We exploit the knowledge in a scene database containing 100s of indoor scenes with over 10,000 manually segmented and labeled mesh models of objects. In seconds, we output a visually plausible 3D scene, adapting these models and their parts to fit the input scans. Contextual relationships learned from the database are used to constrain reconstruction, ensuring semantic compatibility between both object models and parts. Small objects and objects with incomplete depth information which are difficult to recover reliably are processed with a two-stage approach. Major objects are recognized first, providing a known scene structure. 2D contour-based model retrieval is then used to recover smaller objects. Evaluations using our own data and two public datasets show that our approach can model typical real-world indoor scenes efficiently and robustly.",
"We present an interactive approach to semantic modeling of indoor scenes with a consumer-level RGBD camera. Using our approach, the user first takes an RGBD image of an indoor scene, which is automatically segmented into a set of regions with semantic labels. If the segmentation is not satisfactory, the user can draw some strokes to guide the algorithm to achieve better results. After the segmentation is finished, the depth data of each semantic region is used to retrieve a matching 3D model from a database. Each model is then transformed according to the image depth to yield the scene. For large scenes where a single image can only cover one part of the scene, the user can take multiple images to construct other parts of the scene. The 3D models built for all images are then transformed and unified into a complete scene. We demonstrate the efficiency and robustness of our approach by modeling several real-world scenes.",
"Large-scale acquisition of exterior urban environments is by now a well-established technology, supporting many applications in search, navigation, and commerce. The same is, however, not the case for indoor environments, where access is often restricted and the spaces are cluttered. Further, such environments typically contain a high density of repeated objects (e.g., tables, chairs, monitors, etc.) in regular or non-regular arrangements with significant pose variations and articulations. In this paper, we exploit the special structure of indoor environments to accelerate their 3D acquisition and recognition with a low-end handheld scanner. Our approach runs in two phases: (i) a learning phase wherein we acquire 3D models of frequently occurring objects and capture their variability modes from only a few scans, and (ii) a recognition phase wherein from a single scan of a new area, we identify previously seen objects but in different poses and locations at an average recognition time of 200ms/model. We evaluate the robustness and limits of the proposed recognition system using a range of synthetic and real world scans under challenging settings.",
"We propose a real-time approach for indoor scene reconstruction. It is capable of producing a ready-to-use 3D geometric model even while the user is still scanning the environment with a consumer depth camera. Our approach features explicit representations of planar regions and nonplanar objects extracted from the noisy feed of the depth camera, via an online structure analysis on the dynamic, incomplete data. The structural information is incorporated into the volumetric representation of the scene, resulting in a seamless integration with KinectFusion's global data structure and an efficient implementation of the whole reconstruction process. Moreover, heuristics based on rectilinear shapes in typical indoor scenes effectively eliminate camera tracking drift and further improve reconstruction accuracy. The instantaneous feedback enabled by our on-the-fly structure analysis, including repeated object recognition, allows the user to selectively scan the scene and produce high-fidelity large-scale models efficiently. We demonstrate the capability of our system with real-life examples.",
"We present the major advantages of a new 'object oriented' 3D SLAM paradigm, which takes full advantage in the loop of prior knowledge that many scenes consist of repeated, domain-specific objects and structures. As a hand-held depth camera browses a cluttered scene, real-time 3D object recognition and tracking provides 6DoF camera-object constraints which feed into an explicit graph of objects, continually refined by efficient pose-graph optimisation. This offers the descriptive and predictive power of SLAM systems which perform dense surface reconstruction, but with a huge representation compression. The object graph enables predictions for accurate ICP-based camera to model tracking at each live frame, and efficient active search for new objects in currently undescribed image regions. We demonstrate real-time incremental SLAM in large, cluttered environments, including loop closure, relocalisation and the detection of moved objects, and of course the generation of an object level scene description with the potential to enable interaction."
]
} |
1610.05156 | 2950795086 | We consider the problem of inferring a grammar describing the output of a functional program given a grammar describing its input. Solutions to this problem are helpful for detecting bugs or proving safety properties of functional programs, and several rewriting tools exist for solving this problem. However, known grammar inference techniques are not able to take evaluation strategies of the program into account. This yields very imprecise results when the evaluation strategy matters. In this work, we adapt the Tree Automata Completion algorithm to approximate accurately the set of terms reachable by rewriting under the innermost strategy. We formally prove that the proposed technique is sound and precise w.r.t. innermost rewriting. We show that those results can be extended to the leftmost and rightmost innermost case. The algorithms for the general innermost case have been implemented in the Timbuk reachability tool. Experiments show that it noticeably improves the accuracy of static analysis for functional programs using the call-by-value evaluation strategy. | Dealing with reachable terms and strategies was first addressed in @cite_40 in the exact case for innermost and outermost strategies but only for some restricted classes of TRSs, and also in @cite_30 . As far as we know, the technique we propose is the first to over-approximate terms reachable by innermost rewriting for any left-linear TRSs. For instance, Example and examples of and are in the scope of innermost equational completion but are outside of the classes of @cite_40 @cite_30 . For instance, the sum example is outside of classes of @cite_40 @cite_30 because a right-hand side of a rule has two nested defined symbols and is not shallow. | {
"cite_N": [
"@cite_30",
"@cite_40"
],
"mid": [
"2012732732",
"1486456646"
],
"abstract": [
"Preservation of regularity by a term rewriting system (TRS) states that the set of reachable terms from a tree automata (TA) language (aka regular term set) is also a TA language. It is an important and useful property, and there have been many works on identifying classes of TRS ensuring it; unfortunately, regularity is not preserved for restricted classes of TRS like shallow TRS. Nevertheless, this property has not been studied for important strategies of rewriting like the innermost strategy - which corresponds to the call by value computation of programming languages. We prove that the set of innermost-reachable terms from a TA language by a shallow TRS is not necessarily regular, but it can be recognized by a TA with equality and disequality constraints between brothers. As a consequence we conclude decidability of regularity of the reachable set of terms from a TA language by innermost rewriting and shallow TRS. This result is in contrast with plain (not necessarily innermost) rewriting for which we prove undecidability. We also show that, like for plain rewriting, innermost rewriting with linear and right-shallow TRS preserves regularity.",
"For a constructor-based rewrite system R, a regular set of ground terms E, and assuming some additional restrictions, we build a finite tree automaton that recognizes the descendants of E, i.e. the terms issued from E by rewriting, according to innermost, innermost-leftmost, and outermost strategies."
]
} |
1610.05156 | 2950795086 | We consider the problem of inferring a grammar describing the output of a functional program given a grammar describing its input. Solutions to this problem are helpful for detecting bugs or proving safety properties of functional programs, and several rewriting tools exist for solving this problem. However, known grammar inference techniques are not able to take evaluation strategies of the program into account. This yields very imprecise results when the evaluation strategy matters. In this work, we adapt the Tree Automata Completion algorithm to approximate accurately the set of terms reachable by rewriting under the innermost strategy. We formally prove that the proposed technique is sound and precise w.r.t. innermost rewriting. We show that those results can be extended to the leftmost and rightmost innermost case. The algorithms for the general innermost case have been implemented in the Timbuk reachability tool. Experiments show that it noticeably improves the accuracy of static analysis for functional programs using the call-by-value evaluation strategy. | Data flow analysis of higher-order functional programs is a long-standing and very active research topic @cite_21 @cite_5 @cite_24 . The techniques used range from tree grammars to specific formalisms such as HORS, PMRS, or ILTGs, and can deal with higher-order functions. We have shown, on an example, that defining an analysis taking the call-by-value evaluation strategy into account was also possible for higher-order functions. However, this has to be investigated more deeply. Applying innermost completion to higher-order functions would provide nice improvements on static analysis techniques. Indeed, state-of-the-art techniques like @cite_21 @cite_5 @cite_24 do not take evaluation strategies into account, and analysis results are thus coarse when program execution relies on a specific strategy. | {
"cite_N": [
"@cite_24",
"@cite_5",
"@cite_21"
],
"mid": [
"2006990447",
"",
"2149335820"
],
"abstract": [
"In recent years much interest has been shown in a class of functional languages including HASKELL, lazy ML, SASL/KRC/MIRANDA, ALFL, ORWELL, and PONDER. It has been seen that their expressive power is great, programs are compact, and program manipulation and transformation is much easier than with imperative languages or more traditional applicative ones. Common characteristics: they are purely applicative, manipulate trees as data objects, use pattern matching both to determine control flow and to decompose compound data structures, and use a "lazy" evaluation strategy. In this paper we describe a technique for data flow analysis of programs in this class by safely approximating the behavior of a certain class of term rewriting systems. In particular we obtain "safe" descriptions of program inputs, outputs and intermediate results by regular sets of trees. Potential applications include optimization, strictness analysis and partial evaluation. The technique improves earlier work because of its applicability to programs with higher-order functions, and with either eager or lazy evaluation. The technique addresses the call-by-name aspect of laziness, but not memoization.",
"",
"Type-based model checking algorithms for higher-order recursion schemes have recently emerged as a promising approach to the verification of functional programs. We introduce pattern-matching recursion schemes (PMRS) as an accurate model of computation for functional programs that manipulate algebraic data-types. PMRS are a natural extension of higher-order recursion schemes that incorporate pattern-matching in the defining rules. This paper is concerned with the following (undecidable) verification problem: given a correctness property φ, a functional program ℘ (qua PMRS) and a regular input set ℑ, does every term that is reachable from ℑ under rewriting by ℘ satisfy φ? To solve the PMRS verification problem, we present a sound semi-algorithm which is based on model-checking and counterexample guided abstraction refinement. Given a no-instance of the verification problem, the method is guaranteed to terminate. From an order-n PMRS and an input set generated by a regular tree grammar, our method constructs an order-n weak PMRS which over-approximates only the first-order pattern-matching behaviour, whilst remaining completely faithful to the higher-order control flow. Using a variation of Kobayashi's type-based approach, we show that the (trivial automaton) model-checking problem for weak PMRS is decidable. When a violation of the property is detected in the abstraction which does not correspond to a violation in the model, the abstraction is automatically refined by "unfolding" the pattern-matching rules in the program to give successively more and more accurate weak PMRS models."
]
} |
1610.05243 | 2532807140 | Recently, the development of neural machine translation (NMT) has significantly improved the translation quality of automatic machine translation. While most sentences are more accurate and fluent than translations by statistical machine translation (SMT)-based systems, in some cases, the NMT system produces translations that have a completely different meaning. This is especially the case when rare words occur. When using statistical machine translation, it has already been shown that significant gains can be achieved by simplifying the input in a preprocessing step. A commonly used example is the pre-reordering approach. In this work, we used phrase-based machine translation to pre-translate the input into the target language. Then a neural machine translation system generates the final hypothesis using the pre-translation. Thereby, we use either only the output of the phrase-based machine translation (PBMT) system or a combination of the PBMT output and the source sentence. We evaluate the technique on the English to German translation task. Using this approach we are able to outperform the PBMT system as well as the baseline neural MT system by up to 2 BLEU points. We analyzed the influence of the quality of the initial system on the final result. | The idea of linearly combining machine translation systems using different paradigms has already been used successfully for SMT and rule-based machine translation (RBMT) @cite_12 @cite_9 . They build an SMT system that post-edits the output of an RBMT system. Using the combination of SMT and RBMT, they could outperform both single systems. | {
"cite_N": [
"@cite_9",
"@cite_12"
],
"mid": [
"2153653739",
"2144091461"
],
"abstract": [
"We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several, previously proposed phrase-based translation models. Within our framework, we carry out a large number of experiments to understand better and explain why phrase-based models out-perform word-based models. Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations. Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance. Learning only syntactically motivated phrases degrades the performance of our systems.",
"This article describes the combination of a SYSTRAN system with a \"statistical post-editing\" (SPE) system. We document qualitative analysis on two experiments performed in the shared task of the ACL 2007 Workshop on Statistical Machine Translation. Comparative results and more integrated \"hybrid\" techniques are discussed."
]
} |
1610.05243 | 2532807140 | Recently, the development of neural machine translation (NMT) has significantly improved the translation quality of automatic machine translation. While most sentences are more accurate and fluent than translations by statistical machine translation (SMT)-based systems, in some cases, the NMT system produces translations that have a completely different meaning. This is especially the case when rare words occur. When using statistical machine translation, it has already been shown that significant gains can be achieved by simplifying the input in a preprocessing step. A commonly used example is the pre-reordering approach. In this work, we used phrase-based machine translation to pre-translate the input into the target language. Then a neural machine translation system generates the final hypothesis using the pre-translation. Thereby, we use either only the output of the phrase-based machine translation (PBMT) system or a combination of the PBMT output and the source sentence. We evaluate the technique on the English to German translation task. Using this approach we are able to outperform the PBMT system as well as the baseline neural MT system by up to 2 BLEU points. We analyzed the influence of the quality of the initial system on the final result. | Those experiments promote the area of automatic post-editing @cite_18 . Recently, it was shown that models based on neural MT are very successful in this task @cite_13 . | {
"cite_N": [
"@cite_18",
"@cite_13"
],
"mid": [
"2251994258",
"2405897494"
],
"abstract": [
"This paper presents the results of the WMT15 shared tasks, which included a standard news translation task, a metrics task, a tuning task, a task for run-time estimation of machine translation quality, and an automatic post-editing task. This year, 68 machine translation systems from 24 institutions were submitted to the ten translation directions in the standard translation task. An additional 7 anonymized systems were included, and were then evaluated both automatically and manually. The quality estimation task had three subtasks, with a total of 10 teams, submitting 34 entries. The pilot automatic postediting task had a total of 4 teams, submitting 7 entries.",
"This paper describes the submission of the AMU (Adam Mickiewicz University) team to the Automatic Post-Editing (APE) task of WMT 2016. We explore the application of neural translation models to the APE problem and achieve good results by treating different models as components in a log-linear model, allowing for multiple inputs (the MT-output and the source) that are decoded to the same target language (post-edited translations). A simple string-matching penalty integrated within the log-linear model is used to control for higher faithfulness with regard to the raw machine translation output. To overcome the problem of too little training data, we generate large amounts of artificial data. Our submission improves over the uncorrected baseline on the unseen test set by -3.2 TER and +5.5 BLEU and outperforms any other system submitted to the shared-task by a large margin."
]
} |
1610.05243 | 2532807140 | Recently, the development of neural machine translation (NMT) has significantly improved the translation quality of automatic machine translation. While most sentences are more accurate and fluent than translations by statistical machine translation (SMT)-based systems, in some cases, the NMT system produces translations that have a completely different meaning. This is especially the case when rare words occur. When using statistical machine translation, it has already been shown that significant gains can be achieved by simplifying the input in a preprocessing step. A commonly used example is the pre-reordering approach. In this work, we used phrase-based machine translation to pre-translate the input into the target language. Then a neural machine translation system generates the final hypothesis using the pre-translation. Thereby, we use either only the output of the phrase-based machine translation (PBMT) system or a combination of the PBMT output and the source sentence. We evaluate the technique on the English to German translation task. Using this approach we are able to outperform the PBMT system as well as the baseline neural MT system by up to 2 BLEU points. We analyzed the influence of the quality of the initial system on the final result. | In addition, the usefulness of using the translations of the training data of a PBMT system has been shown. The translations have been used to re-train the translation model @cite_10 or to train additional discriminative translation models @cite_21 . | {
"cite_N": [
"@cite_21",
"@cite_10"
],
"mid": [
"2144725461",
"2123635983"
],
"abstract": [
"The Discriminative Word Lexicon (DWL) is a maximum-entropy model that predicts the target word probability given the source sentence words. We present two ways to extend a DWL to improve its ability to model the word translation probability in a phrase-based machine translation (PBMT) system. While DWLs are able to model the global source information, they ignore the structure of the source and target sentence. We propose to include this structure by modeling the source sentence as a bag-of-n-grams and features depending on the surrounding target words. Furthermore, as the standard DWL does not get any feedback from the MT system, we change the DWL training process to explicitly focus on addressing MT errors.",
"Several attempts have been made to learn phrase translation probabilities for phrase-based statistical machine translation that go beyond pure counting of phrases in word-aligned training data. Most approaches report problems with over-fitting. We describe a novel leaving-one-out approach to prevent over-fitting that allows us to train phrase models that show improved translation performance on the WMT08 Europarl German-English task. In contrast to most previous work where phrase models were trained separately from other models used in translation, we include all components such as single word lexica and reordering models in training. Using this consistent training of phrase models we are able to achieve improvements of up to 1.4 points in BLEU. As a side effect, the phrase table size is reduced by more than 80%."
]
} |
1610.05243 | 2532807140 | Recently, the development of neural machine translation (NMT) has significantly improved the translation quality of automatic machine translation. While most sentences are more accurate and fluent than translations by statistical machine translation (SMT)-based systems, in some cases, the NMT system produces translations that have a completely different meaning. This is especially the case when rare words occur. When using statistical machine translation, it has already been shown that significant gains can be achieved by simplifying the input in a preprocessing step. A commonly used example is the pre-reordering approach. In this work, we used phrase-based machine translation to pre-translate the input into the target language. Then a neural machine translation system generates the final hypothesis using the pre-translation. Thereby, we use either only the output of the phrase-based machine translation (PBMT) system or a combination of the PBMT output and the source sentence. We evaluate the technique on the English to German translation task. Using this approach we are able to outperform the PBMT system as well as the baseline neural MT system by up to 2 BLEU points. We analyzed the influence of the quality of the initial system on the final result. | In order to improve the translation of rare words in NMT, authors try to translate words that are not in the vocabulary in a post-processing step @cite_17 . In @cite_8 , a method to split words into sub-word units was presented to limit the vocabulary size. Also the integration of lexical probabilities into NMT was successfully investigated @cite_15 . | {
"cite_N": [
"@cite_8",
"@cite_15",
"@cite_17"
],
"mid": [
"1816313093",
"2410217169",
"2950580142"
],
"abstract": [
"Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem. Previous work addresses the translation of out-of-vocabulary words by backing off to a dictionary. In this paper, we introduce a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units. This is based on the intuition that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations). We discuss the suitability of different word segmentation techniques, including simple character n-gram models and a segmentation based on the byte pair encoding compression algorithm, and empirically show that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English-German and English-Russian by 1.1 and 1.3 BLEU, respectively.",
"Neural machine translation (NMT) often makes mistakes in translating low-frequency content words that are essential to understanding the meaning of the sentence. We propose a method to alleviate this problem by augmenting NMT systems with discrete translation lexicons that efficiently encode translations of these low-frequency words. We describe a method to calculate the lexicon probability of the next word in the translation candidate by using the attention vector of the NMT model to select which source word lexical probabilities the model should focus on. We test two methods to combine this probability with the standard NMT probability: (1) using it as a bias, and (2) linear interpolation. Experiments on two corpora show an improvement of 2.0-2.3 BLEU and 0.13-0.44 NIST score, and faster convergence time.",
"Neural Machine Translation (NMT) is a new approach to machine translation that has shown promising results that are comparable to traditional approaches. A significant weakness in conventional NMT systems is their inability to correctly translate very rare words: end-to-end NMTs tend to have relatively small vocabularies with a single unk symbol that represents every possible out-of-vocabulary (OOV) word. In this paper, we propose and implement an effective technique to address this problem. We train an NMT system on data that is augmented by the output of a word alignment algorithm, allowing the NMT system to emit, for each OOV word in the target sentence, the position of its corresponding word in the source sentence. This information is later utilized in a post-processing step that translates every OOV word using a dictionary. Our experiments on the WMT14 English to French translation task show that this method provides a substantial improvement of up to 2.8 BLEU points over an equivalent NMT system that does not use this technique. With 37.5 BLEU points, our NMT system is the first to surpass the best result achieved on a WMT14 contest task."
]
} |
1610.05350 | 2949317190 | Several probabilistic models from high-dimensional statistics and machine learning reveal an intriguing --and yet poorly understood-- dichotomy. Either simple local algorithms succeed in estimating the object of interest, or even sophisticated semi-definite programming (SDP) relaxations fail. In order to explore this phenomenon, we study a classical SDP relaxation of the minimum graph bisection problem, when applied to Erd o s-Renyi random graphs with bounded average degree @math , and obtain several types of results. First, we use a dual witness construction (using the so-called non-backtracking matrix of the graph) to upper bound the SDP value. Second, we prove that a simple local algorithm approximately solves the SDP to within a factor @math of the upper bound. In particular, the local algorithm is at most @math suboptimal, and @math suboptimal for large degree. We then analyze a more sophisticated local algorithm, which aggregates information according to the harmonic measure on the limiting Galton-Watson (GW) tree. The resulting lower bound is expressed in terms of the conductance of the GW tree and matches surprisingly well the empirically determined SDP values on large-scale Erd o s-Renyi graphs. We finally consider the planted partition model. In this case, purely local algorithms are known to fail, but they do succeed if a small amount of side information is available. Our results imply quantitative bounds on the threshold for partial recovery using SDP in this model. | The SDP relaxation ) has attracted a significant amount of work since Goemans-Williamson's seminal work on the MAXCUT problem @cite_6 . In the last few years, several authors used this approach for clustering or community detection and derived optimality or near-optimality guarantees. An incomplete list includes @cite_19 @cite_1 @cite_18 @cite_32 @cite_8 . 
Under the assumption that @math is generated according to the stochastic block model (whose two-groups version was introduced in Section ), these papers provide conditions under which the SDP approach recovers the vertex labels. This can be regarded as a 'high signal-to-noise ratio' regime, in which (with high probability) the SDP solution has rank one and is deterministic (i.e. independent of the graph realization). In contrast, we focus on the 'pure noise' scenario in which @math is an Erdős-Rényi random graph, or on the two-groups stochastic block-model @math close to the detection threshold. In this regime, the SDP optimum has rank larger than one and is non-deterministic. The only papers that have addressed this regime using SDP are @cite_3 @cite_28 , discussed previously. | {
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_28",
"@cite_1",
"@cite_32",
"@cite_6",
"@cite_3",
"@cite_19"
],
"mid": [
"",
"2952769483",
"2216459995",
"2963264680",
"1868531013",
"1985123706",
"",
""
],
"abstract": [
"",
"We study exact recovery conditions for convex relaxations of point cloud clustering problems, focusing on two of the most common optimization problems for unsupervised clustering: @math -means and @math -median clustering. Motivations for focusing on convex relaxations are: (a) they come with a certificate of optimality, and (b) they are generic tools which are relatively parameter-free, not tailored to specific assumptions over the input. More precisely, we consider the distributional setting where there are @math clusters in @math and data from each cluster consists of @math points sampled from a symmetric distribution within a ball of unit radius. We ask: what is the minimal separation distance between cluster centers needed for convex relaxations to exactly recover these @math clusters as the optimal integral solution? For the @math -median linear programming relaxation we show a tight bound: exact recovery is obtained given arbitrarily small pairwise separation @math between the balls. In other words, the pairwise center separation is @math . Under the same distributional model, the @math -means LP relaxation fails to recover such clusters at separation as large as @math . Yet, if we enforce PSD constraints on the @math -means LP, we get exact cluster recovery at center separation @math . In contrast, common heuristics such as Lloyd's algorithm (a.k.a. the @math -means algorithm) can fail to recover clusters in this setting; even with arbitrarily large cluster separation, k-means++ with overseeding by any constant factor fails with high probability at exact cluster recovery. To complement the theoretical analysis, we provide an experimental study of the recovery guarantees for these various methods, and discuss several open problems which these experiments suggest.",
"Denote by A the adjacency matrix of an Erdos-Renyi graph with bounded average degree. We consider the problem of maximizing @math over the set of positive semidefinite matrices X with diagonal entries X_ii=1. We prove that for large (bounded) average degree d, the value of this semidefinite program (SDP) is --with high probability-- 2n*sqrt(d) + n*o(sqrt(d)) + o(n). For a random regular graph of degree d, we prove that the SDP value is 2n*sqrt(d-1)+o(n), matching a spectral upper bound. Informally, Erdos-Renyi graphs appear to behave similarly to random regular graphs for semidefinite programming. We next consider the sparse, two-groups, symmetric community detection problem (also known as planted partition). We establish that SDP achieves the information-theoretically optimal detection threshold for large (bounded) degree. Namely, under this model, the vertex set is partitioned into subsets of size n/2, with edge probability a/n (within group) and b/n (across). We prove that SDP detects the partition with high probability provided (a-b)^2/(4d) > 1+o_d(1), with d = (a+b)/2. By comparison, the information theoretic threshold for detecting the hidden partition is (a-b)^2/(4d) > 1: SDP is nearly optimal for large bounded average degree. Our proof is based on tools from different research areas: (i) A new 'higher-rank' Grothendieck inequality for symmetric matrices; (ii) An interpolation method inspired from statistical physics; (iii) An analysis of the eigenvectors of deformed Gaussian random matrices.",
"The stochastic block model with two communities, or equivalently the planted bisection model, is a popular model of random graph exhibiting a cluster behavior. In the symmetric case, the graph has two equally sized clusters and vertices connect with probability @math within clusters and @math across clusters. In the past two decades, a large body of literature in statistics and computer science has focused on providing lower bounds on the scaling of @math to ensure exact recovery. In this paper, we identify a sharp threshold phenomenon for exact recovery: if @math and @math are constant (with @math ), recovering the communities with high probability is possible if @math and is impossible if @math . In particular, this improves the existing bounds. This also sets a new line of sight for efficient clustering algorithms. While maximum likelihood (ML) achieves the optimal threshold (by definition), it is in the worst case NP-hard. This paper proposes an efficient algorithm based on a semidefinite programming relaxation of ML, which is proved to succeed in recovering the communities close to the threshold, while numerical experiments suggest that it may achieve the threshold. An efficient algorithm that succeeds all the way down to the threshold is also obtained using a partial recovery algorithm combined with a local improvement procedure.",
"Resolving a conjecture of Abbe, Bandeira and Hall, the authors have recently shown that the semidefinite programming (SDP) relaxation of the maximum likelihood estimator achieves the sharp threshold for exactly recovering the community structure under the binary stochastic block model of two equal-sized clusters. The same was shown for the case of a single cluster and outliers. Extending the proof techniques, in this paper it is shown that SDP relaxations also achieve the sharp recovery threshold in the following cases: (1) Binary stochastic block model with two clusters of sizes proportional to network size but not necessarily equal; (2) Stochastic block model with a fixed number of equal-sized clusters; (3) Binary censored block model with the background graph being Erdős-Rényi. Furthermore, a sufficient condition is given for an SDP procedure to achieve exact recovery for the general case of a fixed number of clusters plus outliers. These results demonstrate the versatility of SDP relaxation as a simple, general purpose, computationally feasible methodology for community detection.",
"We present randomized approximation algorithms for the maximum cut (MAX CUT) and maximum 2-satisfiability (MAX 2SAT) problems that always deliver solutions of expected value at least .87856 times the optimal value. These algorithms use a simple and elegant technique that randomly rounds the solution to a nonlinear programming relaxation. This relaxation can be interpreted both as a semidefinite program and as an eigenvalue minimization problem. The best previously known approximation algorithms for these problems had performance guarantees of 1/2 for MAX CUT and 3/4 for MAX 2SAT. Slight extensions of our analysis lead to a .79607-approximation algorithm for the maximum directed cut problem (MAX DICUT) and a .758-approximation algorithm for MAX SAT, where the best previously known approximation algorithms had performance guarantees of 1/4 and 3/4, respectively. Our algorithm gives the first substantial progress in approximating MAX CUT in nearly twenty years, and represents the first use of semidefinite programming in the design of approximation algorithms.",
"",
""
]
} |
1610.05350 | 2949317190 | Several probabilistic models from high-dimensional statistics and machine learning reveal an intriguing --and yet poorly understood-- dichotomy. Either simple local algorithms succeed in estimating the object of interest, or even sophisticated semi-definite programming (SDP) relaxations fail. In order to explore this phenomenon, we study a classical SDP relaxation of the minimum graph bisection problem, when applied to Erd o s-Renyi random graphs with bounded average degree @math , and obtain several types of results. First, we use a dual witness construction (using the so-called non-backtracking matrix of the graph) to upper bound the SDP value. Second, we prove that a simple local algorithm approximately solves the SDP to within a factor @math of the upper bound. In particular, the local algorithm is at most @math suboptimal, and @math suboptimal for large degree. We then analyze a more sophisticated local algorithm, which aggregates information according to the harmonic measure on the limiting Galton-Watson (GW) tree. The resulting lower bound is expressed in terms of the conductance of the GW tree and matches surprisingly well the empirically determined SDP values on large-scale Erd o s-Renyi graphs. We finally consider the planted partition model. In this case, purely local algorithms are known to fail, but they do succeed if a small amount of side information is available. Our results imply quantitative bounds on the threshold for partial recovery using SDP in this model. | Several papers applied sophisticated spectral methods to the stochastic block model near the detection threshold @cite_37 @cite_29 @cite_39 . Our upper bound in Theorem is based on a duality argument, where we establish feasibility of a certain dual witness construction using an argument similar to @cite_39 . | {
"cite_N": [
"@cite_37",
"@cite_29",
"@cite_39"
],
"mid": [
"2023348178",
"1914749871",
""
],
"abstract": [
"Decelle, Krzakala, Moore and Zdeborová [1] conjectured the existence of a sharp threshold on model parameters for community detection in sparse random graphs drawn from the stochastic block model. Mossel, Neeman and Sly [2] established the negative part of the conjecture, proving impossibility of non-trivial reconstruction below the threshold. In this work we solve the positive part of the conjecture. To that end we introduce a modified adjacency matrix B which counts self-avoiding paths of a given length ℓ between pairs of nodes. We then prove that for logarithmic length ℓ, the leading eigenvectors of this modified matrix provide a non-trivial reconstruction of the underlying structure, thereby settling the conjecture. A key step in the proof consists in establishing a weak Ramanujan property of the constructed matrix B. Namely, the spectrum of B consists in two leading eigenvalues ρ(B), λ2 and n − 2 eigenvalues of a lower order O(n^ε √ρ(B)) for all ε > 0, ρ(B) denoting B's spectral radius.",
"We study a random graph model named the \"block model\" in statistics and the \"planted partition model\" in theoretical computer science. In its simplest form, this is a random graph with two equal-sized clusters, with a between-class edge probability of @math and a within-class edge probability of @math . A striking conjecture of Decelle, Krzakala, Moore and Zdeborová based on deep, non-rigorous ideas from statistical physics, gave a precise prediction for the algorithmic threshold of clustering in the sparse planted partition model. In particular, if @math and @math , @math and @math then they conjectured that it is possible to efficiently cluster in a way correlated with the true partition if @math and impossible if @math for some sufficiently large @math . In a previous work, we proved that indeed it is information theoretically impossible to cluster if @math . A different independent proof of the same result was recently obtained by Laurent Massoulié.",
""
]
} |
1610.05350 | 2949317190 | Several probabilistic models from high-dimensional statistics and machine learning reveal an intriguing --and yet poorly understood-- dichotomy. Either simple local algorithms succeed in estimating the object of interest, or even sophisticated semi-definite programming (SDP) relaxations fail. In order to explore this phenomenon, we study a classical SDP relaxation of the minimum graph bisection problem, when applied to Erd o s-Renyi random graphs with bounded average degree @math , and obtain several types of results. First, we use a dual witness construction (using the so-called non-backtracking matrix of the graph) to upper bound the SDP value. Second, we prove that a simple local algorithm approximately solves the SDP to within a factor @math of the upper bound. In particular, the local algorithm is at most @math suboptimal, and @math suboptimal for large degree. We then analyze a more sophisticated local algorithm, which aggregates information according to the harmonic measure on the limiting Galton-Watson (GW) tree. The resulting lower bound is expressed in terms of the conductance of the GW tree and matches surprisingly well the empirically determined SDP values on large-scale Erd o s-Renyi graphs. We finally consider the planted partition model. In this case, purely local algorithms are known to fail, but they do succeed if a small amount of side information is available. Our results imply quantitative bounds on the threshold for partial recovery using SDP in this model. | By construction, local algorithms can be applied to infinite random graphs, and have a well defined value provided the graph distribution is unimodular (see below). Asymptotic results for graph sequences can be read-off' these infinite-graph settings (our proofs will use this device multiple times). In this context, the (random) solutions generated by local algorithms, together with their limits in the weak topology, are referred to as factors of i.i.d. 
processes' @cite_14 . | {
"cite_N": [
"@cite_14"
],
"mid": [
"2110303238"
],
"abstract": [
"Classical ergodic theory for integer-group actions uses entropy as a complete invariant for isomorphism of IID (independent, identically distributed) processes (a.k.a. product measures). This theory holds for amenable groups as well. Despite recent spectacular progress of Bowen, the situation for non-amenable groups, including free groups, is still largely mysterious. We present some illustrative results and open questions on free groups, which are particularly interesting in combinatorics, statistical physics, and probability. Our results include bounds on minimum and maximum bisection for random cubic graphs that improve on all past bounds."
]
} |
1610.04787 | 2949729859 | Collecting training images for all visual categories is not only expensive but also impractical. Zero-shot learning (ZSL), especially using attributes, offers a pragmatic solution to this problem. However, at test time most attribute-based methods require a full description of attribute associations for each unseen class. Providing these associations is time consuming and often requires domain specific knowledge. In this work, we aim to carry out attribute-based zero-shot classification in an unsupervised manner. We propose an approach to learn relations that couples class embeddings with their corresponding attributes. Given only the name of an unseen class, the learned relationship model is used to automatically predict the class-attribute associations. Furthermore, our model facilitates transferring attributes across data sets without additional effort. Integrating knowledge from multiple sources results in a significant additional improvement in performance. We evaluate on two public data sets: Animals with Attributes and aPascal aYahoo. Our approach outperforms state-of-the-art methods in both predicting class-attribute associations and unsupervised ZSL by a large margin. | In ZSL the set of train and test classes are disjoint. That is, while we have many labeled samples of the train classes to learn a visual model, we have never observed examples of the test class (a.k.a. unseen class). In order to construct a visual model for the unseen class, we first need to establish its relation to the visual knowledge that is obtained from the training data. One of the prominent approaches in the literature is attribute-based ZSL. Attributes describe visual aspects of the object, like its shape, texture and parts @cite_7 . Hence, the recognition paradigm is shifted from labeling to describing @cite_8 @cite_27 @cite_18 . 
In particular, attributes act as an intermediate semantic representation that can be easily transferred and shared with new visual concepts @cite_36 @cite_19 @cite_5 . In ZSL, attributes have been used either directly @cite_19 @cite_5 @cite_26 , guided by hierarchical information @cite_41 , or in transductive settings @cite_37 @cite_22 . | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_37",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_36",
"@cite_41",
"@cite_19",
"@cite_27",
"@cite_5"
],
"mid": [
"2019524107",
"2064851185",
"43954826",
"2151575489",
"2125560515",
"1946323491",
"",
"2005285092",
"2098411764",
"2112308483",
"2134270519"
],
"abstract": [
"When describing images, humans tend not to talk about the obvious, but rather mention what they find interesting. We argue that abnormalities and deviations from typicalities are among the most important components that form what is worth mentioning. In this paper we introduce the abnormality detection as a recognition problem and show how to model typicalities and, consequently, meaningful deviations from prototypical properties of categories. Our model can recognize abnormalities and report the main reasons of any recognized abnormality. We also show that abnormality predictions can help image categorization. We introduce the abnormality detection dataset and show interesting results on how to reason about abnormalities.",
"In this paper we explore the idea of using high-level semantic concepts, also called attributes, to represent human actions from videos and argue that attributes enable the construction of more descriptive models for human action recognition. We propose a unified framework wherein manually specified attributes are: i) selected in a discriminative fashion so as to account for intra-class variability; ii) coherently integrated with data-driven attributes to make the attribute set more descriptive. Data-driven attributes are automatically inferred from the training data using an information theoretic approach. Our framework is built upon a latent SVM formulation where latent variables capture the degree of importance of each attribute for each action class. We also demonstrate that our attribute-based action representation can be effectively used to design a recognition procedure for classifying novel action classes for which no training samples are available. We test our approach on several publicly available datasets and obtain promising results that quantitatively demonstrate our theoretical claims.",
"Most existing zero-shot learning approaches exploit transfer learning via an intermediate-level semantic representation such as visual attributes or semantic word vectors. Such a semantic representation is shared between an annotated auxiliary dataset and a target dataset with no annotation. A projection from a low-level feature space to the semantic space is learned from the auxiliary dataset and is applied without adaptation to the target dataset. In this paper we identify an inherent limitation with this approach. That is, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset domain are biased when applied directly to the target dataset domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding, to solve it. It is ‘transductive’ in that unlabelled target data points are explored for projection adaptation, and ‘multi-view’ in that both low-level feature (view) and multiple semantic representations (views) are embedded to rectify the projection shift. We demonstrate through extensive experiments that our framework (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) achieves state-of-the-art recognition results on image and video benchmark datasets, and (4) enables novel cross-view annotation tasks.",
"Category models for objects or activities typically rely on supervised learning requiring sufficiently large training sets. Transferring knowledge from known categories to novel classes with no or only a few labels is far less researched even though it is a common scenario. In this work, we extend transfer learning with semi-supervised learning to exploit unlabeled instances of (novel) categories with no or only a few labeled instances. Our proposed approach Propagated Semantic Transfer combines three techniques. First, we transfer information from known to novel categories by incorporating external knowledge, such as linguistic or expert-specified information, e.g., by a mid-level layer of semantic attributes. Second, we exploit the manifold structure of novel classes. More specifically we adapt a graph-based learning algorithm - so far only used for semi-supervised learning -to zero-shot and few-shot learning. Third, we improve the local neighborhood in such graph structures by replacing the raw feature-based representation with a mid-level object- or attribute-based representation. We evaluate our approach on three challenging datasets in two different applications, namely on Animals with Attributes and ImageNet for image classification and on MPII Composites for activity recognition. Our approach consistently outperforms state-of-the-art transfer and semi-supervised approaches on all datasets.",
"We present a probabilistic generative model of visual attributes, together with an efficient learning algorithm. Attributes are visual qualities of objects, such as 'red', 'striped', or 'spotted'. The model sees attributes as patterns of image segments, repeatedly sharing some characteristic properties. These can be any combination of appearance, shape, or the layout of segments within the pattern. Moreover, attributes with general appearance are taken into account, such as the pattern of alternation of any two colors which is characteristic for stripes. To enable learning from unsegmented training images, the model is learnt discriminatively, by optimizing a likelihood ratio. As demonstrated in the experimental evaluation, our model can learn in a weakly supervised setting and encompasses a broad range of attributes. We show that attributes can be learnt starting from a text query to Google image search, and can then be used to recognize the attribute and determine its spatial extent in novel real-world images.",
"We address the problem of describing people based on fine-grained clothing attributes. This is an important problem for many practical applications, such as identifying target suspects or finding missing people based on detailed clothing descriptions in surveillance videos or consumer photos. We approach this problem by first mining clothing images with fine-grained attribute labels from online shopping stores. A large-scale dataset is built with about one million images and fine-detailed attribute sub-categories, such as various shades of color (e.g., watermelon red, rosy red, purplish red), clothing types (e.g., down jacket, denim jacket), and patterns (e.g., thin horizontal stripes, houndstooth). As these images are taken in ideal pose lighting background conditions, it is unreliable to directly use them as training data for attribute prediction in the domain of unconstrained images captured, for example, by mobile phones or surveillance cameras. In order to bridge this gap, we propose a novel double-path deep domain adaptation network to model the data from the two domains jointly. Several alignment cost layers placed inbetween the two columns ensure the consistency of the two domain features and the feasibility to predict unseen attribute categories in one of the domains. Finally, to achieve a working system with automatic human body alignment, we trained an enhanced RCNN-based detector to localize human bodies in images. Our extensive experimental evaluation demonstrates the effectiveness of the proposed approach for describing people based on fine-grained clothing attributes.",
"",
"Attribute based knowledge transfer has proven very successful in visual object analysis and learning previously unseen classes. However, the common approach learns and transfers attributes without taking into consideration the embedded structure between the categories in the source set. Such information provides important cues on the intraattribute variations. We propose to capture these variations in a hierarchical model that expands the knowledge source with additional abstraction levels of attributes. We also provide a novel transfer approach that can choose the appropriate attributes to be shared with an unseen class. We evaluate our approach on three public datasets: a Pascal, Animals with Attributes and CUB-200-2011 Birds. The experiments demonstrate the effectiveness of our model with significant improvement over state-of-the-art.",
"We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.",
"Visual attributes are powerful features for many different applications in computer vision such as object detection and scene recognition. Visual attributes present another application that has not been examined as rigorously: verbal communication from a computer to a human. Since many attributes are nameable, the computer is able to communicate these concepts through language. However, this is not a trivial task. Given a set of attributes, selecting a subset to be communicated is task dependent. Moreover, because attribute classifiers are noisy, it is important to find ways to deal with this uncertainty. We address the issue of communication by examining the task of composing an automatic description of a person in a group photo that distinguishes him from the others. We introduce an efficient, principled method for choosing which attributes are included in a short description to maximize the likelihood that a third party will correctly guess to which person the description refers. We compare our algorithm to computer baselines and human describers, and show the strength of our method in creating effective descriptions.",
"We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes."
]
} |
1610.04787 | 2949729859 | Collecting training images for all visual categories is not only expensive but also impractical. Zero-shot learning (ZSL), especially using attributes, offers a pragmatic solution to this problem. However, at test time most attribute-based methods require a full description of attribute associations for each unseen class. Providing these associations is time consuming and often requires domain specific knowledge. In this work, we aim to carry out attribute-based zero-shot classification in an unsupervised manner. We propose an approach to learn relations that couples class embeddings with their corresponding attributes. Given only the name of an unseen class, the learned relationship model is used to automatically predict the class-attribute associations. Furthermore, our model facilitates transferring attributes across data sets without additional effort. Integrating knowledge from multiple sources results in a significant additional improvement in performance. We evaluate on two public data sets: Animals with Attributes and aPascal aYahoo. Our approach outperforms state-of-the-art methods in both predicting class-attribute associations and unsupervised ZSL by a large margin. | However, most attribute-based ZSL approaches rely on the underlying assumption that for an unseen class the complete information about attribute associations is manually defined @cite_19 or imported from expert-based knowledge sources @cite_5 @cite_38 . This is a hindering assumption since the common user is unlikely to have such knowledge or is simply unwilling to manually set hundreds of associations for each new category. | {
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_38"
],
"mid": [
"2098411764",
"2134270519",
""
],
"abstract": [
"We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.",
"We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.",
""
]
} |
1610.04787 | 2949729859 | Collecting training images for all visual categories is not only expensive but also impractical. Zero-shot learning (ZSL), especially using attributes, offers a pragmatic solution to this problem. However, at test time most attribute-based methods require a full description of attribute associations for each unseen class. Providing these associations is time consuming and often requires domain specific knowledge. In this work, we aim to carry out attribute-based zero-shot classification in an unsupervised manner. We propose an approach to learn relations that couples class embeddings with their corresponding attributes. Given only the name of an unseen class, the learned relationship model is used to automatically predict the class-attribute associations. Furthermore, our model facilitates transferring attributes across data sets without additional effort. Integrating knowledge from multiple sources results in a significant additional improvement in performance. We evaluate on two public data sets: Animals with Attributes and aPascal aYahoo. Our approach outperforms state-of-the-art methods in both predicting class-attribute associations and unsupervised ZSL by a large margin. | Towards simplifying the required user involvement, given an unseen class @cite_35 reduces the level of user intervention by asking the operator to select the most similar seen classes and then inferring its expected attributes. @cite_2 @cite_11 go a step further and propose an unsupervised approach to automatically learn the class-attribute association strength by using text-based semantic relatedness measures and co-occurrence statistics obtained from web-search hit counts. However, as web data is noisy, class and attribute terms can appear in documents in different contexts which are not necessarily related to the original attribute relation we seek. 
We demonstrate in this work that the class-attribute relations are complex and it is hard to model them by simple statistics of co-occurrence. | {
"cite_N": [
"@cite_35",
"@cite_11",
"@cite_2"
],
"mid": [
"2157032868",
"1992454046",
""
],
"abstract": [
"Attribute-based representation has shown great promises for visual recognition due to its intuitive interpretation and cross-category generalization property. However, human efforts are usually involved in the attribute designing process, making the representation costly to obtain. In this paper, we propose a novel formulation to automatically design discriminative \"category-level attributes\", which can be efficiently encoded by a compact category-attribute matrix. The formulation allows us to achieve intuitive and critical design criteria (category-separability, learn ability) in a principled way. The designed attributes can be used for tasks of cross-category knowledge transfer, achieving superior performance over well-known attribute dataset Animals with Attributes (AwA) and a large-scale ILSVRC2010 dataset (1.2M images). This approach also leads to state-of-the-art performance on the zero-shot learning task on AwA.",
"Remarkable performance has been reported to recognize single object classes. Scalability to large numbers of classes however remains an important challenge for today's recognition methods. Several authors have promoted knowledge transfer between classes as a key ingredient to address this challenge. However, in previous work the decision which knowledge to transfer has required either manual supervision or at least a few training examples limiting the scalability of these approaches. In this work we explicitly address the question of how to automatically decide which information to transfer between classes without the need of any human intervention. For this we tap into linguistic knowledge bases to provide the semantic link between sources (what) and targets (where) of knowledge transfer. We provide a rigorous experimental evaluation of different knowledge bases and state-of-the-art techniques from Natural Language Processing which goes far beyond the limited use of language in related work. We also give insights into the applicability (why) of different knowledge sources and similarity measures for knowledge transfer.",
""
]
} |
1610.04787 | 2949729859 | Collecting training images for all visual categories is not only expensive but also impractical. Zero-shot learning (ZSL), especially using attributes, offers a pragmatic solution to this problem. However, at test time most attribute-based methods require a full description of attribute associations for each unseen class. Providing these associations is time consuming and often requires domain specific knowledge. In this work, we aim to carry out attribute-based zero-shot classification in an unsupervised manner. We propose an approach to learn relations that couples class embeddings with their corresponding attributes. Given only the name of an unseen class, the learned relationship model is used to automatically predict the class-attribute associations. Furthermore, our model facilitates transferring attributes across data sets without additional effort. Integrating knowledge from multiple sources results in a significant additional improvement in performance. We evaluate on two public data sets: Animals with Attributes and aPascal aYahoo. Our approach outperforms state-of-the-art methods in both predicting class-attribute associations and unsupervised ZSL by a large margin. | In a different direction, unsupervised ZSL can be conducted by exploiting lexical hierarchies. For example, @cite_30 uses WordNet @cite_21 to find a set of ancestor categories of the novel class and transfers their visual models accordingly. Likewise, @cite_41 uses the hierarchy to transfer the attribute associations of an unseen class from its seen parent in the ontology. In @cite_15 , WordNet is used to capture semantic similarity among classes in a structured joint embedding framework. However, categories that are close to each other in the graph (siblings) often exhibit similar properties to their ancestors, making it hard to discriminate among them. Moreover, ontologies like WordNet are not complete. Many classes (fine-grained) are not present in the hierarchy. 
| {
"cite_N": [
"@cite_30",
"@cite_41",
"@cite_15",
"@cite_21"
],
"mid": [
"2077071968",
"2005285092",
"",
"2081580037"
],
"abstract": [
"While knowledge transfer (KT) between object classes has been accepted as a promising route towards scalable recognition, most experimental KT studies are surprisingly limited in the number of object classes considered. To support claims of KT w.r.t. scalability we thus advocate to evaluate KT in a large-scale setting. To this end, we provide an extensive evaluation of three popular approaches to KT on a recently proposed large-scale data set, the ImageNet Large Scale Visual Recognition Competition 2010 data set. In a first setting they are directly compared to one-vs-all classification often neglected in KT papers and in a second setting we evaluate their ability to enable zero-shot learning. While none of the KT methods can improve over one-vs-all classification they prove valuable for zero-shot learning, especially hierarchical and direct similarity based KT. We also propose and describe several extensions of the evaluated approaches that are necessary for this large-scale study.",
"Attribute based knowledge transfer has proven very successful in visual object analysis and learning previously unseen classes. However, the common approach learns and transfers attributes without taking into consideration the embedded structure between the categories in the source set. Such information provides important cues on the intraattribute variations. We propose to capture these variations in a hierarchical model that expands the knowledge source with additional abstraction levels of attributes. We also provide a novel transfer approach that can choose the appropriate attributes to be shared with an unseen class. We evaluate our approach on three public datasets: a Pascal, Animals with Attributes and CUB-200-2011 Birds. The experiments demonstrate the effectiveness of our model with significant improvement over state-of-the-art.",
"",
"Because meaningful sentences are composed of meaningful words, any system that hopes to process natural languages as people do must have information about words and their meanings. This information is traditionally provided through dictionaries, and machine-readable dictionaries are now widely available. But dictionary entries evolved for the convenience of human readers, not for machines. WordNet 1 provides a more effective combination of traditional lexicographic information and modern computing. WordNet is an online lexical database designed for use under program control. English nouns, verbs, adjectives, and adverbs are organized into sets of synonyms, each representing a lexicalized concept. Semantic relations link the synonym sets [4]."
]
} |
1610.04929 | 2951749249 | We propose a novel probabilistic dimensionality reduction framework that can naturally integrate the generative model and the locality information of data. Based on this framework, we present a new model, which is able to learn a smooth skeleton of embedding points in a low-dimensional space from high-dimensional noisy data. The formulation of the new model can be equivalently interpreted as two coupled learning problem, i.e., structure learning and the learning of projection matrix. This interpretation motivates the learning of the embedding points that can directly form an explicit graph structure. We develop a new method to learn the embedding points that form a spanning tree, which is further extended to obtain a discriminative and compact feature representation for clustering problems. Unlike traditional clustering methods, we assume that centers of clusters should be close to each other if they are connected in a learned graph, and other cluster centers should be distant. This can greatly facilitate data visualization and scientific discovery in downstream analysis. Extensive experiments are performed that demonstrate that the proposed framework is able to obtain discriminative feature representations, and correctly recover the intrinsic structures of various real-world datasets. | The classic deterministic method for dimensionality reduction is MVU @cite_14 . Its objective is to maximize the variance of the embedded points subject to constraints such that distances between nearby inputs are preserved. MVU consists of three steps. The first step is to compute the @math -nearest neighbors @math of data point @math . The second step is to solve the following optimization problem | {
"cite_N": [
"@cite_14"
],
"mid": [
"2017588182"
],
"abstract": [
"We investigate how to learn a kernel matrix for high dimensional data that lies on or near a low dimensional manifold. Noting that the kernel matrix implicitly maps the data into a nonlinear feature space, we show how to discover a mapping that \"unfolds\" the underlying manifold from which the data was sampled. The kernel matrix is constructed by maximizing the variance in feature space subject to local constraints that preserve the angles and distances between nearest neighbors. The main optimization involves an instance of semidefinite programming---a fundamentally different computation than previous algorithms for manifold learning, such as Isomap and locally linear embedding. The optimized kernels perform better than polynomial and Gaussian kernels for problems in manifold learning, but worse for problems in large margin classification. We explain these results in terms of the geometric properties of different kernels and comment on various interpretations of other manifold learning algorithms as kernel methods."
]
} |
1610.04929 | 2951749249 | We propose a novel probabilistic dimensionality reduction framework that can naturally integrate the generative model and the locality information of data. Based on this framework, we present a new model, which is able to learn a smooth skeleton of embedding points in a low-dimensional space from high-dimensional noisy data. The formulation of the new model can be equivalently interpreted as two coupled learning problem, i.e., structure learning and the learning of projection matrix. This interpretation motivates the learning of the embedding points that can directly form an explicit graph structure. We develop a new method to learn the embedding points that form a spanning tree, which is further extended to obtain a discriminative and compact feature representation for clustering problems. Unlike traditional clustering methods, we assume that centers of clusters should be close to each other if they are connected in a learned graph, and other cluster centers should be distant. This can greatly facilitate data visualization and scientific discovery in downstream analysis. Extensive experiments are performed that demonstrate that the proposed framework is able to obtain discriminative feature representations, and correctly recover the intrinsic structures of various real-world datasets. | where constraints ) preserve distances between @math -nearest neighbors and constraint ) eliminates the translational degree of freedom on the embedded data points by constraining them to be centered at the origin. Instead of optimizing over @math , MVU reformulates ) as a semidefinite program by learning a kernel matrix @math with the @math th element denoted by @math with a semidefinite constraint @math for a valid kernel @cite_52 where the corresponding mapping function lies in an RKHS @math . Define @math and @math . The resulting semidefinite program is | {
"cite_N": [
"@cite_52"
],
"mid": [
"1560724230"
],
"abstract": [
"From the Publisher: In the 1990s, a new type of learning algorithm was developed, based on results from statistical learning theory: the Support Vector Machine (SVM). This gave rise to a new class of theoretically elegant learning machines that use a central concept of SVMs-kernels--for a number of learning tasks. Kernel machines provide a modular framework that can be adapted to different tasks and domains by the choice of the kernel function and the base algorithm. They are replacing neural networks in a variety of fields, including engineering, information retrieval, and bioinformatics. Learning with Kernels provides an introduction to SVMs and related kernel methods. Although the book begins with the basics, it also includes the latest research. It provides all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms and to understand and apply the powerful algorithms that have been developed over the last few years."
]
} |
1610.04929 | 2951749249 | We propose a novel probabilistic dimensionality reduction framework that can naturally integrate the generative model and the locality information of data. Based on this framework, we present a new model, which is able to learn a smooth skeleton of embedding points in a low-dimensional space from high-dimensional noisy data. The formulation of the new model can be equivalently interpreted as two coupled learning problem, i.e., structure learning and the learning of projection matrix. This interpretation motivates the learning of the embedding points that can directly form an explicit graph structure. We develop a new method to learn the embedding points that form a spanning tree, which is further extended to obtain a discriminative and compact feature representation for clustering problems. Unlike traditional clustering methods, we assume that centers of clusters should be close to each other if they are connected in a learned graph, and other cluster centers should be distant. This can greatly facilitate data visualization and scientific discovery in downstream analysis. Extensive experiments are performed that demonstrate that the proposed framework is able to obtain discriminative feature representations, and correctly recover the intrinsic structures of various real-world datasets. | where @math is a relaxation of ) for ease of kernelization. The last step is to obtain the embedding @math by applying KPCA on the optimal @math . Distance similarity information on a neighborhood graph is widely used in manifold-based dimensionality reduction methods such as locally linear embedding (LLE) and its variants @cite_7 , and Laplacian Eigenmaps (LE) @cite_35 . | {
"cite_N": [
"@cite_35",
"@cite_7"
],
"mid": [
"2156718197",
"2063532964"
],
"abstract": [
"Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low dimensional manifold embedded in a higher dimensional space. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality preserving properties and a natural connection to clustering. Several applications are considered.",
"The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation. Here we describe locally linear embedding (LLE), an unsupervised learning algorithm that computes low dimensional, neighborhood preserving embeddings of high dimensional data. The data, assumed to be sampled from an underlying manifold, are mapped into a single global coordinate system of lower dimensionality. The mapping is derived from the symmetries of locally linear reconstructions, and the actual computation of the embedding reduces to a sparse eigenvalue problem. Notably, the optimizations in LLE---though capable of generating highly nonlinear embeddings---are simple to implement, and they do not involve local minima. In this paper, we describe the implementation of the algorithm in detail and discuss several extensions that enhance its performance. We present results of the algorithm applied to data sampled from known manifolds, as well as to collections of images of faces, lips, and handwritten digits. These examples are used to provide extensive illustrations of the algorithm's performance---both successes and failures---and to relate the algorithm to previous and ongoing work in nonlinear dimensionality reduction."
]
} |
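The Laplacian Eigenmaps abstract in the record above is built around the graph Laplacian L = D - W. As a minimal pure-Python sketch (the graph, weights, and function names are illustrative choices, not taken from the cited papers), here is the Laplacian of a tiny neighborhood graph together with the quadratic-form identity x^T L x = (1/2) Σ_ij w_ij (x_i - x_j)^2 that underlies the embedding objective:

```python
# Sketch: graph Laplacian L = D - W for a tiny weighted graph, and the
# quadratic-form identity that Laplacian-Eigenmaps-style objectives minimize.
# All names and the example graph are illustrative, not from the cited papers.

def laplacian(w):
    """L = D - W for a symmetric weight matrix w (list of lists)."""
    n = len(w)
    deg = [sum(row) for row in w]
    return [[(deg[i] if i == j else 0.0) - w[i][j] for j in range(n)]
            for i in range(n)]

def quad_form(L, x):
    """x^T L x."""
    n = len(x)
    return sum(L[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# A 4-node path graph with unit weights (symmetric, zero diagonal).
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
L = laplacian(w)
x = [0.0, 1.0, 3.0, 4.0]

lhs = quad_form(L, x)
rhs = 0.5 * sum(w[i][j] * (x[i] - x[j]) ** 2
                for i in range(4) for j in range(4))
# Both sides equal the sum of squared differences over edges: 1 + 4 + 1 = 6.
print(lhs, rhs)
```

The embedding itself would then come from the bottom eigenvectors of L (or a generalized eigenproblem with D), which the sketch leaves to a linear-algebra library.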
1610.04929 | 2951749249 | We propose a novel probabilistic dimensionality reduction framework that can naturally integrate the generative model and the locality information of data. Based on this framework, we present a new model, which is able to learn a smooth skeleton of embedding points in a low-dimensional space from high-dimensional noisy data. The formulation of the new model can be equivalently interpreted as two coupled learning problem, i.e., structure learning and the learning of projection matrix. This interpretation motivates the learning of the embedding points that can directly form an explicit graph structure. We develop a new method to learn the embedding points that form a spanning tree, which is further extended to obtain a discriminative and compact feature representation for clustering problems. Unlike traditional clustering methods, we assume that centers of clusters should be close to each other if they are connected in a learned graph, and other cluster centers should be distant. This can greatly facilitate data visualization and scientific discovery in downstream analysis. Extensive experiments are performed that demonstrate that the proposed framework is able to obtain discriminative feature representations, and correctly recover the intrinsic structures of various real-world datasets. | A duality view of MVU problem has been studied in @cite_19 . Define @math as an @math matrix consisting of only four nonzero elements: @math . The preserving constraints can be rewritten as @math . Thus, the dual problem of the above semidefinite programming is given by | {
"cite_N": [
"@cite_19"
],
"mid": [
"2155402588"
],
"abstract": [
"We present a unified duality view of several recently emerged spectral methods for nonlinear dimensionality reduction, including Isomap, locally linear embedding, Laplacian eigenmaps, and maximum variance unfolding. We discuss the duality theory for the maximum variance unfolding problem, and show that other methods are directly related to either its primal formulation or its dual formulation, or can be interpreted from the optimality conditions. This duality framework reveals close connections between these seemingly quite different algorithms. In particular, it resolves the myth about these methods in using either the top eigenvectors of a dense matrix, or the bottom eigenvectors of a sparse matrix --- these two eigenspaces are exactly aligned at primal-dual optimality."
]
} |
1610.04929 | 2951749249 | We propose a novel probabilistic dimensionality reduction framework that can naturally integrate the generative model and the locality information of data. Based on this framework, we present a new model, which is able to learn a smooth skeleton of embedding points in a low-dimensional space from high-dimensional noisy data. The formulation of the new model can be equivalently interpreted as two coupled learning problem, i.e., structure learning and the learning of projection matrix. This interpretation motivates the learning of the embedding points that can directly form an explicit graph structure. We develop a new method to learn the embedding points that form a spanning tree, which is further extended to obtain a discriminative and compact feature representation for clustering problems. Unlike traditional clustering methods, we assume that centers of clusters should be close to each other if they are connected in a learned graph, and other cluster centers should be distant. This can greatly facilitate data visualization and scientific discovery in downstream analysis. Extensive experiments are performed that demonstrate that the proposed framework is able to obtain discriminative feature representations, and correctly recover the intrinsic structures of various real-world datasets. | where @math is the dual variable subject to the preserving constraint associated to edge @math , and @math denotes the second smallest eigenvalue of a symmetric matrix @cite_19 . | {
"cite_N": [
"@cite_19"
],
"mid": [
"2155402588"
],
"abstract": [
"We present a unified duality view of several recently emerged spectral methods for nonlinear dimensionality reduction, including Isomap, locally linear embedding, Laplacian eigenmaps, and maximum variance unfolding. We discuss the duality theory for the maximum variance unfolding problem, and show that other methods are directly related to either its primal formulation or its dual formulation, or can be interpreted from the optimality conditions. This duality framework reveals close connections between these seemingly quite different algorithms. In particular, it resolves the myth about these methods in using either the top eigenvectors of a dense matrix, or the bottom eigenvectors of a sparse matrix --- these two eigenspaces are exactly aligned at primal-dual optimality."
]
} |
1610.04929 | 2951749249 | We propose a novel probabilistic dimensionality reduction framework that can naturally integrate the generative model and the locality information of data. Based on this framework, we present a new model, which is able to learn a smooth skeleton of embedding points in a low-dimensional space from high-dimensional noisy data. The formulation of the new model can be equivalently interpreted as two coupled learning problem, i.e., structure learning and the learning of projection matrix. This interpretation motivates the learning of the embedding points that can directly form an explicit graph structure. We develop a new method to learn the embedding points that form a spanning tree, which is further extended to obtain a discriminative and compact feature representation for clustering problems. Unlike traditional clustering methods, we assume that centers of clusters should be close to each other if they are connected in a learned graph, and other cluster centers should be distant. This can greatly facilitate data visualization and scientific discovery in downstream analysis. Extensive experiments are performed that demonstrate that the proposed framework is able to obtain discriminative feature representations, and correctly recover the intrinsic structures of various real-world datasets. | GPLVM @cite_31 takes an alternative way to obtain marginal likelihood of data by marginalizing out @math and optimizing with respect to @math . Assume the prior distribution of @math as | {
"cite_N": [
"@cite_31"
],
"mid": [
"2136111243"
],
"abstract": [
"Summarising a high dimensional data set with a low dimensional embedding is a standard approach for exploring its structure. In this paper we provide an overview of some existing techniques for discovering such embeddings. We then introduce a novel probabilistic interpretation of principal component analysis (PCA) that we term dual probabilistic PCA (DPPCA). The DPPCA model has the additional advantage that the linear mappings from the embedded space can easily be non-linearised through Gaussian processes. We refer to this model as a Gaussian process latent variable model (GP-LVM). Through analysis of the GP-LVM objective function, we relate the model to popular spectral techniques such as kernel PCA and multidimensional scaling. We then review a practical algorithm for GP-LVMs in the context of large data sets and develop it to also handle discrete valued data and missing attributes. We demonstrate the model on a range of real-world and artificially generated data sets."
]
} |
1610.04972 | 2951684871 | Attack detection is usually approached as a classification problem. However, standard classification tools often perform poorly because an adaptive attacker can shape his attacks in response to the algorithm. This has led to the recent interest in developing methods for adversarial classification, but to the best of our knowledge, there have been very few prior studies that take into account the attacker's tradeoff between adapting to the classifier being used against him with his desire to maintain the efficacy of his attack. Including this effect is key to derive solutions that perform well in practice. In this investigation we model the interaction as a game between a defender who chooses a classifier to distinguish between attacks and normal behavior based on a set of observed features and an attacker who chooses his attack features (class 1 data). Normal behavior (class 0 data) is random and exogenous. The attacker's objective balances the benefit from attacks and the cost of being detected while the defender's objective balances the benefit of a correct attack detection and the cost of false alarm. We provide an efficient algorithm to compute all Nash equilibria and a compact characterization of the possible forms of a Nash equilibrium that reveals intuitive messages on how to perform classification in the presence of an attacker. We also explore qualitatively and quantitatively the impact of the non-attacker and underlying parameters on the equilibrium strategies. | The games mentioned above are nonzero-sum games. A number of other nonzero-sum games have been discussed in the literature under the term security games'', see @cite_7 ; but they share the restriction of @cite_16 that the payoff is the sum of the payoffs on each targets, which does not model the problem of adversarial classification well. 
Let us finally mention that many applications of nonzero-sum games consider Stackelberg equilibria in which the defender first chooses its mixed action @cite_7 @cite_50 . In our model, we consider a simultaneous-move game and find the Nash equilibria. We find that our nonzero-sum game is strategically equivalent to a zero-sum game, and thus we can analyze it with zero-sum techniques. | {
"cite_N": [
"@cite_50",
"@cite_16",
"@cite_7"
],
"mid": [
"",
"2160502705",
"2126879264"
],
"abstract": [
"",
"Due to the dynamic, distributed, and heterogeneous nature of today's networks, intrusion detection systems (IDSs) have become a necessary addition to the security infrastructure and are widely deployed as a complementary line of defense to classical security approaches. In this paper, we address the intrusion detection problem in heterogeneous networks consisting of nodes with different noncorrelated security assets. In our study, two crucial questions are: What are the expected behaviors of rational attackers? What is the optimal strategy of the defenders (IDSs)? We answer the questions by formulating the network intrusion detection as a noncooperative game and performing an in-depth analysis on the Nash equilibrium and the engineering implications behind. Based on our game theoretical analysis, we derive the expected behaviors of rational attackers, the minimum monitor resource requirement, and the optimal strategy of the defenders. We then provide guidelines for IDS design and deployment. We also show how our game theoretical framework can be applied to configure the intrusion detection strategies in realistic scenarios via a case study. Finally, we evaluate the proposed game theoretical framework via simulations. The simulation results show both the correctness of the analytical results and the effectiveness of the proposed guidelines.",
"There has been significant recent interest in game-theoretic approaches to security, with much of the recent research focused on utilizing the leader-follower Stackelberg game model. Among the major applications are the ARMOR program deployed at LAX Airport and the IRIS program in use by the US Federal Air Marshals (FAMS). The foundational assumption for using Stackelberg games is that security forces (leaders), acting first, commit to a randomized strategy; while their adversaries (followers) choose their best response after surveillance of this randomized strategy. Yet, in many situations, a leader may face uncertainty about the follower's surveillance capability. Previous work fails to address how a leader should compute her strategy given such uncertainty. We provide five contributions in the context of a general class of security games. First, we show that the Nash equilibria in security games are interchangeable, thus alleviating the equilibrium selection problem. Second, under a natural restriction on security games, any Stackelberg strategy is also a Nash equilibrium strategy; and furthermore, the solution is unique in a class of security games of which ARMOR is a key exemplar. Third, when faced with a follower that can attack multiple targets, many of these properties no longer hold. Fourth, we show experimentally that in most (but not all) games where the restriction does not hold, the Stackelberg strategy is still a Nash equilibrium strategy, but this is no longer true when the attacker can attack multiple targets. Finally, as a possible direction for future research, we propose an extensive-form game model that makes the defender's uncertainty about the attacker's ability to observe explicit."
]
} |
1610.04972 | 2951684871 | Attack detection is usually approached as a classification problem. However, standard classification tools often perform poorly because an adaptive attacker can shape his attacks in response to the algorithm. This has led to the recent interest in developing methods for adversarial classification, but to the best of our knowledge, there have been very few prior studies that take into account the attacker's tradeoff between adapting to the classifier being used against him with his desire to maintain the efficacy of his attack. Including this effect is key to derive solutions that perform well in practice. In this investigation we model the interaction as a game between a defender who chooses a classifier to distinguish between attacks and normal behavior based on a set of observed features and an attacker who chooses his attack features (class 1 data). Normal behavior (class 0 data) is random and exogenous. The attacker's objective balances the benefit from attacks and the cost of being detected while the defender's objective balances the benefit of a correct attack detection and the cost of false alarm. We provide an efficient algorithm to compute all Nash equilibria and a compact characterization of the possible forms of a Nash equilibrium that reveals intuitive messages on how to perform classification in the presence of an attacker. We also explore qualitatively and quantitatively the impact of the non-attacker and underlying parameters on the equilibrium strategies. | The classification game we investigate is similar in nature to the inspection game, a multi-stage game between a customs inspector and a smuggler, proposed and studied by Dresher @cite_27 and Maschler @cite_39 . @cite_8 find the equilibrium of the general nonzero-sum game by using an auxiliary zero-sum game in which the inspectee chooses a violation procedure and the inspector chooses a statistical test with a given false alarm probability. 
We do not separate the general nonzero-sum game into two games but show the equivalence to a zero-sum game and provide structure to the equilibrium strategies of a single-shot simultaneous-move game. | {
"cite_N": [
"@cite_8",
"@cite_27",
"@cite_39"
],
"mid": [
"69361866",
"266047537",
"2164504122"
],
"abstract": [
"Abstract Starting with the analysis of arms control and disarmament problems in the sixties, inspection games have evolved into a special area of game theory with specific theoretical aspects, and, equally important, practical applications in various fields of human activity where inspection is mandatory. In this contribution, a survey of applications is given first. These include arms control and disarmament, theoretical approaches to auditing and accounting, for example in insurance, and problems of environmental surveillance. Then, the general problem of inspection is presented in a game-theoretic framework that extends a statistical hypothesis testing problem. This defines a game since the data can be strategically manipulated by an inspectee who wants to conceal illegal actions. Using this framework, two models are solved, which are practically significant and technically interesting: material accountancy and data verification. A second important aspect of inspection games is the fact that inspection resources are limited and have to be used strategically. This is demonstrated in the context of sequential inspection games, where many mathematically challenging models have been studied. Finally, the important concept of leadership, where the inspector becomes a leader by announcing and committing himself to his strategy, is shown to apply naturally to inspection games.",
"Abstract : Many disarmament or arms-control agreements may be monitored by sample inspections. Unlike the usual sampling procedures, sampling for arms-control agreements must take into account the possibility that the statistical universe from which samples are to be drawn may be tampered with so as to decrease the probability of detection of a violation. A game-theoretic model is formulated for studying a sampling problem in which the inspector is allowed to examine a fixed number (usually small) of items or natural events (e.g., items from an assembly line under an agreement limiting military production, or seismic events under a nuclear test-ban agreement). It is assumed that the inspections are to be performed within a fixed time period or on a series of events of fixed length. Optimal sampling procedures are derived as functions of the number of inspections and the size of the statistical universe. Some variations on the model are briefly considered.",
"Abstract : An inspector's game is a non-constant-sum 2-person game in which one player has promised to perform a certain duty and the other player is allowed to occasionally inspect and verify that the duty has indeed been performed. A solution to a variant of such a game is given in this paper, based on the assumption that the inspector can announce his mixed strategy in advance, if he so wishes, whereas the other player, who has already given his promise, cannot threaten by explicitly saying that he will not keep his word. (Author)"
]
} |
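The related_work in the record above notes that the nonzero-sum classification game is strategically equivalent to a zero-sum game. As a hedged stand-alone sketch (matching pennies as a stand-in payoff matrix, not the paper's classification game), fictitious play approximates the equilibrium mixed strategies of a small zero-sum matrix game in pure Python:

```python
# Sketch: fictitious play on a 2x2 zero-sum game (matching pennies as a
# stand-in; this is NOT the classification game from the cited paper).
# The row player maximizes x^T A y, the column player minimizes it.

A = [[1.0, -1.0],
     [-1.0, 1.0]]

row_counts = [1, 0]   # start from arbitrary pure actions
col_counts = [1, 0]

for _ in range(20000):
    # Row player's best response to the column player's empirical mix.
    payoff_rows = [sum(A[i][j] * col_counts[j] for j in range(2)) for i in range(2)]
    br_row = max(range(2), key=lambda i: payoff_rows[i])
    # Column player's best response to the row player's empirical mix.
    payoff_cols = [sum(row_counts[i] * A[i][j] for i in range(2)) for j in range(2)]
    br_col = min(range(2), key=lambda j: payoff_cols[j])
    row_counts[br_row] += 1
    col_counts[br_col] += 1

x = [c / sum(row_counts) for c in row_counts]
y = [c / sum(col_counts) for c in col_counts]
print(x, y)  # empirical mixes approach the equilibrium (0.5, 0.5)
```

By Robinson's theorem, the empirical frequencies in fictitious play converge to an equilibrium in any two-player zero-sum game, which is what makes the zero-sum equivalence in the record analytically convenient.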
1610.04989 | 2950860539 | Recently, neural networks have achieved great success on sentiment classification due to their ability to alleviate feature engineering. However, one of the remaining challenges is to model long texts in document-level sentiment classification under a recurrent architecture because of the deficiency of the memory unit. To address this problem, we present a Cached Long Short-Term Memory neural networks (CLSTM) to capture the overall semantic information in long texts. CLSTM introduces a cache mechanism, which divides memory into several groups with different forgetting rates and thus enables the network to keep sentiment information better within a recurrent unit. The proposed CLSTM outperforms the state-of-the-art models on three publicly available document-level sentiment analysis datasets. | Document-level sentiment classification is a sticky task in sentiment analysis @cite_34 , which is to infer the sentiment polarity or intensity of a whole document. The most challenging part is that not every part of the document is equally informative for inferring the sentiment of the whole document @cite_10 @cite_23 . Various methods have been investigated and explored over years @cite_16 @cite_34 @cite_26 @cite_23 @cite_0 . Most of these methods depend on traditional machine learning algorithms, and are in need of effective handcrafted features. | {
"cite_N": [
"@cite_26",
"@cite_16",
"@cite_0",
"@cite_23",
"@cite_34",
"@cite_10"
],
"mid": [
"40549020",
"2022204871",
"2012070465",
"",
"",
"2114524997"
],
"abstract": [
"Microblogging today has become a very popular communication tool among Internet users. Millions of users share opinions on different aspects of life everyday. Therefore microblogging web-sites are rich sources of data for opinion mining and sentiment analysis. Because microblogging has appeared relatively recently, there are a few research works that were devoted to this topic. In our paper, we focus on using Twitter, the most popular microblogging platform, for the task of sentiment analysis. We show how to automatically collect a corpus for sentiment analysis and opinion mining purposes. We perform linguistic analysis of the collected corpus and explain discovered phenomena. Using the corpus, we build a sentiment classifier, that is able to determine positive, negative and neutral sentiments for a document. Experimental evaluations show that our proposed techniques are efficient and performs better than previously proposed methods. In our research, we worked with English, however, the proposed technique can be used with any other language.",
"This paper presents a new approach to phrase-level sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions. With this approach, the system is able to automatically identify the contextual polarity for a large subset of sentiment expressions, achieving results that are significantly better than baseline.",
"Document-level sentiment classification aims to automate the task of classifying a textual review, which is given on a single topic, as expressing a positive or negative sentiment. In general, supervised methods consist of two stages: (i) extraction/selection of informative features and (ii) classification of reviews by using learning models like Support Vector Machines (SVM) and Naïve Bayes (NB). SVM have been extensively and successfully used as a sentiment learning approach while Artificial Neural Networks (ANN) have rarely been considered in comparative studies in the sentiment analysis literature. This paper presents an empirical comparison between SVM and ANN regarding document-level sentiment analysis. We discuss requirements, resulting models and contexts in which both approaches achieve better levels of classification accuracy. We adopt a standard evaluation context with popular supervised methods for feature selection and weighting in a traditional bag-of-words model. Except for some unbalanced data contexts, our experiments indicated that ANN produce superior or at least comparable results to SVM's. Specially on the benchmark dataset of Movies reviews, ANN outperformed SVM by a statistically significant difference, even on the context of unbalanced data. Our results have also confirmed some potential limitations of both models, which have been rarely discussed in the sentiment classification literature, like the computational cost of SVM at the running time and ANN at the training time.",
"",
"",
"Sentiment analysis seeks to identify the viewpoint(s) underlying a text span; an example application is classifying a movie review as \"thumbs up\" or \"thumbs down\". To determine this sentiment polarity, we propose a novel machine-learning method that applies text-categorization techniques to just the subjective portions of the document. Extracting these portions can be implemented using efficient techniques for finding minimum cuts in graphs; this greatly facilitates incorporation of cross-sentence contextual constraints."
]
} |
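The CLSTM abstract in the record above hinges on dividing memory into groups with different forgetting rates. A minimal pure-Python sketch of that cache idea (the fixed rates and the toy input are illustrative choices, not the paper's learned gates) shows how a slowly-forgetting group retains an early signal far longer than a fast one:

```python
# Sketch of the cache idea from the CLSTM abstract: memory groups with
# different forgetting rates keep information over different horizons.
# Rates and input are illustrative, not the paper's learned parameters.

rates = {"fast": 0.5, "medium": 0.9, "slow": 0.99}
cell = {name: 0.0 for name in rates}

inputs = [1.0] + [0.0] * 99   # a single burst of signal, then silence
for x in inputs:
    for name, r in rates.items():
        # leaky update: forget with rate r, write the new input scaled by 1 - r
        cell[name] = r * cell[name] + (1.0 - r) * x

print(cell)  # after 100 steps the slow group retains far more of the burst
```

After the burst, each group's cell decays geometrically as (1 - r) * r^99, so the ordering slow > medium > fast falls out directly; this is the intuition behind keeping sentiment cues alive across a long document.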
1610.04989 | 2950860539 | Recently, neural networks have achieved great success on sentiment classification due to their ability to alleviate feature engineering. However, one of the remaining challenges is to model long texts in document-level sentiment classification under a recurrent architecture because of the deficiency of the memory unit. To address this problem, we present a Cached Long Short-Term Memory neural networks (CLSTM) to capture the overall semantic information in long texts. CLSTM introduces a cache mechanism, which divides memory into several groups with different forgetting rates and thus enables the network to keep sentiment information better within a recurrent unit. The proposed CLSTM outperforms the state-of-the-art models on three publicly available document-level sentiment analysis datasets. | Although it is widely accepted that LSTM has more long-lasting memory units than RNNs, it still suffers from forgetting'' information which is too far away from the current point @cite_29 @cite_6 . Such a scalability problem of LSTMs is crucial to extend some previous sentence-level work to document-level sentiment analysis. | {
"cite_N": [
"@cite_29",
"@cite_6"
],
"mid": [
"1800356822",
"1951216520"
],
"abstract": [
"Learning long term dependencies in recurrent networks is difficult due to vanishing and exploding gradients. To overcome this difficulty, researchers have developed sophisticated optimization techniques and network architectures. In this paper, we propose a simpler solution that uses recurrent neural networks composed of rectified linear units. Key to our solution is the use of the identity matrix or its scaled version to initialize the recurrent weight matrix. We find that our solution is comparable to LSTM on our four benchmarks: two toy problems involving long-range temporal structures, a large language modeling problem and a benchmark speech recognition problem.",
"Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing an analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. Moreover, our comparative analysis with finite horizon n-gram models traces the source of the LSTM improvements to long-range structural dependencies. Finally, we provide analysis of the remaining errors and suggests areas for further study."
]
} |
1610.04989 | 2950860539 | Recently, neural networks have achieved great success on sentiment classification due to their ability to alleviate feature engineering. However, one of the remaining challenges is to model long texts in document-level sentiment classification under a recurrent architecture because of the deficiency of the memory unit. To address this problem, we present a Cached Long Short-Term Memory neural networks (CLSTM) to capture the overall semantic information in long texts. CLSTM introduces a cache mechanism, which divides memory into several groups with different forgetting rates and thus enables the network to keep sentiment information better within a recurrent unit. The proposed CLSTM outperforms the state-of-the-art models on three publicly available document-level sentiment analysis datasets. | Various models have been proposed to increase the ability of LSTMs to store long-range information @cite_29 @cite_7 and two kinds of approaches gain attraction. One is to augment LSTM with an external memory @cite_20 @cite_18 , but they are of poor performance on time because of the huge external memory matrix. Unlike these methods, we fully exploit the potential of internal memory of LSTM by adjusting its forgetting rates. | {
"cite_N": [
"@cite_29",
"@cite_20",
"@cite_18",
"@cite_7"
],
"mid": [
"1800356822",
"2951008357",
"",
"2295360187"
],
"abstract": [
"Learning long term dependencies in recurrent networks is difficult due to vanishing and exploding gradients. To overcome this difficulty, researchers have developed sophisticated optimization techniques and network architectures. In this paper, we propose a simpler solution that uses recurrent neural networks composed of rectified linear units. Key to our solution is the use of the identity matrix or its scaled version to initialize the recurrent weight matrix. We find that our solution is comparable to LSTM on our four benchmarks: two toy problems involving long-range temporal structures, a large language modeling problem and a benchmark speech recognition problem.",
"We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (, 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.",
"",
"The advantage of recurrent neural networks (RNNs) in learning dependencies between time-series data has distinguished RNNs from other deep learning models. Recently, many advances are proposed in this emerging field. However, there is a lack of comprehensive review on memory models in RNNs in the literature. This paper provides a fundamental review on RNNs and long short term memory (LSTM) model. Then, provides a surveys of recent advances in different memory enhancements and learning techniques for capturing long term dependencies in RNNs."
]
} |
1610.04989 | 2950860539 | Recently, neural networks have achieved great success on sentiment classification due to their ability to alleviate feature engineering. However, one of the remaining challenges is to model long texts in document-level sentiment classification under a recurrent architecture because of the deficiency of the memory unit. To address this problem, we present a Cached Long Short-Term Memory neural networks (CLSTM) to capture the overall semantic information in long texts. CLSTM introduces a cache mechanism, which divides memory into several groups with different forgetting rates and thus enables the network to keep sentiment information better within a recurrent unit. The proposed CLSTM outperforms the state-of-the-art models on three publicly available document-level sentiment analysis datasets. | The other one tries to use multiple time-scales to distinguish different states @cite_8 @cite_32 @cite_13 . They partition the hidden states into several groups and each group is activated and updated at different frequencies (e.g. one group updates every 2 time-step, the other updates every 4 time-step). In these methods, different memory groups are not fully interconnected, and the information is transmitted from faster groups to slower ones, or vice versa. | {
"cite_N": [
"@cite_13",
"@cite_32",
"@cite_8"
],
"mid": [
"2251189452",
"",
"2099257174"
],
"abstract": [
"Neural network based methods have obtained great progress on a variety of natural language processing tasks. However, it is still a challenge task to model long texts, such as sentences and documents. In this paper, we propose a multi-timescale long short-term memory (MT-LSTM) neural network to model long texts. MT-LSTM partitions the hidden states of the standard LSTM into several groups. Each group is activated at different time periods. Thus, MT-LSTM can model very long documents as well as short sentences. Experiments on four benchmark datasets show that our model outperforms the other neural models in text classification task.",
"",
"We have already shown that extracting long-term dependencies from sequential data is difficult, both for deterministic dynamical systems such as recurrent networks, and probabilistic models such as hidden Markov models (HMMs) or input output hidden Markov models (IOHMMs). In practice, to avoid this problem, researchers have used domain specific a-priori knowledge to give meaning to the hidden or state variables representing past context. In this paper, we propose to use a more general type of a-priori knowledge, namely that the temporal dependencies are structured hierarchically. This implies that long-term dependencies are represented by variables with a long time scale. This principle is applied to a recurrent network which includes delays and multiple time scales. Experiments confirm the advantages of such structures. A similar approach is proposed for HMMs and IOHMMs."
]
} |
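The multi-timescale scheme described in the row above, where hidden-state groups are activated at different update frequencies, can be sketched as follows; the periods and the toy per-group counters are illustrative stand-ins, not the actual MT-LSTM update equations:

```python
# Sketch of multi-timescale hidden-state groups: group g is only
# updated at time-steps t with t % periods[g] == 0, so slower groups
# retain their state longer. Periods and the counter "state" are
# illustrative, not real recurrent updates.

periods = [1, 2, 4]              # group 0 fires every step, group 2 every 4 steps
state = [0, 0, 0]                # one counter per group stands in for h_t
updates = [[] for _ in periods]  # record when each group was active

for t in range(8):
    for g, period in enumerate(periods):
        if t % period == 0:      # group g is active at this time-step
            state[g] += 1
            updates[g].append(t)

print(updates[2])  # the period-4 group fired only at steps 0 and 4
```

Because the slow groups skip most updates, their contents decay more slowly, which is the mechanism these papers use to carry long-range information.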
1610.04789 | 2532644347 | There is great interest in supporting imprecise queries over databases today. To support such queries, the system is typically required to disambiguate parts of the user-specified query against the database, using whatever resources are intrinsically available to it (the database schema, value distributions, natural language models etc). Often, systems will also have a user-interaction log available, which can supplement their model based on their own intrinsic resources. This leads to a problem of how best to combine the system's prior ranking with insight derived from the user-interaction log. Statistical inference techniques such as maximum likelihood or Bayesian updates from a subjective prior turn out not to apply in a straightforward way due to possible noise from user search behavior and to encoding biases endemic to the system's models. In this paper, we address such learning problems in interactive data retrieval, with specific focus on type classification for user-specified query terms. We develop a novel Bayesian smoothing algorithm, Bsmooth, which is simple, fast, flexible and accurate. We analytically establish some desirable properties and show, through experiments against an independent benchmark, that the addition of such a learning layer performs much better than standard methods. | Problems like ours are often addressed by learning a classifier from labeled training data, and then applying it to the unseen instances. Were the disambiguation response variable real-valued and not categorical, then the problem would be one of regression, not classification @cite_29 . This is a popular approach when (i) the training data is really large and diverse enough to avoid overfitting; and (ii) no insight is available other than the training data. Recall, however, that (i) D5 has only 62 examples (labeled queried terms). 
Having such a low number is common, since acquiring labeled data is expensive; and the interaction log for the queried-terms population may form a 'long tail,' possibly making the real system log small term-wise; also, (ii) D1-INTR is available with 60%. So the IDR problem setting and how to make the best of its two sources is more challenging and does not suit standard supervised learning. That is why we have taken a more original approach for term classification. | {
"cite_N": [
"@cite_29"
],
"mid": [
"1503398984"
],
"abstract": [
"Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students."
]
} |
1610.04789 | 2532644347 | There is great interest in supporting imprecise queries over databases today. To support such queries, the system is typically required to disambiguate parts of the user-specified query against the database, using whatever resources are intrinsically available to it (the database schema, value distributions, natural language models etc). Often, systems will also have a user-interaction log available, which can supplement their model based on their own intrinsic resources. This leads to a problem of how best to combine the system's prior ranking with insight derived from the user-interaction log. Statistical inference techniques such as maximum likelihood or Bayesian updates from a subjective prior turn out not to apply in a straightforward way due to possible noise from user search behavior and to encoding biases endemic to the system's models. In this paper, we address such learning problems in interactive data retrieval, with specific focus on type classification for user-specified query terms. We develop a novel Bayesian smoothing algorithm, Bsmooth, which is simple, fast, flexible and accurate. We analytically establish some desirable properties and show, through experiments against an independent benchmark, that the addition of such a learning layer performs much better than standard methods. | Many models combine forecasts from several human experts as independent information sources on uncertain events. They give their probability estimates, which would be compatible with relative frequencies obtained from multinomial counts like those in a user interaction log. A popular approach to combining the individual forecasts is the so-called 'linear opinion pooling.' It assigns each forecast a weight that reflects the importance or quality of that expert. Recently, along these lines, more advanced techniques have been developed @cite_3 . 
One important point of improvement is to consider 'sharpness,' which rewards how close to either 0 or 1 an estimate is (ibid.), somewhat in line with our choice to use entropy as a measure. Yet their core measure of source quality is still 'calibration' (see, e.g., Brier scores, http://en.wikipedia.org/wiki/Brier_score ). Suppose an expert estimates 0.3 as the probability for a specific outcome; then she is best calibrated if that outcome is seen in 30% of such cases, so calibration is dependent on seen examples. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2091855602"
],
"abstract": [
"This paper begins by presenting a simple model of the way in which experts estimate probabilities. The model is then used to construct a likelihood-based aggregation formula for combining multiple probability forecasts. The resulting aggregator has a simple analytical form that depends on a single, easily-interpretable parameter. This makes it computationally simple, attractive for further development, and robust against overfitting. Based on a large-scale dataset in which over 1300 experts tried to predict 69 geopolitical events, our aggregator is found to be superior to several widely-used aggregation algorithms."
]
} |
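The two notions discussed in this row, the linear opinion pool and calibration as measured by the Brier score, can be sketched in a few lines; the weights and forecasts below are made up for illustration:

```python
# Linear opinion pool: a weighted average of expert probability
# forecasts. Brier score: mean squared error between forecasts and
# 0/1 outcomes (lower is better); a perfectly hedged 0.5 forecaster
# scores 0.25. Weights and forecasts here are illustrative only.

def linear_pool(forecasts, weights):
    """Combine forecasts p_i with non-negative weights w_i summing to 1."""
    return sum(w * p for w, p in zip(weights, forecasts))

def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

pooled = linear_pool([0.9, 0.6, 0.7], [0.5, 0.3, 0.2])
print(round(pooled, 2))                  # 0.77
print(brier_score([0.5, 0.5], [1, 0]))   # 0.25
```

Note how the Brier score rewards calibration but not sharpness: the maximally hedged forecaster never scores worse than 0.25, which is the shortcoming the 'sharpness' refinement addresses.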
1610.04789 | 2532644347 | There is great interest in supporting imprecise queries over databases today. To support such queries, the system is typically required to disambiguate parts of the user-specified query against the database, using whatever resources are intrinsically available to it (the database schema, value distributions, natural language models etc). Often, systems will also have a user-interaction log available, which can supplement their model based on their own intrinsic resources. This leads to a problem of how best to combine the system's prior ranking with insight derived from the user-interaction log. Statistical inference techniques such as maximum likelihood or Bayesian updates from a subjective prior turn out not to apply in a straightforward way due to possible noise from user search behavior and to encoding biases endemic to the system's models. In this paper, we address such learning problems in interactive data retrieval, with specific focus on type classification for user-specified query terms. We develop a novel Bayesian smoothing algorithm, Bsmooth, which is simple, fast, flexible and accurate. We analytically establish some desirable properties and show, through experiments against an independent benchmark, that the addition of such a learning layer performs much better than standard methods. | The Dempster-Shafer (DS) theory of evidence @cite_5 has been applied in related work to combine two DB-intrinsic scorings @cite_6 . It does not seem fit for the interaction-log scenario. The inferential scheme (Dempster's rule of combination) derives shared belief from multiple sources and ignores all the conflicting (non-shared) belief by a normalization factor. Zadeh gives an example of how counter-intuitive this is when beliefs should rather be integrated cumulatively @cite_26 . Let doctors @math have beliefs for diagnosis on conditions ( @math ) meningitis, ( @math ) brain tumor, ( @math ) concussion, viz., @math and @math . 
Then @math is inferred @cite_26 . The doctors agree only that brain tumor is very unlikely, but such a weak consensus is pushed through anyway. Now if we think of @math as movies that @math plan to see together, then the inference finds the movie shared by their belief constraints. This shows that the matching between the abstract framework and the applied use case is really important. | {
"cite_N": [
"@cite_5",
"@cite_26",
"@cite_6"
],
"mid": [
"186113575",
"2797148637",
"2085828533"
],
"abstract": [
"",
"Both in science and in practical affairs we reason by combining facts only inconclusively supported by evidence. Building on an abstract understanding of this process of combination, this book constructs a new theory of epistemic probability. The theory draws on the work of A. P. Dempster but diverges from Dempster's viewpoint by identifying his \"lower probabilities\" as epistemic probabilities and taking his rule for combining \"upper and lower probabilities\" as fundamental. The book opens with a critique of the well-known Bayesian theory of epistemic probability. It then proceeds to develop an alternative to the additive set functions and the rule of conditioning of the Bayesian theory: set functions that need only be what Choquet called \"monotone of order of infinity.\" and Dempster's rule for combining such set functions. This rule, together with the idea of \"weights of evidence,\" leads to both an extensive new theory and a better understanding of the Bayesian theory. The book concludes with a brief treatment of statistical inference and a discussion of the limitations of epistemic probability. Appendices contain mathematical proofs, which are relatively elementary and seldom depend on mathematics more advanced than the binomial theorem.",
"We showcase QUEST (QUEry generator for STructured sources), a search engine for relational databases that combines semantic and machine learning techniques for transforming keyword queries into meaningful SQL queries. The search engine relies on two approaches: the forward, providing mappings of keywords into database terms (names of tables and attributes, and domains of attributes), and the backward, computing the paths joining the data structures identified in the forward step. The results provided by the two approaches are combined within a probabilistic framework based on the Dempster-Shafer Theory. We demonstrate QUEST capabilities, and we show how, thanks to the flexibility obtained by the probabilistic combination of different techniques, QUEST is able to compute high quality results even with few training data and or with hidden data sources such as those found in the Deep Web."
]
} |
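Zadeh's two-doctor example above can be checked numerically. Below is a minimal sketch of Dempster's rule of combination restricted to singleton hypotheses (function and variable names are illustrative):

```python
# Dempster's rule over singleton hypotheses: two mass functions agree
# only where the hypotheses coincide; all conflicting mass is discarded
# by normalization, which is exactly what Zadeh's example criticizes.

def dempster_combine(m1, m2):
    hypotheses = set(m1) | set(m2)
    # Intersections of distinct singletons are empty, so joint mass
    # survives only on shared hypotheses.
    joint = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in hypotheses}
    total = sum(joint.values())   # mass surviving the combination
    conflict = 1.0 - total        # mass on the empty set, thrown away
    return {h: v / total for h, v in joint.items()}, conflict

m1 = {"meningitis": 0.99, "tumor": 0.01}  # doctor D1's beliefs
m2 = {"concussion": 0.99, "tumor": 0.01}  # doctor D2's beliefs

combined, conflict = dempster_combine(m1, m2)
print(combined["tumor"])  # -> 1.0: the weak shared belief becomes certainty
```

Raising the 0.01 agreement on brain tumor to certainty, while silently discarding 0.9999 of conflicting mass, is the counter-intuitive behavior the text refers to.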
1610.04789 | 2532644347 | There is great interest in supporting imprecise queries over databases today. To support such queries, the system is typically required to disambiguate parts of the user-specified query against the database, using whatever resources are intrinsically available to it (the database schema, value distributions, natural language models etc). Often, systems will also have a user-interaction log available, which can supplement their model based on their own intrinsic resources. This leads to a problem of how best to combine the system's prior ranking with insight derived from the user-interaction log. Statistical inference techniques such as maximum likelihood or Bayesian updates from a subjective prior turn out not to apply in a straightforward way due to possible noise from user search behavior and to encoding biases endemic to the system's models. In this paper, we address such learning problems in interactive data retrieval, with specific focus on type classification for user-specified query terms. We develop a novel Bayesian smoothing algorithm, Bsmooth, which is simple, fast, flexible and accurate. We analytically establish some desirable properties and show, through experiments against an independent benchmark, that the addition of such a learning layer performs much better than standard methods. | Related work has also used DS's belief functions to combine reliable past 'data' and expert ('judgemental') evidence, e.g., as applied to sea level estimation subject to climate change @cite_16 , or as applied to forecasts of innovation diffusion @cite_15 , but these differ sufficiently from the IDR use case, where no 'data' is available and the 'judgemental' information falls into either a certain or an ambiguous case. | {
"cite_N": [
"@cite_15",
"@cite_16"
],
"mid": [
"2035573225",
"2089123806"
],
"abstract": [
"A method is proposed to quantify uncertainty on statistical forecasts using the formalism of belief functions. The approach is based on two steps. In the estimation step, a belief function on the parameter space is constructed from the normalized likelihood given the observed data. In the prediction step, the variable Y to be forecasted is written as a function of the parameter θ and an auxiliary random variable Z with known distribution not depending on the parameter, a model initially proposed by Dempster for statistical inference. Propagating beliefs about θ and Z through this model yields a predictive belief function on Y. The method is demonstrated on the problem of forecasting innovation diffusion using the Bass model, yielding a belief function on the number of adopters of an innovation in some future time period, based on past adoption data.",
"Estimation of extreme sea levels for high return periods is of prime importance in hydrological design and flood risk assessment. Common practice consists of inferring design levels from historical observations and assuming the distribution of extreme values to be stationary. However, in recent years, there has been a growing awareness of the necessity to integrate the effects of climate change in environmental analysis. In this paper, we present a methodology based on belief functions to combine statistical judgements with expert evidence in order to predict the future centennial sea level at a particular location, taking into account climate change. Likelihood-based belief functions derived from statistical observations are combined with random intervals encoding expert assessments of the 21st century sea level rise. Monte Carlo simulations allow us to compute belief and plausibility degrees for various hypotheses about the design parameter."
]
} |
1610.04789 | 2532644347 | There is great interest in supporting imprecise queries over databases today. To support such queries, the system is typically required to disambiguate parts of the user-specified query against the database, using whatever resources are intrinsically available to it (the database schema, value distributions, natural language models etc). Often, systems will also have a user-interaction log available, which can supplement their model based on their own intrinsic resources. This leads to a problem of how best to combine the system's prior ranking with insight derived from the user-interaction log. Statistical inference techniques such as maximum likelihood or Bayesian updates from a subjective prior turn out not to apply in a straightforward way due to possible noise from user search behavior and to encoding biases endemic to the system's models. In this paper, we address such learning problems in interactive data retrieval, with specific focus on type classification for user-specified query terms. We develop a novel Bayesian smoothing algorithm, Bsmooth, which is simple, fast, flexible and accurate. We analytically establish some desirable properties and show, through experiments against an independent benchmark, that the addition of such a learning layer performs much better than standard methods. | As mentioned in , we used crowdsourcing as a cost-effective means to simulate user interactions and get insight into system design beforehand @cite_11 . By two different crowd task designs, we have acquired explicit and implicit feedback and have studied both, also considering their possible limitations for feeding a learning layer in IDR systems. Although we take some inspiration from Interactive Information Retrieval (IIR) @cite_8 , we avoid complex models (e.g., @cite_17 ) and instead pursue and report a learning technique designed to be simple and very fast at query time. 
Note also that a comparison with complex models for aggregation of crowd answers is not required, since directly measuring its uncertainty has been enough to warrant high P@1 accuracy under a neat Bayesian smoothing model tuned with LOGIT. | {
"cite_N": [
"@cite_8",
"@cite_17",
"@cite_11"
],
"mid": [
"2099048704",
"1967498673",
"2089350162"
],
"abstract": [
"All search in the real-world is inherently interactive. Information retrieval (IR) has a firm tradition of using simulation to evaluate IR systems as embodied by the Cranfield paradigm. However, to a large extent, such system evaluations ignore user interaction. Simulations provide a way to go beyond this limitation. With an increasing number of researchers using simulation to evaluate interactive IR systems, it is now timely to discuss, develop and advance this powerful methodology within the field of IR. During the SimInt 2010 workshop around 40 participants discussed and presented their views on the simulation of interaction. The main conclusion and general consensus was that simulation offers great potential for the field of IR; and that simulations of user interaction can make explicit the user and the user interface while maintaining the advantages of the Cranfield paradigm.",
"Understanding how people interact when searching is central to the study of Interactive Information Retrieval (IIR). Most of the prior work has either been conceptual, observational or empirical. While this has led to numerous insights and findings regarding the interaction between users and systems, the theory has lagged behind. In this paper, we extend the recently proposed search economic theory to make the model more realistic. We then derive eight interaction based hypotheses regarding search behaviour. To validate the model, we explore whether the search behaviour of thirty-six participants from a lab based study is consistent with the theory. Our analysis shows that observed search behaviours are in line with predicted search behaviours and that it is possible to provide credible explanations for such behaviours. This work describes a concise and compact representation of search behaviour providing a strong theoretical basis for future IIR research.",
"In the field of information retrieval (IR), researchers and practitioners are often faced with a demand for valid approaches to evaluate the performance of retrieval systems. The Cranfield experiment paradigm has been dominant for the in-vitro evaluation of IR systems. Alternative to this paradigm, laboratory-based user studies have been widely used to evaluate interactive information retrieval (IIR) systems, and at the same time investigate users' information searching behaviours. Major drawbacks of laboratory-based user studies for evaluating IIR systems include the high monetary and temporal costs involved in setting up and running those experiments, the lack of heterogeneity amongst the user population and the limited scale of the experiments, which usually involve a relatively restricted set of users. In this paper, we propose an alternative experimental methodology to laboratory-based user studies. Our novel experimental methodology uses a crowdsourcing platform as a means of engaging study participants. Through crowdsourcing, our experimental methodology can capture user interactions and searching behaviours at a lower cost, with more data, and within a shorter period than traditional laboratory-based user studies, and therefore can be used to assess the performances of IIR systems. In this article, we show the characteristic differences of our approach with respect to traditional IIR experimental and evaluation procedures. We also perform a use case study comparing crowdsourcing-based evaluation with laboratory-based evaluation of IIR systems, which can serve as a tutorial for setting up crowdsourcing-based IIR evaluations."
]
} |
1610.04789 | 2532644347 | There is great interest in supporting imprecise queries over databases today. To support such queries, the system is typically required to disambiguate parts of the user-specified query against the database, using whatever resources are intrinsically available to it (the database schema, value distributions, natural language models etc). Often, systems will also have a user-interaction log available, which can supplement their model based on their own intrinsic resources. This leads to a problem of how best to combine the system's prior ranking with insight derived from the user-interaction log. Statistical inference techniques such as maximum likelihood or Bayesian updates from a subjective prior turn out not to apply in a straightforward way due to possible noise from user search behavior and to encoding biases endemic to the system's models. In this paper, we address such learning problems in interactive data retrieval, with specific focus on type classification for user-specified query terms. We develop a novel Bayesian smoothing algorithm, Bsmooth, which is simple, fast, flexible and accurate. We analytically establish some desirable properties and show, through experiments against an independent benchmark, that the addition of such a learning layer performs much better than standard methods. | Regarding implicit feedback in particular, the concept of 'good abandonment' (GA) has been explored based on a large Google search log @cite_12 . It indicates that GA (i.e., when the user's information need is satisfied with no need to click on a result or refine the query) may really account for a significant portion of abandoned sessions. Considering two modalities, PC and mobile, they point out that the latter has higher GA rates. As mobile queries tend to be more objective, the result snippets are often enough to satisfy an information need @cite_12 . 
This adds to our own findings suggesting high GA rates in the IDR use case, as a database query answer (structured list of facts) may be precise enough not to require any browsing. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2119074598"
],
"abstract": [
"Query abandonment by search engine users is generally considered to be a negative signal. In this paper, we explore the concept of good abandonment. We define a good abandonment as an abandoned query for which the user's information need was successfully addressed by the search results page, with no need to click on a result or refine the query. We present an analysis of abandoned internet search queries across two modalities (PC and mobile) in three locales. The goal is to approximate the prevalence of good abandonment, and to identify types of information needs that may lead to good abandonment, across different locales and modalities. Our study has three key findings: First, queries potentially indicating good abandonment make up a significant portion of all abandoned queries. Second, the good abandonment rate from mobile search is significantly higher than that from PC search, across all locales tested. Third, classified by type of information need, the major classes of good abandonment vary dramatically by both locale and modality. Our findings imply that it is a mistake to uniformly consider query abandonment as a negative signal. Further, there is a potential opportunity for search engines to drive additional good abandonment, especially for mobile search users, by improving search features and result snippets."
]
} |
1610.04789 | 2532644347 | There is great interest in supporting imprecise queries over databases today. To support such queries, the system is typically required to disambiguate parts of the user-specified query against the database, using whatever resources are intrinsically available to it (the database schema, value distributions, natural language models etc). Often, systems will also have a user-interaction log available, which can supplement their model based on their own intrinsic resources. This leads to a problem of how best to combine the system's prior ranking with insight derived from the user-interaction log. Statistical inference techniques such as maximum likelihood or Bayesian updates from a subjective prior turn out not to apply in a straightforward way due to possible noise from user search behavior and to encoding biases endemic to the system's models. In this paper, we address such learning problems in interactive data retrieval, with specific focus on type classification for user-specified query terms. We develop a novel Bayesian smoothing algorithm, Bsmooth, which is simple, fast, flexible and accurate. We analytically establish some desirable properties and show, through experiments against an independent benchmark, that the addition of such a learning layer performs much better than standard methods. | Some ad-hoc practices in the keyword search literature have been criticized @cite_4 @cite_14 @cite_21 , and we have striven not to fall into them. One such practice: existing scoring functions have become increasingly complex while their added value remains obscure @cite_21 . We have instead studied and shown in detail the added value of implicit feedback as a DB-extrinsic scoring to be combined by Bsmooth with any existing DB-intrinsic scoring. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_21"
],
"mid": [
"2143852486",
"60246258",
"1972545488"
],
"abstract": [
"Keyword search (KWS) over relational databases has recently received significant attention. Many solutions and many prototypes have been developed. This task requires addressing many issues, including robustness, accuracy, reliability, and privacy. An emerging issue, however, appears to be performance related: current KWS systems have unpredictable running times. In particular, for certain queries it takes too long to produce answers, and for others the system may even fail to return (e.g., after exhausting memory). In this paper we argue that as today's users have been \"spoiled\" by the performance of Internet search engines, KWS systems should return whatever answers they can produce quickly and then provide users with options for exploring any portion of the answer space not covered by these answers. Our basic idea is to produce answers that can be generated quickly as in today's KWS systems, then to show users query forms that characterize the unexplored portion of the answer space. Combining KWS systems with forms allows us to bypass the performance problems inherent to KWS without compromising query coverage. We provide a proof of concept for this proposed approach, and discuss the challenges encountered in building this hybrid system. Finally, we present experiments over real-world datasets to demonstrate the feasibility of the proposed solution.",
"The prevalence of free text search in web search engines has inspired recent interest in keyword search on relational databases. Whereas relational queries formally specify matching tuples, keyword queries are imprecise expressions of the user’s information need. The correctness of search results depends on the user’s subjective assessment. As a result, the empirical evaluation of a keyword retrieval system’s effectiveness is essential. In this paper, we examine the evolving practices and resources for effectiveness evaluation of keyword searches on relational databases. We compare practices with the longer-standing full-text evaluation methodologies in information retrieval. In the light of this comparison, we make some suggestions for the future development of the art in evaluating keyword search effectiveness.",
"Extending the keyword search paradigm to relational data has been an active area of research within the database and IR community during the past decade. Many approaches have been proposed, but despite numerous publications, there remains a severe lack of standardization for the evaluation of proposed search techniques. Lack of standardization has resulted in contradictory results from different evaluations, and the numerous discrepancies muddle what advantages are proffered by different approaches. In this paper, we present the most extensive empirical performance evaluation of relational keyword search techniques to appear to date in the literature. Our results indicate that many existing search techniques do not provide acceptable performance for realistic retrieval tasks. In particular, memory consumption precludes many search techniques from scaling beyond small data sets with tens of thousands of vertices. We also explore the relationship between execution time and factors varied in previous evaluations; our analysis indicates that most of these factors have relatively little impact on performance. In summary, our work confirms previous claims regarding the unacceptable performance of these search techniques and underscores the need for standardization in evaluations--standardization exemplified by the IR community."
]
} |
1610.04789 | 2532644347 | There is great interest in supporting imprecise queries over databases today. To support such queries, the system is typically required to disambiguate parts of the user-specified query against the database, using whatever resources are intrinsically available to it (the database schema, value distributions, natural language models etc). Often, systems will also have a user-interaction log available, which can supplement their model based on their own intrinsic resources. This leads to a problem of how best to combine the system's prior ranking with insight derived from the user-interaction log. Statistical inference techniques such as maximum likelihood or Bayesian updates from a subjective prior turn out not to apply in a straightforward way due to possible noise from user search behavior and to encoding biases endemic to the system's models. In this paper, we address such learning problems in interactive data retrieval, with specific focus on type classification for user-specified query terms. We develop a novel Bayesian smoothing algorithm, Bsmooth, which is simple, fast, flexible and accurate. We analytically establish some desirable properties and show, through experiments against an independent benchmark, that the addition of such a learning layer performs much better than standard methods. | Related work reports evaluation based on ad-hoc queries and databases---even arbitrary modification of the schema, e.g., changing table and attribute names to match user queries better @cite_21 . We evaluate Bsmooth on IMDb 'as is' in the benchmark, with its own encoding. | {
"cite_N": [
"@cite_21"
],
"mid": [
"1972545488"
],
"abstract": [
"Extending the keyword search paradigm to relational data has been an active area of research within the database and IR community during the past decade. Many approaches have been proposed, but despite numerous publications, there remains a severe lack of standardization for the evaluation of proposed search techniques. Lack of standardization has resulted in contradictory results from different evaluations, and the numerous discrepancies muddle what advantages are proffered by different approaches. In this paper, we present the most extensive empirical performance evaluation of relational keyword search techniques to appear to date in the literature. Our results indicate that many existing search techniques do not provide acceptable performance for realistic retrieval tasks. In particular, memory consumption precludes many search techniques from scaling beyond small data sets with tens of thousands of vertices. We also explore the relationship between execution time and factors varied in previous evaluations; our analysis indicates that most of these factors have relatively little impact on performance. In summary, our work confirms previous claims regarding the unacceptable performance of these search techniques and underscores the need for standardization in evaluations--standardization exemplified by the IR community."
]
} |
1610.04789 | 2532644347 | There is great interest in supporting imprecise queries over databases today. To support such queries, the system is typically required to disambiguate parts of the user-specified query against the database, using whatever resources are intrinsically available to it (the database schema, value distributions, natural language models etc). Often, systems will also have a user-interaction log available, which can supplement their model based on their own intrinsic resources. This leads to a problem of how best to combine the system's prior ranking with insight derived from the user-interaction log. Statistical inference techniques such as maximum likelihood or Bayesian updates from a subjective prior turn out not to apply in a straightforward way due to possible noise from user search behavior and to encoding biases endemic to the system's models. In this paper, we address such learning problems in interactive data retrieval, with specific focus on type classification for user-specified query terms. We develop a novel Bayesian smoothing algorithm, Bsmooth, which is simple, fast, flexible and accurate. We analytically establish some desirable properties and show, through experiments against an independent benchmark, that the addition of such a learning layer performs much better than standard methods. | As mentioned, most related techniques rely on large DB-induced graphs and/or on auxiliary views for keywords, incurring serious performance and maintenance issues. They may not apply to online databases on-the-fly @cite_14 . Bsmooth, in turn, can be built at query time on top of any intrinsic model without adding noticeable time expense. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2143852486"
],
"abstract": [
"Keyword search (KWS) over relational databases has recently received significant attention. Many solutions and many prototypes have been developed. This task requires addressing many issues, including robustness, accuracy, reliability, and privacy. An emerging issue, however, appears to be performance related: current KWS systems have unpredictable running times. In particular, for certain queries it takes too long to produce answers, and for others the system may even fail to return (e.g., after exhausting memory). In this paper we argue that as today's users have been \"spoiled\" by the performance of Internet search engines, KWS systems should return whatever answers they can produce quickly and then provide users with options for exploring any portion of the answer space not covered by these answers. Our basic idea is to produce answers that can be generated quickly as in today's KWS systems, then to show users query forms that characterize the unexplored portion of the answer space. Combining KWS systems with forms allows us to bypass the performance problems inherent to KWS without compromising query coverage. We provide a proof of concept for this proposed approach, and discuss the challenges encountered in building this hybrid system. Finally, we present experiments over real-world datasets to demonstrate the feasibility of the proposed solution."
]
} |
1610.04794 | 2950803263 | Most learning approaches treat dimensionality reduction (DR) and clustering separately (i.e., sequentially), but recent research has shown that optimizing the two tasks jointly can substantially improve the performance of both. The premise behind the latter genre is that the data samples are obtained via linear transformation of latent representations that are easy to cluster; but in practice, the transformation from the latent space to the data can be more complicated. In this work, we assume that this transformation is an unknown and possibly nonlinear function. To recover the clustering-friendly' latent representations and to better cluster the data, we propose a joint DR and K-means clustering approach in which DR is accomplished via learning a deep neural network (DNN). The motivation is to keep the advantages of jointly optimizing the two tasks, while exploiting the deep neural network's ability to approximate any nonlinear function. This way, the proposed approach can work well for a broad class of generative models. Towards this end, we carefully design the DNN structure and the associated joint optimization criterion, and propose an effective and scalable algorithm to handle the formulated optimization problem. Experiments using different real datasets are employed to showcase the effectiveness of the proposed approach. | Given a set of data samples @math where @math , the task of clustering is to group the @math data samples into @math categories. Arguably, K-means @cite_28 is the most widely adopted algorithm. K-means approaches this task by optimizing the following cost function: where @math is the assignment vector of data point @math which has only one non-zero element, @math denotes the @math th element of @math , and the @math th column of @math , i.e., @math , denotes the centroid of the @math th cluster. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2150593711"
],
"abstract": [
"It has long been realized that in pulse-code modulation (PCM), with a given ensemble of signals to handle, the quantum values should be spaced more closely in the voltage regions where the signal amplitude is more likely to fall. It has been shown by Panter and Dite that, in the limit as the number of quanta becomes infinite, the asymptotic fractional density of quanta per unit voltage should vary as the one-third power of the probability density per unit voltage of signal amplitudes. In this paper the corresponding result for any finite number of quanta is derived; that is, necessary conditions are found that the quanta and associated quantization intervals of an optimum finite quantization scheme must satisfy. The optimization criterion used is that the average quantization noise power be a minimum. It is shown that the result obtained here goes over into the Panter and Dite result as the number of quanta become large. The optimum quautization schemes for 2^ b quanta, b=1,2, , 7 , are given numerically for Gaussian and for Laplacian distribution of signal amplitudes."
]
} |
1610.04794 | 2950803263 | Most learning approaches treat dimensionality reduction (DR) and clustering separately (i.e., sequentially), but recent research has shown that optimizing the two tasks jointly can substantially improve the performance of both. The premise behind the latter genre is that the data samples are obtained via linear transformation of latent representations that are easy to cluster; but in practice, the transformation from the latent space to the data can be more complicated. In this work, we assume that this transformation is an unknown and possibly nonlinear function. To recover the clustering-friendly' latent representations and to better cluster the data, we propose a joint DR and K-means clustering approach in which DR is accomplished via learning a deep neural network (DNN). The motivation is to keep the advantages of jointly optimizing the two tasks, while exploiting the deep neural network's ability to approximate any nonlinear function. This way, the proposed approach can work well for a broad class of generative models. Towards this end, we carefully design the DNN structure and the associated joint optimization criterion, and propose an effective and scalable algorithm to handle the formulated optimization problem. Experiments using different real datasets are employed to showcase the effectiveness of the proposed approach. | K-means works well when the data samples are evenly scattered around their centroids in the feature space; we consider datasets which have this structure as being 'K-means-friendly' (cf. top-left subfigure of Fig. ). However, high-dimensional data are in general not very K-means-friendly. In practice, using a DR pre-processing, e.g., PCA or NMF @cite_2 @cite_4 , to reduce the dimension of @math to a much lower dimensional space and then applying K-means usually gives better results. 
In addition to the above classic DR methods that essentially learn a linear generative model from the latent space to the data domain, nonlinear DR approaches such as those used in spectral clustering @cite_31 @cite_34 and DNN-based DR @cite_23 @cite_21 @cite_35 are also widely used as pre-processing before K-means or other clustering algorithms, see also @cite_39 @cite_5 . | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_21",
"@cite_39",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_31",
"@cite_34"
],
"mid": [
"",
"",
"2050968963",
"2145094598",
"2100495367",
"",
"2072072671",
"2165874743",
""
],
"abstract": [
"",
"",
"The block coordinate descent (BCD) method is widely used for minimizing a continuous function @math of several block variables. At each iteration of this method, a single block of variables is optimized, while the remaining variables are held fixed. To ensure the convergence of the BCD method, the subproblem of each block variable needs to be solved to its unique global optimal. Unfortunately, this requirement is often too restrictive for many practical scenarios. In this paper, we study an alternative inexact BCD approach which updates the variable blocks by successively minimizing a sequence of approximations of @math which are either locally tight upper bounds of @math or strictly convex local approximations of @math . The main contributions of this work include the characterizations of the convergence conditions for a fairly wide class of such methods, especially for the cases where the objective functions are either nondifferentiable or nonconvex. Our results unify and extend the existing convergence results ...",
"We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.",
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.",
"",
"A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.",
"Despite many empirical successes of spectral clustering methods— algorithms that cluster points using eigenvectors of matrices derived from the data—there are several unresolved issues. First. there are a wide variety of algorithms that use the eigenvectors in slightly different ways. Second, many of these algorithms have no proof that they will actually compute a reasonable clustering. In this paper, we present a simple spectral clustering algorithm that can be implemented using a few lines of Matlab. Using tools from matrix perturbation theory, we analyze the algorithm, and give conditions under which it can be expected to do well. We also show surprisingly good experimental results on a number of challenging clustering problems.",
""
]
} |
1610.04794 | 2950803263 | Most learning approaches treat dimensionality reduction (DR) and clustering separately (i.e., sequentially), but recent research has shown that optimizing the two tasks jointly can substantially improve the performance of both. The premise behind the latter genre is that the data samples are obtained via linear transformation of latent representations that are easy to cluster; but in practice, the transformation from the latent space to the data can be more complicated. In this work, we assume that this transformation is an unknown and possibly nonlinear function. To recover the clustering-friendly' latent representations and to better cluster the data, we propose a joint DR and K-means clustering approach in which DR is accomplished via learning a deep neural network (DNN). The motivation is to keep the advantages of jointly optimizing the two tasks, while exploiting the deep neural network's ability to approximate any nonlinear function. This way, the proposed approach can work well for a broad class of generative models. Towards this end, we carefully design the DNN structure and the associated joint optimization criterion, and propose an effective and scalable algorithm to handle the formulated optimization problem. Experiments using different real datasets are employed to showcase the effectiveness of the proposed approach. | Instead of using DR as a pre-processing, joint DR and clustering was also considered in the literature @cite_32 @cite_1 @cite_30 . This line of work can be summarized as follows. Consider the generative model where a data sample is generated by @math , where @math and @math , where @math . Assume that the data clusters are well-separated in latent domain (i.e., where @math lives) but distorted by the transformation introduced by @math . 
Reference @cite_40 formulated the joint optimization problem as follows: where @math , @math , and @math is a parameter for balancing data fidelity and the latent cluster structure. In , the first term performs DR and the second term performs latent clustering. The terms @math and @math are regularizations (e.g., nonnegativity or sparsity) to prevent trivial solutions, e.g., @math ; see details in @cite_40 . | {
"cite_N": [
"@cite_30",
"@cite_40",
"@cite_1",
"@cite_32"
],
"mid": [
"",
"2407044469",
"2160616617",
"2212737779"
],
"abstract": [
"",
"Dimensionality reduction techniques play an essential role in data analytics, signal processing, and machine learning. Dimensionality reduction is usually performed in a preprocessing stage that is separate from subsequent data analysis, such as clustering or classification. Finding reduced-dimension representations that are well-suited for the intended task is more appealing. This paper proposes a joint factor analysis and latent clustering framework, which aims at learning cluster-aware low-dimensional representations of matrix and tensor data. The proposed approach leverages matrix and tensor factorization models that produce essentially unique latent representations of the data to unravel latent cluster structure—which is otherwise obscured because of the freedom to apply an oblique transformation in latent space. At the same time, latent cluster structure is used as prior information to enhance the performance of factorization. Specific contributions include several custom-built problem formulations, corresponding algorithms, and discussion of associated convergence properties. Besides extensive simulations, real-world datasets such as Reuters document data and MNIST image data are also employed to showcase the effectiveness of the proposed approaches.",
"We propose a novel algorithm called Latent Space Sparse Subspace Clustering for simultaneous dimensionality reduction and clustering of data lying in a union of subspaces. Specifically, we describe a method that learns the projection of data and finds the sparse coefficients in the low-dimensional latent space. Cluster labels are then assigned by applying spectral clustering to a similarity matrix built from these sparse coefficients. An efficient optimization method is proposed and its non-linear extensions based on the kernel methods are presented. One of the main advantages of our method is that it is computationally efficient as the sparse coefficients are found in the low-dimensional latent space. Various experiments show that the proposed method performs better than the competitive state-of-the-art subspace clustering methods.",
"A procedure is developed for clustering objects in a low-dimensional subspace of the column space of an objects by variables data matrix. The method is based on the K-means criterion and seeks the subspace that is maximally informative about the clustering structure in the data. In this low-dimensional representation, the objects, the variables and the cluster centroids are displayed jointly. The advantages of the new method are discussed, an efficient alternating least-squares algorithm is described, and the procedure is illustrated on some artificial data."
]
} |
1610.04794 | 2950803263 | Most learning approaches treat dimensionality reduction (DR) and clustering separately (i.e., sequentially), but recent research has shown that optimizing the two tasks jointly can substantially improve the performance of both. The premise behind the latter genre is that the data samples are obtained via linear transformation of latent representations that are easy to cluster; but in practice, the transformation from the latent space to the data can be more complicated. In this work, we assume that this transformation is an unknown and possibly nonlinear function. To recover the clustering-friendly' latent representations and to better cluster the data, we propose a joint DR and K-means clustering approach in which DR is accomplished via learning a deep neural network (DNN). The motivation is to keep the advantages of jointly optimizing the two tasks, while exploiting the deep neural network's ability to approximate any nonlinear function. This way, the proposed approach can work well for a broad class of generative models. Towards this end, we carefully design the DNN structure and the associated joint optimization criterion, and propose an effective and scalable algorithm to handle the formulated optimization problem. Experiments using different real datasets are employed to showcase the effectiveness of the proposed approach. | The data model @math in the above line of work may be oversimplified: the data generating process can be much more complex than this linear transform. Therefore, it is well justified to seek powerful non-linear transforms, e.g. DNNs, to model this data generating process, while at the same time making use of the joint DR and clustering idea. Two recent works, @cite_12 and @cite_24 , made such attempts. | {
"cite_N": [
"@cite_24",
"@cite_12"
],
"mid": [
"2337374958",
"2173649752"
],
"abstract": [
"In this paper, we propose a recurrent framework for Joint Unsupervised LEarning (JULE) of deep representations and image clusters. In our framework, successive operations in a clustering algorithm are expressed as steps in a recurrent process, stacked on top of representations output by a Convolutional Neural Network (CNN). During training, image clusters and representations are updated jointly: image clustering is conducted in the forward pass, while representation learning in the backward pass. Our key idea behind this framework is that good representations are beneficial to image clustering and clustering results provide supervisory signals to representation learning. By integrating two processes into a single model with a unified weighted triplet loss and optimizing it end-to-end, we can obtain not only more powerful representations, but also more precise image clusters. Extensive experiments show that our method outperforms the state-of-the-art on image clustering across a variety of image datasets. Moreover, the learned representations generalize well when transferred to other tasks.",
"Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms. Relatively little work has focused on learning representations for clustering. In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously learns feature representations and cluster assignments using deep neural networks. DEC learns a mapping from the data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods."
]
} |
1610.04936 | 2536457637 | Geometric model fitting is a fundamental task in computer graphics and computer vision. However, most geometric model fitting methods are unable to fit an arbitrary geometric model (e.g. a surface with holes) to incomplete data, due to that the similarity metrics used in these methods are unable to measure the rigid partial similarity between arbitrary models. This paper hence proposes a novel rigid geometric similarity metric, which is able to measure both the full similarity and the partial similarity between arbitrary geometric models. The proposed metric enables us to perform partial procedural geometric model fitting (PPGMF). The task of PPGMF is to search a procedural geometric model space for the model rigidly similar to a query of non-complete point set. Models in the procedural model space are generated according to a set of parametric modeling rules. A typical query is a point cloud. PPGMF is very useful as it can be used to fit arbitrary geometric models to non-complete (incomplete, over-complete or hybrid-complete) point cloud data. For example, most laser scanning data is non-complete due to occlusion. Our PPGMF method uses Markov chain Monte Carlo technique to optimize the proposed similarity metric over the model space. To accelerate the optimization process, the method also employs a novel coarse-to-fine model dividing strategy to reject dissimilar models in advance. Our method has been demonstrated on a variety of geometric models and non-complete data. Experimental results show that the PPGMF method based on the proposed metric is able to fit non-complete data, while the method based on other metrics is unable. It is also shown that our method can be accelerated by several times via early rejection. | Most GIPM methods take either a particular type of geometric model or geometric data as input. BGMF methods such as @cite_28 @cite_34 work on basic geometric models. 
@cite_15 @cite_25 rely on image information to achieve IPM while our work does not rely on images. @cite_29 assumes the number of model parameters is fixed. @cite_0 takes symmetry as an assumption. @cite_19 is limited to facade point clouds and split grammar. @cite_14 takes Manhattan-World as an assumption. @cite_3 is limited to constrained attribute grammar. @cite_1 @cite_38 @cite_31 work well on airborne laser scanning data; however, it is hard to extend them to other types of data. @cite_27 works on tree models. @cite_9 relies on semi-automatic segmentation operations. Our method is fully automatic and makes no assumption about the type of input geometric model and geometric data. Consequently, similar to @cite_20 @cite_35 , our method can be used for general-purpose GIPM. | {
"cite_N": [
"@cite_38",
"@cite_31",
"@cite_14",
"@cite_35",
"@cite_28",
"@cite_29",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_15",
"@cite_34",
"@cite_25",
"@cite_20"
],
"mid": [
"2134148744",
"1996086983",
"1976952084",
"2093211258",
"2085261163",
"",
"2210386111",
"2093664813",
"2117029985",
"2117878059",
"2003548021",
"1955601057",
"1971719398",
"",
"2155296376",
"2103815986"
],
"abstract": [
"We present a new approach for building reconstruction from a single Digital Surface Model (DSM). It treats buildings as an assemblage of simple urban structures extracted from a library of 3D parametric blocks (like a LEGO set). First, the 2D-supports of the urban structures are extracted either interactively or automatically. Then, 3D-blocks are placed on the 2D-supports using a Gibbs model which controls both the block assemblage and the fitting to data. A Bayesian decision finds the optimal configuration of 3D--blocks using a Markov Chain Monte Carlo sampler associated with original proposition kernels. This method has been validated on multiple data set in a wide-resolution interval such as 0.7 m satellite and 0.1 m aerial DSMs, and provides 3D representations on complex buildings and dense urban areas with various levels of detail.",
"Abstract This paper presents a generative statistical approach to automatic 3D building roof reconstruction from airborne laser scanning point clouds. In previous works, bottom-up methods, e.g., points clustering, plane detection, and contour extraction, are widely used. Due to the data artefacts caused by tree clutter, reflection from windows, water features, etc., the bottom-up reconstruction in urban areas may suffer from a number of incomplete or irregular roof parts. Manually given geometric constraints are usually needed to ensure plausible results. In this work we propose an automatic process with emphasis on top-down approaches. The input point cloud is firstly pre-segmented into subzones containing a limited number of buildings to reduce the computational complexity for large urban scenes. For the building extraction and reconstruction in the subzones we propose a pure top-down statistical scheme, in which the bottom-up efforts or additional data like building footprints are no more required. Based on a predefined primitive library we conduct a generative modeling to reconstruct roof models that fit the data. Primitives are assembled into an entire roof with given rules of combination and merging. Overlaps of primitives are allowed in the assembly. The selection of roof primitives, as well as the sampling of their parameters, is driven by a variant of Markov Chain Monte Carlo technique with specified jump mechanism. Experiments are performed on data-sets of different building types (from simple houses, high-rise buildings to combined building groups) and resolutions. The results show robustness despite the data artefacts mentioned above and plausibility in reconstruction.",
"We propose a novel approach for the reconstruction of urban structures from 3D point clouds with an assumption of Manhattan World (MW) building geometry; i.e., the predominance of three mutually orthogonal directions in the scene. Our approach works in two steps. First, the input points are classified according to the MW assumption into four local shape types: walls, edges, corners, and edge corners. The classified points are organized into a connected set of clusters from which a volume description is extracted. The MW assumption allows us to robustly identify the fundamental shape types, describe the volumes within the bounding box, and reconstruct visible and occluded parts of the sampled structure. We show results of our reconstruction that has been applied to several synthetic and real-world 3D point data sets of various densities and from multiple viewpoints. Our method automatically reconstructs 3D building models from up to 10 million points in 10 to 60 seconds.",
"We present a method for controlling the output of procedural modeling programs using Sequential Monte Carlo (SMC). Previous probabilistic methods for controlling procedural models use Markov Chain Monte Carlo (MCMC), which receives control feedback only for completely-generated models. In contrast, SMC receives feedback incrementally on incomplete models, allowing it to reallocate computational resources and converge quickly. To handle the many possible sequentializations of a structured, recursive procedural modeling program, we develop and prove the correctness of a new SMC variant, Stochastically-Ordered Sequential Monte Carlo (SOSMC). We implement SOSMC for general-purpose programs using a new programming primitive: the stochastic future. Finally, we show that SOSMC reliably generates high-quality outputs for a variety of programs and control scoring functions. For small computational budgets, SOSMC's outputs often score nearly twice as high as those of MCMC or normal SMC.",
"A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing",
"",
"Thanks to the recent advances in computational photography and remote sensing, point clouds of buildings are becoming increasingly available, yet their processing poses various challenges. In our work, we tackle the problem of point cloud completion and editing and we approach it via inverse procedural modeling. Contrary to the previous work, our approach operates directly on the point cloud without an intermediate triangulation. Our approach consists of 1) semi-automatic segmentation of the input point cloud with segment comparison and template matching to detect repeating structures, 2) a consensus-based voting schema and a pattern extraction algorithm to discover completed terminal geometry and their patterns of usage, all encoded into a context-free grammar, and 3) an interactive editing tool where the user can create new point clouds by using procedural copy and paste operations, and smart resizing. We demonstrate our approach on editing of building models with up to 1.8M points. In our implementation, preprocessing takes up to several minutes and a single editing operation needs from one second to one minute depending on the model size and the operation type.",
"We present a method for detecting and parsing buildings from unorganized 3D point clouds into a compact, hierarchical representation that is useful for high-level tasks. The input is a set of range measurements that cover large-scale urban environment. The desired output is a set of parse trees, such that each tree represents a semantic decomposition of a building – the nodes are roof surfaces as well as volumetric parts inferred from the observable surfaces. We model the above problem using a simple and generic grammar and use an efficient dependency parsing algorithm to generate the desired semantic description. We show how to learn the parameters of this simple grammar in order to produce correct parses of complex structures. We are able to apply our model on large point clouds and parse an entire city.",
"We propose a new approach to automatically semantize complex objects in a 3D scene. For this, we define an expressive formalism combining the power of both attribute grammars and constraint. It offers a practical conceptual interface, which is crucial to write large maintainable specifications. As recursion is inadequate to express large collections of items, we introduce maximal operators, that are essential to reduce the parsing search space. Given a grammar in this formalism and a 3D scene, we show how to automatically compute a shared parse forest of all interpretations --- in practice, only a few, thanks to relevant constraints. We evaluate this technique for building model semantization using CAD model examples as well as photogrammetric and simulated LiDAR data.",
"In this paper, we address the problem of inverse procedural modeling: Given a piece of exemplar 3D geometry, we would like to find a set of rules that describe objects that are similar to the exemplar. We consider local similarity, i.e., each local neighborhood of the newly created object must match some local neighborhood of the exemplar. We show that we can find explicit shape modification rules that guarantee strict local similarity by looking at the structure of the partial symmetries of the object. By cutting the object into pieces along curves within symmetric areas, we can build shape operations that maintain local similarity by construction. We systematically collect such editing operations and analyze their dependency to build a shape grammar. We discuss how to extract general rewriting systems, context free hierarchical rules, and grid-based rules. All of this information is derived directly from the model, without user interaction. The extracted rules are then used to implement tools for semi-automatic shape modeling by example, which are demonstrated on a number of different example data sets. Overall, our paper provides a concise theoretical and practical framework for inverse procedural modeling of 3D objects.",
"Recent advances in scanning technologies allow large-scale scanning of urban scenes. Commonly, such acquisition incurs imperfections: large regions are missing, significant variation in sampling density, noise and outliers. Nevertheless, building facades often consist structural patterns and self-similarities of local geometric structures. Their highly structured nature, makes 3D facades amenable to model-based approaches and in particular to grammatical representations. We present an algorithm for reconstruction of 3D polygonal models from scanned urban facades. We cast the problem of 3D facade segmentation as an optimization problem of a sequence of derivation rules with respect to a given grammar. The key idea is to segment scanned facades using a set of specific grammar rules and a dictionary of basic shapes that regularize the problem space while still offering a flexible model. We utilize this segmentation for computing a consistent polygonal representation from extrusions. Our algorithm is evaluated on a set of complex scanned facades that demonstrate the (plausible) reconstruction.",
"Procedural tree models have been popular in computer graphics for their ability to generate a variety of output trees from a set of input parameters and to simulate plant interaction with the environment for a realistic placement of trees in virtual scenes. However, defining such models and their parameters is a difficult task. We propose an inverse modelling approach for stochastic trees that takes polygonal tree models as input and estimates the parameters of a procedural model so that it produces trees similar to the input. Our framework is based on a novel parametric model for tree generation and uses Monte Carlo Markov Chains to find the optimal set of parameters. We demonstrate our approach on a variety of input models obtained from different sources, such as interactive modelling systems, reconstructed scans of real trees and developmental models.",
"We present a new approach for modeling and rendering existing architectural scenes from a sparse set of still photographs. Our modeling approach, which combines both geometry-based and imagebased techniques, has two components. The first component is a photogrammetricmodeling method which facilitates the recovery of the basic geometry of the photographed scene. Our photogrammetric modeling approach is effective, convenient, and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, our stereo technique robustly recovers accurate depth from widely-spaced image pairs. Consequently, our approach can model large architectural environments with far fewer photographs than current image-based modeling approaches. For producing renderings, we present view-dependent texture mapping, a method of compositing multiple views of a scene that better simulates geometric detail on basic models. Our approach can be used to recover models for use in either geometry-based or image-based rendering systems. We present results that demonstrate our approach’s ability to create realistic renderings of architectural scenes from viewpoints far from the original photographs. CR Descriptors: I.2.10 [Artificial Intelligence]: Vision and Scene Understanding Modeling and recovery of physical attributes; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism Color, shading, shadowing, and texture I.4.8 [Image Processing]: Scene Analysis Stereo; J.6 [Computer-Aided Engineering]: Computer-aided design (CAD).",
"",
"We propose a novel grammar-driven approach for reconstruction of buildings and landmarks. Our approach complements Structure-from-Motion and image-based analysis with a 'inverse' procedural modeling strategy. So far, procedural modeling has mostly been used for creation of virtual buildings, while the inverse approaches typically focus on reconstruction of single facades. In our work, we reconstruct complete buildings as procedural models using template shape grammars. In the reconstruction process, we let the grammar interpreter automatically decide on which step to take next. The process can be seen as instantiating the template by determining the correct grammar parameters. As an example, we have chosen the reconstruction of Greek Doric temples. This process significantly differs from single facade segmentation due to the immediate need for 3D reconstruction.",
"Procedural representations provide powerful means for generating complex geometric structures. They are also notoriously difficult to control. In this article, we present an algorithm for controlling grammar-based procedural models. Given a grammar and a high-level specification of the desired production, the algorithm computes a production from the grammar that conforms to the specification. This production is generated by optimizing over the space of possible productions from the grammar. The algorithm supports specifications of many forms, including geometric shapes and analytical objectives. We demonstrate the algorithm on procedural models of trees, cities, buildings, and Mondrian paintings."
]
} |