Dataset columns: aid (string, 9 to 15 chars), mid (string, 7 to 10 chars), abstract (string, 78 to 2.56k chars), related_work (string, 92 to 1.77k chars), ref_abstract (dict).
1904.11376
2941193495
Credit scoring models based on accepted applications may be biased and their consequences can have a statistical and economic impact. Reject inference is the process of attempting to infer the creditworthiness status of the rejected applications. In this research, we use deep generative models to develop two new semi-supervised Bayesian models for reject inference in credit scoring, in which we model the data generating process to be dependent on a Gaussian mixture. The goal is to improve the classification accuracy in credit scoring models by adding reject applications. Our proposed models infer the unknown creditworthiness of the rejected applications by exact enumeration of the two possible outcomes of the loan (default or non-default). The efficient stochastic gradient optimization technique used in deep generative models makes our models suitable for large data sets. Finally, the experiments in this research show that our proposed models perform better than classical and alternative machine learning models for reject inference in credit scoring.
Heckman's bivariate two-stage model @cite_22 @cite_10 has been used in different reject inference studies. The model, named after Nobel Laureate James Joseph Heckman, has been extended and modified in several directions; see @cite_15 for a chronological overview of the model's evolution and its early applications. It was in @cite_23 that Heckman's approach was first applied to credit scoring, where the outcome is discrete. This approach simultaneously models the accept/reject and default/non-default mechanisms. Assuming that the error terms in these two processes are bivariate normally distributed with unit variance and correlation coefficient @math , selection bias arises when @math and is corrected using the inverse Mills ratio.
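As an illustration of the mechanics described above, the following minimal numpy/scipy sketch simulates a selection problem with correlated errors and applies the second-stage correction with the inverse Mills ratio. The variable names and synthetic coefficients are hypothetical, and as a shortcut the true first-stage index is used in place of a fitted probit model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic data: the selection (accept) equation and the outcome equation
# share correlated errors, which is the setting Heckman's correction targets.
n = 20000
x = rng.normal(size=(n, 1))                       # applicant feature
u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
accept = (0.5 + 1.0 * x[:, 0] + u[:, 0] > 0)      # selection rule
y = 1.0 + 2.0 * x[:, 0] + u[:, 1]                 # outcome, observed only if accepted

# Step 1: selection index and inverse Mills ratio. Here the true coefficients
# are used for brevity; in practice they come from a first-stage probit fit.
z = 0.5 + 1.0 * x[:, 0]
mills = norm.pdf(z) / norm.cdf(z)                 # inverse Mills ratio

# Step 2: OLS on accepted cases, augmented with the inverse Mills ratio.
X_corr = np.column_stack([np.ones(accept.sum()), x[accept, 0], mills[accept]])
beta_corr, *_ = np.linalg.lstsq(X_corr, y[accept], rcond=None)

# Naive OLS ignoring selection, for comparison.
X_naive = np.column_stack([np.ones(accept.sum()), x[accept, 0]])
beta_naive, *_ = np.linalg.lstsq(X_naive, y[accept], rcond=None)

print(beta_corr[1], beta_naive[1])  # the corrected slope should sit nearer 2.0
```

Because the omitted conditional-mean term equals the scaled inverse Mills ratio, including it as a regressor restores a consistent slope estimate, while the naive accepted-only regression is biased.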
{ "cite_N": [ "@cite_15", "@cite_10", "@cite_22", "@cite_23" ], "mid": [ "2271858034", "2139122730", "1592576206", "2029283360" ], "abstract": [ "", "Sample selection bias as a specification error. This paper discusses the bias that results from using non-randomly selected samples to estimate behavioral relationships as an ordinary specification error or «omitted variables» bias. A simple consistent two stage estimator is considered that enables analysts to utilize simple regression methods to estimate behavioral functions by least squares methods. The asymptotic distribution of the estimator is derived. (This abstract was borrowed from another version of this item.)", "The novel electric heating element is particularly useful as a water heating element, although its use is not so limited. The element comprises the usual metal-sheathed heater having electrical termination at one end of the sheath. The customary water heating element has a sheath of hair-pin formation, with the ends of the two legs of the sheath connected to structure which serves as a mounting member and an electrical connector. Such structure in the present application is of molded plastic with a metal insert.", "Abstract Most credit assessment models used in practice are based on simple credit scoring functions estimated by discriminant analysis. These functions are designed to distinguish whether or not applicants belong to the population of ‘would be’ defaulters. We suggest that the traditional view that emphasizes default probability is too narrow. Our model of credit assessment focuses on expected earnings. We demonstrate how maximum likelihood estimates of default probabilities can be obtained from a bivariate ‘censored probit’ framework using a ‘choice-based’ sample originally intended for discriminant analysis. 
The paper concludes with recommendations for combining these default probability estimates with other parameters of the loan earnings process to obtain a more meaningful model of credit assessment." ] }
Despite the popularity of Heckman's model, it is unclear whether it can correct the selection bias or improve model performance. Some studies report either higher model performance or different model parameters after using Heckman's model @cite_28 @cite_0 @cite_43 @cite_6 @cite_33 . These results, as explained by @cite_15 , depend on whether the selection and default equations are correlated. On the other hand, @cite_47 @cite_11 @cite_45 state that the model parameters are inefficient, and the main criticism is that Heckman's model fails to correct the selection bias when it is strong. This happens either when the correlation between the error terms in the selection and outcome equations is high or when the data has a high degree of censoring @cite_47 .
{ "cite_N": [ "@cite_47", "@cite_33", "@cite_28", "@cite_6", "@cite_0", "@cite_43", "@cite_45", "@cite_15", "@cite_11" ], "mid": [ "2031811990", "2034266378", "2119151205", "2023805477", "2043324736", "2020821583", "2082669783", "2271858034", "2044479146" ], "abstract": [ "This paper gives a short overview of Monte Carlo studies on the usefulness of Heckman's (1976, 1979) two-step estimator for estimating selection models. Such models occur frequently in empirical work, especially in microeconometrics when estimating wage equations or consumer expenditures. It is shown that exploratory work to check for collinearity problems is strongly recommended before deciding on which estimator to apply. In the absence of collinearity problems, the full-information maximum likelihood estimator is preferable to the limited-information two-step method of Heckman, although the latter also gives reasonable results. If, however, collinearity problems prevail, subsample OLS (or the Two-Part Model) is the most robust amongst the simple-to-calculate estimators. Copyright 2000 by Blackwell Publishers Ltd", "This paper investigates the effect of including the customer loan approval process to the estimation of loan performance and explores the influence of sample selection bias in predicting the probability of default. The bootstrap variable reduction technique is applied to reduce the variable dimension for a large data-set drawn from a major UK retail bank. The results show a statistically significant correlation between the loan approval and performance processes. We further demonstrate an economically significant improvement in forecasting performance when taking into account sample selection bias. We conclude that financial institutions can obtain benefits by correcting for sample selection bias in their credit scoring models.", "We examine three models for sample selection that are relevant for modeling credit scoring by commercial banks. 
A binary choice model is used to examine the decision of whether or not to extend credit. The selectivity aspect enters because such models are based on samples of individuals to whom credit has already been given. A regression model with sample selection is suggested for predicting expenditures, or the amount of credit. The same considerations as in the binary choice case apply here. Finally, a model for counts of occurrences is described which could, in some settings also be treated as a model of sample selection. © 1998 Elsevier Science B.V.", "Abstract Technology evaluation has become a critical part of technology investment, and accurate evaluation can lead more funds to the companies that have innovative technology. However, existing processes have a weakness in that it considers only accepted applicants at the application stage. We analyse the effectiveness of technology evaluation model that encompasses both accepted and rejected applicants and compare its performance with the original accept-only model. Also, we include the analysis of reject inference technique, bivariate probit model, in order to see if the reject inference technique is of use against the accept-only model. The results show that sample selection bias of the accept-only model exists and the reject inference technique improves the accept-only model. However, the reject inference technique does not completely resolve the problem of sample selection bias.", "One of the aims of credit scoring models is to predict the probability of repayment of any applicant and yet such models are usually parameterised using a sample of accepted applicants only. This may lead to biased estimates of the parameters. In this paper we examine two issues. First, we compare the classification accuracy of a model based only on accepted applicants, relative to one based on a sample of all applicants. We find only a minimal difference, given the cutoff scores for the old model used by the data supplier. 
Using a simulated model we examine the predictive performance of models estimated from bands of applicants, ranked by predicted creditworthiness. We find that the lower the risk band of the training sample, the less accurate the predictions for all applicants. We also find that the lower the risk band of the training sample, the greater the overestimate of the true performance of the model, when tested on a sample of applicants within the same risk band — as a financial institution would do. The overestimation may be very large. Second, we examine the predictive accuracy of a bivariate probit model with selection (BVP). This parameterises the accept–reject model allowing for (unknown) omitted variables to be correlated with those of the original good–bad model. The BVP model may improve accuracy if the loan officer has overridden a scoring rule. We find that a small improvement when using the BVP model is sometimes possible.", "Many researchers see the need for reject inference in credit scoring models to come from a sample selection problem whereby a missing variable results in omitted variable bias. Alternatively, practitioners often see the problem as one of missing data where the relationship in the new model is biased because the behaviour of the omitted cases differs from that of those who make up the sample for a new model. To attempt to correct for this, differential weights are applied to the new cases. The aim of this paper is to see if the use of both a Heckman style sample selection model and the use of sampling weights, together, will improve predictive performance compared with either technique used alone. This paper will use a sample of applicants in which virtually every applicant was accepted. This allows us to compare the actual performance of each model with the performance of models which are based only on accepted cases.", "Reject inference is a method for inferring how a rejected credit applicant would have behaved had credit been granted. 
Credit-quality data on rejected applicants are usually missing not at random (MNAR). In order to infer credit-quality data MNAR, we propose a flexible method to generate the probability of missingness within a model-based bound and collapse Bayesian technique. We tested the method's performance relative to traditional reject-inference methods using real data. Results show that our method improves the classification power of credit scoring models under MNAR conditions.", "", "Abstract In many situations one needs to know which action one should take with a customer to yield the greatest response. Typically, estimates of the response functions of different actions will be based on the responses of customers previously assigned to each action. Often, however, the previous assignments will not have been random, so that estimates of the response functions will be biased. We examine the case of two possible actions. We look at the error arising from using the simple OLS estimate ignoring the selection bias, and also explore the possibility of using the Heckman model to allow for the sample selectivity. The performance of Heckman’s model is then compared with the simple OLS through simulation." ] }
A comparison of different reject inference methods, e.g. augmentation, parceling, fuzzy parceling and Heckman's model, is presented in @cite_26 . The parceling and fuzzy parceling methods are closely related. Both first fit a logistic regression model using the accepted applications and then use this model to estimate the default probability for all rejected applications. The difference is that the parceling method chooses a threshold on the default probability to assign the unknown outcome @math to the rejected applications, whereas the fuzzy parceling method assumes that each rejected application has both outcomes @math and @math , with weights given by the model fitted on the accepted applications only. Finally, the parceling (fuzzy parceling) method fits a new (weighted) logistic regression using both accepted and rejected applications. The results in @cite_26 do not show higher model performance for the reject inference methods. However, the parameter estimates differ when applying the augmentation and parceling approaches. Hence, reject inference has a statistical and economic impact on the final model in this case.
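The parceling and fuzzy parceling steps described above can be sketched in a few lines of scikit-learn code. The synthetic data, the 0.5 threshold, and the label convention (1 = default) are illustrative assumptions, not choices made in @cite_26 .

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic portfolio: accepted applications with known labels, rejects without.
X_acc = rng.normal(size=(500, 2))
y_acc = (X_acc[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_rej = rng.normal(loc=0.5, size=(200, 2))        # rejects skew riskier

# Step shared by both methods: score rejects with the accepted-only model.
kgb = LogisticRegression().fit(X_acc, y_acc)
p_rej = kgb.predict_proba(X_rej)[:, 1]            # estimated default probability

# Parceling: hard-assign reject labels via a threshold on the default score,
# then refit on the combined sample.
y_rej_hard = (p_rej > 0.5).astype(int)
parceling = LogisticRegression().fit(
    np.vstack([X_acc, X_rej]), np.concatenate([y_acc, y_rej_hard]))

# Fuzzy parceling: every reject enters twice, once per outcome, weighted by
# the accepted-only model's probabilities; refit as a weighted regression.
X_all = np.vstack([X_acc, X_rej, X_rej])
y_all = np.concatenate([y_acc, np.ones(200), np.zeros(200)])
w_all = np.concatenate([np.ones(500), p_rej, 1.0 - p_rej])
fuzzy = LogisticRegression().fit(X_all, y_all, sample_weight=w_all)
```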
{ "cite_N": [ "@cite_26" ], "mid": [ "2298695453" ], "abstract": [ "Credit scoring models are commonly developed using only accepted Known Good Bad (G B) applications, called KGB model, because we only know the performance of those accepted in the past. Obviously, the KGB model is not indicative of the entire through-the-door population, and reject inference precisely attempts to address the bias by assigning an inferred G B status to rejected applications. In this paper, we discuss the pros and cons of various reject inference techniques, and pitfalls to avoid when using them. We consider a real dataset of a major French consumer finance bank to assess the effectiveness of the practice of using reject inference. To do that, we rely on the logistic regression framework to model probabilities to become good bad, and then validate the model performance with and without sample selection bias correction. Our main results can be summarized as follows. First, we show that the best reject inference technique is not necessarily the most complicated one: reweighting and parceling provide more accurate and relevant results than fuzzy augmentation and Heckmans two-stage correction. Second, disregarding rejected applications significantly impacts the forecast accuracy of the scorecard. Third, as the sum of standard errors dramatically reduces when the sample size increases, reject inference turns out to produce an improved representation of the population. Finally, reject inference appears to be an effective way to reduce overfitting in model selection." ] }
Support vector machines are used in @cite_19 to extend the self-training (SL) algorithm with the hypothesis that the rejected applications are riskier. The self-training algorithm is an iterative approach in which highly confident predictions on the unlabeled data are added to the training set to retrain the model; this procedure is repeated as many times as the user specifies. The main criticism of this method is that it can reinforce poor predictions @cite_13 . Specifically, their approach iteratively adds rejected applications with high confidence, i.e. vectors far from the decision hyperplane, to retrain an SVM (just as in the SL algorithm). However, vectors close to the hyperplane are penalized, since the uncertainty about their true label is higher. The proposed iterative approach shows superior performance compared to other reject inference configurations using SVMs, including semi-supervised support vector machines (S3VM). In addition to higher performance, the iterative procedure in @cite_19 is faster than the S3VM.
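A minimal sketch of this distance-based self-training loop, using scikit-learn's linear SVC on synthetic data. The confidence cut-off of 1.0 (points beyond the SVM margin), the number of rounds, and the toy data are hypothetical choices for illustration; @cite_19 additionally handles points near the hyperplane via a probabilistic penalty.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Labeled accepts (two synthetic blobs) and unlabeled rejects.
X_lab = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y_lab = np.array([0] * 100 + [1] * 100)
X_unl = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])

# Self-training loop: each round, move the rejects the current SVM is most
# confident about (largest |decision_function|, i.e. farthest from the
# hyperplane) into the labeled pool, then refit.
for _ in range(3):
    clf = SVC(kernel="linear").fit(X_lab, y_lab)
    if len(X_unl) == 0:
        break
    margin = clf.decision_function(X_unl)
    confident = np.abs(margin) > 1.0              # hypothetical confidence cut-off
    if not confident.any():
        break
    X_lab = np.vstack([X_lab, X_unl[confident]])
    y_lab = np.concatenate([y_lab, (margin[confident] > 0).astype(int)])
    X_unl = X_unl[~confident]

clf = SVC(kernel="linear").fit(X_lab, y_lab)      # final model on the grown pool
```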
{ "cite_N": [ "@cite_19", "@cite_13" ], "mid": [ "2161820152", "2949416428" ], "abstract": [ "This paper presents a novel semi-supervised approach that determines a linear predictor using Support Vector Machines (SVMs) and incorporates information on rejected loans, assuming that the labeled data (accepted applicants) and unlabeled data (rejected applicants) are not drawn from the same distribution. We use a self-training algorithm in order to predict how likely a rejected applicant would have repaid had the applicant received credit. A modification to the self-training algorithm based on Platt's probabilistic output for SVMs is introduced. Experiments with two toy data sets; one well-known benchmark Credit Scoring data set, and one project performed for a Chilean financial institution demonstrate that our approach accomplishes the best classification performance compared to well-known reject inference alternatives and another state-of-the-art semi-supervised method for SVMs (Transductive SVM).", "The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation from small labelled data sets to large unlabelled ones. Generative approaches have thus far been either inflexible, inefficient or non-scalable. We show that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning." ] }
The S3VM model is used in @cite_12 for reject inference in credit scoring, using the accepted and rejected applications to fit an optimal hyperplane with maximum margin. The model in @cite_12 , originally developed by @cite_7 , uses a branch-and-bound approach to solve the mixed-integer constrained quadratic programming problem faced in semi-supervised SVMs; this approach reduces the training time, making it suitable for large-sized problems. The hyperplane traverses through low-density regions of rejected applications and, at the same time, separates the accepted applications. Their results show higher performance compared to the logit and supervised support vector machine models. In Section , we show that S3VM does not scale to large credit scoring data sets and that our proposed models are able to use at least 16 times more data than S3VM.
{ "cite_N": [ "@cite_7", "@cite_12" ], "mid": [ "2288595346", "2296034778" ], "abstract": [ "This paper develops a branch-and-bound algorithm to solve the 2-norm soft margin semi-supervised support vector machine. First, the original problem is reformulated as a non-convex quadratically constrained quadratic programming problem with a simple structure. Then, we propose a new lower bound estimator which is conceptually simple and easy to be implemented in the branch-and-bound scheme. Since this estimator preserves both a high efficiency and a relatively good quality in the convex relaxation, it leads to a high total efficiency in the whole computational process. The numerical tests on both artificial and real-world data sets demonstrate the better effectiveness and efficiency of this proposed approach, which is compared to other well-known methods on different semi-supervised support vector machine models.", "Semi-supervised Support Vector Machines for reject inference are proposed.The method uses information of both the accepted and rejected applicants.The method deals with labelled and unlabelled classes of the outcome.The model is tested on real consumer loans with a low acceptance rate.Predictive accuracy is improved by the new model compared to traditional methods." ] }
In @cite_42 , Gaussian mixture models (GMM) are used for density estimation of the default probability. The idea is that each component in the mixture density models a class-conditional distribution. The model parameters are then estimated using the expectation-maximization (EM) algorithm, which can estimate the parameters even when the class labels for the rejected applications are missing. The EM algorithm is also used for reject inference in @cite_8 . Both papers report high model performance. However, the results in @cite_42 are based on artificial data, and @cite_8 judges performance based only on the confusion matrix. Finally, the major limitation of the EM algorithm is that we need to be able to compute the expectation over the latent variables. We show in Section that deep generative models circumvent this restriction by approximation.
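The semi-supervised E-step described above can be sketched with a small hand-written EM for a one-dimensional two-component Gaussian mixture, where labeled applications have their responsibilities clamped to the known class and rejected (unlabeled) applications receive posterior responsibilities. The toy data, initial values, and iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# 1-D toy scores: class 0 (non-default) around -2, class 1 (default) around +2.
x_lab = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
y_lab = np.array([0] * 100 + [1] * 100)
x_unl = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])

mu, sigma, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

for _ in range(50):
    # E-step: labeled rows are clamped to their known class; unlabeled rows
    # get the usual posterior over the two components.
    r_lab = np.eye(2)[y_lab]
    dens = pi * norm.pdf(x_unl[:, None], mu, sigma)
    r_unl = dens / dens.sum(axis=1, keepdims=True)
    r = np.vstack([r_lab, r_unl])
    x = np.concatenate([x_lab, x_unl])
    # M-step: weighted updates of mixture weight, mean, and std per component.
    nk = r.sum(axis=0)
    pi = nk / nk.sum()
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

Because the labeled responsibilities are fixed, the components keep their class identities while the unlabeled points sharpen the density estimates.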
{ "cite_N": [ "@cite_42", "@cite_8" ], "mid": [ "2005213340", "2055625998" ], "abstract": [ "Reject inference is the process of estimating the risk of defaulting for loan applicants that are rejected under the current acceptance policy. We propose a new reject inference method based on mixture modeling, that allows the meaningful inclusion of the rejects in the estimation process. We describe how such a model can be estimated using the EM algorithm. An experimental study shows that inclusion of the rejects can lead to a substantial improvement of the resulting classification rule. Copyright © 1999 John Wiley & Sons, Ltd.", "Reject inference is one of the key processes required to build relevant credit scorecard models. Reject inference is used to infer the good or bad loan status to credit applicants that were rejected by the financial institution. If rejected applicants data is not used in the updating of the credit scoring model, the model is biased because it will not be representative of the entire applicant population. Many reject inference techniques perform an extrapolation method to infer the good or bad loan status of the rejected applicants. The issues with extrapolation are discussed, and this study provides a novel reject inference technique in which the rejected applicants are included in the model estimation process. The extrapolation problem is avoided using the methodology in this paper. The newly proposed reject inference technique is shown to outperform the standard extrapolation technique using a simulation study." ] }
A Bayesian approach for reject inference is presented in @cite_45 . In this method the default probability is inferred from the missing-data mechanism. The authors use the bound-and-collapse approach, originally presented in Sebastiani and Ramoni (2000), "Bayesian inference with missing data using bound and collapse", to estimate the posterior distribution over the score and class label, which is assumed to follow a Dirichlet distribution, as is the marginal distribution of the missing class label. The reason for using the bound-and-collapse method is to avoid exhaustive numerical procedures, such as Gibbs sampling, for estimating the posterior distributions in this model. Their results show that the Bayesian bound-and-collapse method performs better than the augmentation and Heckman's models.
{ "cite_N": [ "@cite_45" ], "mid": [ "2082669783" ], "abstract": [ "Reject inference is a method for inferring how a rejected credit applicant would have behaved had credit been granted. Credit-quality data on rejected applicants are usually missing not at random (MNAR). In order to infer credit-quality data MNAR, we propose a flexible method to generate the probability of missingness within a model-based bound and collapse Bayesian technique. We tested the method's performance relative to traditional reject-inference methods using real data. Results show that our method improves the classification power of credit scoring models under MNAR conditions." ] }
In this research we propose a novel Bayesian inference approach for reject inference in credit scoring which uses Gaussian mixture models and differs from @cite_45 @cite_42 in that our models are based on variational inference, neural networks, and stochastic gradient optimization. The main advantages of our proposed method are that (i) inference for the rejected applications is based on an approximation of the posterior distribution and on the exact enumeration of the two possible outcomes that the rejected applications could have taken, (ii) the models use a latent representation of the customers' data, which carries rich information, and (iii) deep generative models scale to large data sets.
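A minimal numpy sketch of advantage (i), the exact enumeration over the two outcomes for an unlabeled application: the unknown label is marginalized out by weighting a per-label bound with the classifier's probabilities and adding the entropy of that classifier. The function name and the stand-in per-label bound values are hypothetical, not taken from the paper.

```python
import numpy as np

# For an unlabeled (rejected) application x, the semi-supervised objective
# enumerates the two possible outcomes y exactly: it sums q(y|x) * L(x, y)
# over y in {non-default, default} and adds the entropy of q(y|x). The
# per-label bound values below are hypothetical stand-ins for the ELBOs an
# encoder/decoder pair would produce.
def unlabeled_bound(q_y, bound_per_label):
    q = np.asarray(q_y, dtype=float)
    entropy = -(q * np.log(q)).sum()
    return (q * np.asarray(bound_per_label)).sum() + entropy

q_y = [0.7, 0.3]          # classifier's belief over the two outcomes
bounds = [-1.2, -2.5]     # assumed per-label evidence lower bounds
total = unlabeled_bound(q_y, bounds)
print(total)
```

Since only two outcomes exist, this enumeration is exact and cheap, which is what makes sampling over the label unnecessary.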
{ "cite_N": [ "@cite_45", "@cite_42" ], "mid": [ "2082669783", "2005213340" ], "abstract": [ "Reject inference is a method for inferring how a rejected credit applicant would have behaved had credit been granted. Credit-quality data on rejected applicants are usually missing not at random (MNAR). In order to infer credit-quality data MNAR, we propose a flexible method to generate the probability of missingness within a model-based bound and collapse Bayesian technique. We tested the method's performance relative to traditional reject-inference methods using real data. Results show that our method improves the classification power of credit scoring models under MNAR conditions.", "Reject inference is the process of estimating the risk of defaulting for loan applicants that are rejected under the current acceptance policy. We propose a new reject inference method based on mixture modeling, that allows the meaningful inclusion of the rejects in the estimation process. We describe how such a model can be estimated using the EM algorithm. An experimental study shows that inclusion of the rejects can lead to a substantial improvement of the resulting classification rule. Copyright © 1999 John Wiley & Sons, Ltd." ] }
1904.11042
2940513107
We present a system for generating inconspicuous-looking textures that, when displayed in the physical world as digital or printed posters, cause visual object tracking systems to become confused. For instance, as a target being tracked by a robot's camera moves in front of such a poster, our generated texture makes the tracker lock onto it and allows the target to evade. This work aims to fool seldom-targeted regression tasks, and in particular compares diverse optimization strategies: non-targeted, targeted, and a new family of guided adversarial losses. While we use the Expectation Over Transformation (EOT) algorithm to generate physical adversaries that fool tracking models when imaged under diverse conditions, we compare the impacts of different conditioning variables, including viewpoint, lighting, and appearances, to find practical attack setups with high resulting adversarial strength and convergence speed. We further showcase that textures optimized solely using simulated scenes can confuse real-world tracking systems.
In the vision domain, adversarial attacks have mostly been applied in classification, segmentation, and detection tasks. Early adversarial attack methods, such as the L-BFGS @cite_20 , FGSM @cite_21 , JSMA @cite_24 , and C&W @cite_23 attacks, often compute the gradient of an adversarial objective with respect to pixel inputs in order to perturb a specific source input into an adversarial imitation. Moosavi-Dezfooli et al. @cite_11 introduced an attack for creating a Universal Adversarial Perturbation (UAP) that can be applied onto many distinct source images to make them adversarial. While these methods can generate potent adversaries by digitally perturbing specific pixel values, they generally lose effectiveness when the adversary is imaged as a real-world photo.
{ "cite_N": [ "@cite_21", "@cite_24", "@cite_23", "@cite_20", "@cite_11" ], "mid": [ "2963207607", "2180612164", "2963857521", "2964153729", "2543927648" ], "abstract": [ "Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. 
In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.", "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation.
We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.", "Abstract: Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.", "Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks.
The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images." ] }
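The single-step gradient attacks surveyed above can be sketched in a few lines. This illustrative NumPy version of FGSM assumes the loss gradient with respect to the input image has already been computed by some victim model; the step size is an arbitrary example value.

```python
import numpy as np

def fgsm(x, grad, eps=0.03):
    """Fast Gradient Sign Method: take one step of size eps in the
    direction of the sign of the loss gradient, then clip back to the
    valid pixel range [0, 1]. `grad` is assumed precomputed."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)
```

A pixel near the top of the valid range simply saturates at 1.0 after the step.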
1904.11042
2940513107
We present a system for generating inconspicuous-looking textures that, when displayed in the physical world as digital or printed posters, cause visual object tracking systems to become confused. For instance, as a target being tracked by a robot's camera moves in front of such a poster, our generated texture makes the tracker lock onto it and allows the target to evade. This work aims to fool seldom-targeted regression tasks, and in particular compares diverse optimization strategies: non-targeted, targeted, and a new family of guided adversarial losses. While we use the Expectation Over Transformation (EOT) algorithm to generate physical adversaries that fool tracking models when imaged under diverse conditions, we compare the impacts of different conditioning variables, including viewpoint, lighting, and appearances, to find practical attack setups with high resulting adversarial strength and convergence speed. We further showcase that textures optimized solely using simulated scenes can confuse real-world tracking systems.
Early physical adversarial attacks, which assumed access to the victim model's internals, iteratively ran gradient-based methods such as FGSM @cite_21 to make printable adversaries that are effective under somewhat varying views @cite_27 . Similar approaches were used to create eyeglass frames for fooling face recognition models @cite_31 @cite_7 , and by the RP @math algorithm @cite_0 to make stop signs look like speed limits to a road sign classifier. Both systems only updated gradients within a masked region in the image, corresponding to the eyeglass frame or road sign. Still, neither work explicitly accounted for the effects of lighting on the imaged items.
{ "cite_N": [ "@cite_7", "@cite_21", "@cite_0", "@cite_27", "@cite_31" ], "mid": [ "", "2963207607", "2798302089", "2963542245", "2535873859" ], "abstract": [ "", "Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. 
Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier.", "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work has assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. 
We demonstrate this by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.", "Machine learning is enabling a myriad innovations, including new algorithms for cancer diagnosis and self-driving cars. The broad use of machine learning makes it important to understand the extent to which machine-learning algorithms are subject to attack, particularly when used in applications where physical security or safety is at risk. In this paper, we focus on facial biometric systems, which are widely used in surveillance and access control. We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and allow an attacker to evade recognition or impersonate another individual. We develop a systematic method to automatically generate such attacks, which are realized through printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual. Our investigation focuses on white-box face-recognition systems, but we also demonstrate how similar techniques can be used in black-box scenarios, as well as to avoid face detection." ] }
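The masked-region updates used by the eyeglass-frame and RP2-style attacks can be sketched as a single hedged example: a signed-gradient step that only touches pixels inside a binary mask. The mask and step size below are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

def masked_fgsm_step(x_adv, grad, mask, eps=0.01):
    """One gradient step restricted to a masked region of the image
    (e.g. an eyeglass frame or road sign); pixels where mask == 0 are
    never modified. Clipping keeps the result a valid image."""
    return np.clip(x_adv + eps * np.sign(grad) * mask, 0.0, 1.0)
```

Only the first pixel below lies inside the mask, so only it moves.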
1904.11042
2940513107
We present a system for generating inconspicuous-looking textures that, when displayed in the physical world as digital or printed posters, cause visual object tracking systems to become confused. For instance, as a target being tracked by a robot's camera moves in front of such a poster, our generated texture makes the tracker lock onto it and allows the target to evade. This work aims to fool seldom-targeted regression tasks, and in particular compares diverse optimization strategies: non-targeted, targeted, and a new family of guided adversarial losses. While we use the Expectation Over Transformation (EOT) algorithm to generate physical adversaries that fool tracking models when imaged under diverse conditions, we compare the impacts of different conditioning variables, including viewpoint, lighting, and appearances, to find practical attack setups with high resulting adversarial strength and convergence speed. We further showcase that textures optimized solely using simulated scenes can confuse real-world tracking systems.
Expectation Over Transformation (EOT) @cite_10 formalized the strategy used by @cite_31 @cite_0 of optimizing for adversarial attributes of a mask by applying a combination of random transformations to it. By varying the appearance and or position of a 2-D photograph or 3-D textured object as mask, EOT-based attacks @cite_10 @cite_17 @cite_19 generated physically-realizable adversaries robust within a range of viewing conditions. Our adversarial attack is also based on EOT, but we importantly study the efficacy and the need to randomize over different transformation variables, including foreground background appearances, lighting, and spatial locations of the camera, adversary, and surrounding objects.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_31", "@cite_10", "@cite_17" ], "mid": [ "2798302089", "2952911150", "2535873859", "2963557656", "" ], "abstract": [ "Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier.", "Many machine learning image classifiers are vulnerable to adversarial attacks, inputs with perturbations designed to intentionally trigger misclassification. 
Current adversarial methods directly alter pixel colors and evaluate against pixel norm-balls: pixel perturbations smaller than a specified magnitude, according to a measurement norm. This evaluation, however, has limited practical utility since perturbations in the pixel space do not correspond to underlying real-world phenomena of image formation that lead to them and has no security motivation attached. Pixels in natural images are measurements of light that has interacted with the geometry of a physical scene. As such, we propose the direct perturbation of physical parameters that underly image formation: lighting and geometry. As such, we propose a novel evaluation measure, parametric norm-balls, by directly perturbing physical parameters that underly image formation. One enabling contribution we present is a physically-based differentiable renderer that allows us to propagate pixel gradients to the parametric space of lighting and geometry. Our approach enables physically-based adversarial attacks, and our differentiable renderer leverages models from the interactive rendering literature to balance the performance and accuracy trade-offs necessary for a memory-efficient and scalable adversarial data augmentation workflow.", "Machine learning is enabling a myriad innovations, including new algorithms for cancer diagnosis and self-driving cars. The broad use of machine learning makes it important to understand the extent to which machine-learning algorithms are subject to attack, particularly when used in applications where physical security or safety is at risk. In this paper, we focus on facial biometric systems, which are widely used in surveillance and access control. We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and allow an attacker to evade recognition or impersonate another individual. 
We develop a systematic method to automatically generate such attacks, which are realized through printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual. Our investigation focuses on white-box face-recognition systems, but we also demonstrate how similar techniques can be used in black-box scenarios, as well as to avoid face detection.", "Neural network-based classifiers parallel or exceed human-level accuracy on many common tasks and are used in practical systems. Yet, neural networks are susceptible to adversarial examples, carefully perturbed inputs that cause networks to misbehave in arbitrarily chosen ways. When generated with standard methods, these examples do not consistently fool a classifier in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations. Adversarial examples generated using standard techniques require complete control over direct input to the classifier, which is impossible in many real-world systems. We introduce the first method for constructing real-world 3D objects that consistently fool a neural network across a wide distribution of angles and viewpoints. We present a general-purpose algorithm for generating adversarial examples that are robust across any chosen distribution of transformations. We demonstrate its application in two dimensions, producing adversarial images that are robust to noise, distortion, and affine transformation. Finally, we apply the algorithm to produce arbitrary physical 3D-printed adversarial objects, demonstrating that our approach works end-to-end in the real world. Our results show that adversarial examples are a practical concern for real-world systems.", "" ] }
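The EOT objective discussed above can be written as a short sketch: average the adversarial loss over randomly sampled viewing conditions, so the optimized texture stays adversarial across viewpoints and lighting. Here `render` and `victim_loss` are hypothetical stand-ins for a renderer and the victim model's loss, and the sampled condition ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def eot_objective(texture, render, victim_loss, n_samples=8):
    """Expectation Over Transformation, sketched: sample random
    transformation parameters (here, viewing angle and brightness),
    render the texture under each, and average the victim's loss."""
    conditions = [{"angle": rng.uniform(-30, 30),
                   "brightness": rng.uniform(0.7, 1.3)}
                  for _ in range(n_samples)]
    return np.mean([victim_loss(render(texture, c)) for c in conditions])
```

In an actual attack this expectation would be maximized (or minimized, for targeted losses) with respect to the texture via stochastic gradients.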
1904.11042
2940513107
We present a system for generating inconspicuous-looking textures that, when displayed in the physical world as digital or printed posters, cause visual object tracking systems to become confused. For instance, as a target being tracked by a robot's camera moves in front of such a poster, our generated texture makes the tracker lock onto it and allows the target to evade. This work aims to fool seldom-targeted regression tasks, and in particular compares diverse optimization strategies: non-targeted, targeted, and a new family of guided adversarial losses. While we use the Expectation Over Transformation (EOT) algorithm to generate physical adversaries that fool tracking models when imaged under diverse conditions, we compare the impacts of different conditioning variables, including viewpoint, lighting, and appearances, to find practical attack setups with high resulting adversarial strength and convergence speed. We further showcase that textures optimized solely using simulated scenes can confuse real-world tracking systems.
CAMOU is an attack that also applied EOT to find adversarial textures for a car's 3-D model, such that object detection networks would ignore it in images produced by a photo-realistic rendering engine. CAMOU approximated the gradient of an adversarial objective through both the complex rendering process and the opaque victim network by using a learned surrogate mapping @cite_8 from the texture space directly onto the detector's confidence score. Despite its success, this method was not tested in real-world settings, and it incurs high computational costs and potential instability by alternating between optimizing the surrogate model and the adversarial perturbations.
{ "cite_N": [ "@cite_8" ], "mid": [ "2603766943" ], "abstract": [ "Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder." ] }
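The surrogate idea can be illustrated in a much-simplified form: fit a linear substitute mapping flattened textures to the black-box detector's confidence by least squares, and use its weights as an approximate gradient for perturbing the texture. This linear stand-in replaces the learned network of @cite_8 and is only a sketch of the principle, not the cited method.

```python
import numpy as np

def surrogate_gradient(textures, scores):
    """Fit a linear surrogate score(t) ~ w . t + b to observed
    (texture, score) pairs; the gradient of a linear model is simply
    its weight vector, which can then drive texture updates."""
    X = np.hstack([textures, np.ones((len(textures), 1))])  # add bias column
    w, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return w[:-1]  # drop the bias term
```

On noiseless synthetic data generated by a linear score function, the fitted weights recover the true gradient.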
1904.11042
2940513107
We present a system for generating inconspicuous-looking textures that, when displayed in the physical world as digital or printed posters, cause visual object tracking systems to become confused. For instance, as a target being tracked by a robot's camera moves in front of such a poster, our generated texture makes the tracker lock onto it and allows the target to evade. This work aims to fool seldom-targeted regression tasks, and in particular compares diverse optimization strategies: non-targeted, targeted, and a new family of guided adversarial losses. While we use the Expectation Over Transformation (EOT) algorithm to generate physical adversaries that fool tracking models when imaged under diverse conditions, we compare the impacts of different conditioning variables, including viewpoint, lighting, and appearances, to find practical attack setups with high resulting adversarial strength and convergence speed. We further showcase that textures optimized solely using simulated scenes can confuse real-world tracking systems.
DeepBillboard @cite_14 attacked autonomous driving systems by creating adversarial billboards that caused the victim model to deviate its predicted steering angles within real-world drive-by sequences. While our work shares many commonalities with DeepBillboard, we confront added challenges by attacking a sequential tracking model rather than a per-frame regression task, and we also contrast the results of diverse adversarial optimization objectives.
{ "cite_N": [ "@cite_14" ], "mid": [ "2906946247" ], "abstract": [ "Deep Neural Networks (DNNs) have been widely applied in many autonomous systems such as autonomous driving. Recently, DNN testing has been intensively studied to automatically generate adversarial examples, which inject small-magnitude perturbations into inputs to test DNNs under extreme situations. While existing testing techniques prove to be effective, they mostly focus on generating digital adversarial perturbations (particularly for autonomous driving), e.g., changing image pixels, which may never happen in physical world. There is a critical missing piece in the literature on autonomous driving testing: understanding and exploiting both digital and physical adversarial perturbation generation for impacting steering decisions. In this paper, we present DeepBillboard, a systematic physical-world testing approach targeting at a common and practical driving scenario: drive-by billboards. DeepBillboard is capable of generating a robust and resilient printable adversarial billboard, which works under dynamic changing driving conditions including viewing angle, distance, and lighting. The objective is to maximize the possibility, degree, and duration of the steering-angle errors of an autonomous vehicle driving by the generated adversarial billboard. We have extensively evaluated the efficacy and robustness of DeepBillboard through conducting both digital and physical-world experiments. Results show that DeepBillboard is effective for various steering models and scenes. Furthermore, DeepBillboard is sufficiently robust and resilient for generating physical-world adversarial billboard tests for real-world driving under various weather conditions. To the best of our knowledge, this is the first study demonstrating the possibility of generating realistic and continuous physical-world tests for practical autonomous driving systems." ] }
1904.11163
2941176686
The problem of scene flow estimation in depth videos has been attracting the attention of researchers in robot vision, due to its potential application in various areas of robotics. Conventional scene flow methods are difficult to use in real-life applications due to their high computational overhead. We propose a conditional adversarial network, SceneFlowGAN, for scene flow estimation. The proposed SceneFlowGAN uses loss functions at both ends: the generator and the discriminator. The proposed network is the first attempt to estimate scene flow using generative adversarial networks, and is able to estimate both the optical flow and disparity from the input stereo images simultaneously. The proposed method is evaluated on a large RGB-D scene flow benchmark dataset.
Scene flow estimation using deep networks is an active area of research @cite_18 . We discuss recent advances in scene flow estimation, generative adversarial networks and their applications in structure prediction problems in separate subsections.
{ "cite_N": [ "@cite_18" ], "mid": [ "2887479417" ], "abstract": [ "Occlusions play an important role in disparity and optical flow estimation, since matching costs are not available in occluded areas and occlusions indicate depth or motion boundaries. Moreover, occlusions are relevant for motion segmentation and scene flow estimation. In this paper, we present an efficient learning-based approach to estimate occlusion areas jointly with disparities or optical flow. The estimated occlusions and motion boundaries clearly improve over the state-of-the-art. Moreover, we present networks with state-of-the-art performance on the popular KITTI benchmark and good generic performance. Making use of the estimated occlusions, we also show improved results on motion segmentation and scene flow estimation." ] }
1904.11163
2941176686
The problem of scene flow estimation in depth videos has been attracting the attention of researchers in robot vision, due to its potential application in various areas of robotics. Conventional scene flow methods are difficult to use in real-life applications due to their high computational overhead. We propose a conditional adversarial network, SceneFlowGAN, for scene flow estimation. The proposed SceneFlowGAN uses loss functions at both ends: the generator and the discriminator. The proposed network is the first attempt to estimate scene flow using generative adversarial networks, and is able to estimate both the optical flow and disparity from the input stereo images simultaneously. The proposed method is evaluated on a large RGB-D scene flow benchmark dataset.
Classical scene flow estimation methods are generally based on image sequences obtained from multiple cameras, stereo pairs, or depth data. Scene flow was first proposed using multi-view images, recovered from optical flow and surface geometry; such methods usually involve some 3D reconstruction procedure. Scene flow from stereo images in a binocular setting often involves joint estimation of optical flow and disparity @cite_7 @cite_2 , though some scene flow estimation methods decouple the stereo and motion estimation @cite_21 . Other work formulated structure and scene flow in a point cloud representation, or enforced depth discontinuities using image segmentation information while computing both the 3D motion and the 3D structure. Most of these methods use a variational framework. More recently, scene flow has been estimated by dense interpolation of sparse matches from stereo images, with variational optimization applied only at a later refinement stage.
{ "cite_N": [ "@cite_21", "@cite_7", "@cite_2" ], "mid": [ "2024336175", "2088692258", "" ], "abstract": [ "Building upon recent developments in optical flow and stereo matching estimation, we propose a variational framework for the estimation of stereoscopic scene flow, i.e., the motion of points in the three-dimensional world from stereo image sequences. The proposed algorithm takes into account image pairs from two consecutive times and computes both depth and a 3D motion vector associated with each point in the image. In contrast to previous works, we partially decouple the depth estimation from the motion estimation, which has many practical advantages. The variational formulation is quite flexible and can handle both sparse or dense disparity maps. The proposed method is very efficient; with the depth map being computed on an FPGA, and the scene flow computed on the GPU, the proposed algorithm runs at frame rates of 20 frames per second on QVGA images (320×240 pixels). Furthermore, we present solutions to two important problems in scene flow estimation: violations of intensity consistency between input images, and the uncertainty measures for the scene flow result.", "This paper presents a method for scene flow estimation from a calibrated stereo image sequence. The scene flow contains the 3-D displacement field of scene points, so that the 2-D optical flow can be seen as a projection of the scene flow onto the images. We propose to recover the scene flow by coupling the optical flow estimation in both cameras with dense stereo matching between the images, thus reducing the number of unknowns per image point. The use of a variational framework allows us to properly handle discontinuities in the observed surfaces and in the 3-D displacement field. Moreover our approach handles occlusions both for the optical flow and the stereo. 
We obtain a partial differential equations system coupling both the optical flow and the stereo, which is numerically solved using an original multi- resolution algorithm. Whereas previous variational methods were estimating the 3-D reconstruction at time t and the scene flow separately, our method jointly estimates both in a single optimization. We present numerical results on synthetic data with ground truth information, and we also compare the accuracy of the scene flow projected in one camera with a state-of-the-art single-camera optical flow computation method. Results are also presented on a real stereo sequence with large motion and stereo discontinuities. Source code and sample data are available for the evaluation of the algorithm.", "" ] }
1904.11163
2941176686
The problem of scene flow estimation in depth videos has been attracting the attention of robot vision researchers due to its potential application in various areas of robotics. Conventional scene flow methods are difficult to use in real-life applications due to their long computational overhead. We propose a conditional adversarial network, SceneFlowGAN, for scene flow estimation. The proposed SceneFlowGAN uses a loss function at two ends: both the generator and the discriminator. The proposed network is the first attempt to estimate scene flow using generative adversarial networks, and is able to estimate both the optical flow and disparity from the input stereo images simultaneously. The proposed method is evaluated on a large RGB-D benchmark scene flow dataset.
For motion estimation based on deep networks, the availability of large datasets was a challenge. Since acquiring motion data for naturalistic scenes is tedious, the synthetic FlyingThings3D dataset was introduced, giving a boost to CNN-based motion estimation methods, which have recently shown considerable promise. The same authors were also the first to apply CNNs to scene flow estimation, proposing SceneFlowNet, which combines the FlowNet @cite_4 and DispNet @cite_22 architectures. FlowNet was subsequently revised in FlowNet2 @cite_3 , which addresses large displacements by stacking different FlowNet architectures and small displacements by using small strides in the convolution layers. SpyNet @cite_13 uses a spatial pyramid of the input data to reduce the number of training parameters.
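SpyNet's parameter savings come from the coarse-to-fine recurrence itself: after warping by the upsampled coarser estimate, each pyramid level only has to predict a small residual flow. A schematic of that loop (an illustrative numpy sketch with nearest-neighbor warping and a pluggable per-level predictor standing in for the trained CNNs; not SpyNet's actual code):

```python
import numpy as np

def upsample2x(flow):
    """Nearest-neighbor 2x upsampling of a flow field; displacement
    values are doubled because pixel offsets scale with resolution."""
    return flow.repeat(2, axis=0).repeat(2, axis=1) * 2.0

def warp(img, flow):
    """Backward-warp `img` by `flow` with nearest-neighbor sampling."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    ys2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[ys2, xs2]

def pyramid_flow(levels, predict_update):
    """Coarse-to-fine flow in the spirit of SpyNet: each level predicts
    only a small residual after warping by the upsampled coarse flow.

    levels: list of (img1, img2) pairs, coarsest first, each H x W.
    predict_update: callable (img1, img2_warped, flow) -> residual flow;
                    stands in for the small trained per-level network.
    """
    h, w = levels[0][0].shape
    flow = np.zeros((h, w, 2))               # zero flow at the top level
    for i, (im1, im2) in enumerate(levels):
        if i > 0:
            flow = upsample2x(flow)          # carry estimate down a level
        im2_warped = warp(im2, flow)
        flow = flow + predict_update(im1, im2_warped, flow)
    return flow
```

Because the residual at every level stays small, each per-level network can be tiny, which is exactly the source of SpyNet's 96% parameter reduction relative to FlowNet.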
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_22", "@cite_3" ], "mid": [ "2548527721", "764651262", "2259424905", "2560474170" ], "abstract": [ "We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions, these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96 smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small (", "Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks CNNs succeeded at. In this paper we construct CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a large synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.", "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. 
Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.", "The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50 . It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet." ] }
1904.11141
2941218295
Object detection has been a challenging task in computer vision. Although significant progress has been made in object detection with deep neural networks, attention mechanisms remain underdeveloped. In this paper, we propose a hybrid attention mechanism for single-stage object detection. First, we present modules of spatial attention, channel attention and aligned attention for single-stage object detection. In particular, stacked dilated convolution layers with symmetrically fixed rates are constructed to learn spatial attention. The channel attention is proposed with cross-level group normalization and a squeeze-and-excitation module. Aligned attention is constructed with organized deformable filters. Second, the three kinds of attention are unified to construct the hybrid attention mechanism. We then embed the hybrid attention into Retina-Net and propose the efficient single-stage HAR-Net for object detection. The attention modules and the proposed HAR-Net are evaluated on the COCO detection dataset. Experiments demonstrate that hybrid attention can significantly improve detection accuracy and that HAR-Net achieves a state-of-the-art 45.8 mAP, outperforming existing single-stage object detectors.
CNNs @cite_44 have proven effective in tackling a variety of visual tasks, including image classification @cite_21 @cite_14 , object detection @cite_34 @cite_19 @cite_31 @cite_45 @cite_18 , and semantic and instance segmentation @cite_36 @cite_11 . Here we present a brief review of object detection methods and attention mechanisms in CNNs.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_36", "@cite_21", "@cite_44", "@cite_19", "@cite_45", "@cite_31", "@cite_34", "@cite_11" ], "mid": [ "2570343428", "2194775991", "1903029394", "2163605009", "2147800946", "", "2963037989", "2565639579", "2102605133", "" ], "abstract": [ "We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. Using a novel, multi-scale training method the same YOLOv2 model can run at varying sizes, offering an easy tradeoff between speed and accuracy. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that don't have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. YOLO9000 predicts detections for more than 9000 different object categories, all in real-time.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. 
We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. 
Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.", "", "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. 
Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.", "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. 
In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "" ] }
1904.11102
2941397484
Fast and efficient path generation is critical for robots operating in complex environments. This motion planning problem is often performed in a robot's actuation or configuration space, where popular pathfinding methods such as A* and RRT* become exponentially more computationally expensive as the dimensionality increases or the spaces become more cluttered and complex. On the other hand, if one were to save the entire set of paths connecting all pairs of locations in the configuration space a priori, one would run out of memory very quickly. In this work, we introduce a novel way of producing fast and optimal motion plans for static environments by using a stepping neural network approach, called OracleNet. OracleNet uses Recurrent Neural Networks to determine end-to-end trajectories in an iterative manner that implicitly generates optimal motion plans with minimal loss in performance in a compact form. The algorithm is straightforward to implement while consistently generating near-optimal paths in a single, iterative, end-to-end roll-out. In practice, OracleNet generally has fixed-time execution regardless of the configuration space complexity while outperforming popular pathfinding algorithms in complex environments and higher dimensions.
The challenge of creating and optimizing motion plans using neural networks has long been a problem of interest, though only recent gains in computational efficiency have made deep neural networks a practical avenue of research. An early attempt linked neural networks to path planning by encoding obstacles into topologically ordered neural maps and using the neural activity gradient to trace the shortest path, with neural activity evolving towards a state corresponding to a minimum of a Lyapunov function @cite_13 . More recently, a method was developed that represents high-dimensional humanoid movements in the low-dimensional latent space of a time-dependent variational autoencoder framework @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_13" ], "mid": [ "2569181534", "2163650400" ], "abstract": [ "Dynamic movement primitives (DMPs) are powerful for the generalization of movements from demonstration. However, high dimensional movements, as they are found in robotics, make finding efficient DMP representations difficult. Typically, they are either used in configuration or Cartesian space, but both approaches do not generalize well. Additionally, limiting DMPs to single demonstrations restricts their generalization capabilities. In this paper, we explore a method that embeds DMPs into the latent space of a time-dependent variational autoencoder framework. Our method enables the representation of high-dimensional movements in a low-dimensional latent space. Experimental results show that our framework has excellent generalization in the latent space, e.g., switching between movements or changing goals. Also, it generates optimal movements when reproducing the movements.", "Abstract A model of a topologically organized neural network of a Hopfield type with nonlinear analog neurons is shown to be very effective for path planning and obstacle avoidance. This deterministic system can rapidly provide a proper path, from any arbitrary start position to any target position, avoiding both static and moving obstacles of arbitrary shape. The model assumes that an (external) input activates a target neuron, corresponding to the target position, and specifies obstacles in the topologically ordered neural map. The path follows from the neural network dynamics and the neural activity gradient in the topologically ordered map. The analytical results are supported by computer simulations to illustrate the performance of the network." ] }
1904.11102
2941397484
Fast and efficient path generation is critical for robots operating in complex environments. This motion planning problem is often performed in a robot's actuation or configuration space, where popular pathfinding methods such as A* and RRT* become exponentially more computationally expensive as the dimensionality increases or the spaces become more cluttered and complex. On the other hand, if one were to save the entire set of paths connecting all pairs of locations in the configuration space a priori, one would run out of memory very quickly. In this work, we introduce a novel way of producing fast and optimal motion plans for static environments by using a stepping neural network approach, called OracleNet. OracleNet uses Recurrent Neural Networks to determine end-to-end trajectories in an iterative manner that implicitly generates optimal motion plans with minimal loss in performance in a compact form. The algorithm is straightforward to implement while consistently generating near-optimal paths in a single, iterative, end-to-end roll-out. In practice, OracleNet generally has fixed-time execution regardless of the configuration space complexity while outperforming popular pathfinding algorithms in complex environments and higher dimensions.
Reinforcement Learning (RL) approaches have also been proposed for motion planning applications @cite_9 @cite_14 . Recently, a fully differentiable approximation of the value-iteration algorithm was introduced that is capable of predicting outcomes that involve planning-based reasoning @cite_23 . However, their use of Convolutional Neural Networks to represent this approximation limits their motion planning to only 2D grids, while generalized motion planning algorithms can be extended to arbitrary dimensions. RL assumes that the problem has the structure of a Markov Decision Process where the agent attempts to solve the problem through a trial-and-error based interaction with the real environment. On the other hand, classical motion planning algorithms take in a full state information map as a part of the planning problem and output a solution without a single interaction with the real environment. The algorithm presented in this work leverages the assumptions used in the latter and thus is different from RL-based approaches.
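The "differentiable approximation of the value-iteration algorithm" in @cite_23 approximates the classical tabular recurrence, whose max over shifted value maps is what a VIN replaces with a convolution followed by a channel-wise max. For reference, a plain (non-differentiable) version on a 2D grid (illustrative code; the unit step cost and four-connected moves are my own assumptions):

```python
import numpy as np

def value_iteration(grid, goal, gamma=0.95, iters=100):
    """Tabular value iteration on a 4-connected 2D grid world.
    `grid` is 0 for free cells and 1 for obstacles; `goal` is (row, col).
    Each move costs 1 (an illustrative assumption)."""
    h, w = grid.shape
    v = np.zeros((h, w))
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(iters):
        nv = np.full((h, w), -np.inf)
        padded = np.pad(v, 1, constant_values=-np.inf)
        for dy, dx in moves:
            # value map shifted by one move; out-of-bounds cells are -inf
            shifted = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            nv = np.maximum(nv, -1.0 + gamma * shifted)
        nv[goal] = 0.0              # absorbing goal state
        nv[grid == 1] = -np.inf     # obstacles are unreachable
        v = nv
    return v
```

A VIN learns the reward and transition maps that feed this recurrence end-to-end, which is also why the convolutional formulation ties it to 2D grids, as noted above.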
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_23" ], "mid": [ "2104733512", "", "2258731934" ], "abstract": [ "Direct policy search can effectively scale to high-dimensional systems, but complex policies with hundreds of parameters often present a challenge for such methods, requiring numerous samples and often falling into poor local optima. We present a guided policy search algorithm that uses trajectory optimization to direct policy learning and avoid poor local optima. We show how differential dynamic programming can be used to generate suitable guiding samples, and describe a regularized importance sampled policy optimization that incorporates these samples into the policy search. We evaluate the method by learning neural network controllers for planar swimming, hopping, and walking, as well as simulated 3D humanoid running.", "", "We introduce the value iteration network (VIN): a fully differentiable neural network with a planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains." ] }
1904.10944
2967509406
This work studies the problem of shape reconstruction and object localization using a vision-based tactile sensor, GelSlim. The main contributions are the recovery of local shapes from contact, an approach to reconstruct the tactile shape of objects from tactile imprints, and an accurate method for object localization of previously reconstructed objects. The algorithms can be applied to a large variety of 3D objects and provide accurate tactile feedback for in-hand manipulation. Results show that by exploiting the dense tactile information we can reconstruct the shape of objects with high accuracy and perform on-line object identification and localization, opening the door to reactive manipulation guided by tactile sensing. We provide videos and supplemental information on the project's website web.mit.edu mcube research tactile localization.html.
In this work we use GelSlim @cite_0 , a tactile sensor based on GelSight @cite_5 that provides high-resolution tactile imprints in the form of images. The original GelSight sensor has been used for localization of small objects @cite_4 , to complement a vision-based tracker @cite_3 , and recently to recover 3D shapes using vision and prior shape models as well. However, its design is too bulky for practical use in complex manipulation tasks. Instead, GelSlim is integrated into a slim finger that facilitates manipulation. Leveraging GelSlim's high resolution, we show that our approach can reconstruct tactile maps of objects and use them efficiently to identify objects and recover their locations.
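Once a tactile map of an object has been reconstructed, localization from a new imprint reduces to registering the imprint against the stored map. As a deliberately naive sketch of that idea (brute-force sum-of-squared-differences over translations only; a real pipeline also handles rotation and uses far more efficient registration, so treat this purely as an illustration):

```python
import numpy as np

def localize_imprint(tactile_map, imprint):
    """Locate a small tactile imprint inside a larger reconstructed
    tactile (height) map by exhaustive SSD template matching.
    Returns the best (row, col) offset and its matching error."""
    H, W = tactile_map.shape
    h, w = imprint.shape
    best_err, best_pos = np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = tactile_map[y:y + h, x:x + w]
            err = np.sum((patch - imprint) ** 2)
            if err < best_err:
                best_err, best_pos = err, (y, x)
    return best_pos, best_err
```

The same matching error, compared across several stored object maps, also gives a crude form of the object identification mentioned above: the map with the lowest residual is the best candidate identity.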
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_4", "@cite_3" ], "mid": [ "2076776712", "2607241646", "2421020186", "2737380187" ], "abstract": [ "In this work, we propose to reconstruct a complete three-dimensional (3-D) model of an unknown object by fusion of visual and tactile information while the object is grasped. Assuming the object is symmetric, a first hypothesis of its complete 3-D shape is generated. A grasp is executed on the object with a robotic manipulator equipped with tactile sensors. Given the detected contacts between the fingers and the object, the initial full object model including the symmetry parameters can be refined. This refined model will then allow the planning of more complex manipulation tasks. The main contribution of this work is an optimal estimation approach for the fusion of visual and tactile data applying the constraint of object symmetry. The fusion is formulated as a state estimation problem and solved with an iterated extended Kalman filter. The approach is validated experimentally using both artificial and real data from two different robotic platforms.", "Hardness is among the most important attributes of an object that humans learn about through touch. However, approaches for robots to estimate hardness are limited, due to the lack of information provided by current tactile sensors. In this work, we address these limitations by introducing a novel method for hardness estimation, based on the GelSight tactile sensor, and the method does not require accurate control of contact conditions or the shape of objects. A GelSight has a soft contact interface, and provides high resolution tactile images of contact geometry, as well as contact force and slip conditions. In this paper, we try to use the sensor to measure hardness of objects with multiple shapes, under a loosely controlled contact condition. The contact is made manually or by a robot hand, while the force and trajectory are unknown and uneven. 
We analyze the data using a deep convolutional (and recurrent) neural network. Experiments show that the neural net model can estimate the hardness of objects with different shapes and hardness ranging from 8 to 87 in Shore 00 scale.", "Robust manipulation and insertion of small parts can be challenging because of the small tolerances typically involved. The key to robust control of these kinds of manipulation interactions is accurate tracking and control of the parts involved. Typically, this is accomplished using visual servoing or force-based control. However, these approaches have drawbacks. Instead, we propose a new approach that uses tactile sensing to accurately localize the pose of a part grasped in the robot hand. Using a feature-based matching technique in conjunction with a newly developed tactile sensing technology known as GelSight that has much higher resolution than competing methods, we synthesize high-resolution height maps of object surfaces. As a result of these high-resolution tactile maps, we are able to localize small parts held in a robot hand very accurately. We quantify localization accuracy in benchtop experiments and experimentally demonstrate the practicality of the approach in the context of a small parts insertion problem.", "We present an object-tracking framework that fuses point cloud information from an RGB-D camera with tactile information from a GelSight contact sensor. GelSight can be treated as a source of dense local geometric information, which we incorporate directly into a conventional point-cloud-based articulated object tracker based on signed-distance functions. Our implementation runs at 12 Hz using an online depth reconstruction algorithm for GelSight and a modified second-order update for the tracking algorithm. 
We present data from hardware experiments demonstrating that the addition of contact-based geometric information significantly improves the pose accuracy during contact, and provides robustness to occlusions of small objects by the robot's end effector." ] }
1904.11041
2942420306
Person re-identification has become an increasingly important task due to its wide applications. In practice, person re-identification remains challenging due to variations in person pose, lighting, occlusion, misalignment, background clutter, etc. In this paper, we propose a multi-scale body-part mask guided attention network (MMGA), which jointly learns whole-body and body-part attention to help extract global and local features simultaneously. In MMGA, body-part masks are used to guide the training of the corresponding attention. Experiments show that our proposed method can reduce the negative influence of pose variation, misalignment and background clutter. Our method achieves rank-1/mAP of 95.0/87.2 on the Market1501 dataset and 89.5/78.1 on the DukeMTMC-reID dataset, outperforming current state-of-the-art methods.
Human body masks obtained from image segmentation models can be used to handle the background clutter problem. With deep learning based image segmentation algorithms, including Mask RCNN @cite_0 , JPPNet @cite_1 , Dense Pose @cite_19 , etc., human body masks can be extracted well and the background region can be almost entirely removed. However, only a few works @cite_25 @cite_28 @cite_11 introduce semantic segmentation into the re-ID task; this scarcity is due to the large computational complexity of semantic segmentation for human masks. In our work, we utilize the mask only to guide the training of our attention model, so the mask is needed only in the training phase. Once the metric is learned, the mask is no longer needed to extract features, which is time-saving compared with other mask-based re-ID algorithms.
{ "cite_N": [ "@cite_28", "@cite_1", "@cite_0", "@cite_19", "@cite_25", "@cite_11" ], "mid": [ "2884030607", "2963978393", "", "2964145484", "2798775284", "2963805953" ], "abstract": [ "In this work, we tackle the problem of person search, which is a challenging task consisted of pedestrian detection and person re-identification (re-ID). Instead of sharing representations in a single joint model, we find that separating detector and re-ID feature extraction yields better performance. In order to extract more representative features for each identity, we propose a simple yet effective re-ID method, which models foreground person and original image patches individually, and obtains enriched representations from two separate CNN streams. On the standard person search benchmark datasets, we achieve mAP of (83.0 ) and (32.6 ) respectively for CUHK-SYSU and PRW, surpassing the state of the art by a large margin (more than 5 pp).", "Human parsing and pose estimation have recently received considerable interest due to their substantial application potentials. However, the existing datasets have limited numbers of images and annotations and lack a variety of human appearances and coverage of challenging cases in unconstrained environments. In this paper, we introduce a new benchmark named “Look into Person (LIP)” that provides a significant advancement in terms of scalability, diversity, and difficulty, which are crucial for future developments in human-centric analysis. This comprehensive dataset contains over 50,000 elaborately annotated images with 19 semantic part labels and 16 body joints, which are captured from a broad range of viewpoints, occlusions, and background complexities. Using these rich annotations, we perform detailed analyses of the leading human parsing and pose estimation approaches, thereby obtaining insights into the successes and failures of these methods. 
To further explore and take advantage of the semantic correlation of these two tasks, we propose a novel joint human parsing and pose estimation network to explore efficient context modeling, which can simultaneously predict parsing and pose with extremely high quality. Furthermore, we simplify the network to solve human parsing by exploring a novel self-supervised structure-sensitive learning approach, which imposes human pose structures into the parsing results without resorting to extra supervision. The datasets, code and models are available at http://www.sysu-hcp.net/lip .", "", "In this paper we propose to learn a mapping from image pixels into a dense template grid through a fully convolutional network. We formulate this task as a regression problem and train our network by leveraging upon manually annotated facial landmarks in-the-wild. We use such landmarks to establish a dense correspondence field between a three-dimensional object template and the input image, which then serves as the ground-truth for training our regression system. We show that we can combine ideas from semantic segmentation with regression networks, yielding a highly-accurate quantized regression architecture. Our system, called DenseReg, allows us to estimate dense image-to-template correspondences in a fully convolutional manner. As such our network can provide useful correspondence information as a stand-alone system, while when used as an initialization for Statistical Deformable Models we obtain landmark localization results that largely outperform the current state-of-the-art on the challenging 300W benchmark. We thoroughly evaluate our method on a host of facial analysis tasks, and demonstrate its use for other correspondence estimation tasks, such as the human body and the human ear. DenseReg code is made available at http://alpguler.com/DenseReg.html along with supplementary materials.", "Person Re-identification (ReID) is an important yet challenging task in computer vision.
Due to the diverse background clutters, variations on viewpoints and body poses, it is far from solved. How to extract discriminative and robust features invariant to background clutters is the core problem. In this paper, we first introduce the binary segmentation masks to construct synthetic RGB-Mask pairs as inputs, then we design a mask-guided contrastive attention model (MGCAM) to learn features separately from the body and background regions. Moreover, we propose a novel region-level triplet loss to restrain the features learnt from different regions, i.e., pulling the features from the full image and body region close, whereas pushing the features from backgrounds away. We may be the first one to successfully introduce the binary mask into person ReID task and the first one to propose region-level contrastive learning. We evaluate the proposed method on three public datasets, including MARS, Market-1501 and CUHK03. Extensive experimental results show that the proposed method is effective and achieves the state-of-the-art results. Mask and code will be released upon request.", "Person re-identification is a challenging task mainly due to factors such as background clutter, pose, illumination and camera point of view variations. These elements hinder the process of extracting robust and discriminative representations, hence preventing different identities from being successfully distinguished. To improve the representation learning, usually local features from human body parts are extracted. However, the common practice for such a process has been based on bounding box part detection. In this paper, we propose to adopt human semantic parsing which, due to its pixel-level accuracy and capability of modeling arbitrary contours, is naturally a better alternative. Our proposed SPReID integrates human semantic parsing in person re-identification and not only considerably outperforms its counter baseline, but achieves state-of-the-art performance. 
We also show that, by employing a simple yet effective training strategy, standard popular deep convolutional architectures such as Inception-V3 and ResNet-152, with no modification, while operating solely on full image, can dramatically outperform current state-of-the-art. Our proposed methods improve state-of-the-art person re-identification on: Market-1501 [48] by 17 in mAP and 6 in rank-1, CUHK03 [24] by 4 in rank-1 and DukeMTMC-reID [50] by 24 in mAP and 10 in rank-1." ] }
1904.11171
2941087613
Deep hashing has recently received attention in cross-modal retrieval for its impressive advantages. However, existing hashing methods for cross-modal retrieval cannot fully capture the heterogeneous multi-modal correlation or exploit the semantic information. In this paper, we propose a novel approach, FDCH. First, FDCH learns unified binary codes through a fusion hash network with paired samples as input, which effectively enhances the modeling of the correlation of heterogeneous multi-modal data. These high-quality unified hash codes then supervise the training of the modality-specific hash networks for encoding out-of-sample queries. Meanwhile, both pair-wise similarity information and classification information are embedded in the hash networks under a one-stream framework, which simultaneously preserves cross-modal similarity and keeps semantic consistency. Experimental results on two benchmark datasets demonstrate the state-of-the-art performance of FDCH.
Cross-modal hashing (CMH) has become an active research topic in the literature due to its high efficiency and low storage cost in cross-modal retrieval @cite_10 @cite_15 . Various techniques have been proposed for CMH. They can be roughly divided into two categories: unsupervised methods and supervised methods. Unsupervised hashing methods learn the hash codes and functions using only paired unlabeled training samples. Inter-Media Hashing (IMH) @cite_0 finds a common Hamming space for learning hash functions, where inter-modality and intra-modality consistency are exploited to produce hash codes. Linear Cross-Modal Hashing (LCMH) @cite_19 learns the common space and hash functions by representing training samples from all modalities with lower-dimensional approximations; it preserves the similarity of multimedia documents while reducing time and space complexity. Collective Matrix Factorization Hashing (CMFH) @cite_14 learns hash codes with collective matrix factorization during the offline phase. Latent Semantic Sparse Hashing (LSSH) @cite_12 learns latent semantic features through sparse coding and matrix factorization, and then maps them to a joint abstraction space to obtain the binary hash codes.
{ "cite_N": [ "@cite_14", "@cite_0", "@cite_19", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "2512032049", "2049993534", "2159373756", "2197084977", "2759194679", "2086958058" ], "abstract": [ "By transforming data into binary representation, i.e., Hashing, we can perform high-speed search with low storage cost, and thus, Hashing has collected increasing research interest in the recent years. Recently, how to generate Hashcode for multimodal data (e.g., images with textual tags, documents with photos, and so on) for large-scale cross-modality search (e.g., searching semantically related images in database for a document query) is an important research issue because of the fast growth of multimodal data in the Web. To address this issue, a novel framework for multimodal Hashing is proposed, termed as Collective Matrix Factorization Hashing (CMFH). The key idea of CMFH is to learn unified Hashcodes for different modalities of one multimodal instance in the shared latent semantic space in which different modalities can be effectively connected. Therefore, accurate cross-modality search is supported. Based on the general framework, we extend it in the unsupervised scenario where it tries to preserve the Euclidean structure, and in the supervised scenario where it fully exploits the label information of data. The corresponding theoretical analysis and the optimization algorithms are given. We conducted comprehensive experiments on three benchmark data sets for cross-modality search. The experimental results demonstrate that CMFH can significantly outperform several state-of-the-art cross-modality Hashing methods, which validates the effectiveness of the proposed CMFH.", "In this paper, we present a new multimedia retrieval paradigm to innovate large-scale search of heterogenous multimedia data. 
It is able to return results of different media types from heterogeneous data sources, e.g., using a query image to retrieve relevant text documents or images from different data sources. This utilizes the widely available data from different sources and caters for the current users' demand of receiving a result list simultaneously containing multiple types of data to obtain a comprehensive understanding of the query's results. To enable large-scale inter-media retrieval, we propose a novel inter-media hashing (IMH) model to explore the correlations among multiple media types from different data sources and tackle the scalability issue. To this end, multimedia data from heterogeneous data sources are transformed into a common Hamming space, in which fast search can be easily implemented by XOR and bit-count operations. Furthermore, we integrate a linear regression model to learn hashing functions so that the hash codes for new data points can be efficiently generated. Experiments conducted on real-world large-scale multimedia datasets demonstrate the superiority of our proposed method compared with state-of-the-art techniques.", "Most existing cross-modal hashing methods suffer from the scalability issue in the training phase. In this paper, we propose a novel cross-modal hashing approach with a linear time complexity to the training data size, to enable scalable indexing for multimedia search across multiple modals. Taking both the intra-similarity in each modal and the inter-similarity across different modals into consideration, the proposed approach aims at effectively learning hash functions from large-scale training datasets. More specifically, for each modal, we first partition the training data into @math clusters and then represent each training data point with its distances to @math centroids of the clusters. 
Interestingly, such a k-dimensional data representation can reduce the time complexity of the training phase from traditional O(n^2) or higher to O(n), where @math is the training data size, leading to practical learning on large-scale datasets. We further prove that this new representation preserves the intra-similarity in each modal. To preserve the inter-similarity among data points across different modals, we transform the derived data representations into a common binary subspace in which binary codes from all the modals are \"consistent\" and comparable. The transformation simultaneously outputs the hash functions for all modals, which are used to convert unseen data into binary codes. Given a query of one modal, it is first mapped into the binary codes using the modal's hash functions, followed by matching the database binary codes of any other modals. Experimental results on two benchmark datasets confirm the scalability and the effectiveness of the proposed approach in comparison with the state of the art.
In both objectives, the binary codes and coding functions are simultaneously learned without continuous relaxations, which is the key to achieving high-quality binary codes. We evaluate the proposed method, dubbed Asymmetric Inner-product Binary Coding (AIBC), relying on the two objectives on several large-scale image datasets. Both of them are superior to the state-of-the-art binary coding and hashing methods in performing MIPS tasks.", "In this paper, we propose a cross-modal deep variational hashing (CMDVH) method for cross-modality multimedia retrieval. Unlike existing cross-modal hashing methods which learn a single pair of projections to map each example as a binary vector, we design a couple of deep neural network to learn non-linear transformations from image-text input pairs, so that unified binary codes can be obtained. We then design the modality-specific neural networks in a probabilistic manner where we model a latent variable as close as possible from the inferred binary codes, which is approximated by a posterior distribution regularized by a known prior. Experimental results on three benchmark datasets show the efficacy of the proposed approach.", "Similarity search methods based on hashing for effective and efficient cross-modal retrieval on large-scale multimedia databases with massive text and images have attracted considerable attention. The core problem of cross-modal hashing is how to effectively construct correlation between multi-modal representations which are heterogeneous intrinsically in the process of hash function learning. Analogous to Canonical Correlation Analysis (CCA), most existing cross-modal hash methods embed the heterogeneous data into a joint abstraction space by linear projections. However, these methods fail to bridge the semantic gap more effectively, and capture high-level latent semantic information which has been proved that it can lead to better performance for image retrieval. 
To address these challenges, in this paper, we propose a novel Latent Semantic Sparse Hashing (LSSH) to perform cross-modal similarity search by employing Sparse Coding and Matrix Factorization. In particular, LSSH uses Sparse Coding to capture the salient structures of images, and Matrix Factorization to learn the latent concepts from text. Then the learned latent semantic features are mapped to a joint abstraction space. Moreover, an iterative strategy is applied to derive optimal solutions efficiently, and it helps LSSH to explore the correlation between multi-modal representations efficiently and automatically. Finally, the unified hashcodes are generated through the high level abstraction space by quantization. Extensive experiments on three different datasets highlight the advantage of our method under cross-modal scenarios and show that LSSH significantly outperforms several state-of-the-art methods." ] }
1904.11171
2941087613
Deep hashing has recently received attention in cross-modal retrieval for its impressive advantages. However, existing hashing methods for cross-modal retrieval cannot fully capture the heterogeneous multi-modal correlation or exploit the semantic information. In this paper, we propose a novel approach, FDCH. First, FDCH learns unified binary codes through a fusion hash network with paired samples as input, which effectively enhances the modeling of the correlation of heterogeneous multi-modal data. These high-quality unified hash codes then supervise the training of the modality-specific hash networks for encoding out-of-sample queries. Meanwhile, both pair-wise similarity information and classification information are embedded in the hash networks under a one-stream framework, which simultaneously preserves cross-modal similarity and keeps semantic consistency. Experimental results on two benchmark datasets demonstrate the state-of-the-art performance of FDCH.
Supervised hashing methods achieve better results by exploiting semantic information to enhance the correlation across different modalities. Semantic Correlation Maximization (SCM) @cite_11 maximizes the correlation between different modalities by seamlessly integrating semantic labels into the hashing learning procedure for large-scale data. Semantics-Preserving Hashing (SePH) @cite_6 learns semantics-preserving hash codes by minimizing the KL-divergence between the probability distribution derived in the Hamming space and that in the semantic space. Discrete Cross-Modal Hashing (DCH) @cite_20 jointly learns the binary codes, the linear classifiers and the hash functions.
{ "cite_N": [ "@cite_20", "@cite_6", "@cite_11" ], "mid": [ "2591669147", "2526152041", "2203543769" ], "abstract": [ "Hashing based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to learn compact binary codes that construct the underlying correlations between heterogeneous features from different modalities. A majority of recent approaches aim at learning hash functions to preserve the pairwise similarities defined by given class labels. However, these methods fail to explicitly explore the discriminative property of class labels during hash function learning. In addition, they usually discard the discrete constraints imposed on the to-be-learned binary codes, and compromise to solve a relaxed problem with quantization to obtain the approximate binary solution. Therefore, the binary codes generated by these methods are suboptimal and less discriminative to different classes. To overcome these drawbacks, we propose a novel cross-modal hashing method, termed discrete cross-modal hashing (DCH), which directly learns discriminative binary codes while retaining the discrete constraints. Specifically, DCH learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective discrete optimization algorithm is developed for DCH to jointly learn the modality-specific hash function and the unified binary codes. Extensive experiments on three benchmark data sets highlight the superiority of DCH under various cross-modal scenarios and show its state-of-the-art performance.", "For efficiently retrieving nearest neighbors from large-scale multiview data, recently hashing methods are widely investigated, which can substantially improve query speeds. 
In this paper, we propose an effective probability-based semantics-preserving hashing (SePH) method to tackle the problem of cross-view retrieval. Considering the semantic consistency between views, SePH generates one unified hash code for all observed views of any instance. For training, SePH first transforms the given semantic affinities of training data into a probability distribution, and aims to approximate it with another one in Hamming space, via minimizing their Kullback–Leibler divergence. Specifically, the latter probability distribution is derived from all pair-wise Hamming distances between to-be-learnt hash codes of the training data. Then with learnt hash codes, any kind of predictive models like linear ridge regression, logistic regression, or kernel logistic regression, can be learnt as hash functions in each view for projecting the corresponding view-specific features into hash codes. As for out-of-sample extension, given any unseen instance, the learnt hash functions in its observed views can predict view-specific hash codes. Then by deriving or estimating the corresponding output probabilities with respect to the predicted view-specific hash codes, a novel probabilistic approach is further proposed to utilize them for determining a unified hash code. To evaluate the proposed SePH, we conduct extensive experiments on diverse benchmark datasets, and the experimental results demonstrate that SePH is reasonable and effective.", "Due to its low storage cost and fast query speed, hashing has been widely adopted for similarity search in multimedia data. In particular, more and more attention has been paid to multimodal hashing for search in multimedia data with multiple modalities, such as images with tags. Typically, supervised information of semantic labels is also available for the data points in many real applications.
Hence, many supervised multimodal hashing (SMH) methods have been proposed to utilize such semantic labels to further improve the search accuracy. However, the training time complexity of most existing SMH methods is too high, which makes them unscalable to large-scale datasets. In this paper, a novel SMH method, called semantic correlation maximization (SCM), is proposed to seamlessly integrate semantic labels into the hashing learning procedure for large-scale data modeling. Experimental results on two real-world datasets show that SCM can significantly outperform the state-of-the-art SMH methods, in terms of both accuracy and scalability." ] }
1904.10961
2941872438
A novel method of contrast enhancement is proposed for underexposed images in which heavy noise is hidden. Under low-light conditions, images taken by digital cameras have low contrast in dark or bright regions, due to the limited dynamic range of imaging sensors. For these reasons, various contrast enhancement methods have been proposed. These methods, however, have two problems: (1) loss of details in bright regions due to over-enhancement of contrast, and (2) amplification of noise in dark regions, because conventional enhancement methods do not consider the noise included in images. The proposed method aims to overcome these problems. In the proposed method, a shadow-up function is applied to adaptive gamma correction with weighting distribution, and a denoising filter is also used to avoid amplifying noise in dark regions. As a result, the proposed method allows us not only to enhance the contrast of dark regions, but also to avoid amplifying noise, even in strong-noise environments.
Retinex theory is based on the relation @math , where the original image S is the product of the illumination L and the reflectance R. When only the information of a single surround is used for the conversion of each pixel, the approach is called Single-Scale Retinex (SSR) @cite_5 . In SSR, unnatural halo artifacts appear at the boundaries of regions with large gradient values. To solve this problem, Multi-Scale Retinex (MSR) @cite_12 was proposed. However, since a logarithmic transformation is used, MSR still suffers from unstable results due to the influence of noise in dark areas. Simultaneous reflectance and illumination estimation (SRIE) @cite_7 and the weighted variation model (WVM) @cite_9 are also Retinex-based methods. These methods perform well on images without noise, but generate artifacts in strong-noise environments. Therefore, many outstanding methods @cite_19 @cite_11 @cite_20 have been proposed to improve image quality while preserving more details.
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_19", "@cite_5", "@cite_20", "@cite_12", "@cite_11" ], "mid": [ "1640745651", "2468596194", "2150721269", "1759950926", "2963228457", "2147421915", "2566376500" ], "abstract": [ "In this paper, a new probabilistic method for image enhancement is presented based on a simultaneous estimation of illumination and reflectance in the linear domain. We show that the linear domain model can better represent prior information for better estimation of reflectance and illumination than the logarithmic domain. A maximum a posteriori (MAP) formulation is employed with priors of both illumination and reflectance. To estimate illumination and reflectance effectively, an alternating direction method of multipliers is adopted to solve the MAP problem. The experimental results show the satisfactory performance of the proposed method to obtain reflectance and illumination with visually pleasing enhanced results and a promising convergence rate. Compared with other testing methods, the proposed method yields comparable or better results on both subjective and objective assessments.", "We propose a weighted variational model to estimate both the reflectance and the illumination from an observed image. We show that, though it is widely adopted for ease of modeling, the log-transformed image for this task is not ideal. Based on the previous investigation of the logarithmic transformation, a new weighted variational model is proposed for better prior representation, which is imposed in the regularization terms. Different from conventional variational models, the proposed model can preserve the estimated reflectance with more details. Moreover, the proposed model can suppress noise to some extent. An alternating minimization scheme is adopted to solve the proposed model. Experimental results demonstrate the effectiveness of the proposed model with its algorithm. 
Compared with other variational methods, the proposed method yields comparable or better results on both subjective and objective assessments.", "Direct observation and recorded color images of the same scenes are often strikingly different because human visual perception computes the conscious representation with vivid color and detail in shadows, and with resistance to spectral shifts in the scene illuminant. A computation for color images that approaches fidelity to scene observation must combine dynamic range compression, color consistency-a computational analog for human vision color constancy-and color and lightness tonal rendition. In this paper, we extend a previously designed single-scale center surround retinex to a multiscale version that achieves simultaneous dynamic range compression color consistency lightness rendition. This extension fails to produce good color rendition for a class of images that contain violations of the gray-world assumption implicit to the theoretical foundation of the retinex. Therefore, we define a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency. Extensive testing of the multiscale retinex with color restoration on several test scenes and over a hundred images did not reveal any pathological behaviour.", "The Retinex is an image enhancement algorithm that improves the brightness, contrast and sharpness of an image. It performs a non-linear spatial spectral transform that provides simultaneous dynamic range compression and color constancy. It has been used for a wide variety of applications ranging from aviation safety to general purpose photography. Many potential applications require the use of Retinex processing at video frame rates. This is difficult to achieve with general purpose processors because the algorithm contains a large number of complex computations and data transfers. 
In addition, many of these applications also constrain the potential architectures to embedded processors to save power, weight and cost. Thus we have focused on digital signal processors (DSPs) and field programmable gate arrays (FPGAs) as potential solutions for real-time Retinex processing. In previous efforts we attained a 21 (full) frame per second (fps) processing rate for the single-scale monochromatic Retinex with a TMS320C6711 DSP operating at 150 MHz. This was achieved after several significant code improvements and optimizations. Since then we have migrated our design to the slightly more powerful TMS320C6713 DSP and the fixed point TMS320DM642 DSP. In this paper we briefly discuss the Retinex algorithm, the performance of the algorithm executing on the TMS320C6713 and the TMS320DM642, and compare the results with the TMS320C6711.", "Many low-light enhancement methods ignore intensive noise in original images. As a result, they often simultaneously enhance the noise as well. Furthermore, extra denoising procedures adopted by most methods ruin the details. In this paper, we introduce a joint low-light enhancement and denoising strategy, aimed at obtaining well-enhanced low-light images while getting rid of the inherent noise issue simultaneously. The proposed method performs Retinex model based decomposition in a successive sequence, which sequentially estimates a piece-wise smoothed illumination and a noise-suppressed reflectance. After getting the illumination and reflectance map, we adjust the illumination layer and generate our enhancement result. In this noise-suppressed sequential decomposition process we enforce the spatial smoothness on each component and skillfully make use of weight matrices to suppress the noise and improve the contrast. Results of extensive experiments demonstrate the effectiveness and practicability of our method. 
It performs well for a wide variety of images, and achieves better or comparable quality compared with the state-of-the-art methods.", "The retinex is a human perception-based image processing algorithm which provides color constancy and dynamic range compression. We have previously reported on a single-scale retinex (SSR) and shown that it can either achieve color lightness rendition or dynamic range compression, but not both simultaneously. We now present a multi-scale retinex (MSR) which overcomes this limitation for most scenes. Both color rendition and dynamic range compression are successfully accomplished except for some \"pathological\" scenes that have very strong spectral characteristics in a single band.", "When one captures images in low-light conditions, the images often suffer from low visibility. Besides degrading the visual aesthetics of images, this poor quality may also significantly degenerate the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a simple yet effective low-light image enhancement (LIME) method. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in R, G, and B channels. Furthermore, we refine the initial illumination map by imposing a structure prior on it, as the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging low-light images are present to reveal the efficacy of our LIME and show its superiority over several state-of-the-arts in terms of enhancement quality and efficiency." ] }
1904.10961
2941872438
A novel method of contrast enhancement is proposed for underexposed images in which heavy noise is hidden. Under low light conditions, images taken by digital cameras have low contrast in dark or bright regions, due to the limited dynamic range of imaging sensors. For these reasons, various contrast enhancement methods have been proposed. These methods, however, have two problems: (1) loss of detail in bright regions due to over-enhancement of contrast, and (2) amplification of noise in dark regions, because conventional enhancement methods do not consider the noise included in images. The proposed method aims to overcome both problems. In the proposed method, a shadow-up function is applied to adaptive gamma correction with weighting distribution, and a denoising filter is also used to avoid amplifying noise in dark regions. As a result, the proposed method not only enhances the contrast of dark regions but also avoids amplifying noise, even under strong noise environments.
Histogram equalization (HE) @cite_14 is one of the most popular algorithms for contrast enhancement @cite_8 , and various extensions of HE have been proposed @cite_4 @cite_13 @cite_17 @cite_1 @cite_10 @cite_6 . Contrast enhancement using adaptive gamma correction with weighting distribution (AGCWD) @cite_1 aims to prevent over-enhancement and under-enhancement by combining adaptive gamma correction with a modified probability distribution. However, these histogram-based methods still cause over-enhancement and a loss of contrast in bright areas, and they amplify noise hidden in dark regions. To address this, a number of histogram-based contrast enhancement methods use a shrinkage function to prevent noise amplification. Low light image enhancement based on two-step noise suppression (LLIE) @cite_15 uses both a noise level function (NLF) and a just noticeable difference (JND) model for contrast enhancement with noise suppression. Although this method can reduce some noise, it does not preserve details in bright areas, as with other histogram-based methods. Another way to enhance images is multi-exposure image fusion, which combines photos taken with different exposures @cite_18 @cite_21 @cite_16 @cite_2 .
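The AGCWD transform referenced above derives a per-intensity gamma from a weighted histogram. The following is a minimal numpy sketch of that idea for 8-bit grayscale input; the function name and the `alpha` default are illustrative choices, not taken from the cited paper:

```python
import numpy as np

def agcwd_mapping(image, alpha=0.5):
    """Adaptive gamma correction with weighting distribution (sketch).

    `alpha` (illustrative default) controls how strongly the histogram
    is compressed before the per-intensity gamma is derived.
    """
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    pdf = hist / hist.sum()
    # Weighting distribution: compress the spread of the pdf.
    pdf_min, pdf_max = pdf.min(), pdf.max()
    pdf_w = pdf_max * ((pdf - pdf_min) / (pdf_max - pdf_min)) ** alpha
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()
    # Per-intensity adaptive gamma: dark bins get a smaller gamma
    # (stronger brightening), bright bins get a gamma near 1.
    levels = np.arange(256)
    mapped = 255.0 * (levels / 255.0) ** (1.0 - cdf_w)
    return mapped[image].astype(np.uint8)
```

Because the adaptive gamma never exceeds 1, the mapping can only brighten intensities, which is exactly the over-enhancement risk the text describes for dark images.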
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_8", "@cite_10", "@cite_21", "@cite_1", "@cite_6", "@cite_2", "@cite_15", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2949809359", "2154549868", "2756339420", "2097633527", "", "", "2016622085", "", "", "2593264379", "", "2139375301", "2012554041" ], "abstract": [ "This paper proposes a novel multi-exposure image fusion method based on exposure compensation. Multi-exposure image fusion is a method to produce images without color saturation regions, by using photos with different exposures. However, in conventional works, it is unclear how to determine appropriate exposure values, and moreover, it is difficult to set appropriate exposure values at the time of photographing due to time constraints. In the proposed method, the luminance of the input multi-exposure images is adjusted on the basis of the relationship between exposure values and pixel values, where the relationship is obtained by assuming that a digital camera has a linear response function. The use of a local contrast enhancement method is also considered to improve input multi-exposure images. The compensated images are finally combined by one of existing multi-exposure image fusion methods. In some experiments, the effectiveness of the proposed method are evaluated in terms of the tone mapped image quality index, statistical naturalness, and discrete entropy, by comparing the proposed one with conventional ones.", "An experiment intended to evaluate the clinical application of contrast-limited adaptive histogram equalization (CLAHE) to chest computer tomography (CT) images is reported. A machine especially designed to compute CLAHE in a few seconds is discussed. It is shown that CLAHE can be computed in 4 s after 5-s loading time using the specially designed parallel engine made from a few thousand dollars worth of off-the-shelf components. 
The processing appears to be useful for a wide range of medical images, but the limitations of observer calibration make it impossible to demonstrate such usefulness by agreement experiments.", "Among image enhancement methods, histogram equalization (HE) has received the most attention because of its intuitive implementation quality, high efficiency, and the monotonicity of its intensity mapping function. However, HE is indiscriminate and overemphasizes the contrast around intensities with large pixel populations but little visual importance. To address this issue, we propose an HE-based method that adaptively controls the contrast gain according to the potential visual importance of intensities and pixels. Observing that in natural scenes image details are usually hidden in darker regions that have noticeable local differences, we formulate the potential visual importance on the basis of the multi-resolution, dark-pass filtered gradients in the image. Experiments show that our method is highly discriminating in terms of noises and trivial image gradients, and it guarantees great global contrast preservation.", "Principle objective of Image enhancement is to process an image so that result is more suitable than original image for specific application. Digital image enhancement techniques provide a multitude of choices for improving the visual quality of images. Appropriate choice of such techniques is greatly influenced by the imaging modality, task at hand and viewing conditions. This paper will provide an overview of underlying concepts, along with algorithms commonly used for image enhancement. The paper focuses on spatial domain techniques for image enhancement, with particular reference to point processing methods and histogram processing.", "", "", "This paper proposes an efficient method to modify histograms and enhance contrast in digital images. Enhancement plays a significant role in digital image processing, computer vision, and pattern recognition. 
We present an automatic transformation technique that improves the brightness of dimmed images via the gamma correction and probability distribution of luminance pixels. To enhance video, the proposed image-enhancement method uses temporal information regarding the differences between each frame to reduce computational complexity. Experimental results demonstrate that the proposed method produces enhanced images of comparable or higher quality than those produced using previous state-of-the-art methods.", "", "", "In low light condition, the signal-to-noise ratio (SNR) is low and thus the captured images are seriously degraded by noise. Since low light images contain much noise in flat and dark regions, contrast enhancement without considering noise characteristics causes serious noise amplification. In this paper, we propose low light image enhancement based on two-step noise suppression. First, we perform noise aware contrast enhancement using noise level function (NLF). NLF is used to get a noise aware histogram which prevents noise amplification, and we use the noise aware histogram in contrast enhancement. However, the increase of intensity by contrast enhancement reduces the visibility threshold, which makes noise visible by human eyes. Second, we utilize a just noticeable difference (JND) model from luminance adaptation to suppress noise based on human visual perception. Experimental results show that the proposed method successfully enhances contrast in low light images while minimizing noise amplification.", "", "Histogram equalization is widely used for contrast enhancement in a variety of applications due to its simple function and effectiveness. Examples include medical image processing and radar signal processing. One drawback of the histogram equalization can be found on the fact that the brightness of an image can be changed after the histogram equalization, which is mainly due to the flattening property of the histogram equalization. 
Thus, it is rarely utilized in consumer electronic products such as TV where preserving the original input brightness may be necessary in order not to introduce unnecessary visual deterioration. This paper proposes a novel extension of histogram equalization to overcome such a drawback of histogram equalization. The essence of the proposed algorithm is to utilize independent histogram equalizations separately over two subimages obtained by decomposing the input image based on its mean with a constraint that the resulting equalized subimages are bounded by each other around the input mean. It is shown mathematically that the proposed algorithm preserves the mean brightness of a given image significantly well compared to typical histogram equalization while enhancing the contrast and, thus, provides a natural enhancement that can be utilized in consumer electronic products.", "Histogram equalization is a simple and effective image enhancing technique. But in some conditions, the luminance of an image may be changed significantly after the equalizing process, this is why it has never been utilized in a video system in the past. A novel histogram equalization technique, equal area dualistic sub-image histogram equalization, is put forward in this paper. First, the image is decomposed into two equal area sub-images based on its original probability density function. Then the two sub-images are equalized respectively. Finally, we obtain the results after the processed sub-images are composed into one image. The simulation results indicate that the algorithm can not only enhance the image information effectively but also preserve the original image luminance well enough to make it possible to be used in a video system directly." ] }
1904.10961
2941872438
A novel method of contrast enhancement is proposed for underexposed images in which heavy noise is hidden. Under low light conditions, images taken by digital cameras have low contrast in dark or bright regions, due to the limited dynamic range of imaging sensors. For these reasons, various contrast enhancement methods have been proposed. These methods, however, have two problems: (1) loss of detail in bright regions due to over-enhancement of contrast, and (2) amplification of noise in dark regions, because conventional enhancement methods do not consider the noise included in images. The proposed method aims to overcome both problems. In the proposed method, a shadow-up function is applied to adaptive gamma correction with weighting distribution, and a denoising filter is also used to avoid amplifying noise in dark regions. As a result, the proposed method not only enhances the contrast of dark regions but also avoids amplifying noise, even under strong noise environments.
Image denoising has a long tradition in signal processing because of its fundamental role in many applications. In particular, block-matching and 3D filtering (BM3D) @cite_0 is one of the most successful advances. In this paper, BM3D is used as one of the noise suppression techniques. Our purpose is not only to enhance contrast with noise suppression, but also to preserve details in bright regions based on Retinex theory.
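The Retinex idea invoked above estimates illumination with a smoothed copy of the image and takes the log ratio to recover reflectance. Below is a numpy-only single-scale Retinex sketch for illustration; it is not the paper's pipeline (which combines BM3D and a shadow-up function), and the `sigma` values are assumed:

```python
import numpy as np

def _gaussian_blur(img, sigma):
    # Separable Gaussian blur with numpy only; the kernel must be
    # shorter than the image side for np.convolve(mode="same").
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred

def single_scale_retinex(image, sigma=15.0):
    """Single-scale Retinex sketch: reflectance = log(image) - log(illumination),
    with the illumination estimated by a Gaussian-blurred copy."""
    img = image.astype(np.float64) + 1.0  # avoid log(0)
    reflectance = np.log(img) - np.log(_gaussian_blur(img, sigma))
    # Rescale the log-ratio to [0, 255] for display.
    r = reflectance - reflectance.min()
    if r.max() > 0:
        r = r / r.max()
    return (255 * r).astype(np.uint8)
```

The division by the estimated illumination (subtraction in log space) is what compresses the dynamic range while keeping local detail, in both dark and bright regions.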
{ "cite_N": [ "@cite_0" ], "mid": [ "2056370875" ], "abstract": [ "We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative Altering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality." ] }
1904.10873
2940904973
This paper proposes a novel Stochastic Split Linearized Bregman Iteration ( @math -LBI) algorithm to efficiently train deep networks. The @math -LBI introduces an iterative regularization path with structural sparsity. Our @math -LBI combines the computational efficiency of the LBI with model selection consistency in learning structural sparsity. The computed solution path intrinsically enables us to enlarge or simplify a network, which theoretically benefits from the dynamics of our @math -LBI algorithm. The experimental results validate our @math -LBI on the MNIST and CIFAR-10 datasets. For example, on MNIST, we can either boost a network with only 1.5K parameters (1 convolutional layer of 5 filters, and 1 FC layer), which achieves 98.40% recognition accuracy; or we can simplify @math of the parameters in the LeNet-5 network and still achieve 98.47% recognition accuracy. In addition, we also have learning results on ImageNet, which will be added in the next version of our report.
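The regularization-path dynamics that the @math -LBI abstract above builds on can be sketched for a plain sparse least-squares problem. This is the standard (non-stochastic, non-split) linearized Bregman update, with the step size and `kappa` chosen conservatively for illustration:

```python
import numpy as np

def linearized_bregman(A, y, kappa=5.0, step=None, iters=2000):
    """Standard linearized Bregman iteration (LBI) for sparse recovery.

    Returns the whole path of iterates w_k: variables enter the model
    one by one as their dual variable z crosses the soft-threshold,
    which yields the iterative regularization path with sparsity.
    """
    n = A.shape[1]
    if step is None:
        # Conservative step size for stability (illustrative choice).
        step = 1.0 / (2.0 * kappa * np.linalg.norm(A, 2) ** 2)
    z = np.zeros(n)
    path = []
    for _ in range(iters):
        # Primal variable via soft-thresholding (shrinkage) of z.
        w = kappa * np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)
        # Dual update along the least-squares gradient.
        z -= step * (A.T @ (A @ w - y))
        path.append(w.copy())
    return np.array(path)
```

Because the whole path is computed in one pass, early iterates give sparse (simplified) models and later iterates give denser (enlarged) ones, mirroring the enlarge-or-simplify use of the solution path described in the abstract.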
It is essential to regularize networks, e.g., with dropout @cite_42 to prevent co-adaptation, or by adding @math or @math regularization to the weights. In particular, @math regularization enforces sparsity on the weights and results in a compact, memory-efficient network at a slight cost in prediction performance @cite_3 . Group sparsity regularization @cite_20 can also be applied to deep networks with desirable properties. Alvarez @cite_46 utilized a group sparsity regularizer to automatically decide the optimal number of neurons in each layer. Structured sparsity @cite_31 @cite_28 has also been investigated to obtain good data locality together with group sparsity. In contrast, the regularization term of the proposed @math -LBI not only enforces structured sparsity, but also efficiently computes the regularization solution path of each variable.
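The group-sparsity regularization discussed above can be sketched as a group lasso penalty over whole filters, paired with a structural pruning step that drops entire groups. Function names, the `lam` weight, and the threshold are illustrative:

```python
import numpy as np

def group_lasso_penalty(weights, lam=1e-3):
    """Group lasso penalty: sum of l2 norms over filter groups.

    `weights` has shape (out_channels, ...) and each output filter
    forms one group, so the penalty drives whole filters to zero.
    """
    flat = weights.reshape(weights.shape[0], -1)
    group_norms = np.sqrt((flat ** 2).sum(axis=1))
    return lam * group_norms.sum()

def prune_groups(weights, threshold=1e-2):
    """Zero out whole filters whose l2 norm falls below `threshold`,
    yielding a structurally sparse (effectively smaller) layer."""
    flat = weights.reshape(weights.shape[0], -1)
    keep = np.sqrt((flat ** 2).sum(axis=1)) >= threshold
    pruned = weights.copy()
    pruned[~keep] = 0.0
    return pruned, keep
```

Penalizing per-group norms rather than individual weights is what makes the resulting sparsity structured: removed groups correspond to whole neurons or filters, so the pruned layer is genuinely narrower.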
{ "cite_N": [ "@cite_46", "@cite_28", "@cite_42", "@cite_3", "@cite_31", "@cite_20" ], "mid": [ "", "2741485222", "2095705004", "1570197553", "2554931888", "2138019504" ], "abstract": [ "", "", "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "In this work, we investigate the use of sparsity-inducing regularizers during training of Convolution Neural Networks (CNNs). These regularizers encourage that fewer connections in the convolution and fully connected layers take non-zero values and in effect result in sparse connectivity between hidden units in the deep network. This in turn reduces the memory and runtime cost involved in deploying the learned CNNs. We show that training with such regularization can still be performed using stochastic gradient descent implying that it can be used easily in existing codebases. 
Experimental evaluation of our approach on MNIST, CIFAR, and ImageNet datasets shows that our regularizers can result in dramatic reductions in memory requirements. For instance, when applied on AlexNet, our method can reduce the memory consumption by a factor of four with minimal loss in accuracy.", "Nowadays, the number of layers and of neurons in each layer of a deep network are typically set manually. While very deep and wide networks have proven effective in general, they come at a high memory and computation cost, thus making them impractical for constrained platforms. These networks, however, are known to have many redundant parameters, and could thus, in principle, be replaced by more compact architectures. In this paper, we introduce an approach to automatically determining the number of neurons in each layer of a deep network during learning. To this end, we propose to make use of a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron. Starting from an overcomplete network, we show that our approach can reduce the number of parameters by up to 80 while retaining or even improving the network accuracy.", "Summary. We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor analysis-of-variance problem as the most important and well-known example. Instead of selecting factors by stepwise backward elimination, we focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection. The lasso, the LARS algorithm and the non-negative garrotte are recently proposed regression methods that can be used to select individual variables. 
We study and propose efficient algorithms for the extensions of these methods for factor selection and show that these extensions give superior performance to the traditional stepwise backward elimination method in factor selection problems. We study the similarities and the differences between these methods. Simulations and real examples are used to illustrate the methods." ] }
1904.10873
2940904973
This paper proposes a novel Stochastic Split Linearized Bregman Iteration ( @math -LBI) algorithm to efficiently train deep networks. The @math -LBI introduces an iterative regularization path with structural sparsity. Our @math -LBI combines the computational efficiency of the LBI with model selection consistency in learning structural sparsity. The computed solution path intrinsically enables us to enlarge or simplify a network, which theoretically benefits from the dynamics of our @math -LBI algorithm. The experimental results validate our @math -LBI on the MNIST and CIFAR-10 datasets. For example, on MNIST, we can either boost a network with only 1.5K parameters (1 convolutional layer of 5 filters, and 1 FC layer), which achieves 98.40% recognition accuracy; or we can simplify @math of the parameters in the LeNet-5 network and still achieve 98.47% recognition accuracy. In addition, we also have learning results on ImageNet, which will be added in the next version of our report.
Most previous works on expanding networks focus on transfer learning or life-long learning @cite_10 @cite_39 @cite_15 @cite_36 , or on knowledge distillation @cite_24 . In contrast, to the best of our knowledge, this work is the first to enlarge a network in a more general setting during training: our strategy can start from a very small network. The inverse of growing a network is simplifying it. Compared with network pruning algorithms @cite_17 @cite_43 @cite_18 @cite_0 @cite_4 @cite_14 @cite_21 @cite_16 , our strategy directly removes less important parameters using the solution path computed by @math -LBI. The backward selection algorithm introduces no additional computational cost for pruning the networks.
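The magnitude-based pruning baseline cited above (train, zero the smallest weights, retrain) can be sketched as follows; the `sparsity` level is an illustrative choice, and the function name is ours:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Unstructured magnitude pruning: zero the fraction `sparsity`
    of weights with the smallest absolute value.

    Returns the pruned weights and the boolean keep-mask, which a
    retraining step would use to freeze the removed connections.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask
```

Unlike the structured solution-path approach described in the text, this baseline needs an explicit threshold sweep and a retraining pass to recover accuracy after each prune.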
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_36", "@cite_21", "@cite_39", "@cite_24", "@cite_43", "@cite_0", "@cite_15", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "1996901117", "2962965870", "2963674932", "2473930607", "2619444510", "1991564165", "1821462560", "2964228333", "2104636679", "2186494677", "2808168148", "2756154119", "2963424132" ], "abstract": [ "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5× speedup with no loss in accuracy, and 4.5× speedup with less than 1 drop in accuracy, still achieving state-of-the-art on standard benchmarks.", "The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. 
We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34 and ResNet-110 by up to 38 on CIFAR10 while regaining close to the original accuracy by retraining the networks.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "When building a unified vision system or gradually adding new apabilities to a system, the usual assumption is that training data for all tasks is always available. 
However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new task performance.", "Convolutional neural networks (CNNs) have state-of-the-art performance on many problems in machine vision. However, networks with superior performance often have millions of weights so that it is difficult or impossible to use CNNs on computationally limited devices or to humanly interpret them. A myriad of CNN compression approaches have been proposed and they involve pruning and compressing the weights and filters. In this article, we introduce a greedy structural compression scheme that prunes filters in a trained CNN. We define a filter importance index equal to the classification accuracy reduction (CAR) of the network after pruning that filter (similarly defined as RAR for regression). We then iteratively prune filters based on the CAR index. This algorithm achieves substantially higher classification accuracy in AlexNet compared to other structural compression schemes that prune filters. Pruning half of the filters in the first or second layer of AlexNet, our CAR algorithm achieves 26 and 20 higher classification accuracies respectively, compared to the best benchmark filter pruning scheme. 
Our CAR algorithm, combined with further weight pruning and compressing, reduces the size of first or second convolutional layer in AlexNet by a factor of 42, while achieving close to original classification accuracy through retraining (or fine-tuning) network. Finally, we demonstrate the interpretability of CAR-compressed CNNs by showing that our algorithm prunes filters with visually redundant functionalities. In fact, out of top 20 CAR-pruned filters in AlexNet, 17 of them in the first layer and 14 of them in the second layer are color-selective filters as opposed to shape-selective filters. To our knowledge, this is the first reported result on the connection between compression and interpretability of CNNs.", "Learning provides a useful tool for the automatic design of autonomous robots. Recent research on learning robot control has predominantly focussed on learning single tasks that were studied in isolation. If robots encounter a multitude of control learning tasks over their entire lifetime there is an opportunity to transfer knowledge between them. In order to do so, robots may learn the invariants and the regularities of their individual tasks and environments. This task-independent knowledge can be employed to bias generalization when learning control, which reduces the need for real-world experimentation. We argue that knowledge transfer is essential if robots are to learn control with moderate learning times in complex scenarios. Two approaches to lifelong robot learning which both capture invariant knowledge about the robot and its environments are presented. Both approaches have been evaluated using a HERO-2000 mobile robot. Learning tasks included navigation in unknown indoor environments and a simple find-and-fetch task.", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. 
Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. 
On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization (a variable-length encoding: 1 bit for representing zero value, and the remaining 4 bits represent at most 16 different values for the powers of two), our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. We believe that our method sheds new insights on how to make deep CNNs to be applicable on mobile or embedded devices. The code will be made publicly available.", "This paper aims to accelerate the test-time computation of convolutional neural networks (CNNs), especially very deep CNNs [1] that have substantially impacted the computer vision community. Unlike previous methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. We develop an effective solution to the resulting nonlinear optimization problem without the need of stochastic gradient descent (SGD). More importantly, while previous methods mainly focus on optimizing one or two layers, our nonlinear method enables an asymmetric reconstruction that reduces the rapidly accumulated error when multiple (e.g., @math 10) layers are approximated. 
For the widely used very deep VGG-16 model [1] , our method achieves a whole-model speedup of 4 @math with merely a 0.3 percent increase of top-5 error in ImageNet classification. Our 4 @math accelerated VGG-16 model also shows a graceful accuracy degradation for object detection when plugged into the Fast R-CNN detector [2] .", "In this work we aim at extending the theoretical foundations of lifelong learning. Previous work analyzing this scenario is based on the assumption that learning tasks are sampled i.i.d. from a task environment or limited to strongly constrained data distributions. Instead, we study two scenarios when lifelong learning is possible, even though the observed tasks do not form an i.i.d. sample: first, when they are sampled from the same environment, but possibly with dependencies, and second, when the task environment is allowed to change over time in a consistent way. In the first case we prove a PAC-Bayesian theorem that can be seen as a direct generalization of the analogous previous result for the i.i.d. case. For the second scenario we propose to learn an inductive bias in form of a transfer procedure. We present a generalization bound and show on a toy example how it can be used to identify a beneficial transfer algorithm.", "", "CNNs have made an undeniable impact on computer vision through the ability to learn high-capacity models with large annotated training sets. One of their remarkable properties is the ability to transfer knowledge from a large source dataset to a (typically smaller) target dataset. This is usually accomplished through fine-tuning a fixed-size network on new target data. Indeed, virtually every contemporary visual recognition system makes use of fine-tuning to transfer knowledge from ImageNet. In this work, we analyze what components and parameters change during fine-tuning, and discover that increasing model capacity allows for more natural model adaptation through fine-tuning. 
By making an analogy to developmental learning, we demonstrate that growing a CNN with additional units, either by widening existing layers or deepening the overall network, significantly outperforms classic fine-tuning approaches. But in order to properly grow a network, we show that newly-added units must be appropriately normalized to allow for a pace of learning that is consistent with existing units. We empirically validate our approach on several benchmark datasets, producing state-of-the-art results.", "Deep neural networks are widely used in machine learning applications. However, the deployment of large neural networks models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we propose Trained Ternary Quantization (TTQ), a method that can reduce the precision of weights in neural networks to ternary values. This method has very little accuracy degradation and can even improve the accuracy of some models (32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. And our AlexNet model is trained from scratch, which means it’s as easy as to train normal full precision model. We highlight our trained quantization method that can learn both ternary values and ternary assignment. During inference, only ternary values (2-bit weights) and scaling factors are needed, therefore our models are nearly 16× smaller than full- precision models. Our ternary models can also be viewed as sparse binary weight networks, which can potentially be accelerated with custom circuit. Experiments on CIFAR-10 show that the ternary models obtained by trained quantization method outperform full-precision models of ResNet-32,44,56 by 0.04 , 0.16 , 0.36 , respectively. On ImageNet, our model outperforms full-precision AlexNet model by 0.3 of Top-1 accuracy and outperforms previous ternary models by 3 ." ] }
1904.10637
2949925139
Blame games tend to follow major disruptions, be they financial crises, natural disasters or terrorist attacks. To study how the blame game evolves and shapes the dominant crisis narratives is of great significance, as sense-making processes can affect regulatory outcomes, social hierarchies, and cultural norms. However, it takes tremendous time and efforts for social scientists to manually examine each relevant news article and extract the blame ties (A blames B). In this study, we define a new task, Blame Tie Extraction, and construct a new dataset related to the United States financial crisis (2007-2010) from The New York Times, The Wall Street Journal and USA Today. We build a Bi-directional Long Short-Term Memory (BiLSTM) network for contexts where the entities appear in and it learns to automatically extract such blame ties at the document level. Leveraging the large unsupervised model such as GloVe and ELMo, our best model achieves an F1 score of 70 on the test set for blame tie extraction, making it a useful tool for social scientists to extract blame ties more efficiently.
NLP has become increasingly popular in the social science area. [o2010tweets] aligned sentiment measured from Twitter with public opinion measured from polls, and found that the two correlate well. [bamman2015open] used text data to estimate the political ideologies of individuals. [mohammad-EtAl:2016:SemEval] created SemEval 2016 Task 6, Stance Detection, which detects a Twitter user's stance towards a target of interest. [preoctiuc2017beyond] also predicted the political ideologies of Twitter users, in a more fine-grained form. Social scientists have used NLP along with network analysis to analyze social media texts @cite_17 and the State of the Union addresses in the United States @cite_20 .
{ "cite_N": [ "@cite_20", "@cite_17" ], "mid": [ "2525230286", "1927629402" ], "abstract": [ "Social media sites are rapidly becoming one of the most important forums for public deliberation about advocacy issues. However, social scientists have not explained why some advocacy organizations produce social media messages that inspire far-ranging conversation among social media users, whereas the vast majority of them receive little or no attention. I argue that advocacy organizations are more likely to inspire comments from new social media audiences if they create “cultural bridges,” or produce messages that combine conversational themes within an advocacy field that are seldom discussed together. I use natural language processing, network analysis, and a social media application to analyze how cultural bridges shaped public discourse about autism spectrum disorders on Facebook over the course of 1.5 years, controlling for various characteristics of advocacy organizations, their social media audiences, and the broader social context in which they interact. I show that organizations that create substantial cultural bridges provoke 2.52 times more comments about their messages from new social media users than those that do not, controlling for these factors. This study thus offers a theory of cultural messaging and public deliberation and computational techniques for text analysis and application-based survey research.", "This study reveals that the entry into World War I in 1917 indexed the decisive transition to the modern period in American political consciousness, ushering in new objects of political discourse, a more rapid pace of change of those objects, and a fundamental reframing of the main tasks of governance. We develop a strategy for identifying meaningful categories in textual corpora that span long historic durees, where terms, concepts, and language use changes. 
Our approach is able to account for the fluidity of discursive categories over time, and to analyze their continuity by identifying the discursive stream as the object of interest." ] }
1904.10637
2949925139
Blame games tend to follow major disruptions, be they financial crises, natural disasters or terrorist attacks. To study how the blame game evolves and shapes the dominant crisis narratives is of great significance, as sense-making processes can affect regulatory outcomes, social hierarchies, and cultural norms. However, it takes tremendous time and efforts for social scientists to manually examine each relevant news article and extract the blame ties (A blames B). In this study, we define a new task, Blame Tie Extraction, and construct a new dataset related to the United States financial crisis (2007-2010) from The New York Times, The Wall Street Journal and USA Today. We build a Bi-directional Long Short-Term Memory (BiLSTM) network for contexts where the entities appear in and it learns to automatically extract such blame ties at the document level. Leveraging the large unsupervised model such as GloVe and ELMo, our best model achieves an F1 score of 70 on the test set for blame tie extraction, making it a useful tool for social scientists to extract blame ties more efficiently.
The Blame Tie Extraction task can be regarded as a special case of relation extraction @cite_10 . Relation extraction classifies a pair of entities into one of several pre-defined categories, such as Cause-Effect and Component-Whole @cite_0 , while the Blame Tie Extraction task requires extracting all the blame ties among the entities of interest in an article. Our work differs from existing work on relation extraction in two main aspects. First, our work operates at the document level and the data is sparse, while most existing work on relation extraction focuses on the sentence level @cite_8 . Second, our work explicitly uses entity prior information on blame patterns, which is not applicable in general-domain relation extraction; most existing work mixes entity and content information when modeling relations.
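The document-level framing above reduces to classifying every ordered pair of the given entities. A minimal pure-Python sketch of that reduction, with a hypothetical keyword-based scorer standing in for the paper's BiLSTM (`BLAME_CUES`, `blames` and the toy sentences are illustrative assumptions, not the paper's method):

```python
from itertools import permutations

# Cue words that typically signal a blame relation; purely illustrative.
BLAME_CUES = {"blames", "blamed", "accuses", "accused", "faults", "faulted"}

def blames(source, target, sentences):
    """Return True if any sentence mentions source, then a blame cue, then target."""
    for sent in sentences:
        tokens = sent.lower().split()
        if source in tokens and target in tokens:
            i, j = tokens.index(source), tokens.index(target)
            # Require a blame cue between the two mentions, with source first.
            if i < j and BLAME_CUES & set(tokens[i:j]):
                return True
    return False

def extract_blame_ties(entities, sentences):
    """Classify every ordered pair (A, B) of entities: does A blame B?"""
    return [(a, b) for a, b in permutations(entities, 2) if blames(a, b, sentences)]

doc = ["the regulators blamed the banks for the crisis",
       "the banks accused the borrowers of fraud"]
ties = extract_blame_ties(["regulators", "banks", "borrowers"], doc)
```

Using `permutations` rather than `combinations` keeps the relation directional: (A, B) and (B, A) are scored separately, which is exactly what distinguishes "A blames B" from "B blames A".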
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_8" ], "mid": [ "2099779943", "2229639163", "2251622960" ], "abstract": [ "We present a brief overview of the main challenges in the extraction of semantic relations from English text, and discuss the shortcomings of previous data sets and shared tasks. This leads us to introduce a new task, which will be part of SemEval-2010: multi-way classification of mutually exclusive semantic relations between pairs of common nominals. The task is designed to compare different approaches to the problem and to provide a standard testbed for future research, which can benefit many applications in Natural Language Processing.", "We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1 and 5.7 relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components.", "Up to now, relation extraction systems have made extensive use of features generated by linguistic analysis modules. Errors in these features lead to errors of relation detection and classification. 
In this work, we depart from these traditional approaches with complicated feature engineering by introducing a convolutional neural network for relation extraction that automatically learns features from sentences and minimizes the dependence on external toolkits and resources. Our model takes advantages of multiple window sizes for filters and pre-trained word embeddings as an initializer on a non-static architecture to improve the performance. We emphasize the relation extraction problem with an unbalanced corpus. The experimental results show that our system significantly outperforms not only the best baseline systems for relation extraction but also the state-of-the-art systems for relation classification." ] }
1904.10637
2949925139
Blame games tend to follow major disruptions, be they financial crises, natural disasters or terrorist attacks. To study how the blame game evolves and shapes the dominant crisis narratives is of great significance, as sense-making processes can affect regulatory outcomes, social hierarchies, and cultural norms. However, it takes tremendous time and efforts for social scientists to manually examine each relevant news article and extract the blame ties (A blames B). In this study, we define a new task, Blame Tie Extraction, and construct a new dataset related to the United States financial crisis (2007-2010) from The New York Times, The Wall Street Journal and USA Today. We build a Bi-directional Long Short-Term Memory (BiLSTM) network for contexts where the entities appear in and it learns to automatically extract such blame ties at the document level. Leveraging the large unsupervised model such as GloVe and ELMo, our best model achieves an F1 score of 70 on the test set for blame tie extraction, making it a useful tool for social scientists to extract blame ties more efficiently.
In blame game research, social scientists care more about a few key players than about all the entities, and most entities in a passage are irrelevant to studying the blame game @cite_18 . Therefore, in this paper, we assume that the entities of interest are already given, and we only need to extract blame ties among these entities.
{ "cite_N": [ "@cite_18" ], "mid": [ "2495579072" ], "abstract": [ "This article takes the 2008–2010 financial crisis as a case study to explore the tension between responsibility and accountability in complex crises. I analyze the patterns of attribution and assumption of responsibility of thirty-three bankers in Wall Street, interviewed from fall 2008 to summer 2010. First, I show that responsibility for complex failures cannot be easily attributed or assumed: responsibility becomes diluted within the collective. Actors can only assume collective responsibility, recognizing that they belong to an institution at fault. Second, I show that blaming is a social process that should be examined contextually, relationally, and dynamically. I build on sociological theories to depart from the normative focus of philosophers, and the cognitive focus of psychologists, who have dominated the study of responsibility so far." ] }
1904.10788
2937584914
In this paper, we are interested in exploiting textual and acoustic data of an utterance for the speech emotion classification task. The baseline approach models the information from audio and text independently using two deep neural networks (DNNs). The outputs from both the DNNs are then fused for classification. As opposed to using knowledge from both the modalities separately, we propose a framework to exploit acoustic information in tandem with lexical data. The proposed framework uses two bi-directional long short-term memory (BLSTM) for obtaining hidden representations of the utterance. Furthermore, we propose an attention mechanism, referred to as the multi-hop, which is trained to automatically infer the correlation between the modalities. The multi-hop attention first computes the relevant segments of the textual data corresponding to the audio signal. The relevant textual data is then applied to attend parts of the audio signal. To evaluate the performance of the proposed system, experiments are performed in the IEMOCAP dataset. Experimental results show that the proposed technique outperforms the state-of-the-art system by 6.5 relative improvement in terms of weighted accuracy.
Along with models based on classical algorithms such as support vector machines (SVM), hidden Markov models (HMM) and decision trees @cite_18 @cite_3 @cite_10 , various neural network architectures have recently been introduced for the speech emotion recognition task. For example, convolutional neural network (CNN)-based models have been trained on spectrograms or on audio features such as mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs) @cite_9 @cite_20 @cite_1 . More complex models such as @cite_14 were designed to better learn the nonlinear decision boundaries of emotional speech and achieved the best recorded audio-only performance on the IEMOCAP dataset @cite_22 . Several neural network models with attention mechanisms have been proposed to focus efficiently on the prominent parts of speech and to learn temporal dependencies across the whole utterance @cite_17 @cite_24 .
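A toy sketch of the attention idea mentioned above: frame-level features are pooled into a single utterance vector by softmax-weighting each frame. The scoring vector `w` and the frame values are stand-ins for trained parameters and real acoustic features (pure Python, illustrative only):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scalar scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(frames, w):
    """Weight each frame vector by softmax(w . h_t) and sum the weighted frames."""
    scores = [sum(wi * hi for wi, hi in zip(w, h)) for h in frames]
    alphas = softmax(scores)
    dim = len(frames[0])
    return [sum(a * h[d] for a, h in zip(alphas, frames)) for d in range(dim)]

# Three 2-dimensional frame features; w favours frames with a large first component.
frames = [[1.0, 0.0], [0.0, 1.0], [4.0, 0.0]]
pooled = attention_pool(frames, w=[1.0, 0.0])
# pooled leans towards the third (highest-scoring) frame
```

Unlike max- or average-pooling, the weights here are input-dependent, which is what lets the model emphasise emotionally salient regions of the utterance.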
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_9", "@cite_1", "@cite_3", "@cite_24", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "2000838212", "2747664154", "2146334809", "2707551695", "", "", "2625297138", "", "", "2889374687" ], "abstract": [ "Automatic recognition of emotional states from human speech is a current research topic with a wide range. In this paper an attempt has been made to recognize and classify the speech emotion from three language databases, namely, Berlin, Japan and Thai emotion databases. Speech features consisting of Fundamental Frequency (F0), Energy, Zero Crossing Rate (ZCR), Linear Predictive Coding (LPC) and Mel Frequency Cepstral Coefficient (MFCC) from short-time wavelet signals are comprehensively investigated. In this regard, Support Vector Machines (SVM) is utilized as the classification model. Empirical experimentation shows that the combined features of F0, Energy and MFCC provide the highest accuracy on all databases provided using the linear kernel. It gives 89.80 , 93.57 and 98.00 classification accuracy for Berlin, Japan and Thai emotions databases, respectively.", "", "Since emotions are expressed through a combination of verbal and non-verbal channels, a joint analysis of speech and gestures is required to understand expressive human communication. To facilitate such investigations, this paper describes a new corpus named the “interactive emotional dyadic motion capture database” (IEMOCAP), collected by the Speech Analysis and Interpretation Laboratory (SAIL) at the University of Southern California (USC). This database was recorded from ten actors in dyadic sessions with markers on the face, head, and hands, which provide detailed information about their facial expressions and hand movements during scripted and spontaneous spoken communication scenarios. 
The actors performed selected emotional scripts and also improvised hypothetical scenarios designed to elicit specific types of emotions (happiness, anger, sadness, frustration and neutral state). The corpus contains approximately 12 h of data. The detailed motion capture information, the interactive setting to elicit authentic emotions, and the size of the database make this corpus a valuable addition to the existing databases in the community for the study and modeling of multimodal and expressive human communication.", "We propose a real-time Convolutional Neural Network model for speech emotion detection. Our model is trained from raw audio on a small dataset of TED talks speech data, manually annotated into three emotion classes: “Angry”, “Happy” and “Sad”. It achieves an average accuracy of 66.1 , 5 higher than a feature-based SVM baseline, with an evaluation time of few hundred milliseconds. We also provide an in-depth model visualization and analysis. We show how our neural network effectively activates during the speech sections of the waveform regardless of the emotion, ignoring the silence parts which do not contain information. On the frequency domain the CNN filters distribute throughout all the spectrum range, with higher concentration around the average pitch range related to that emotion. Each filter also activates at multiple frequency intervals, presumably due to the additional contribution of amplitude-related feature learning. Our work will allow faster and more accurate emotion detection modules for human-machine empathetic dialog systems and other related applications.", "", "", "Automatic emotion recognition from speech is a challenging task which relies heavily on the effectiveness of the speech features used for classification. In this work, we study the use of deep learning to automatically discover emotionally relevant features from speech. 
It is shown that using a deep recurrent neural network, we can learn both the short-time frame-level acoustic features that are emotionally relevant, as well as an appropriate temporal aggregation of those features into a compact utterance-level representation. Moreover, we propose a novel strategy for feature pooling over time which uses local attention in order to focus on specific regions of a speech signal that are more emotionally salient. The proposed solution is evaluated on the IEMOCAP corpus, and is shown to provide more accurate predictions compared to existing emotion recognition algorithms.", "", "", "This paper proposes an attention pooling based representation learning method for speech emotion recognition (SER). The emotional representation is learned in an end-to-end fashion by applying a deep convolutional neural network (CNN) directly to spectrograms extracted from speech utterances. Motivated by the success of GoogleNet, two groups of filters with different shapes are designed to capture both temporal and frequency domain context information from the input spectrogram. The learned features are concatenated and fed into the subsequent convolutional layers. To learn the final emotional representation, a novel attention pooling method is further proposed. Compared with the existing pooling methods, such as max-pooling and average-pooling, the proposed attention pooling can effectively incorporate class-agnostic bottom-up, and class-specific top-down, attention maps. We conduct extensive evaluations on benchmark IEMOCAP data to assess the effectiveness of the proposed representation. Results demonstrate a recognition performance of 71.8 weighted accuracy (WA) and 68 unweighted accuracy (UA) over four emotions, which outperforms the state-of-the-art method by about 3 absolute for WA and 4 for UA." ] }
1904.10788
2937584914
In this paper, we are interested in exploiting textual and acoustic data of an utterance for the speech emotion classification task. The baseline approach models the information from audio and text independently using two deep neural networks (DNNs). The outputs from both the DNNs are then fused for classification. As opposed to using knowledge from both the modalities separately, we propose a framework to exploit acoustic information in tandem with lexical data. The proposed framework uses two bi-directional long short-term memory (BLSTM) for obtaining hidden representations of the utterance. Furthermore, we propose an attention mechanism, referred to as the multi-hop, which is trained to automatically infer the correlation between the modalities. The multi-hop attention first computes the relevant segments of the textual data corresponding to the audio signal. The relevant textual data is then applied to attend parts of the audio signal. To evaluate the performance of the proposed system, experiments are performed in the IEMOCAP dataset. Experimental results show that the proposed technique outperforms the state-of-the-art system by 6.5 relative improvement in terms of weighted accuracy.
Multi-modal approaches using acoustic features and textual information have also been investigated. @cite_25 identified emotional key phrases and the salience of verbal cues from both phoneme sequences and words. More recently, @cite_13 @cite_23 combined acoustic information and conversation transcripts in neural network-based models to improve emotion classification accuracy. However, none of these studies applied attention over the audio and text modalities in tandem for a contextual understanding of the emotion in an audio recording.
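The two-step cross-modal attention described in the abstract can be caricatured in a few lines: one modality's summary queries the other's hidden states, and the result queries back. The 2-dimensional `audio` and `text` states are made-up stand-ins for BLSTM outputs, and plain dot-product attention replaces the paper's learned attention:

```python
import math

def attend(query, keys):
    """Dot-product attention: summarise `keys` from the viewpoint of `query`."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    alphas = [e / sum(exps) for e in exps]
    dim = len(keys[0])
    return [sum(a * k[d] for a, k in zip(alphas, keys)) for d in range(dim)]

# Hypothetical BLSTM hidden states (2-dim) for audio frames and text tokens.
audio = [[1.0, 0.2], [0.1, 0.9]]
text = [[0.8, 0.1], [0.0, 1.0]]

audio_summary = [sum(h[d] for h in audio) / len(audio) for d in range(2)]
# Hop 1: the audio summary selects the relevant text tokens.
text_attended = attend(audio_summary, text)
# Hop 2: the attended text in turn selects the relevant audio frames.
audio_attended = attend(text_attended, audio)
final = text_attended + audio_attended  # concatenated representation for the classifier
```

The point of the second hop is that the audio frames are re-weighted using lexical evidence, rather than each modality being summarised in isolation and fused only at the end.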
{ "cite_N": [ "@cite_13", "@cite_25", "@cite_23" ], "mid": [ "2962770129", "2140801466", "2889169802" ], "abstract": [ "Speech emotion recognition is a challenging task, and extensive reliance has been placed on models that use audio features in building well-performing classifiers. In this paper, we propose a novel deep dual recurrent encoder model that utilizes text data and audio signals simultaneously to obtain a better understanding of speech data. As emotional dialogue is composed of sound and spoken content, our model encodes the information from audio and text sequences using dual recurrent neural networks (RNNs) and then combines the information from these sources to predict the emotion class. This architecture analyzes speech data from the signal level to the language level, and it thus utilizes the information within the data more comprehensively than models that focus on audio features. Extensive experiments are conducted to investigate the efficacy and properties of the proposed model. Our proposed model outperforms previous state-of-the-art methods in assigning data to one of four emotion categories (i.e., angry, happy, sad and neutral) when the model is applied to the IEMOCAP dataset, as reflected by accuracies ranging from 68.8 to 71.8 .", "In this paper we introduce a novel approach to the combination of acoustic features and language information for a most robust automatic recognition of a speaker's emotion. Seven discrete emotional states are classified throughout the work. Firstly a model for the recognition of emotion by acoustic features is presented. The derived features of the signal-, pitch-, energy, and spectral contours are ranked by their quantitative contribution to the estimation of an emotion. Several different classification methods including linear classifiers, Gaussian mixture models, neural nets, and support vector machines are compared by their performance within this task. 
Secondly an approach to emotion recognition by the spoken content is introduced applying belief network based spotting for emotional key-phrases. Finally the two information sources are integrated in a soft decision fusion by using a neural net. The gain is evaluated and compared to other advances. Two emotional speech corpora used for training and evaluation are described in detail and the results achieved applying the propagated novel advance to speaker emotion recognition are presented and discussed.", "" ] }
1904.10551
2942135187
Multi-label classification is an approach which allows a datapoint to be labelled with more than one class at the same time. A common but trivial approach is to train individual binary classifiers per label, but the performance can be improved by considering associations within the labels. Like with any machine learning algorithm, hyperparameter tuning is important to train a good multi-label classifier model. The task of selecting the best hyperparameter settings for an algorithm is an optimisation problem. Very limited work has been done on automatic hyperparameter tuning and AutoML in the multi-label domain. This paper attempts to fill this gap by proposing a neural network algorithm, CascadeML, to train multi-label neural network based on cascade neural networks. This method requires minimal or no hyperparameter tuning and also considers pairwise label associations. The cascade algorithm grows the network architecture incrementally in a two phase process as it learns the weights using adaptive first order gradient algorithm, therefore omitting the requirement of preselecting the number of hidden layers, nodes and the learning rate. The method was tested on 10 multi-label datasets and compared with other multi-label classification algorithms. Results show that CascadeML performs very well without hyperparameter tuning.
The first neural network-based multi-label algorithm, BackPropagation for Multi-Label Learning (BPMLL), was proposed in 2006 @cite_25 . It is a single-hidden-layer, fully connected feed-forward architecture that uses the back-propagation of error algorithm to optimise a variation of the ranking loss function @cite_39 , taking pairwise label associations into account. This loss function penalises configurations in which a label belonging to an instance is ranked below a label that does not belong to it.
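The ranking loss referenced above (per the cited BPMLL abstract: labels belonging to an instance should be ranked higher than those not belonging to it) can be written per instance as a pairwise exponential loss. A minimal pure-Python reconstruction, with `scores` as the network outputs and `relevant` as the index set of the instance's labels:

```python
import math

def bpmll_loss(scores, relevant):
    """Pairwise exponential ranking loss in the BPMLL style:
    penalise every relevant label ranked below an irrelevant one,
    normalised by the number of (relevant, irrelevant) pairs."""
    irrelevant = [l for l in range(len(scores)) if l not in relevant]
    if not relevant or not irrelevant:
        return 0.0
    total = sum(math.exp(-(scores[k] - scores[l]))
                for k in relevant for l in irrelevant)
    return total / (len(relevant) * len(irrelevant))

loss = bpmll_loss([1.0, 0.0], relevant={0})
# exp(-(1.0 - 0.0)) = exp(-1)
```

The loss shrinks smoothly as each relevant score moves above each irrelevant one, so the gradient always pushes towards the correct pairwise ranking.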
{ "cite_N": [ "@cite_25", "@cite_39" ], "mid": [ "2119466907", "2114315281" ], "abstract": [ "In multilabel learning, each instance in the training set is associated with a set of labels and the task is to output a label set whose size is unknown a priori for each unseen instance. In this paper, this problem is addressed in the way that a neural network algorithm named BP-MLL, i.e., backpropagation for multilabel learning, is proposed. It is derived from the popular backpropagation algorithm through employing a novel error function capturing the characteristics of multilabel learning, i.e., the labels belonging to an instance should be ranked higher than those not belonging to that instance. Applications to two real-world multilabel learning problems, i.e., functional genomics and text categorization, show that the performance of BP-MLL is superior to that of some well-established multilabel learning algorithms", "Multi-label learning studies the problem where each example is represented by a single instance while associated with a set of labels simultaneously. During the past decade, significant amount of progresses have been made towards this emerging machine learning paradigm. This paper aims to provide a timely review on this area with emphasis on state-of-the-art multi-label learning algorithms. Firstly, fundamentals on multi-label learning including formal definition and evaluation metrics are given. Secondly and primarily, eight representative multi-label learning algorithms are scrutinized under common notations with relevant analyses and discussions. Thirdly, several related learning settings are briefly summarized. As a conclusion, online resources and open research problems on multi-label learning are outlined for reference purposes." ] }
1904.10551
2942135187
Multi-label classification is an approach which allows a datapoint to be labelled with more than one class at the same time. A common but trivial approach is to train individual binary classifiers per label, but the performance can be improved by considering associations within the labels. Like with any machine learning algorithm, hyperparameter tuning is important to train a good multi-label classifier model. The task of selecting the best hyperparameter settings for an algorithm is an optimisation problem. Very limited work has been done on automatic hyperparameter tuning and AutoML in the multi-label domain. This paper attempts to fill this gap by proposing a neural network algorithm, CascadeML, to train multi-label neural network based on cascade neural networks. This method requires minimal or no hyperparameter tuning and also considers pairwise label associations. The cascade algorithm grows the network architecture incrementally in a two phase process as it learns the weights using adaptive first order gradient algorithm, therefore omitting the requirement of preselecting the number of hidden layers, nodes and the learning rate. The method was tested on 10 multi-label datasets and compared with other multi-label classification algorithms. Results show that CascadeML performs very well without hyperparameter tuning.
For BPMLL, as for any neural network algorithm, the number of hidden units must be chosen in advance, making it a hyperparameter to tune. In @cite_37 modifications to the BPMLL loss function were proposed. This modified version learns the network as in BPMLL, and additionally learns the threshold values used to convert the predicted scores into label assignments.
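As a toy illustration of the thresholding step only (the function name `assign_labels` and the example values are ours, not from the cited work; in @cite_37 the thresholds themselves are learned jointly with the network rather than fixed):

```python
import numpy as np

def assign_labels(scores, thresholds):
    """Turn per-label ranking scores into a binary label assignment by
    comparing each score against its (learned) per-label threshold."""
    return (scores >= thresholds).astype(int)

scores = np.array([0.9, 0.2, 0.6])        # network outputs for three labels
thresholds = np.array([0.5, 0.5, 0.7])    # per-label thresholds
print(assign_labels(scores, thresholds))  # prints [1 0 0]
```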
{ "cite_N": [ "@cite_37" ], "mid": [ "2136661605" ], "abstract": [ "This paper considers the multilabel classification problem, which is a generalization of traditional two-class or multi-class classification problem. In multilabel classification a set of labels (categories) is given and each training instance is associated with a subset of this label-set. The task is to output the appropriate subset of labels (generally of unknown size) for a given, unknown testing instance. Some improvements to the existing neural network multilabel classification algorithm, named BP-MLL, are proposed here. The modifications concern the form of the global error function used in BP-MLL. The modified classification system is tested in the domain of functional genomics, on the yeast genome data set. Experimental results show that proposed modifications visibly improve the performance of the neural network based multilabel classifier. The results are statistically significant." ] }
Work involving deep neural networks in computer vision and image recognition, using multi-label datasets as part of the training pipeline, was done in @cite_12 @cite_19 @cite_30 @cite_7 . Similarly, convolutional neural networks were extended to predict multi-label images in @cite_6 . In @cite_1 the feature space of multi-label classification was transformed using deep belief networks so that the labels become less interdependent, after which well-known multi-label algorithms are applied in the transformed space.
{ "cite_N": [ "@cite_30", "@cite_7", "@cite_1", "@cite_6", "@cite_19", "@cite_12" ], "mid": [ "2562379650", "1567302070", "1884029234", "", "", "2792156255" ], "abstract": [ "Multi-label image classification is a challenging problem in computer vision. Motivated by the recent development in image classification performance using Deep Neural Networks, in this work, we propose a flexible deep Convolutional Neural Network (CNN) framework, called Local-Global-CNN (LGC), to improve multi-label image classification performance. LGC consists of firstly a local level multi-label classifier which takes object segment hypotheses as inputs to a local CNN. The output results of these local hypotheses are aggregated together with max-pooling and then re-weighted to consider the label co-occurrence or interdependencies information by using a graphical model in the label space. LGC also utilizes a global CNN that is trained by multi-label images to directly predict the multiple labels from the input. The predictions of local and global level classifiers are finally fused together to obtain MAP estimation of the final multi-label prediction. The above LGC framework could benefit from a pre-train process with a large-scale single-label image dataset, e.g., ImageNet. Experimental results have shown that the proposed framework could achieve promising performance on Pascal VOC2007 and VOC2012 multi-label image dataset.", "Convolutional Neural Network (CNN) has demonstrated promising performance in single-label image classification tasks. However, how CNN best copes with multi-label images still remains an open problem, mainly due to the complex underlying object layouts and insufficient multi-label training images. 
In this work, we propose a flexible deep CNN infrastructure, called Hypotheses-CNN-Pooling (HCP), where an arbitrary number of object segment hypotheses are taken as the inputs, then a shared CNN is connected with each hypothesis, and finally the CNN output results from different hypotheses are aggregated with max pooling to produce the ultimate multi-label predictions. Some unique characteristics of this flexible deep CNN infrastructure include: 1) no ground-truth bounding box information is required for training; 2) the whole HCP infrastructure is robust to possibly noisy and or redundant hypotheses; 3) the shared CNN is flexible and can be well pre-trained with a large-scale single-label image dataset, e.g., ImageNet; and 4) it may naturally output multi-label prediction results. Experimental results on Pascal VOC 2007 and VOC 2012 multi-label image datasets well demonstrate the superiority of the proposed HCP infrastructure over other state-of-the-arts. In particular, the mAP reaches 90.5 by HCP only and 93.2 after the fusion with our complementary result in [12] based on hand-crafted features on the VOC 2012 dataset.", "In multi-label classification, the main focus has been to develop ways of learning the underlying dependencies between labels, and to take advantage of this at classification time. Developing better feature-space representations has been predominantly employed to reduce complexity, e.g., by eliminating non-helpful feature attributes from the input space prior to (or during) training. This is an important task, since many multi-label methods typically create many different copies or views of the same input data as they transform it, and considerable memory can be saved by taking advantage of redundancy. In this paper, we show that a proper development of the feature space can make labels less interdependent and easier to model and predict at inference time. For this task we use a deep learning approach with restricted Boltzmann machines. 
We present a deep network that, in an empirical evaluation, outperforms a number of competitive methods from the literature", "", "", "Abstract Deep Neural Network (DNN) has recently achieved outstanding performance in a variety of computer vision tasks, including facial attribute classification. The great success of classifying facial attributes with DNN often relies on a massive amount of labelled data. However, in real-world applications, labelled data are only provided for some commonly used attributes (such as age, gender); whereas, unlabelled data are available for other attributes (such as attraction, hairline). To address the above problem, we propose a novel deep transfer neural network method based on multi-label learning for facial attribute classification, termed FMTNet, which consists of three sub-networks: the Face detection Network (FNet), the Multi-label learning Network (MNet) and the Transfer learning Network (TNet). Firstly, based on the Faster Region-based Convolutional Neural Network (Faster R-CNN), FNet is fine-tuned for face detection. Then, MNet is fine-tuned by FNet to predict multiple attributes with labelled data, where an effective loss weight scheme is developed to explicitly exploit the correlation between facial attributes based on attribute grouping. Finally, based on MNet, TNet is trained by taking advantage of unsupervised domain adaptation for unlabelled facial attribute classification. The three sub-networks are tightly coupled to perform effective facial attribute classification. A distinguishing characteristic of the proposed FMTNet method is that the three sub-networks (FNet, MNet and TNet) are constructed in a similar network structure. Extensive experimental results on challenging face datasets demonstrate the effectiveness of our proposed method compared with several state-of-the-art methods." ] }
AutoML algorithms focusing on multi-label-specific problems are proposed in @cite_32 @cite_2 , which use genetic algorithms to train and select multi-label models. @cite_31 proposes an extension of an existing multi-class AutoML tool to the multi-label setting. Apart from these, no other work on AutoML or automatic hyperparameter tuning in the multi-label domain was found.
{ "cite_N": [ "@cite_31", "@cite_32", "@cite_2" ], "mid": [ "2899653065", "2735881160", "2888647113" ], "abstract": [ "Automated machine learning (AutoML) has received increasing attention in the recent past. While the main tools for AutoML, such as Auto-WEKA, TPOT, and auto-sklearn, mainly deal with single-label classification and regression, there is very little work on other types of machine learning tasks. In particular, there is almost no work on automating the engineering of machine learning applications for multi-label classification. This paper makes two contributions. First, it discusses the usefulness and feasibility of an AutoML approach for multi-label classification. Second, we show how the scope of ML-Plan, an AutoML-tool for multi-class classification, can be extended towards multi-label classification using MEKA, which is a multi-label extension of the well-known Java library WEKA. The resulting approach recursively refines MEKA's multi-label classifiers, which sometimes nest another multi-label classifier, up to the selection of a single-label base learner provided by WEKA. In our evaluation, we find that the proposed approach yields superb results and performs significantly better than a set of baselines.", "Given a new dataset for classification in Machine Learning (ML), finding the best classification algorithm and the best configuration of its (hyper)-parameters for that particular dataset is an open issue. The Automatic ML (Auto-ML) area has emerged to solve this task. With this issue in mind, in this work we are interested in a specific type of classification problem, called multi-label classification (MLC). In MLC, each example in the dataset can be associated to one or more class labels, making the task considerably harder than traditional, single-label classification. In addition, the cost of learning raises due to the higher complexity of the data. 
Although the literature has proposed some methods to solve the Auto-ML task, those methods address only the traditional, single-label classification problem. By contrast, this work proposes the first method (an evolutionary algorithm) for solving the Auto-ML task in MLC, i.e., the first method for automatically selecting and configuring the best MLC algorithm for a given input dataset. The proposed evolutionary algorithm is evaluated on three MLC datasets, and compared against two baseline methods according to four different multi-label predictive accuracy measures. The results show that the proposed evolutionary algorithm is competitive against the baselines, but there is still room for improvement.", "This paper proposes ( Auto-MEKA _ GGP ), an Automated Machine Learning (Auto-ML) method for Multi-Label Classification (MLC) based on the MEKA tool, which offers a number of MLC algorithms. In MLC, each example can be associated with one or more class labels, making MLC problems harder than conventional (single-label) classification problems. Hence, it is essential to select an MLC algorithm and its configuration tailored (optimized) for the input dataset. ( Auto-MEKA _ GGP ) addresses this problem with two key ideas. First, a large number of choices of MLC algorithms and configurations from MEKA are represented into a grammar. Second, our proposed Grammar-based Genetic Programming (GGP) method uses that grammar to search for the best MLC algorithm and configuration for the input dataset. ( Auto-MEKA _ GGP ) was tested in 10 datasets and compared to two well-known MLC methods, namely Binary Relevance and Classifier Chain, and also compared to GA-Auto-MLC, a genetic algorithm we recently proposed for the same task. Two versions of ( Auto-MEKA _ GGP ) were tested: a full version with the proposed grammar, and a simplified version where the grammar includes only the algorithmic components used by GA-Auto-MLC. 
Overall, the full version of ( Auto-MEKA _ GGP ) achieved the best predictive accuracy among all five evaluated methods, being the winner in six out of the 10 datasets." ] }
The cascade correlation neural network approach @cite_10 was an early AutoML method. In cascade correlation neural networks, training starts with a simple perceptron network, which is grown incrementally by adding new cascaded layers with skip-level connections for as long as performance on a validation dataset improves. Since the original cascade correlation algorithm was proposed in @cite_10 , various improvements that follow a similar overall process have been suggested, for example in @cite_16 @cite_18 @cite_14 @cite_36 , as well as Cascade2 @cite_8 . Active research in this field, however, is fairly limited. As it is the basis for CascadeML, the Cascade2 algorithm is described in detail in the next section.
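The incremental growth loop described above can be sketched as follows. This is an illustrative regression sketch only, assuming NumPy is available; the function name `cascade_fit`, the hyperparameters, and the simple unit cap (in place of validation-based stopping) are ours, not taken from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

def cascade_fit(X, y, max_units=5, candidate_steps=300, lr=0.05):
    """Minimal cascade-correlation-style regression sketch.

    Starts from a linear output layer, then repeatedly:
      1. trains one candidate hidden unit to correlate with the current
         residual error (the candidate sees the raw inputs AND every
         previously frozen unit -- the skip-level 'cascade' connections);
      2. freezes the candidate and appends its activations as a new feature;
      3. refits only the output weights on the enlarged feature set.
    A full implementation would stop when validation error stops
    improving; here we simply cap the number of hidden units.
    """
    feats = X  # current feature matrix: inputs plus frozen unit outputs
    w_out = np.linalg.lstsq(feats, y, rcond=None)[0]
    for _ in range(max_units):
        residual = y - feats @ w_out
        w_c = rng.normal(scale=0.5, size=feats.shape[1])
        for _ in range(candidate_steps):
            h = np.tanh(feats @ w_c)
            # approximate gradient of the covariance between the unit's
            # activation and the residual error
            grad = feats.T @ ((residual - residual.mean()) * (1.0 - h ** 2))
            w_c += lr * grad / len(y)
        feats = np.column_stack([feats, np.tanh(feats @ w_c)])
        w_out = np.linalg.lstsq(feats, y, rcond=None)[0]
    return feats, w_out
```

Because each frozen unit is appended as a feature column and the output weights are refit by least squares, the training error cannot increase as units are added; on a nonlinear target such as a sine wave the grown network fits markedly better than the initial linear layer.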
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_8", "@cite_36", "@cite_16", "@cite_10" ], "mid": [ "191220328", "1543146175", "2133640977", "2099790329", "1527646298", "2109779438" ], "abstract": [ "The optimal selection of neural network architecture is a problem of significant theoretical and practical importance. The functional capacity of the network architecture determines the generalization properties and the computational complexity of the algorithm. For supervised training of feed-forward networks, current generalization theories tell us to minimize the functional capacity, i.e., the number of weights subject to the constraints imposed by the rule implicit in the training examples see e.g., (, 91). Several schemes have been proposed for pruning of the network architecture. The optimal brain damage method for network pruning proposed in (, 90) has been successful for pruning of classifiers as well as for pruning of regression networks for time series processing (, 93). The optimal brain surgeon (OBS) advanced in (Hassihi and Stork, 92) provides a refined estimate of the weight saliency, by incorporating the effects of quadratic retraining; however, at the expense of Irignificant additional computation. Likewise, growth algorithms are legio. Among these, Fahlman’s cascade correlation stands out for solving hard classification problems (Fahlman and Lebiere, 89). In the standard implementation cascade correlation adds hidden units that are fully connected to all input units and to all previously added hidden units, thereby rapidly increasing the number of weights in the network with the obvious danger of overfitting.", "", "Abstract Six learning algorithms are investigated and compared empirically. All of them are based on variants of the candidate training idea of the Cascade Correlation method. The comparison was performed using 42 different datasets from the PROBEN1 benchmark collection. 
The results indicate: (1) for these problems it is slightly better not to cascade the hidden units; (2) error minimization candidate training is better than covariance maximization for regression problems but may be a little worse for classification problems; (3) for most learning tasks, considering validation set errors during the selection of the best candidate will not lead to improved networks, but for a few tasks it will. © 1997 Elsevier Science Ltd.", "The cascade correlation is a very flexible, efficient and fast algorithm for supervised learning. It incrementally builds the network by adding hidden units one at a time, until the desired input output mapping is achieved. It connects all the previously installed units to the new unit being added. Consequently, each new unit in effect adds a new layer and the fan-in of the hidden and output units keeps on increasing as more units get added. The resulting structure could be hard to implement in VLSI, because the connections are irregular and the fan-in is unbounded. Moreover, the depth or the propagation delay through the resulting network is directly proportional to the number of units and can be excessive. We have modified the algorithm to generate networks with restricted fan-in and small depth (propagation delay) by controlling the connectivity. Our results reveal that there is a tradeoff between connectivity and other performance attributes like depth, total number of independent parameters, and learning time. >", "Abstract : The Cascade-Correlation learning algorithm constructs a multi-layer artificial neural network as it learns to perform a given task. The resulting network's size and topology are chosen specifically for this task. In the resulting 'cascade' networks, each new hidden unit receives incoming connections from all input and pre-existing hidden units. In effect, each new unit adds a new layer to the network. 
This allows Cascade-Correlation to create complex feature detectors, but it typically results in a network that is deeper, in terms of the longest path from input to output, than is necessary to solve the problem efficiently. In this paper we investigate a simple variation of Cascade-Correlation that will build deep nets if necessary, but that is biased toward minimizing network depth. We demonstrate empirically, across a range of problems, that this simple technique can reduce network depth, often dramatically. However, we show that this technique does not, in general, reduce the total number of weights or improve the generalization ability of the resulting networks.", "Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology. Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network." ] }
1904.10675
2942453398
The traditional cycle of industrial products has been linear since its inception. Raw resources are acquired, processed, distributed, used and ultimately disposed of. This linearity has led to a dangerously low efficiency degree in resource use, and has brought forth serious concerns for the viability of our natural ecosystem. Circular economy is introducing a circular workflow for the lifetime of products. It generalizes the disposal phase, reconnecting it to manufacturing, distribution and end-use, thus limiting true deposition to the environment. This process has not been extended so far to software. Nonetheless, the development of software follows the same phases, and also entails the use-and waste-of considerable resources. This include human effort, as well as human and infrastructure sustenance products such as food, traveling and energy. This paper introduces circular economy principles to the software development, and particularly to network management logic and security. It employs a recently proposed concept-the Socket Store-which is an online store distributing end-user network logic in modular form. The Store modules act as mediators between the end-user network logic and the network resources. It is shown that the Socket Store can implement all circular economy principles to the software life-cycle, with considerable gains in resource waste.
In 1998, the REMOS unified network query interface was proposed, allowing applications to request network topology and congestion information in order to tune their behavior accordingly @cite_17 . PerfSONAR is a more recent approach in this direction @cite_9 , employing a more standardized metadata format for describing network characteristics @cite_28 . The Unified Network Information Service @cite_33 offered an alternative solution with scalability, security and performance benefits. Application-domain-specific network querying solutions exist as well, such as MonALISA for distributed systems @cite_5 and the Network Weather Service for meta-computing @cite_32 . These approaches also employ standardized formats for representing the network status and configuration @cite_12 . In Socket Store terminology, these works are means of exposing the network infrastructure capabilities, allowing researchers to build Socket modules on top of them.
{ "cite_N": [ "@cite_33", "@cite_28", "@cite_9", "@cite_32", "@cite_5", "@cite_12", "@cite_17" ], "mid": [ "1986765849", "2119192239", "", "2083209848", "1985987433", "2239064595", "2095629498" ], "abstract": [ "A holistic view of the network is key to the successful operation of many distributed, cloud-based, and service-oriented computing architectures. Supporting network-aware applications and application-driven networks requires a detailed representation of network resources, including multi-layer topologies, associated measurement data, and in-the-network service location and availability information. The rapid development of increasingly configurable and dynamic networks has increased the demand for information services that can accurately and efficiently store and expose the state of the network. This work introduces our Unified Network Information Service (UNIS), designed to represent physical and virtual networks and services. We describe the UNIS network data model and its RESTful interface, which provide a common interface to topology, service, and measurement resources. In addition, we describe the security mechanisms built into the UNIS framework. Our analysis of the UNIS implementation shows significant performance and scalability gains over an existing and widely-deployed topology, service registration, and lookup information service architecture.", "Grid and distributed computing environments are evolving rapidly and driving the development of system and network technologies. The design of applications has placed an increased emphasis upon adapting application behavior based on the performance of the network. In addition, network operators and network researchers are naturally interested in gathering and studying network performance information. This work presents an extensible framework for the storage and exchange of performance measurements. 
Leveraging existing storage and exchange mechanisms, the proposed framework is capable of handling a wide variety of measurements while delivering performance comparable to that of less flexible, ad-hoc solutions.", "", "Abstract The goal of the Network Weather Service is to provide accurate forecasts of dynamically changing performance characteristics from a distributed set of metacomputing resources. Providing a ubiquitous service that can both track dynamic performance changes and remain stable in spite of them requires adaptive programming techniques, an architectural design that supports extensibility, and internal abstractions that can be implemented efficiently and portably. In this paper, we describe the current implementation of the NWS for Unix and TCP IP sockets and provide examples of its performance monitoring and forecasting capabilities.", "The MonALISA (Monitoring Agents in a Large Integrated Services Architecture) framework provides a set of distributed services for monitoring, control, management and global optimization for large scale distributed systems. It is based on an ensemble of autonomous, multi-threaded, agent-based subsystems which are registered as dynamic services. They can be automatically discovered and used by other services or clients. The distributed agents can collaborate and cooperate in performing a wide range of management, control and global optimization tasks using real time monitoring information.", "YANG is a data modeling language used to model configuration and state data manipulated by the Network Configuration Protocol (NETCONF) protocol, NETCONF remote procedure calls, and NETCONF notifications.", "Development of portable network-aware applications demands an interface to the network that allows an application to obtain information about its execution environment. The paper motivates and describes the design of Remos, an API that allows network-aware applications to obtain relevant information. 
The major challenges in defining a uniform interface are network heterogeneity, diversity in traffic requirements, variability of the information, and resource sharing in the network. Remos addresses these issues with two abstraction levels, explicit management of resource sharing, and statistical measurements. The flows abstraction captures the communication between nodes, and the topologies abstraction provides a logical view of network connectivity. Remos measurements are made at network level, and therefore information to manage sharing of resources is available. Remos is designed to deliver best effort information to applications, and it explicitly adds statistical reliability and variability measures to the core information. The paper also presents preliminary results and experience with a prototype Remos implementation for a high speed IP based network testbed." ] }
Novel infrastructure capabilities, such as Infrastructure-as-a-Service, can be exposed in reusable component form at the Store via the NFV paradigm @cite_14 . Additionally, the Store can benefit from directly interfacing with SDN technology, which already offers the necessary interfaces to query and modify the network state in a modular manner (e.g., network ) @cite_24 . Based on the advances in SDN and NFV, recent works have proposed new paradigms that can enable QoS guarantees across the Internet, crossing Autonomous System borders @cite_25 .
{ "cite_N": [ "@cite_24", "@cite_14", "@cite_25" ], "mid": [ "2123372830", "", "2150574682" ], "abstract": [ "Network functions virtualization (NFV) together with software-defined networking (SDN) has the potential to help operators satisfy tight service level agreements, accurately monitor and manipulate network traffic, and minimize operating expenses. However, in scenarios that require packet processing to be redistributed across a collection of network function (NF) instances, simultaneously achieving all three goals requires a framework that provides efficient, coordinated control of both internal NF state and network forwarding state. To this end, we design a control plane called OpenNF. We use carefully designed APIs and a clever combination of events and forwarding updates to address race conditions, bound overhead, and accommodate a variety of NFs. Our evaluation shows that OpenNF offers efficient state control without compromising flexibility, and requires modest additions to NFs.", "", "BGP severely constrains how networks can deliver traffic over the Internet. Today's networks can only forward traffic based on the destination IP prefix, by selecting among routes offered by their immediate neighbors. We believe Software Defined Networking (SDN) could revolutionize wide-area traffic delivery, by offering direct control over packet-processing rules that match on multiple header fields and perform a variety of actions. Internet exchange points (IXPs) are a compelling place to start, given their central role in interconnecting many networks and their growing importance in bringing popular content closer to end users. To realize a Software Defined IXP (an \"SDX\"), we must create compelling applications, such as \"application-specific peering\"---where two networks peer only for (say) streaming video traffic. 
We also need new programming abstractions that allow participating networks to create and run these applications and a runtime that both behaves correctly when interacting with BGP and ensures that applications do not interfere with each other. Finally, we must ensure that the system scales, both in rule-table size and computational overhead. In this paper, we tackle these challenges and demonstrate the flexibility and scalability of our solutions through controlled and in-the-wild experiments. Our experiments demonstrate that our SDX implementation can implement representative policies for hundreds of participants who advertise full routing tables while achieving sub-second convergence in response to configuration changes and routing updates." ] }
1904.10709
2952267571
Weather Recognition plays an important role in our daily lives and many computer vision applications. However, recognizing the weather conditions from a single image remains challenging and has not been studied thoroughly. Generally, most previous works treat weather recognition as a single-label classification task, namely, determining whether an image belongs to a specific weather class or not. This treatment is not always appropriate, since more than one weather condition may appear simultaneously in a single image. To address this problem, we make the first attempt to view weather recognition as a multi-label classification task, i.e., assigning an image more than one label according to the displayed weather conditions. Specifically, a CNN-RNN based multi-label classification approach is proposed in this paper. The convolutional neural network (CNN) is extended with a channel-wise attention model to extract the most correlated visual features. The Recurrent Neural Network (RNN) further processes the features and excavates the dependencies among weather classes. Finally, the weather labels are predicted step by step. Besides, we construct two datasets for the weather recognition task and explore the relationships among different weather conditions. Experimental results demonstrate the superiority and effectiveness of the proposed approach. The newly constructed datasets will be available at this https URL.
In recent years, convolutional neural networks have shown overwhelming performance in a variety of computer vision tasks, such as image classification @cite_5 , object detection @cite_16 , semantic segmentation @cite_27 , etc. Several excellent architectures of CNNs have been proposed, including AlexNet @cite_5 , VGGNet @cite_44 and ResNet @cite_41 , which outperform the traditional approaches to a large extent. Inspired by the great success of CNNs, a few works have attempted to apply CNNs to the weather recognition task. Elhoseiny @cite_15 directly fine-tuned AlexNet @cite_5 on a two-class weather classification dataset released by @cite_22 , and achieved a better result. Lu @cite_54 combined hand-crafted weather features with CNN-extracted features, and further improved the classification performance. However, as discussed in @cite_54 , there are no closed boundaries among weather classes; multiple weather conditions may appear simultaneously. Therefore, all the above approaches suffer from information loss when they treat weather recognition as a single-label classification problem. Li @cite_12 proposed to use auxiliary semantic segmentation of weather cues to comprehensively describe the weather conditions. This strategy can alleviate the problem of information loss, but the segmentation mask is not intuitive for humans.
{ "cite_N": [ "@cite_22", "@cite_41", "@cite_54", "@cite_44", "@cite_27", "@cite_5", "@cite_15", "@cite_16", "@cite_12" ], "mid": [ "", "2194775991", "", "1686810756", "", "2163605009", "2293009534", "2613718673", "2765717554" ], "abstract": [ "", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. 
These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.", "", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. 
An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2% mAP) and 2012 (70.4% mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.", "Weather recognition is important in practice, while this task has not been thoroughly explored so far. The current trend of dealing with this task is treating it as a single classification problem, i.e., determining whether a given image belongs to a certain weather category or not. However, weather recognition differs significantly from traditional image classification, since several weather features may appear simultaneously. In this case, a simple classification result is insufficient to describe the weather condition. To address this issue, we propose to provide auxiliary weather related information for comprehensive weather description. Specifically, semantic segmentation of weather-cues, such as blue sky and white clouds, is exploited as an auxiliary task in this paper. Moreover, a convolutional neural network (CNN) based multi-task framework is developed which aims to concurrently tackle weather category classification task and weather-cues segmentation task. Due to the intrinsic relationships between these two tasks, exploring auxiliary semantic segmentation of weather-cues can also help to learn discriminative features for the classification task, and thus obtain superior accuracy. 
To verify the effectiveness of the proposed approach, extra segmentation masks of weather-cues are generated manually on an existing weather image dataset. Experimental results have demonstrated the superior performance of our approach. The enhanced dataset, source codes and pre-trained models are available at https://github.com/wzgwzg/Multitask_Weather." ] }
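The channel-wise attention model mentioned in the weather-recognition record above can be sketched as a squeeze-and-excitation-style reweighting of CNN feature channels. This is a minimal numpy illustration with hypothetical shapes and randomly initialized weights, not the authors' implementation:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Reweight CNN feature channels (squeeze-and-excitation style).

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are the
    bottleneck weights (assumed, normally learned by backprop).
    """
    squeeze = feat.mean(axis=(1, 2))             # (C,) global average pool
    hidden = np.maximum(w1 @ squeeze, 0.0)       # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gates in (0, 1)
    return feat * gate[:, None, None]            # rescale each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))   # reduction ratio r = 4
w2 = rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
assert out.shape == feat.shape
```

Because each gate lies in (0, 1), attention can only attenuate channels here; the RNN stage of the paper's pipeline would then consume the reweighted features.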
1904.10748
2951958587
We propose a new concept named adaptive submodularity ratio to study the greedy policy for sequential decision making. While the greedy policy is known to perform well for a wide variety of adaptive stochastic optimization problems in practice, its theoretical properties have been analyzed only for a limited class of problems. We narrow the gap between theory and practice by using adaptive submodularity ratio, which enables us to prove approximation guarantees of the greedy policy for a substantially wider class of problems. Examples of newly analyzed problems include important applications such as adaptive influence maximization and adaptive feature selection. Our adaptive submodularity ratio also provides bounds of adaptivity gaps. Experiments confirm that the greedy policy performs well with the applications being considered compared to standard heuristics.
Another attempt to relax adaptive submodularity is presented in @cite_2 , which introduced @math -weak adaptive submodularity. Analogous to our adaptive submodularity ratio, one can readily see that @math -weak adaptive submodularity is equivalent to adaptive submodularity. In general, however, there is a difference between the two notions; the adaptive submodularity ratio can be bounded from below by @math , implying that it is more demanding to bound the value of @math than that of the adaptive submodularity ratio. We provide a proof in subsec:proof-yong . They also studied a problem and gave a bound of @math , but some vital assumptions seem to have been missed. In sec:counter-yong , we provide a problem instance in which their bound does not hold. We also present instances of adaptive influence maximization and adaptive feature selection for which our framework provides strictly better approximation ratios than those obtained with the weak adaptive submodularity in subsec:comparison-yong-infmax,subsec:comparison-yong-feature .
{ "cite_N": [ "@cite_2" ], "mid": [ "2963391100" ], "abstract": [ "In this paper, we consider adaptive decision-making problems for stochastic state estimation with partial observations. First, we introduce the concept of weak adaptive submodularity, a generalization of adaptive submodularity, which has found great success in solving challenging adaptive state estimation problems. Then, for the problem of active diagnosis, i.e., discrete state estimation via active sensing, we show that an adaptive greedy policy has a near-optimal performance guarantee when the reward function possesses this property. We further show that the reward function for group-based active diagnosis, which arises in applications such as medical diagnosis and state estimation with persistent sensor faults, is also weakly adaptive submodular. Finally, in experiments of state estimation for an aircraft electrical system with persistent sensor faults, we observe that an adaptive greedy policy performs equally well as an exhaustive search." ] }
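The adaptive greedy policy analyzed in these works can be illustrated on a toy stochastic coverage problem: each selected item succeeds with some probability, and its realized state is observed before the next choice is made. The model and numbers below are illustrative assumptions, not taken from either paper:

```python
import random

def adaptive_greedy(cover, prob, budget, rng):
    """Adaptive greedy for stochastic coverage: item i, if selected,
    succeeds with prob[i] and then covers the set cover[i]; the outcome
    is observed before the next item is chosen (toy model)."""
    covered, chosen = set(), []
    for _ in range(budget):
        # expected marginal gain of each remaining item given observations
        best = max((i for i in cover if i not in chosen),
                   key=lambda i: prob[i] * len(cover[i] - covered))
        chosen.append(best)
        if rng.random() < prob[best]:  # observe the realized state
            covered |= cover[best]
    return chosen, covered

rng = random.Random(0)
cover = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}}
prob = {0: 0.9, 1: 0.5, 2: 1.0}
chosen, covered = adaptive_greedy(cover, prob, 2, rng)
assert chosen == [0, 2]  # item 0 first (0.9 * 3), then item 2 (1.0 * 1)
```

The policy conditions each pick on what it has already observed, which is exactly the setting whose guarantees the adaptive submodularity ratio is designed to analyze.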
1904.10748
2951958587
We propose a new concept named adaptive submodularity ratio to study the greedy policy for sequential decision making. While the greedy policy is known to perform well for a wide variety of adaptive stochastic optimization problems in practice, its theoretical properties have been analyzed only for a limited class of problems. We narrow the gap between theory and practice by using adaptive submodularity ratio, which enables us to prove approximation guarantees of the greedy policy for a substantially wider class of problems. Examples of newly analyzed problems include important applications such as adaptive influence maximization and adaptive feature selection. Our adaptive submodularity ratio also provides bounds of adaptivity gaps. Experiments confirm that the greedy policy performs well with the applications being considered compared to standard heuristics.
Influence maximization was proposed by . An adaptive version of influence maximization was first considered by . They showed that this objective function satisfies adaptive submodularity under the independent cascade model in general graphs. Influence maximization on a bipartite graph has been studied for applications to advertisement selection @cite_18 @cite_15 . This problem setting was extended to the adaptive setting by , but only the independent cascade model was considered. The curvature of its objective function was studied by .
{ "cite_N": [ "@cite_18", "@cite_15" ], "mid": [ "2010440649", "2161120928" ], "abstract": [ "Brands and agencies use marketing as a tool to influence customers. One of the major decisions in a marketing plan deals with the allocation of a given budget among media channels in order to maximize the impact on a set of potential customers. A similar situation occurs in a social network, where a marketing budget needs to be distributed among a set of potential influencers in a way that provides high-impact. We introduce several probabilistic models to capture the above scenarios. The common setting of these models consists of a bipartite graph of source and target nodes. The objective is to allocate a fixed budget among the source nodes to maximize the expected number of influenced target nodes. The concrete way in which source nodes influence target nodes depends on the underlying model. We primarily consider two models: a source-side influence model, in which a source node that is allocated a budget of k makes k independent trials to influence each of its neighboring target nodes, and a target-side influence model, in which a target node becomes influenced according to a specified rule that depends on the overall budget allocated to its neighbors. Our main results are an optimal (1-1 e)-approximation algorithm for the source-side model, and several inapproximability results for the target-side model, establishing that influence maximization in the latter model is provably harder.", "We consider the budget allocation problem over bipartite influence model proposed by (, 2012). This problem can be viewed as the well-known influence maximization problem with budget constraints. We first show that this problem and its much more general form fall into a general setting; namely the monotone submodular function maximization over integer lattice subject to a knapsack constraint. Our framework includes 's model, even with a competitor and with cost. 
We then give a (1 - 1/e)-approximation algorithm for this more general problem. Furthermore, when influence probabilities are nonincreasing, we obtain a faster (1 - 1/e)-approximation algorithm, which runs essentially in linear time in the number of nodes. This allows us to implement our algorithm up to almost 10M edges (indeed, our experiments tell us that we can implement our algorithm up to 1 billion edges. It would approximately take us only 500 seconds.)." ] }
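The unit-budget special case of this bipartite model, selecting k source nodes to maximize the expected number of activated targets, has a monotone submodular objective, so plain greedy attains the (1 - 1/e) guarantee. A toy sketch (the graph and probabilities are made up):

```python
def expected_influence(p, seeds, n_targets):
    """Expected number of activated targets on a bipartite graph where
    source s independently activates target t with probability p[s][t]."""
    total = 0.0
    for t in range(n_targets):
        fail = 1.0
        for s in seeds:
            fail *= 1.0 - p[s][t]   # prob. no selected source activates t
        total += 1.0 - fail
    return total

def greedy_seeds(p, k, n_targets):
    """Plain greedy: (1 - 1/e)-approximation for this monotone submodular
    objective (naive recomputation, no lazy evaluation)."""
    seeds = []
    for _ in range(k):
        best = max((s for s in p if s not in seeds),
                   key=lambda s: expected_influence(p, seeds + [s], n_targets))
        seeds.append(best)
    return seeds

p = {  # p[s][t]: activation probability of target t by source s (toy numbers)
    'a': [0.9, 0.0, 0.0],
    'b': [0.5, 0.5, 0.5],
    'c': [0.0, 0.0, 0.8],
}
seeds = greedy_seeds(p, 2, 3)
assert seeds == ['b', 'a']  # 'b' covers all targets first; 'a' adds most after
```

The general budget-allocation problem of @cite_18 and @cite_15 additionally splits a budget across sources, but the greedy structure of the solution is the same.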
1904.10596
2952228992
We introduce the Neural Collaborative Subspace Clustering, a neural model that discovers clusters of data points drawn from a union of low-dimensional subspaces. In contrast to previous attempts, our model runs without the aid of spectral clustering. This makes our algorithm one of the kinds that can gracefully scale to large datasets. At its heart, our neural model benefits from a classifier which determines whether a pair of points lies on the same subspace or not. Essential to our model is the construction of two affinity matrices, one from the classifier and the other from a notion of subspace self-expressiveness, to supervise training in a collaborative scheme. We thoroughly assess and contrast the performance of our model against various state-of-the-art clustering algorithms including deep subspace-based ones.
The latter, i.e., self-expressiveness-based methods, have become dominant due to their elegant convex formulations and the existence of theoretical analysis. The basic idea of subspace self-expressiveness is that one point can be represented as a linear combination of other points from the same subspace. This leads to several advantages over other methods: (i) it is more robust to noise and outliers; (ii) the computational complexity of the self-expressiveness affinity does not grow exponentially with the number of subspaces and their dimensions; (iii) it also exploits non-local information without the need of specifying the size of the neighborhood (e.g., the number of nearest neighbors usually used for identifying locally linear subspaces @cite_12 @cite_25 ).
{ "cite_N": [ "@cite_25", "@cite_12" ], "mid": [ "2004744336", "1950520880" ], "abstract": [ "We present a simple and fast geometric method for modeling data by a union of affine subspaces. The method begins by forming a collection of local best-fit affine subspaces, i.e., subspaces approximating the data in local neighborhoods. The correct sizes of the local neighborhoods are determined automatically by the Jones' β 2 numbers (we prove under certain geometric conditions that our method finds the optimal local neighborhoods). The collection of subspaces is further processed by a greedy selection procedure or a spectral method to generate the final model. We discuss applications to tracking-based motion segmentation and clustering of faces under different illuminating conditions. We give extensive experimental evidence demonstrating the state of the art accuracy and speed of the suggested algorithms on these problems and also on synthetic hybrid linear data as well as the MNIST handwritten digits data; and we demonstrate how to use our algorithms for fast determination of the number of affine subspaces.", "We cast the problem of motion segmentation of feature trajectories as linear manifold finding problems and propose a general framework for motion segmentation under affine projections which utilizes two properties of trajectory data: geometric constraint and locality. The geometric constraint states that the trajectories of the same motion lie in a low dimensional linear manifold and different motions result in different linear manifolds; locality, by which we mean in a transformed space a data and its neighbors tend to lie in the same linear manifold, provides a cue for efficient estimation of these manifolds. Our algorithm estimates a number of linear manifolds, whose dimensions are unknown beforehand, and segment the trajectories accordingly. 
It first transforms and normalizes the trajectories; secondly, for each trajectory it estimates a local linear manifold through local sampling; then it derives the affinity matrix based on principal subspace angles between these estimated linear manifolds; at last, spectral clustering is applied to the matrix and gives the segmentation result. Our algorithm is general without restriction on the number of linear manifolds and without prior knowledge of the dimensions of the linear manifolds. We demonstrate in our experiments that it can segment a wide range of motions including independent, articulated, rigid, non-rigid, degenerate, non-degenerate or any combination of them. In some highly challenging cases where other state-of-the-art motion segmentation algorithms may fail, our algorithm gives expected results." ] }
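Subspace self-expressiveness can be sketched with a ridge-regularized least-squares variant: each point is regressed on all the others, and the coefficient magnitudes form the affinity. The l1 formulation of SSC is replaced here by a closed-form ridge solve for simplicity; the regularization weight and toy data are my own assumptions:

```python
import numpy as np

def self_expressive_affinity(X, lam=0.1):
    """X: (d, n) data, columns are points. For each point x_i solve
    min_c ||x_i - X_{-i} c||^2 + lam ||c||^2 (closed-form ridge), then
    symmetrize |C| + |C|^T as the affinity; the diagonal stays zero."""
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        A = X[:, idx]                                        # all other points
        c = np.linalg.solve(A.T @ A + lam * np.eye(n - 1), A.T @ X[:, i])
        C[idx, i] = c
    return np.abs(C) + np.abs(C).T

# two 1-D subspaces (lines) in R^2; points on the same line should
# reconstruct each other with large coefficients
X = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 3.0]])
W = self_expressive_affinity(X)
assert W[0, 1] > W[0, 2] and W[3, 2] > W[3, 0]
```

In SSC-style pipelines this affinity `W` would then be fed to spectral clustering, which is precisely the step the neural model above is designed to avoid.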
1904.10596
2952228992
We introduce the Neural Collaborative Subspace Clustering, a neural model that discovers clusters of data points drawn from a union of low-dimensional subspaces. In contrast to previous attempts, our model runs without the aid of spectral clustering. This makes our algorithm one of the kinds that can gracefully scale to large datasets. At its heart, our neural model benefits from a classifier which determines whether a pair of points lies on the same subspace or not. Essential to our model is the construction of two affinity matrices, one from the classifier and the other from a notion of subspace self-expressiveness, to supervise training in a collaborative scheme. We thoroughly assess and contrast the performance of our model against various state-of-the-art clustering algorithms including deep subspace-based ones.
Recently, Deep Subspace Clustering Networks (DSC-Net) @cite_30 were introduced to tackle the non-linearity arising in subspace clustering: data is non-linearly mapped to a latent space with convolutional auto-encoders, and a new self-expressive layer is introduced between the encoder and decoder to facilitate end-to-end learning of the affinity matrix. Although DSC-Net outperforms traditional subspace clustering methods by a large margin, its computational cost and memory footprint can become overwhelming even for mid-size problems.
{ "cite_N": [ "@cite_30" ], "mid": [ "2963365397" ], "abstract": [ "We present a novel deep neural network architecture for unsupervised subspace clustering. This architecture is built upon deep auto-encoders, which non-linearly map the input data into a latent space. Our key idea is to introduce a novel self-expressive layer between the encoder and the decoder to mimic the \"self-expressiveness\" property that has proven effective in traditional subspace clustering. Being differentiable, our new self-expressive layer provides a simple but effective way to learn pairwise affinities between all data points through a standard back-propagation procedure. Being nonlinear, our neural-network based method is able to cluster data points having complex (often nonlinear) structures. We further propose pre-training and fine-tuning strategies that let us effectively learn the parameters of our subspace clustering networks. Our experiments show that the proposed method significantly outperforms the state-of-the-art unsupervised subspace clustering methods." ] }
1904.10596
2952228992
We introduce the Neural Collaborative Subspace Clustering, a neural model that discovers clusters of data points drawn from a union of low-dimensional subspaces. In contrast to previous attempts, our model runs without the aid of spectral clustering. This makes our algorithm one of the kinds that can gracefully scale to large datasets. At its heart, our neural model benefits from a classifier which determines whether a pair of points lies on the same subspace or not. Essential to our model is the construction of two affinity matrices, one from the classifier and the other from a notion of subspace self-expressiveness, to supervise training in a collaborative scheme. We thoroughly assess and contrast the performance of our model against various state-of-the-art clustering algorithms including deep subspace-based ones.
There are a few attempts to tackle the scalability of subspace clustering. SSC-Orthogonal Matching Pursuit (SSC-OMP) @cite_15 replaces the large-scale convex optimization procedure with the OMP algorithm to build the affinity matrix. However, SSC-OMP sacrifices clustering performance in favor of speeding up the computation, and it may still fail when the number of data points is very large. @math -Subspace Clustering Networks ( @math -SCN) @cite_33 were proposed to make subspace clustering applicable to large datasets. This is achieved by bypassing the construction of the affinity matrix, consequently avoiding spectral clustering, and introducing the iterative method of @math -subspace clustering @cite_47 @cite_49 into a deep structure. Although @math -SCN develops two approaches to update the subspaces and networks, it still shares the same drawbacks as iterative methods: for instance, it requires a good initialization and seems fragile to outliers.
{ "cite_N": [ "@cite_15", "@cite_47", "@cite_33", "@cite_49" ], "mid": [ "2963840432", "1606778734", "2898932039", "" ], "abstract": [ "Subspace clustering methods based on l1, l2 or nuclear norm regularization have become very popular due to their simplicity, theoretical guarantees and empirical success. However, the choice of the regularizer can greatly impact both theory and practice. For instance, l1 regularization is guaranteed to give a subspace-preserving affinity (i.e., there are no connections between points from different subspaces) under broad conditions (e.g., arbitrary subspaces and corrupted data). However, it requires solving a large scale convex optimization problem. On the other hand, l2 and nuclear norm regularization provide efficient closed form solutions, but require very strong assumptions to guarantee a subspace-preserving affinity, e.g., independent subspaces and uncorrupted data. In this paper we study a subspace clustering method based on orthogonal matching pursuit. We show that the method is both computationally efficient and guaranteed to give a subspace-preserving affinity under broad conditions. Experiments on synthetic data verify our theoretical analysis, and applications in handwritten digit and face clustering show that our approach achieves the best trade off between accuracy and efficiency. Moreover, our approach is the first one to handle 100,000 data points.", "Recently, Bradley and Mangasarian studied the problem of finding the nearest plane to m given points in ℝn in the least square sense. They showed that the problem reduces to finding the least eigenvalue and associated eigenvector of a certain n×n symmetric positive-semidefinite matrix. We extend this result to the general problem of finding the nearest q-flat to m points, with 0≤q≤n−1.", "Subspace clustering algorithms are notorious for their scalability issues because building and processing large affinity matrices are demanding. 
In this paper, we introduce a method that simultaneously learns an embedding space along subspaces within it to minimize a notion of reconstruction error, thus addressing the problem of subspace clustering in an end-to-end learning paradigm. To achieve our goal, we propose a scheme to update subspaces within a deep neural network. This in turn frees us from the need of having an affinity matrix to perform clustering. Unlike previous attempts, our method can easily scale up to large datasets, making it unique in the context of unsupervised learning with deep architectures. Our experiments show that our method significantly improves the clustering accuracy while enjoying cheaper memory footprints.", "" ] }
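The iterative k-subspace method that k-SCN builds on alternates between assigning points to their nearest subspace and refitting each subspace by SVD. The sketch below uses a deterministic one-point-per-subspace initialization, echoing the note above that these methods need a good initialization (toy data and init choice are mine):

```python
import numpy as np

def k_subspaces(X, k, dim, n_iter=10):
    """Iterative k-subspace clustering: assign each point (row of X) to the
    subspace with the smallest reconstruction residual, then refit each
    subspace basis via SVD of its assigned points."""
    # deterministic toy init: normalized points spread across the dataset
    seeds = np.linspace(0, len(X) - 1, k).astype(int)
    bases = [X[j:j + 1] / np.linalg.norm(X[j]) for j in seeds]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # residual of projecting every point onto every candidate subspace
        res = np.stack([np.linalg.norm(X - (X @ B.T) @ B, axis=1)
                        for B in bases])
        labels = res.argmin(axis=0)
        bases = []
        for j in range(k):
            pts = X[labels == j] if (labels == j).any() else X[j:j + 1]
            _, _, vt = np.linalg.svd(pts, full_matrices=False)
            bases.append(vt[:dim])  # top `dim` right singular vectors
    return labels

# two 1-D subspaces (lines) in R^3
X = np.array([[1, 0, 0], [2, 0, 0], [-3, 0, 0],
              [0, 1, 1], [0, 2, 2], [0, -1, -1]], dtype=float)
labels = k_subspaces(X, k=2, dim=1)
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
```

With a poor initialization (e.g., both seed points drawn from the same line) this Lloyd-style loop can converge to a bad fixed point, which is the fragility the paragraph above refers to.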
1904.10596
2952228992
We introduce the Neural Collaborative Subspace Clustering, a neural model that discovers clusters of data points drawn from a union of low-dimensional subspaces. In contrast to previous attempts, our model runs without the aid of spectral clustering. This makes our algorithm one of the kinds that can gracefully scale to large datasets. At its heart, our neural model benefits from a classifier which determines whether a pair of points lies on the same subspace or not. Essential to our model is the construction of two affinity matrices, one from the classifier and the other from a notion of subspace self-expressiveness, to supervise training in a collaborative scheme. We thoroughly assess and contrast the performance of our model against various state-of-the-art clustering algorithms including deep subspace-based ones.
In learning theory, distinguishing outliers and noisy samples from clean ones to facilitate training is an active research topic. For example, Random Sample Consensus (RANSAC) @cite_40 is a classical and well-received algorithm for fitting a model to a cloud of points corrupted by noise. Employing RANSAC on subspaces @cite_55 in large-scale problems does not seem to be the right practice, as RANSAC requires a large number of iterations to achieve an acceptable fit.
{ "cite_N": [ "@cite_55", "@cite_40" ], "mid": [ "2121148353", "2085261163" ], "abstract": [ "We study the problem of estimating a mixed geometric model of multiple subspaces in the presence of a significant amount of outliers. The estimation of multiple subspaces is an important problem in computer vision, particularly for segmenting multiple motions in an image sequence. We first provide a comprehensive survey of robust statistical techniques in the literature, and identify three main approaches for detecting and rejecting outliers. Through a careful examination of these approaches, we propose and investigate three principled methods for robustly estimating mixed subspace models: random sample consensus, the influence function, and multivariate trimming. Using a benchmark synthetic experiment and a set of real image sequences, we conduct a thorough comparison of the three methods", "A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing" ] }
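RANSAC's hypothesize-and-verify loop is easy to sketch for 2-D line fitting: repeatedly fit a line through a minimal sample of two points and keep the hypothesis with the most inliers. The threshold, data, and iteration count below are illustrative:

```python
import random

def ransac_line(points, n_iter=200, thresh=0.1, seed=0):
    """Fit y = a*x + b by RANSAC: the classic Fischler-Bolles scheme of
    minimal samples plus consensus counting."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:          # vertical sample, skip this hypothesis
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < thresh]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# points on y = 2x with two gross outliers
pts = [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8), (1, 9), (3, -5)]
(a, b), inliers = ransac_line(pts)
assert abs(a - 2.0) < 1e-9 and abs(b) < 1e-9
assert len(inliers) == 5
```

The 200 iterations needed even for this tiny problem hint at why, as noted above, running RANSAC over subspaces at large scale becomes impractical: the minimal sample size grows with the subspace dimension, and the required iteration count grows with it.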
1904.10596
2952228992
We introduce the Neural Collaborative Subspace Clustering, a neural model that discovers clusters of data points drawn from a union of low-dimensional subspaces. In contrast to previous attempts, our model runs without the aid of spectral clustering. This makes our algorithm one of the kinds that can gracefully scale to large datasets. At its heart, our neural model benefits from a classifier which determines whether a pair of points lies on the same subspace or not. Essential to our model is the construction of two affinity matrices, one from the classifier and the other from a notion of subspace self-expressiveness, to supervise training in a collaborative scheme. We thoroughly assess and contrast the performance of our model against various state-of-the-art clustering algorithms including deep subspace-based ones.
Curriculum Learning @cite_35 begins by training a model on easy samples and gradually adapts it to more complex ones, mimicking the cognitive process of humans. Ensemble Learning @cite_5 improves the performance of machine learning algorithms by training several models and aggregating their predictions. Furthermore, the knowledge distilled from large deep learning models can be used to supervise a smaller model @cite_7 . Although Curriculum Learning, Ensemble Learning and knowledge distillation are notable methods, how to adapt them to problems with limited annotations, let alone the fully unlabeled scenario, is far from clear.
{ "cite_N": [ "@cite_35", "@cite_5", "@cite_7" ], "mid": [ "", "1534477342", "1821462560" ], "abstract": [ "", "Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting. This paper reviews these methods and explains why ensembles can often perform better than any single classifier. Some previous studies comparing ensemble methods are reviewed, and some new experiments are presented to uncover the reasons that Adaboost does not overfit rapidly.", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel." ] }
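As an illustration of the distillation idea in @cite_7 , here is a minimal sketch of temperature-softened teacher targets and the resulting cross-entropy loss on the student. The function names and the temperature value are our choices:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; T > 1 softens (flattens) the distribution."""
    m = max(l / T for l in logits)  # subtract max for numerical stability
    exps = [math.exp(l / T - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between softened teacher targets and softened
    student predictions -- the 'soft target' term of distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

By Gibbs' inequality the loss is minimized when the student's softened distribution matches the teacher's, which is the sense in which the teacher "supervises" the smaller model.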
1904.10596
2952228992
We introduce the Neural Collaborative Subspace Clustering, a neural model that discovers clusters of data points drawn from a union of low-dimensional subspaces. In contrast to previous attempts, our model runs without the aid of spectral clustering. This makes our algorithm one of the kinds that can gracefully scale to large datasets. At its heart, our neural model benefits from a classifier which determines whether a pair of points lies on the same subspace or not. Essential to our model is the construction of two affinity matrices, one from the classifier and the other from a notion of subspace self-expressiveness, to supervise training in a collaborative scheme. We thoroughly assess and contrast the performance of our model against various state-of-the-art clustering algorithms including deep subspace-based ones.
Many research papers have explored clustering with deep neural networks. Deep Embedded Clustering (DEC) @cite_27 is one of the pioneers in this area: the authors pre-train a stacked auto-encoder (SAE) @cite_44 and fine-tune the encoder with a regularizer based on the Student-t distribution to achieve cluster-friendly embeddings. On the downside, DEC is sensitive to the network structure and initialization. Various forms of the Generative Adversarial Network (GAN) have been employed for clustering, such as InfoGAN @cite_58 and ClusterGAN @cite_52 , both of which enforce discriminative features in the latent space to simultaneously generate and cluster images. Deep Adaptive image Clustering (DAC) @cite_46 uses fully convolutional neural networks @cite_9 as initialization to perform self-supervised learning, and achieves remarkable results on various clustering benchmarks. However, sensitivity to the network structure again appears to be a concern for DAC.
{ "cite_N": [ "@cite_9", "@cite_52", "@cite_44", "@cite_27", "@cite_46", "@cite_58" ], "mid": [ "2123045220", "2890462169", "2110798204", "2964074409", "2779692282", "2963226019" ], "abstract": [ "Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the \"deconvolution approach\" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.", "Generative Adversarial networks (GANs) have obtained remarkable success in many unsupervised learning tasks and unarguably, clustering is an important unsupervised learning problem. While one can potentially exploit the latent-space back-projection in GANs to cluster, we demonstrate that the cluster structure is not retained in the GAN latent space. In this paper, we propose ClusterGAN as a new mechanism for clustering using GANs. By sampling latent variables from a mixture of one-hot encoded variables and continuous latent variables, coupled with an inverse network (which projects the data to the latent space) trained jointly with a clustering specific loss, we are able to achieve clustering in the latent space. 
Our results show a remarkable phenomenon that GANs can preserve latent space interpolation across categories, even though the discriminator is never exposed to such vectors. We compare our results with various clustering baselines and demonstrate superior performance on both synthetic and real datasets.", "Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.", "Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms. Relatively little work has focused on learning representations for clustering. 
In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously learns feature representations and cluster assignments using deep neural networks. DEC learns a mapping from the data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.", "Image clustering is a crucial but challenging task in machine learning and computer vision. Existing methods often ignore the combination between feature learning and clustering. To tackle this problem, we propose Deep Adaptive Clustering (DAC) that recasts the clustering problem into a binary pairwise-classification framework to judge whether pairs of images belong to the same clusters. In DAC, the similarities are calculated as the cosine distance between label features of images which are generated by a deep convolutional network (ConvNet). By introducing a constraint into DAC, the learned label features tend to be one-hot vectors that can be utilized for clustering images. The main challenge is that the ground-truth similarities are unknown in image clustering. We handle this issue by presenting an alternating iterative Adaptive Learning algorithm where each iteration alternately selects labeled samples and trains the ConvNet. Conclusively, images are automatically clustered based on the label features. Experimental results show that DAC achieves state-of-the-art performance on five popular datasets, e.g., yielding 97.75 clustering accuracy on MNIST, 52.18 on CIFAR-10 and 46.99 on STL-10.", "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. 
We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods. For an up-to-date version of this paper, please see https: arxiv.org abs 1606.03657." ] }
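The Student-t soft assignment at the heart of DEC @cite_27 can be sketched directly: each embedding z is assigned to cluster centers via q_j ∝ (1 + ||z − μ_j||²/α)^(−(α+1)/2), normalized over clusters. This is a minimal, illustrative implementation; the function name is ours:

```python
def soft_assign(z, centers, alpha=1.0):
    """DEC-style soft cluster assignment for a single embedding z.

    Uses a Student-t kernel (alpha degrees of freedom) to measure
    similarity between z and each cluster center, then normalizes
    so the assignments form a probability distribution.
    """
    sims = [
        (1.0 + sum((zi - mi) ** 2 for zi, mi in zip(z, mu)) / alpha) ** (-(alpha + 1) / 2)
        for mu in centers
    ]
    s = sum(sims)
    return [q / s for q in sims]
```

In DEC these soft assignments are sharpened into an auxiliary target distribution, and the encoder is fine-tuned to match it, pulling embeddings toward high-confidence clusters.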
1904.10865
2942467457
A framework for higher gauge theory based on a 2-group is presented, by constructing a groupoid of connections on a manifold acted on by a 2-group of gauge transformations, following previous work by the authors where the general notion of the action of a 2-group on a category was defined. The connections are discretized, given by assignments of 2-group data to 1- and 2-cells coming from a given cell structure on the manifold, and likewise the gauge transformations are given by 2-group assignments to 0-cells. The 2-cells of the manifold are endowed with a bigon structure, matching the 2-dimensional algebra of squares which is used for calculating with 2-group data. Showing that the action of the 2-group of gauge transformations on the groupoid of connections is well-defined is the central result. The effect, on the groupoid of connections, of changing the discretization is studied, and partial results and conjectures are presented around this issue. The transformation double category that arises from the action of a 2-group on a category, as defined in previous work by the authors, is described for the case at hand, where it becomes a transtormation double groupoid. Finally, examples of the construction are given for simple choices of manifold: the circle, the 2-sphere and the torus.
First, the use of double groupoids for approaching higher gauge theory is very much to the fore in an article by Soncini and Zucchini @cite_17 , although the perspective there is rather different to ours. These authors describe HGT transports on a manifold @math endowed with connection 1- and 2-forms, in terms of double functors from a double groupoid of rectangles in @math to a double groupoid constructed from the chosen Lie crossed module. Gauge transformations are then given by double natural transformations and double modifications between these double functors.
{ "cite_N": [ "@cite_17" ], "mid": [ "1799080932" ], "abstract": [ "Abstract In this technical paper, we present a new formulation of higher parallel transport in strict higher gauge theory required for the rigorous construction of Wilson lines and surfaces. Our approach is based on an original notion of Lie crossed module cocycle and cocycle 1- and 2-gauge transformation with a non standard double category theoretic interpretation. We show its equivalence to earlier formulations." ] }
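For reference, the Lie crossed module data mentioned above is standard background rather than something specific to @cite_17 : a crossed module $(\partial\colon H \to G,\ \triangleright)$ consists of a group homomorphism $\partial$ together with an action $\triangleright$ of $G$ on $H$ by automorphisms, subject to the equivariance and Peiffer conditions

```latex
\partial(g \triangleright h) = g\,\partial(h)\,g^{-1},
\qquad
\partial(h) \triangleright h' = h\,h'\,h^{-1},
\qquad g \in G,\ h, h' \in H.
```

These two identities are what make the 2-dimensional algebra of squares (horizontal and vertical composition of 2-group data) consistent.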
1904.10865
2942467457
A framework for higher gauge theory based on a 2-group is presented, by constructing a groupoid of connections on a manifold acted on by a 2-group of gauge transformations, following previous work by the authors where the general notion of the action of a 2-group on a category was defined. The connections are discretized, given by assignments of 2-group data to 1- and 2-cells coming from a given cell structure on the manifold, and likewise the gauge transformations are given by 2-group assignments to 0-cells. The 2-cells of the manifold are endowed with a bigon structure, matching the 2-dimensional algebra of squares which is used for calculating with 2-group data. Showing that the action of the 2-group of gauge transformations on the groupoid of connections is well-defined is the central result. The effect, on the groupoid of connections, of changing the discretization is studied, and partial results and conjectures are presented around this issue. The transformation double category that arises from the action of a 2-group on a category, as defined in previous work by the authors, is described for the case at hand, where it becomes a transtormation double groupoid. Finally, examples of the construction are given for simple choices of manifold: the circle, the 2-sphere and the torus.
A careful study of HGT along similar lines to ours was carried out by Bullivant, Calcada, Kádár, Faria Martins and Martin @cite_6 in the context of their investigation of topological phases of matter in 3+1 dimensions. They also discretize the higher connections using manifolds with an adapted cell structure, termed a 2-lattice structure. The 2-groups they consider are finite, but this does not prevent a comparison with our approach, which has no such restriction. They focus on transports along 2-disks and holonomies along 2-spheres, whereas our examples also include the circle and the torus. The gauge transformations in @cite_6 correspond to the morphisms in our category of connections and to the objects of our 2-group of gauge transformations, as will be described in a section below. For their purposes they do not need the higher level of gauge transformations, which in our approach are given by the morphisms of the gauge 2-group; this is the most significant difference between the two approaches. However, there are also many similarities, and we will return to more detailed comparisons at appropriate points in the main text.
{ "cite_N": [ "@cite_6" ], "mid": [ "2587890630" ], "abstract": [ "Higher gauge theory is a higher order version of gauge theory that makes possible the definition of 2-dimensional holonomy along surfaces embedded in a manifold where a gauge 2-connection is present. In this paper, we will continue the study of Hamiltonian models for discrete higher gauge theory on a lattice decomposition of a manifold. In particular, we show that a previously proposed construction for higher lattice gauge theory is well-defined, including in particular a Hamiltonian for topological phases of matter in 3+1 dimensions. Our construction builds upon the Kitaev quantum double model, replacing the finite gauge connection with a finite gauge 2-group 2-connection. Our Hamiltonian higher lattice gauge theory model is defined on spatial manifolds of arbitrary dimension presented by slightly combinatorialised CW-decompositions (2-lattice decompositions), whose 1-cells and 2-cells carry discrete 1-dimensional and 2-dimensional holonomy data. We prove that the ground-state degeneracy of Hamiltonian higher lattice gauge theory is a topological invariant of manifolds, coinciding with the number of homotopy classes of maps from the manifold to the classifying space of the underlying gauge 2-group. The operators of our Hamiltonian model are closely related to discrete 2-dimensional holonomy operators for discretised 2-connections on manifolds with a 2-lattice decomposition. We therefore address the definition of discrete 2-dimensional holonomy for surfaces embedded in 2-lattices. Several results concerning the well-definedness of discrete 2-dimensional holonomy, and its construction in a combinatorial and algebraic topological setting are presented." ] }
1904.10865
2942467457
A framework for higher gauge theory based on a 2-group is presented, by constructing a groupoid of connections on a manifold acted on by a 2-group of gauge transformations, following previous work by the authors where the general notion of the action of a 2-group on a category was defined. The connections are discretized, given by assignments of 2-group data to 1- and 2-cells coming from a given cell structure on the manifold, and likewise the gauge transformations are given by 2-group assignments to 0-cells. The 2-cells of the manifold are endowed with a bigon structure, matching the 2-dimensional algebra of squares which is used for calculating with 2-group data. Showing that the action of the 2-group of gauge transformations on the groupoid of connections is well-defined is the central result. The effect, on the groupoid of connections, of changing the discretization is studied, and partial results and conjectures are presented around this issue. The transformation double category that arises from the action of a 2-group on a category, as defined in previous work by the authors, is described for the case at hand, where it becomes a transtormation double groupoid. Finally, examples of the construction are given for simple choices of manifold: the circle, the 2-sphere and the torus.
Finally, in work by one of us together with J. Nelson, see @cite_5 and references therein, models of quantum gravity in 2+1 dimensions are studied using traces of holonomies (Wilson loops). These exhibit area phases relating loops that are homotopic on the spatial surface, which strongly suggests an underlying HGT mechanism.
{ "cite_N": [ "@cite_5" ], "mid": [ "2097248522" ], "abstract": [ "Wilson observables for 2 + 1 quantum gravity with negative cosmological constant, when the spatial manifold is a torus, exhibit several novel features: signed area phases relate the observables assigned to homotopic loops, and their commutators describe loop intersections, with properties that are not yet fully understood. We describe progress in our study of this bracket, which can be interpreted as a q-deformed Goldman bracket, and provide a geometrical interpretation in terms of a quantum version of Pick’s formula for the area of a polygon with integer vertices." ] }
1904.10425
2940638744
Let @math be i.i.d. Rademacher random variables taking values @math with probability @math each. Given an integer vector @math , its concentration probability is the quantity @math . The Littlewood-Offord problem asks for bounds on @math under various hypotheses on @math , whereas the inverse Littlewood-Offord problem, posed by Tao and Vu, asks for a characterization of all vectors @math for which @math is large. In this paper, we study the associated counting problem: How many integer vectors @math belonging to a specified set have large @math ? The motivation for our study is that in typical applications, the inverse Littlewood-Offord theorems are only used to obtain such counting estimates. Using a more direct approach, we obtain significantly better bounds for this problem than those obtained using the inverse Littlewood--Offord theorems of Tao and Vu and of Nguyen and Vu. Moreover, we develop a framework for deriving upper bounds on the probability of singularity of random discrete matrices that utilizes our counting result. To illustrate the methods, we present the first exponential-type' (i.e., @math for some positive constant @math ) upper bounds on the singularity probability for the following two models: (i) adjacency matrices of dense signed random regular digraphs, for which the previous best known bound is @math due to Cook; and (ii) dense row-regular @math -matrices, for which the previous best known bound is @math for any constant @math due to Nguyen.
The methods we use in this paper can be further developed in various directions. In a recent work @cite_17 , the first two named authors utilized and extended some of the ideas introduced here to provide the best known upper bound for the well-studied problem of estimating the singularity probability of random @math -valued matrices, and in upcoming work @cite_18 , the second named author uses some of the results in this paper to study the non-asymptotic behavior of the least singular value of different models of discrete random matrices. In another upcoming work of the second named author @cite_8 , it is shown how to extend the techniques introduced here and in @cite_18 to study not-necessarily-discrete models of random matrices. We also anticipate that the techniques presented here (along with some additional combinatorial ideas) should suffice to provide an 'exponential-type' upper bound on the probability of singularity of the adjacency matrix of a dense random regular digraph, thereby making substantial progress towards a conjecture of Cook [Conjecture 1.7] cook2017singularity .
{ "cite_N": [ "@cite_8", "@cite_18", "@cite_17" ], "mid": [ "2941806982", "2941595856", "2890959015" ], "abstract": [ "This paper makes two contributions to the areas of anti-concentration and non-asymptotic random matrix theory. First, we study the counting problem in inverse Littlewood-Offord theory for general random variables: for random variables @math which are i.i.d. copies of a random variable @math (satisfying some mild hypotheses), how many integer vectors @math in a prescribed box have large @math ? Building on recent work of Ferber, Jain, Luh, and Samotij (who treated the case when @math is a Rademacher random variable), we provide significantly better bounds for this problem than those obtained using the inverse Littlewood-Offord theorems of Tao and Vu, and Nguyen and Vu. Next, we study the non-asymptotic behavior of the least singular value @math of a random @math square matrix @math with i.i.d. entries, which are only assumed to be centered with variance @math . As an application of our counting theorem, and utilizing and developing a recent work by the author, we show that for all @math , @math , where @math is an absolute constant, thereby providing a considerably simpler proof of an approximate version of an essentially optimal result due to Rebrova and Tikhomirov.", "An approximate Spielman-Teng theorem for the least singular value @math of a random @math square matrix @math is a statement of the following form: there exist constants @math such that for all @math , @math . The goal of this paper is to develop a simple and novel framework for proving such results for discrete random matrices. As an application, we prove an approximate Spielman-Teng theorem for @math -valued matrices, each of whose rows is an independent vector with exactly @math zero components. 
This improves on previous work of Nguyen and Vu, and is the first such result in a truly combinatorial' setting.", "Let @math denote a random symmetric @math matrix whose upper diagonal entries are independent and identically distributed Bernoulli random variables (which take values @math and @math with probability @math each). It is widely conjectured that @math is singular with probability at most @math . On the other hand, the best known upper bound on the singularity probability of @math , due to Vershynin (2011), is @math , for some unspecified small constant @math . This improves on a polynomial singularity bound due to Costello, Tao, and Vu (2005), and a bound of Nguyen (2011) showing that the singularity probability decays faster than any polynomial. In this paper, improving on all previous results, we show that the probability of singularity of @math is at most @math for all sufficiently large @math . The proof utilizes and extends a novel combinatorial approach to discrete random matrix theory, which has been recently introduced by the authors together with Luh and Samotij." ] }
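The concentration probability ρ(a) = sup_x Pr(a·ε = x) discussed above can be computed exactly for small n by brute-force enumeration of all 2^n Rademacher sign patterns. An illustrative sketch (the function name is ours):

```python
from collections import Counter
from itertools import product

def concentration_probability(a):
    """rho(a) = max_x Pr(a1*e1 + ... + an*en = x) over i.i.d. Rademacher
    signs e_i in {-1, +1}, computed by exact enumeration (2^n terms)."""
    n = len(a)
    counts = Counter(
        sum(e * ai for e, ai in zip(eps, a))
        for eps in product((-1, 1), repeat=n)
    )
    return max(counts.values()) / 2 ** n
```

For a = (1, ..., 1) this recovers the Erdős bound binom(n, ⌊n/2⌋)/2^n, and vectors with no additive structure, such as a = (1, 2, 4, ...), attain the minimum value 2^{-n}.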
1904.10300
2941212376
We investigate the direction of training a 3D object detector for new object classes from only 2D bounding box labels of these new classes, while simultaneously transferring information from 3D bounding box labels of the existing classes. To this end, we propose a transferable semi-supervised 3D object detection model that learns a 3D object detector network from training data with two disjoint sets of object classes - a set of strong classes with both 2D and 3D box labels, and another set of weak classes with only 2D box labels. In particular, we suggest a relaxed reprojection loss, box prior loss and a Box-to-Point Cloud Fit network that allow us to effectively transfer useful 3D information from the strong classes to the weak classes during training, and consequently, enable the network to detect 3D objects in the weak classes during inference. Experimental results show that our proposed algorithm outperforms baseline approaches and achieves promising results compared to fully-supervised approaches on the SUN-RGBD and KITTI datasets. Furthermore, we show that our Box-to-Point Cloud Fit network improves performances of the fully-supervised approaches on both datasets.
There is growing interest in weakly- and semi-supervised learning in many problem areas @cite_1 @cite_19 @cite_16 @cite_20 @cite_34 @cite_44 @cite_5 @cite_14 because it is tedious and labor-intensive to label large amounts of data for fully supervised deep learning. There is a wide body of literature on weakly-supervised @cite_29 @cite_7 @cite_39 @cite_36 and semi-supervised @cite_8 @cite_3 @cite_13 @cite_31 learning approaches for semantic segmentation. Conventional semi-supervised learning requires both strong and weak labels on the inference classes; hence, the approach remains expensive for applications with new classes. @cite_17 @cite_25 @cite_42 proposed cross-category semi-supervised semantic segmentation, a more general form of semi-supervised segmentation in which the model can learn from strong labels provided for classes outside of the inference classes. They outperformed weakly- and semi-supervised methods, and showed performance on par with fully-supervised methods. Inspired by the effectiveness of the transferred knowledge, we propose to tackle the same problem in the 3D object detection domain.
{ "cite_N": [ "@cite_36", "@cite_29", "@cite_42", "@cite_3", "@cite_44", "@cite_5", "@cite_20", "@cite_8", "@cite_39", "@cite_17", "@cite_7", "@cite_19", "@cite_16", "@cite_34", "@cite_25", "@cite_14", "@cite_1", "@cite_31", "@cite_13" ], "mid": [ "2963311325", "2951358285", "2768923469", "1927251054", "", "", "", "2221898772", "2755136704", "2769238084", "2952004933", "", "", "", "2257483379", "", "2798314605", "2886773299", "2949847866" ], "abstract": [ "Weakly supervised instance segmentation with image-level labels, instead of expensive pixel-level masks, remains unexplored. In this paper, we tackle this challenging problem by exploiting class peak responses to enable a classification network for instance mask extraction. With image labels supervision only, CNN classifiers in a fully convolutional manner can produce class response maps, which specify classification confidence at each image location. We observed that local maximums, i.e., peaks, in a class response map typically correspond to strong visual cues residing inside each instance. Motivated by this, we first design a process to stimulate peaks to emerge from a class response map. The emerged peaks are then back-propagated and effectively mapped to highly informative regions of each object instance, such as instance boundaries. We refer to the above maps generated from class peak responses as Peak Response Maps (PRMs). PRMs provide a fine-detailed instance-level representation, which allows instance masks to be extracted even with some off-the-shelf methods. To the best of our knowledge, we for the first time report results for the challenging image-level supervised instance segmentation task. 
Extensive experiments show that our method also boosts weakly supervised pointwise localization as well as semantic segmentation performance, and reports state-of-the-art results on popular benchmarks, including PASCAL VOC 2012 and MS COCO.", "We introduce a new loss function for the weakly-supervised training of semantic image segmentation models based on three guiding principles: to seed with weak localization cues, to expand objects based on the information about which classes can occur in an image, and to constrain the segmentations to coincide with object boundaries. We show experimentally that training a deep convolutional neural network using the proposed loss function leads to substantially better segmentations than previous state-of-the-art methods on the challenging PASCAL VOC 2012 dataset. We furthermore give insight into the working mechanism of our method by a detailed experimental study that illustrates how the segmentation quality is affected by each term of the proposed loss function as well as their combinations.", "Most methods for object instance segmentation require all training examples to be labeled with segmentation masks. This requirement makes it expensive to annotate new categories and has restricted instance segmentation models to 100 well-annotated classes. The goal of this paper is to propose a new partially supervised training paradigm, together with a novel weight transfer function, that enables training instance segmentation models on a large set of categories all of which have box annotations, but only a small fraction of which have mask annotations. These contributions allow us to train Mask R-CNN to detect and segment 3000 visual concepts using box annotations from the Visual Genome dataset and mask annotations from the 80 classes in the COCO dataset. We evaluate our approach in a controlled study on the COCO dataset. 
This work is a first step towards instance segmentation models that have broad comprehension of the visual world.", "Despite the promising performance of conventional fully supervised algorithms, semantic segmentation has remained an important, yet challenging task. Due to the limited availability of complete annotations, it is of great interest to design solutions for semantic segmentation that take into account weakly labeled data, which is readily available at a much larger scale. Contrasting the common theme to develop a different algorithm for each type of weak annotation, in this work, we propose a unified approach that incorporates various forms of weak supervision - image level tags, bounding boxes, and partial labels - to produce a pixel-wise labeling. We conduct a rigorous evaluation on the challenging Siftflow dataset for various weakly labeled settings, and show that our approach outperforms the state-of-the-art by 12% on per-class accuracy, while maintaining comparable per-pixel accuracy.", "", "", "", "
We share source code implementing the proposed system at https: bitbucket.org deeplab deeplab-public.", "Large-scale training for semantic segmentation is challenging due to the expense of obtaining training data for this task relative to other vision tasks. We propose a novel training approach to address this difficulty. Given cheaply-obtained sparse image labelings, we propagate the sparse labels to produce guessed dense labelings. A standard CNN-based segmentation network is trained to mimic these labelings. The label-propagation process is defined via random-walk hitting probabilities, which leads to a differentiable parameterization with uncertainty estimates that are incorporated into our loss. We show that by learning the label-propagator jointly with the segmentation predictor, we are able to effectively learn semantic edges given no direct edge supervision. Experiments also show that training a segmentation network in this way outperforms the naive approach.", "The performance of deep learning based semantic segmentation models heavily depends on sufficient data with careful annotations. However, even the largest public datasets only provide samples with pixel-level annotations for rather limited semantic categories. Such data scarcity critically limits scalability and applicability of semantic segmentation models in real applications. In this paper, we propose a novel transferable semi-supervised semantic segmentation model that can transfer the learned segmentation knowledge from a few strong categories with pixel-level annotations to unseen weak categories with only image-level annotations, significantly broadening the applicable territory of deep segmentation models. In particular, the proposed model consists of two complementary and learnable components: a Label transfer Network (L-Net) and a Prediction transfer Network (P-Net). 
The L-Net learns to transfer the segmentation knowledge from strong categories to the images in the weak categories and produces coarse pixel-level semantic maps, by effectively exploiting the similar appearance shared across categories. Meanwhile, the P-Net tailors the transferred knowledge through a carefully designed adversarial learning strategy and produces refined segmentation results with better details. Integrating the L-Net and P-Net achieves 96.5% and 89.4% performance of the fully-supervised baseline using 50% and 0% categories with pixel-level annotations respectively on PASCAL VOC 2012. With such a novel transfer mechanism, our proposed model is easily generalizable to a variety of new categories, only requiring image-level annotations, and offers appealing scalability in real applications.
Contrary to existing weakly-supervised approaches, our algorithm exploits auxiliary segmentation annotations available for different categories to guide segmentations on images with only image-level class labels. To make segmentation knowledge transferrable across categories, we design a decoupled encoder-decoder architecture with attention model. In this architecture, the model generates spatial highlights of each category presented in images using an attention model, and subsequently performs binary segmentation for each highlighted region using decoder. Combining attention model, the decoder trained with segmentation annotations in different categories boosts accuracy of weakly-supervised semantic segmentation. The proposed algorithm demonstrates substantially improved performance compared to the state-of-theart weakly-supervised techniques in PASCAL VOC 2012 dataset when our model is trained with the annotations in 60 exclusive categories in Microsoft COCO dataset.", "", "3D shape completion from partial point clouds is a fundamental problem in computer vision and computer graphics. Recent approaches can be characterized as either data-driven or learning-based. Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations. Learning-based approaches, in contrast, avoid the expensive optimization step and instead directly predict the complete shape from the incomplete observations using deep neural networks. However, full supervision is required which is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. 
Tackling 3D shape completion of cars on ShapeNet [5] and KITTI [18], we demonstrate that the proposed amortized maximum likelihood approach is able to compete with a fully supervised baseline and a state-of-the-art data-driven approach while being significantly faster. On ModelNet [49], we additionally show that the approach is able to generalize to other object categories as well.", "We present a weakly supervised model that jointly performs both semantic- and instance-segmentation – a particularly relevant problem given the substantial cost of obtaining pixel-perfect annotation for these tasks. In contrast to many popular instance segmentation approaches based on object detectors, our method does not predict any overlapping instances. Moreover, we are able to segment both “thing” and “stuff” classes, and thus explain all the pixels in the image. “Thing” classes are weakly-supervised with bounding boxes, and “stuff” with image-level tags. We obtain state-of-the-art results on Pascal VOC, for both full and weak supervision (which achieves about 95% of fully-supervised performance). Furthermore, we present the first weakly-supervised results on Cityscapes for both semantic- and instance-segmentation. Finally, we use our weakly supervised framework to analyse the relationship between annotation quality and predictive performance, which is of interest to dataset creators.", "We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations. Contrary to existing approaches posing semantic segmentation as a single task of region-based classification, our algorithm decouples classification and segmentation, and learns a separate network for each task. In this architecture, labels associated with an image are identified by classification network, and binary segmentation is subsequently performed for each identified label in segmentation network. 
The decoupled architecture enables us to learn classification and segmentation networks separately based on the training data with image-level and pixel-wise class labels, respectively. It facilitates to reduce search space for segmentation effectively by exploiting class-specific activation maps obtained from bridging layers. Our algorithm shows outstanding performance compared to other semi-supervised approaches even with much less training images with strong annotations in PASCAL VOC dataset." ] }
1904.10509
2940744433
Transformers are powerful sequence models, but require time and memory that grows quadratically with the sequence length. In this paper we introduce sparse factorizations of the attention matrix which reduce this to @math . We also introduce a) a variation on architecture and initialization to train deeper networks, b) the recomputation of attention matrices to save memory, and c) fast attention kernels for training. We call networks with these changes Sparse Transformers, and show they can model sequences tens of thousands of timesteps long using hundreds of layers. We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR-10, and ImageNet-64. We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more.
The most related work involves other techniques for scaling up autoregressive generative models. For images, @cite_6 models conditional independence between the pixels in order to generate many locations in parallel, and @cite_24 imposes an ordering and multi-scale upsampling procedure to generate high fidelity samples. @cite_18 uses blocks of local attention to apply Transformers to images. For text, @cite_13 introduces a state reuse "memory" for modeling long-term dependencies. And for audio, in addition to @cite_2 , @cite_8 used a hierarchical structure and RNNs of varying clock-rates to use long contexts during inference, similar to @cite_16 . @cite_22 apply Transformers to MIDI generation with an efficient relative attention.
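As a rough illustration of the factorized attention idea described in the abstract above, the numpy sketch below builds a strided sparse-attention layout out of boolean masks. This is a simplified picture of the general pattern, not the paper's fused kernels or its exact layouts; all names and sizes here are our own.

```python
import numpy as np

def strided_masks(n, stride):
    """Return (local, strided) boolean causal attention masks."""
    i = np.arange(n)[:, None]   # query positions
    j = np.arange(n)[None, :]   # key positions
    causal = j <= i
    local = causal & (i - j < stride)           # last `stride` positions
    strided = causal & ((i - j) % stride == 0)  # every `stride`-th position
    return local, strided

n, stride = 16, 4
local, strided = strided_masks(n, stride)

# Each pattern lets a position attend to O(stride) others instead of O(n),
# yet two hops (local, then strided) reach every earlier position:
reach = (local | strided).astype(int)
two_hop = (reach @ reach) > 0
full_causal = np.tril(np.ones((n, n), dtype=bool))
print(bool(two_hop[full_causal].all()))  # True
```

The two-hop coverage is what allows stacked sparse layers to propagate information across the whole sequence despite each layer touching only a fraction of the full attention matrix.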
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_8", "@cite_6", "@cite_24", "@cite_2", "@cite_16", "@cite_13" ], "mid": [ "2950739196", "2891815651", "2953331651", "2594961016", "2950946978", "2519091744", "2952276042", "2906625520" ], "abstract": [ "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the self-attention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.", "", "In this paper we propose a novel model for unconditional audio generation based on generating one audio sample at a time. We show that our model, which profits from combining memory-less modules, namely autoregressive multilayer perceptrons, and stateful recurrent neural networks in a hierarchical structure is able to capture underlying sources of variations in the temporal sequences over very long time spans, on three datasets of different nature. 
Human evaluation on the generated samples indicate that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.", "PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512x512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.", "The unconditional generation of high fidelity images is a longstanding benchmark for testing the performance of image decoders. Autoregressive image models have been able to generate small images unconditionally, but the extension of these methods to large images where fidelity can be more readily assessed has remained an open problem. Among the major challenges are the capacity to encode the vast previous context and the sheer difficulty of learning a distribution that preserves both global semantic coherence and exactness of detail. To address the former challenge, we propose the Subscale Pixel Network (SPN), a conditional decoder architecture that generates an image as a sequence of sub-images of equal size. The SPN compactly captures image-wide spatial dependencies and requires a fraction of the memory and the computation required by other fully autoregressive models. 
To address the latter challenge, we propose to use Multidimensional Upscaling to grow an image in both size and depth via intermediate stages utilising distinct SPNs. We evaluate SPNs on the unconditional generation of CelebAHQ of size 256 and of ImageNet from size 32 to 256. We achieve state-of-the-art likelihood results in multiple settings, set up new benchmark results in previously unexplored settings and are able to generate very high fidelity large scale samples on the basis of both datasets.", "", "Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when the long-term memory is required. This paper introduces a simple, yet powerful modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of RNN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving two tasks: audio signal generation and TIMIT spoken word classification, where it outperforms both RNN and LSTM networks.", "" ] }
1904.10509
2940744433
Transformers are powerful sequence models, but require time and memory that grows quadratically with the sequence length. In this paper we introduce sparse factorizations of the attention matrix which reduce this to @math . We also introduce a) a variation on architecture and initialization to train deeper networks, b) the recomputation of attention matrices to save memory, and c) fast attention kernels for training. We call networks with these changes Sparse Transformers, and show they can model sequences tens of thousands of timesteps long using hundreds of layers. We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR-10, and ImageNet-64. We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more.
Outside of generative modeling, several works are relevant to improving the efficiency of attention, based on chunking @cite_7 or on fixed-length representations @cite_11 . Other works have investigated attention with multiple "hops", such as @cite_20 and @cite_26 .
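To make the chunking idea concrete, here is a toy numpy sketch in which softmax attention is restricted to fixed-size chunks, dropping the cost from O(n^2) to O(n * chunk). It only illustrates the general mechanism (MoChA additionally learns monotonic chunk endpoints); names and sizes are made up.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def chunked_attention(q, k, v, chunk):
    """Self-attention computed only within non-overlapping chunks."""
    n, d = q.shape
    out = np.empty_like(v)
    for s in range(0, n, chunk):
        e = slice(s, min(s + chunk, n))
        scores = q[e] @ k[e].T / np.sqrt(d)  # (chunk, chunk), not (n, n)
        out[e] = softmax(scores) @ v[e]
    return out

rng = np.random.default_rng(0)
n, d = 8, 4
q, k, v = rng.normal(size=(3, n, d))
out = chunked_attention(q, k, v, chunk=4)
print(out.shape)  # (8, 4)
```

Because each output position only reads its own chunk, decoding can proceed online in linear time, at the cost of no direct interaction between distant chunks.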
{ "cite_N": [ "@cite_20", "@cite_26", "@cite_7", "@cite_11" ], "mid": [ "2951008357", "2613904329", "2773781902", "2724346673" ], "abstract": [ "We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (, 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.", "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. 
We outperform the accuracy of the deep LSTM setup of (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.", "Sequence-to-sequence models with soft attention have been successfully applied to a wide variety of problems, but their decoding process incurs a quadratic time and space cost and is inapplicable to real-time sequence transduction. To address these issues, we propose Monotonic Chunkwise Attention (MoChA), which adaptively splits the input sequence into small chunks over which soft attention is computed. We show that models utilizing MoChA can be trained efficiently with standard backpropagation while allowing online and linear-time decoding at test time. When applied to online speech recognition, we obtain state-of-the-art results and match the performance of a model using an offline soft attention mechanism. In document summarization experiments where we do not expect monotonic alignments, we show significantly improved performance compared to a baseline monotonic attention-based model.", "The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20 for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments." ] }
1904.10294
2942493509
As a fundamental problem of natural language processing, it is important to measure the distance between different documents. Among the existing methods, the Word Mover's Distance (WMD) has shown remarkable success in document semantic matching for its clear physical insight as a parameter-free model. However, WMD is essentially based on the classical Wasserstein metric, thus it often fails to robustly represent the semantic similarity between texts of different lengths. In this paper, we apply the newly developed Wasserstein-Fisher-Rao (WFR) metric from unbalanced optimal transport theory to measure the distance between different documents. The proposed WFR document distance maintains the great interpretability and simplicity as WMD. We demonstrate that the WFR document distance has significant advantages when comparing the texts of different lengths. In addition, an accelerated Sinkhorn based algorithm with GPU implementation has been developed for the fast computation of WFR distances. The KNN classification results on eight datasets have shown its clear improvement over WMD.
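The mass-relaxation idea behind unbalanced transport can be sketched with a generalized Sinkhorn iteration: a KL penalty of weight `rho` on the marginals lets mass be created or destroyed instead of being shipped to far-away outlier words. The numpy sketch below is a generic entropic unbalanced-OT illustration on a made-up example, not the paper's WFR algorithm or its GPU implementation.

```python
import numpy as np

def unbalanced_sinkhorn(a, b, C, eps=0.1, rho=1.0, iters=1000):
    """Transport-cost part <P, C> of entropic OT with KL-relaxed marginals."""
    K = np.exp(-C / eps)
    fi = rho / (rho + eps)  # scaling exponent from the KL marginal penalty
    u = np.ones_like(a)
    for _ in range(iters):
        v = (b / (K.T @ u)) ** fi
        u = (a / (K @ v)) ** fi
    P = u[:, None] * K * v[None, :]  # (relaxed) transport plan
    return (P * C).sum()

# Doc 1 has an extra outlier word far from every word of doc 2.
C = np.array([[0.1, 0.2],
              [0.2, 0.1],
              [9.0, 9.0]])           # rows: words of doc 1, cols: doc 2
a = np.array([1 / 3, 1 / 3, 1 / 3])  # word histograms
b = np.array([0.5, 0.5])

balanced = unbalanced_sinkhorn(a, b, C, rho=1e6)  # huge rho ~ balanced OT
relaxed = unbalanced_sinkhorn(a, b, C, rho=1.0)
print(relaxed < balanced)  # True: dropping the outlier's mass avoids the 9.0 moves
```

With a very large `rho` the marginals are enforced almost exactly (approximating balanced transport, as in WMD), while a moderate `rho` simply discards the outlier's mass; this is the behavior that mitigates overestimation when comparing texts of different lengths.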
(a) Representation of documents. There have been many ways to represent documents. Latent Semantic Indexing @cite_4 and Latent Dirichlet Allocation @cite_31 represent documents through latent variables inferred from a generative graphical model. However, most of these models lack the semantic information captured in the word embedding space @cite_18 . Stacked denoising autoencoders @cite_11 , Doc2Vec @cite_19 and skip-thoughts @cite_26 are neural-network-based similarity models. Despite their numerical success, these models are difficult to interpret, and their performance depends heavily on the training samples.
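For instance, Latent Semantic Indexing reduces to a truncated SVD of the term-document matrix; the minimal numpy sketch below (on a made-up four-document corpus) shows how documents are mapped to low-dimensional factor weights.

```python
import numpy as np

# Tiny made-up corpus: two documents about cats, two about dogs.
docs = ["cat sat mat cat", "cat sat", "dog ran park", "dog park"]
vocab = sorted({w for d in docs for w in d.split()})
# Term-document count matrix (rows: terms, columns: documents).
X = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Truncated SVD: keep the top-k factors, as in Latent Semantic Indexing.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dimensional vector per document

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Documents about the same topic end up closer in the latent space:
print(cos(doc_vecs[0], doc_vecs[1]) > cos(doc_vecs[0], doc_vecs[2]))  # True
```

Documents that share a topic's vocabulary land close together in the latent space even when they share no individual rare term, which is the property LSI was designed to exploit.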
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_19", "@cite_31", "@cite_11" ], "mid": [ "2153579005", "", "2147152072", "2131744502", "1880262756", "22861983" ], "abstract": [ "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "", "A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. 
Initial tests find this completely automatic method for retrieval to be promising.", "Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperforms bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.
We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.", "The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains." ] }
1904.10294
2942493509
As a fundamental problem of natural language processing, it is important to measure the distance between different documents. Among the existing methods, the Word Mover's Distance (WMD) has shown remarkable success in document semantic matching for its clear physical insight as a parameter-free model. However, WMD is essentially based on the classical Wasserstein metric, thus it often fails to robustly represent the semantic similarity between texts of different lengths. In this paper, we apply the newly developed Wasserstein-Fisher-Rao (WFR) metric from unbalanced optimal transport theory to measure the distance between different documents. The proposed WFR document distance maintains the great interpretability and simplicity as WMD. We demonstrate that the WFR document distance has significant advantages when comparing the texts of different lengths. In addition, an accelerated Sinkhorn based algorithm with GPU implementation has been developed for the fast computation of WFR distances. The KNN classification results on eight datasets have shown its clear improvement over WMD.
Recently, WMD @cite_0 was proposed as an implicit document representation. By considering each document as a set of words in the word embedding space, it defines the distance between two documents as the minimal transportation cost between them. This metric is interpretable because it accounts for semantic movements. Many other metric-learning models are inspired by the metric property of WMD. S-WMD @cite_30 employed the derivative of WMD to optimize a parameterized transformation of the word embedding space and a histogram importance vector. Word Mover's Embedding @cite_16 designed a kernel method on the WMD metric space. However, those methods still more or less suffer from the overestimation issue: they do not have the document-specific re-weighting mechanism of the WFR document distance.
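As a minimal sketch of such a transport-based distance, the numpy code below computes an entropic (Sinkhorn) approximation of the transport cost between toy word histograms. Note the hedges: WMD solves the exact, unregularized transport problem, the "embeddings" and documents here are invented for illustration, and the regularization and iteration counts are arbitrary.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=500):
    """Entropic OT cost <P, C>; eps is relative to C.max()."""
    K = np.exp(-C / (eps * C.max()))
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # transport plan between word histograms
    return (P * C).sum()

# Toy 2-D "embeddings" for the words of three two-word documents.
emb1 = np.array([[0.0, 0.0], [1.0, 0.0]])
emb2 = np.array([[0.0, 0.1], [1.0, 0.1]])  # near doc 1 word-by-word
emb3 = np.array([[5.0, 5.0], [6.0, 5.0]])  # far from doc 1

a = np.array([0.5, 0.5])  # uniform word histograms
C12 = np.linalg.norm(emb1[:, None] - emb2[None, :], axis=-1)
C13 = np.linalg.norm(emb1[:, None] - emb3[None, :], axis=-1)

d12 = sinkhorn(a, a, C12)
d13 = sinkhorn(a, a, C13)
print(d12 < d13)  # True: semantically closer documents get a smaller distance
```

The interpretability claimed for WMD is visible in the plan `P`: each entry says how much of one word's mass "travels" to a word of the other document, so small distances correspond to cheap word-by-word semantic moves.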
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_30" ], "mid": [ "658020064", "2786148476", "2556873163" ], "abstract": [ "We present the Word Mover's Distance (WMD), a novel distance function between text documents. Our work is based on recent results in word embeddings that learn semantically meaningful representations for words from local cooccurrences in sentences. The WMD distance measures the dissimilarity between two text documents as the minimum amount of distance that the embedded words of one document need to \"travel\" to reach the embedded words of another document. We show that this distance metric can be cast as an instance of the Earth Mover's Distance, a well studied transportation problem for which several highly efficient solvers have been developed. Our metric has no hyperparameters and is straight-forward to implement. Further, we demonstrate on eight real world document classification data sets, in comparison with seven state-of-the-art baselines, that the WMD metric leads to unprecedented low k-nearest neighbor document classification error rates.", "Learning effective text representations is a key foundation for numerous machine learning and NLP applications. While the celebrated Word2Vec technique yields semantically rich word representations, it is less clear whether sentence or document representations should be built upon word representations or from scratch. Recent work has demonstrated that a distance measure between documents called (WMD) that aligns semantically similar words, yields unprecedented KNN classification accuracy. However, WMD is very expensive to compute, and is harder to apply beyond simple KNN than feature embeddings. In this paper, we propose the (WME), a novel approach to building an unsupervised document (sentence) embedding from pre-trained word embeddings. 
Our technique extends the theory of to show convergence of the inner product between WMEs to a positive-definite kernel that can be interpreted as a soft version of (inverse) WMD. The proposed embedding is more efficient and flexible than WMD in many situations. As an example, WME with a simple linear classifier reduces the computational cost of WMD-based KNN in document length and in number of samples, while simultaneously improving accuracy. In experiments on 9 benchmark text classification datasets and 22 textual similarity tasks the proposed technique consistently matches or outperforms state-of-the-art techniques, with significantly higher accuracy on problems of short length.", "Recently, a new document metric called the word mover's distance (WMD) has been proposed with unprecedented results on kNN-based document classification. The WMD elevates high-quality word embeddings to a document metric by formulating the distance between two documents as an optimal transport problem between the embedded words. However, the document distances are entirely un-supervised and lack a mechanism to incorporate supervision when available. In this paper we propose an efficient technique to learn a supervised metric, which we call the Supervised-WMD (S-WMD) metric. The supervised training minimizes the stochastic leave-one-out nearest neighbor classification error on a per-document level by updating an affine transformation of the underlying word embedding space and a word-imporance weight vector. As the gradient of the original WMD distance would result in an inefficient nested optimization problem, we provide an arbitrarily close approximation that results in a practical and efficient update rule. We evaluate S-WMD on eight real-world text classification tasks on which it consistently outperforms almost all of our 26 competitive baselines." ] }
1904.10294
2942493509
As a fundamental problem of natural language processing, it is important to measure the distance between different documents. Among the existing methods, the Word Mover's Distance (WMD) has shown remarkable success in document semantic matching for its clear physical insight as a parameter-free model. However, WMD is essentially based on the classical Wasserstein metric, thus it often fails to robustly represent the semantic similarity between texts of different lengths. In this paper, we apply the newly developed Wasserstein-Fisher-Rao (WFR) metric from unbalanced optimal transport theory to measure the distance between different documents. The proposed WFR document distance maintains the great interpretability and simplicity as WMD. We demonstrate that the WFR document distance has significant advantages when comparing the texts of different lengths. In addition, an accelerated Sinkhorn based algorithm with GPU implementation has been developed for the fast computation of WFR distances. The KNN classification results on eight datasets have shown its clear improvement over WMD.
(b) (Un)balanced optimal transport. Optimal transport (OT) has been one of the most active topics in applied mathematics in recent years. It is also closely related to subjects in pure mathematics such as geometric analysis @cite_10 @cite_6 and non-linear partial differential equations @cite_15 @cite_12 . As the most fundamental object of OT, the Wasserstein metric can be used to measure the similarity of two probability distributions. Objective functions defined with this metric are usually convex, insensitive to noise, and can be computed effectively. The Wasserstein metric has therefore been exploited extensively and successfully applied to machine learning @cite_33 , image processing @cite_21 and computer graphics @cite_29 .
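On the real line, the Wasserstein metric between two equal-size empirical samples admits a closed form via the quantile (sorted) coupling; the following minimal sketch, with uniform weights assumed, illustrates why the metric is so cheap to evaluate in one dimension:

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 between two equal-size empirical samples on the line:
    the optimal coupling matches sorted points, so the distance is
    the average pointwise gap between the sorted samples."""
    a, b = np.sort(a), np.sort(b)
    return np.mean(np.abs(a - b))
```

Shifting every sample by a constant shifts the distance by exactly that constant, which reflects the metric's insensitivity to small perturbations.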
{ "cite_N": [ "@cite_33", "@cite_29", "@cite_21", "@cite_6", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "2739748921", "2009172320", "2744110160", "1799201424", "2019270511", "2050480234", "" ], "abstract": [ "", "This paper introduces a new class of algorithms for optimization problems involving optimal transportation over geometric domains. Our main contribution is to show that optimal transportation can be made tractable over large domains used in graphics, such as images and triangle meshes, improving performance by orders of magnitude compared to previous work. To this end, we approximate optimal transportation distances using entropic regularization. The resulting objective contains a geodesic distance-based kernel that can be approximated with the heat kernel. This approach leads to simple iterative numerical schemes with linear convergence, in which each iteration only requires Gaussian convolution or the solution of a sparse, pre-factored linear system. We demonstrate the versatility and efficiency of our method on tasks including reflectance interpolation, color transfer, and geometry processing.", "This article introduces a new non-linear dictionary learning method for histograms in the probability simplex. The method leverages optimal transport theory, in the sense that our aim is to reconstruct histograms using so called displacement interpolations (a.k.a. Wasserstein barycenters) between dictionary atoms; such atoms are themselves synthetic histograms in the probability simplex. Our method simultaneously estimates such atoms, and, for each datapoint, the vector of weights that can optimally reconstruct it as an optimal transport barycenter of such atoms. Our method is computationally tractable thanks to the addition of an entropic regularization to the usual optimal transportation problem, leading to an approximation scheme that is efficient, parallel and simple to differentiate. 
Both atoms and weights are learned using a gradient-based descent method. Gradients are obtained by automatic differentiation of the generalized Sinkhorn iterations that yield barycenters with entropic smoothing. Because of its formulation relying on Wasserstein barycenters instead of the usual matrix product between dictionary and codes, our method allows for non-linear relationships between atoms and the reconstruction of input data. We illustrate its application in several different image processing settings.", "In this paper, we develop several related finite dimensional variational principles for discrete optimal transport (DOT), Minkowski type problems for convex polytopes and discrete Monge-Ampere equation (DMAE). A link between the discrete optimal transport, discrete Monge-Ampere equation and the power diagram in computational geometry is established.", "The elliptic Monge-Ampere equation is a fully nonlinear partial differential equation that originated in geometric surface theory and has been applied in dynamic meteorology, elasticity, geometric optics, image processing, and image registration. Solutions can be singular, in which case standard numerical approaches fail. Novel solution methods are required for stability and convergence to the weak (viscosity) solution. In this article we build a wide stencil finite difference discretization for the Monge-Ampere equation. The scheme is monotone, so the Barles-Souganidis theory allows us to prove that the solution of the scheme converges to the unique viscosity solution of the equation. Solutions of the scheme are found using a damped Newton's method. We prove convergence of Newton's method and provide a systematic method to determine a starting point for the Newton iteration. 
Computational results are presented in two and three dimensions, which demonstrates the speed and accuracy of the method on a number of exact solutions, which range in regularity from smooth to nondifferentiable.", "The potential function of the optimal transportation problem satisfies a partial differential equation of Monge-Ampere type. In this paper we introduce the notion of a generalized solution, and prove the existence and uniqueness of generalized solutions of the problem. We also prove the solution is smooth under certain structural conditions on the cost function.", "" ] }
1904.10294
2942493509
As a fundamental problem of natural language processing, it is important to measure the distance between different documents. Among the existing methods, the Word Mover's Distance (WMD) has shown remarkable success in document semantic matching for its clear physical insight as a parameter-free model. However, WMD is essentially based on the classical Wasserstein metric, thus it often fails to robustly represent the semantic similarity between texts of different lengths. In this paper, we apply the newly developed Wasserstein-Fisher-Rao (WFR) metric from unbalanced optimal transport theory to measure the distance between different documents. The proposed WFR document distance maintains the great interpretability and simplicity as WMD. We demonstrate that the WFR document distance has significant advantages when comparing the texts of different lengths. In addition, an accelerated Sinkhorn based algorithm with GPU implementation has been developed for the fast computation of WFR distances. The KNN classification results on eight datasets have shown its clear improvement over WMD.
(c) Fast calculation of (un)balanced optimal transport. The Sinkhorn algorithm @cite_27 solves entropy-regularized optimal transport problems. As the entropy regularization term is reduced, the solution of each Sinkhorn iteration approaches that of the original OT problem. A greedy coordinate descent variant of the Sinkhorn iteration, called Greenkhorn @cite_23 , was proposed to improve the convergence properties. Recently, the Sinkhorn algorithm has been applied to the unbalanced optimal transport problem @cite_20 with a log-domain stabilization modification. For document classification, an approximate solution of WFR suffices as a good metric between documents. We accelerate the Sinkhorn algorithm on GPUs to compute a large number of WFR distances simultaneously. Furthermore, since the dual problem of each Sinkhorn iteration is computationally cheap and provides a lower bound on the WFR document distance, we can further accelerate KNN with a pruning strategy.
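A minimal dense-matrix sketch of the Sinkhorn iteration for the balanced case, without the log-domain stabilization or GPU batching described above, may help fix ideas: the algorithm alternately rescales the rows and columns of the Gibbs kernel to match the two marginals.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.05, n_iter=500):
    """Entropy-regularized OT via Sinkhorn iterations.

    C: (n, m) ground-cost matrix; a, b: marginal histograms.
    Returns the transport cost of the (approximate) optimal plan."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)         # rescale columns to match b
        u = a / (K @ v)           # rescale rows to match a
    T = u[:, None] * K * v[None, :]   # approximate transport plan
    return np.sum(T * C)
```

For small `eps` the plan concentrates on the cheapest assignments, at the cost of numerical underflow in `K`, which is exactly what the log-domain stabilization addresses.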
{ "cite_N": [ "@cite_27", "@cite_20", "@cite_23" ], "mid": [ "2158131535", "2724892359", "2964308809" ], "abstract": [ "Optimal transport distances are a fundamental family of distances for probability measures and histograms of features. Despite their appealing theoretical properties, excellent performance in retrieval tasks and intuitive formulation, their computation involves the resolution of a linear program whose cost can quickly become prohibitive whenever the size of the support of these measures or the histograms' dimension exceeds a few hundred. We propose in this work a new family of optimal transport distances that look at transport problems from a maximum-entropy perspective. We smooth the classic optimal transport problem with an entropic regularization term, and show that the resulting optimum is also a distance which can be computed through Sinkhorn's matrix scaling algorithm at a speed that is several orders of magnitude faster than that of transport solvers. We also show that this regularized distance improves upon classic optimal transport distances on the MNIST classification problem.", "", "Computing optimal transport distances such as the earth mover's distance is a fundamental problem in machine learning, statistics, and computer vision. Despite the recent introduction of several algorithms with good empirical performance, it is unknown whether general optimal transport distances can be approximated in near-linear time. This paper demonstrates that this ambitious goal is in fact achieved by Cuturi's Sinkhorn Distances. This result relies on a new analysis of Sinkhorn iterations, which also directly suggests a new greedy coordinate descent algorithm Greenkhorn with the same theoretical guarantees. Numerical simulations illustrate that Greenkhorn significantly outperforms the classical Sinkhorn algorithm in practice." ] }
1904.09970
2938124742
Abstracting complex 3D shapes with parsimonious part-based representations has been a long standing goal in computer vision. This paper presents a learning-based solution to this problem which goes beyond the traditional 3D cuboid representation by exploiting superquadrics as atomic elements. We demonstrate that superquadrics lead to more expressive 3D scene parses while being easier to learn than 3D cuboid representations. Moreover, we provide an analytical solution to the Chamfer loss which avoids the need for computational expensive reinforcement learning or iterative prediction. Our model learns to parse 3D objects into consistent superquadric representations without supervision. Results on various ShapeNet categories as well as the SURREAL human body dataset demonstrate the flexibility of our model in capturing fine details and complex poses that could not have been modelled using cuboids.
Towards the goal of concise representations, researchers exploited constructive solid geometry (CSG) @cite_5 for shape modeling @cite_21 @cite_25 . Sharma et al. @cite_21 leverage an encoder-decoder architecture to generate a sequence of simple boolean operations acting on a set of primitives that can be squares, circles or triangles. In a similar line of work, Ellis et al. @cite_2 learn a programmatic representation of a hand-written drawing by first extracting simple primitives, such as lines, circles and rectangles, together with a set of drawing commands that is used to synthesize a program. In contrast to @cite_21 @cite_25 , our goal is not to obtain accurate 3D geometry by iteratively applying boolean operations on shapes. Instead, we aim to decompose the depicted object into a parsimonious, interpretable representation in which each part has a semantic meaning. Besides, we do not suffer from the ambiguities of an iterative construction process, where different executions lead to the same result.
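The boolean operations that CSG composes have a particularly compact form on implicit (signed-distance) representations, where union, intersection and difference become pointwise min/max. The 2D sketch below is purely illustrative of the CSG idea, not the learned models of the cited works:

```python
import numpy as np

# Signed-distance primitives: negative inside, positive outside.
def circle(cx, cy, r):
    return lambda p: np.hypot(p[..., 0] - cx, p[..., 1] - cy) - r

def box(cx, cy, hw, hh):
    def sdf(p):
        d = np.abs(p - np.array([cx, cy])) - np.array([hw, hh])
        return np.maximum(d[..., 0], d[..., 1])  # simple box distance bound
    return sdf

# Boolean operations on implicit shapes.
union        = lambda f, g: (lambda p: np.minimum(f(p), g(p)))
intersection = lambda f, g: (lambda p: np.maximum(f(p), g(p)))
difference   = lambda f, g: (lambda p: np.maximum(f(p), -g(p)))

# A square with a circular bite taken out of one corner.
shape = difference(box(0.0, 0.0, 1.0, 1.0), circle(1.0, 1.0, 0.5))
```

Note that many different operation sequences produce the same final field, which is precisely the execution ambiguity the paragraph above points out.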
{ "cite_N": [ "@cite_5", "@cite_21", "@cite_25", "@cite_2" ], "mid": [ "2013517877", "2964341242", "", "2963102152" ], "abstract": [ "Constructive Solid Geometry (CSG) is a powerful way of describing solid objects for computer graphics and modeling. The surfaces of any primitive object (such as a cube, sphere or cylinder) can be approximated by polygons. Being abile to find the union, intersection or difference of these objects allows more interesting and complicated polygonal objects to be created. The algorithm presented here performs these set operations on objects constructed from convex polygons. These objects must bound a finite volume, but need not be convex. An object that results from one of these operations also contains only convex polygons, and bounds a finite volume; thus, it can be used in later combinations, allowing the generation of quite complicated objects. Our algorithm is robust and is presented in enough detail to be implemented.", "We present a neural architecture that takes as input a 2D or 3D shape and outputs a program that generates the shape. The instructions in our program are based on constructive solid geometry principles, i.e., a set of boolean operations on shape primitives defined recursively. Bottom-up techniques for this shape parsing task rely on primitive detection and are inherently slow since the search space over possible primitive combinations is large. In contrast, our model uses a recurrent neural network that parses the input shape in a top-down manner, which is significantly faster and yields a compact and easy-to-interpret sequence of modeling instructions. Our model is also more effective as a shape detector compared to existing state-of-the-art detection techniques. 
We finally demonstrate that our network can be trained on novel datasets without ground-truth program annotations through policy gradient techniques.", "", "We introduce a model that learns to convert simple hand drawings into graphics programs written in a subset of . The model combines techniques from deep learning and program synthesis. We learn a convolutional neural network that proposes plausible drawing primitives that explain an image. These drawing primitives are like a trace of the set of primitive commands issued by a graphics program. We learn a model that uses program synthesis techniques to recover a graphics program from that trace. These programs have constructs like variable bindings, iterative loops, or simple kinds of conditionals. With a graphics program in hand, we can correct errors made by the deep network and extrapolate drawings. Taken together these results are a step towards agents that induce useful, human-readable programs from perceptual input." ] }
1904.09970
2938124742
Abstracting complex 3D shapes with parsimonious part-based representations has been a long standing goal in computer vision. This paper presents a learning-based solution to this problem which goes beyond the traditional 3D cuboid representation by exploiting superquadrics as atomic elements. We demonstrate that superquadrics lead to more expressive 3D scene parses while being easier to learn than 3D cuboid representations. Moreover, we provide an analytical solution to the Chamfer loss which avoids the need for computational expensive reinforcement learning or iterative prediction. Our model learns to parse 3D objects into consistent superquadric representations without supervision. Results on various ShapeNet categories as well as the SURREAL human body dataset demonstrate the flexibility of our model in capturing fine details and complex poses that could not have been modelled using cuboids.
In contrast, our approach is unsupervised and does not suffer from ambiguities caused by different possible prediction sequences that lead to the same cuboid assembly. Furthermore, @cite_41 @cite_30 exploit simple cuboid representations which cannot capture more complex shapes that are common in natural and man-made scenes (e.g., curved objects, spheres). In this work, we propose to use superquadrics @cite_28 , which yield a more diverse shape vocabulary and hence lead to more expressive scene abstractions, as illustrated in fig:teaser .
{ "cite_N": [ "@cite_41", "@cite_28", "@cite_30" ], "mid": [ "2963400238", "2023751161", "2742468459" ], "abstract": [ "We propose to recover 3D shape structures from single RGB images, where structure refers to shape parts represented by cuboids and part relations encompassing connectivity and symmetry. Given a single 2D image with an object depicted, our goal is automatically recover a cuboid structure of the object parts as well as their mutual relations. We develop a convolutional-recursive auto-encoder comprised of structure parsing of a 2D image followed by structure recovering of a cuboid hierarchy. The encoder is achieved by a multi-scale convolutional network trained with the task of shape contour estimation, thereby learning to discern object structures in various forms and scales. The decoder fuses the features of the structure parsing network and the original image, and recursively decodes a hierarchy of cuboids. Since the decoder network is learned to recover part relations including connectivity and symmetry explicitly, the plausibility and generality of part structure recovery can be ensured. The two networks are jointly trained using the training data of contour-mask and cuboid-structure pairs. Such pairs are generated by rendering stock 3D CAD models coming with part segmentation. Our method achieves unprecedentedly faithful and detailed recovery of diverse 3D part structures from single-view 2D images. We demonstrate two applications of our method including structure-guided completion of 3D volumes reconstructed from single-view images and structure-aware interactive editing of 2D images.", "A new and powerful family of parametric shapes extends the basic quadric surfaces and solids, yielding a variety of useful forms.", "The success of various applications including robotics, digital content creation, and visualization demand a structured and abstract representation of the 3D world from limited sensor data. 
Inspired by the nature of human perception of 3D shapes as a collection of simple parts, we explore such an abstract shape representation based on primitives. Given a single depth image of an object, we present 3D-PRNN, a generative recurrent neural network that synthesizes multiple plausible shapes composed of a set of primitives. Our generative model encodes symmetry characteristics of common man-made objects, preserves long-range structural coherence, and describes objects of varying complexity with a compact representation. We also propose a method based on Gaussian Fields to generate a large scale dataset of primitive-based shape representations to train our network. We evaluate our approach on a wide range of examples and show that it outperforms nearest-neighbor based shape retrieval methods and is on-par with voxel-based generative models while using a significantly reduced parameter space." ] }
1904.09970
2938124742
Abstracting complex 3D shapes with parsimonious part-based representations has been a long standing goal in computer vision. This paper presents a learning-based solution to this problem which goes beyond the traditional 3D cuboid representation by exploiting superquadrics as atomic elements. We demonstrate that superquadrics lead to more expressive 3D scene parses while being easier to learn than 3D cuboid representations. Moreover, we provide an analytical solution to the Chamfer loss which avoids the need for computational expensive reinforcement learning or iterative prediction. Our model learns to parse 3D objects into consistent superquadric representations without supervision. Results on various ShapeNet categories as well as the SURREAL human body dataset demonstrate the flexibility of our model in capturing fine details and complex poses that could not have been modelled using cuboids.
A primary inspiration for this paper is the seminal work by Tulsiani et al. @cite_22 , who proposed a method for 3D shape abstraction using a non-iterative approach which does not require supervision. Instead, they use a convolutional network architecture for predicting the shape and pose parameters of 3D cuboids as well as their probability of existence. They demonstrate that learning shape abstraction from data allows for obtaining consistent parses across different instances in an unsupervised fashion.
{ "cite_N": [ "@cite_22" ], "mid": [ "2949896890" ], "abstract": [ "We present a learning framework for abstracting complex shapes by learning to assemble objects using 3D volumetric primitives. In addition to generating simple and geometrically interpretable explanations of 3D objects, our framework also allows us to automatically discover and exploit consistent structure in the data. We demonstrate that using our method allows predicting shape representations which can be leveraged for obtaining a consistent parsing across the instances of a shape collection and constructing an interpretable shape similarity measure. We also examine applications for image-based prediction as well as shape manipulation." ] }
1904.09970
2938124742
Abstracting complex 3D shapes with parsimonious part-based representations has been a long standing goal in computer vision. This paper presents a learning-based solution to this problem which goes beyond the traditional 3D cuboid representation by exploiting superquadrics as atomic elements. We demonstrate that superquadrics lead to more expressive 3D scene parses while being easier to learn than 3D cuboid representations. Moreover, we provide an analytical solution to the Chamfer loss which avoids the need for computational expensive reinforcement learning or iterative prediction. Our model learns to parse 3D objects into consistent superquadric representations without supervision. Results on various ShapeNet categories as well as the SURREAL human body dataset demonstrate the flexibility of our model in capturing fine details and complex poses that could not have been modelled using cuboids.
In this paper, we extend the model of Tulsiani et al. @cite_22 in the following directions. First, we utilize superquadrics instead of cuboids, which leads to more accurate scene abstractions. Second, we demonstrate that the bi-directional Chamfer distance is tractable and does not require reinforcement learning @cite_4 or the specification of rewards @cite_22 . In particular, we show that there exists an analytical closed-form solution which can be evaluated in linear time. This allows us to compute gradients with respect to the model parameters using standard error propagation @cite_7 , which facilitates learning. In addition, we add a new simple parsimony loss to favor configurations with a small number of primitives.
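For point sets, the bi-directional Chamfer distance itself is straightforward to evaluate from a dense pairwise distance matrix, as in the minimal numpy sketch below; the paper's contribution is an analytical treatment for superquadric surfaces, which this toy point-set version does not reproduce.

```python
import numpy as np

def chamfer(P, Q):
    """Bi-directional Chamfer distance between point sets P (n, 3) and Q (m, 3):
    for each point, take the squared distance to its nearest neighbor in the
    other set, and average over both directions."""
    d2 = np.sum((P[:, None, :] - Q[None, :, :]) ** 2, axis=-1)  # (n, m)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Because every term is a min over smooth squared distances, the loss is differentiable almost everywhere, which is what makes standard error propagation applicable.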
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_7" ], "mid": [ "2119717200", "2949896890", "1498436455" ], "abstract": [ "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.", "We present a learning framework for abstracting complex shapes by learning to assemble objects using 3D volumetric primitives. In addition to generating simple and geometrically interpretable explanations of 3D objects, our framework also allows us to automatically discover and exploit consistent structure in the data. We demonstrate that using our method allows predicting shape representations which can be leveraged for obtaining a consistent parsing across the instances of a shape collection and constructing an interpretable shape similarity measure. We also examine applications for image-based prediction as well as shape manipulation.", "We describe a new learning procedure, back-propagation, for networks of neurone-like units. 
The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1." ] }
1904.09970
2938124742
Abstracting complex 3D shapes with parsimonious part-based representations has been a long standing goal in computer vision. This paper presents a learning-based solution to this problem which goes beyond the traditional 3D cuboid representation by exploiting superquadrics as atomic elements. We demonstrate that superquadrics lead to more expressive 3D scene parses while being easier to learn than 3D cuboid representations. Moreover, we provide an analytical solution to the Chamfer loss which avoids the need for computational expensive reinforcement learning or iterative prediction. Our model learns to parse 3D objects into consistent superquadric representations without supervision. Results on various ShapeNet categories as well as the SURREAL human body dataset demonstrate the flexibility of our model in capturing fine details and complex poses that could not have been modelled using cuboids.
Superquadrics are a parametric family of surfaces that can be used to describe cubes, cylinders, spheres, octahedra and ellipsoids @cite_28 . In contrast to geons @cite_44 , superquadric surfaces can be described by a fairly simple parameterization. In contrast to generalized cylinders @cite_44 , superquadrics are able to represent a larger variety of shapes. See fig:superquadrics for an illustration of the shape space.
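The explicit superquadric parameterization (Barr's formulation) can be sampled in a few lines; the sketch below assumes the standard two shape exponents e1, e2 and per-axis scales a1, a2, a3, with e1 = e2 = 1 recovering an ellipsoid and small exponents approaching a box.

```python
import numpy as np

def fexp(x, p):
    """Signed power sign(x) * |x|**p used in the superquadric equations."""
    return np.sign(x) * np.abs(x) ** p

def superquadric(a1, a2, a3, e1, e2, n=50):
    """Sample surface points of a superquadric on an (eta, omega) grid,
    with eta in [-pi/2, pi/2] and omega in [-pi, pi]."""
    eta = np.linspace(-np.pi / 2, np.pi / 2, n)
    omega = np.linspace(-np.pi, np.pi, n)
    eta, omega = np.meshgrid(eta, omega)
    x = a1 * fexp(np.cos(eta), e1) * fexp(np.cos(omega), e2)
    y = a2 * fexp(np.cos(eta), e1) * fexp(np.sin(omega), e2)
    z = a3 * fexp(np.sin(eta), e1)
    return np.stack([x, y, z], axis=-1)
```

Varying only (e1, e2) therefore sweeps a continuous family from boxes through cylinders to ellipsoids and octahedra, which is what makes the family attractive as a shape vocabulary.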
{ "cite_N": [ "@cite_28", "@cite_44" ], "mid": [ "2023751161", "2125756925" ], "abstract": [ "A new and powerful family of parametric shapes extends the basic quadric surfaces and solids, yielding a variety of useful forms.", "The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into simple volumetric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of components [N probably ≤ 36] can be derived from contrasts of five readily detectable properties of edges in a 2-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position and image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Pragnanz) characterize not the complete object but the object's components. A principle of componential recovery can account for the major phenomena of object recognition: If an arrangement of two or three primitive components can be recovered from the input, objects can be quickly recognized even when they are occluded, rotated in depth, novel, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory." ] }
1904.10112
2941661981
Previous studies on stochastic primal-dual algorithms for solving min-max problems with faster convergence heavily rely on the bilinear structure of the problem, which restricts their applicability to a narrowed range of problems. The main contribution of this paper is the design and analysis of new stochastic primal-dual algorithms that use a mixture of stochastic gradient updates and a logarithmic number of deterministic dual updates for solving a family of convex-concave problems with no bilinear structure assumed. Faster convergence rates than @math with @math being the number of stochastic gradient updates are established under some mild conditions of involved functions on the primal and the dual variable. For example, for a family of problems that enjoy a weak strong convexity in terms of the primal variable and has a strongly concave function of the dual variable, the convergence rate of the proposed algorithm is @math . We also investigate the effectiveness of the proposed algorithms for learning robust models and empirical AUC maximization.
Stochastic primal-dual gradient methods and their variants were first analyzed by @cite_1 for solving a more general problem @math . Under the standard bounded stochastic (sub)gradient assumption, a convergence rate of @math was established for the primal-dual gap, which implies a convergence rate of @math for minimizing the primal objective @math . Later, a couple of studies aimed to strengthen this convergence rate by leveraging the smoothness of @math or of the involved function when the objective has a special structure @cite_14 @cite_6 @cite_8 . However, the worst-case convergence rate of these later algorithms is still dominated by @math . Without a smoothness assumption on @math or a bilinear structure, these later algorithms are not directly applicable to solving ). In addition, Frank-Wolfe algorithms for saddle point problems were analyzed in @cite_30 , which also achieve a convergence rate of @math in terms of the primal-dual gap under the smoothness condition.
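The descent-ascent dynamics these stochastic primal-dual methods analyze can be illustrated on a toy strongly-convex-strongly-concave saddle problem; the quadratic objective, noise model and step size below are illustrative assumptions, not taken from the cited analyses.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgda(x, y, eta=0.05, steps=5000, noise=0.1):
    """Stochastic gradient descent-ascent on the convex-concave saddle
    f(x, y) = 0.5*x**2 + x*y - 0.5*y**2, whose saddle point is the origin.
    Exact gradients are perturbed with Gaussian noise to mimic a
    stochastic first-order oracle."""
    for _ in range(steps):
        gx = x + y + noise * rng.normal()   # stochastic d f / d x
        gy = x - y + noise * rng.normal()   # stochastic d f / d y
        x -= eta * gx                       # primal descent step
        y += eta * gy                       # dual ascent step
    return x, y
```

With a constant step size the iterates do not converge exactly but hover in a noise-dominated neighborhood of the saddle point; the cited works characterize how fast that neighborhood shrinks under decaying step sizes and additional structure.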
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_8", "@cite_1", "@cite_6" ], "mid": [ "2543937159", "", "2082451773", "1992208280", "2092851214" ], "abstract": [ "We extend the Frank-Wolfe (FW) optimization algorithm to solve constrained smooth convex-concave saddle point (SP) problems. Remarkably, the method only requires access to linear minimization oracles. Leveraging recent advances in FW optimization, we provide the first proof of convergence of a FW-type saddle point solver over polytopes, thereby partially answering a 30 year-old conjecture. We also survey other convergence results and highlight gaps in the theoretical underpinnings of FW-style algorithms. Motivating applications without known efficient alternatives are explored through structured prediction with combinatorial penalties as well as games over matching polytopes involving an exponential number of constraints.", "", "Originated from the practical implementation and numerical considerations of iterative methods for solving mathematical programs, the study of error bounds has grown and proliferated in many interesting areas within mathematical programming. This paper gives a comprehensive, state-of-the-art survey of the extensive theory and rich applications of error bounds for inequality and optimization systems and solution sets of equilibrium problems.", "In this paper we consider optimization problems where the objective function is given in a form of the expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely, the stochastic approximation (SA) and the sample average approximation (SAA) methods. Both approaches, the SA and SAA methods, have a long history. 
Current opinion is that the SAA method can efficiently use a specific (say, linear) structure of the considered problem, while the SA approach is a crude subgradient method, which often performs poorly in practice. We intend to demonstrate that a properly modified SA approach can be competitive and even significantly outperform the SAA method for a certain class of convex stochastic problems. We extend the analysis to the case of convex-concave stochastic saddle point problems and present (in our opinion highly encouraging) results of numerical experiments.", "We present a novel accelerated primal-dual (APD) method for solving a class of deterministic and stochastic saddle point problems (SPPs). The basic idea of this algorithm is to incorporate a multistep acceleration scheme into the primal-dual method without smoothing the objective function. For deterministic SPP, the APD method achieves the same optimal rate of convergence as Nesterov's smoothing technique. Our stochastic APD method exhibits an optimal rate of convergence for stochastic SPP not only in terms of its dependence on the number of the iteration, but also on a variety of problem parameters. To the best of our knowledge, this is the first time that such an optimal algorithm has been developed for stochastic SPP in the literature. Furthermore, for both deterministic and stochastic SPP, the developed APD algorithms can deal with the situation when the feasible region is unbounded, as long as a saddle point exists. In the unbounded case, we incorporate the modified termination criterion introduced b..." ] }
1904.10045
2941602825
Connectionist Temporal Classification (CTC) based end-to-end speech recognition systems usually need to incorporate an external language model via WFST-based decoding in order to achieve promising results. This is especially important for Mandarin speech recognition, which exhibits a special phenomenon, namely homophones, that causes many substitution errors. The linguistic information introduced by a language model helps to distinguish these substitution errors. In this work, we propose a transformer-based spelling correction model to automatically correct errors, especially the substitution errors, made by a CTC-based Mandarin speech recognition system. Specifically, we investigate using the recognition results generated by CTC-based systems as input and the ground-truth transcriptions as output to train a transformer with an encoder-decoder architecture, much like machine translation. Results on a 20,000-hour Mandarin speech recognition task show that the proposed spelling correction model achieves a CER of 3.41%, corresponding to 22.9% and 53.2% relative improvements over the baseline CTC-based systems decoded with and without a language model, respectively.
Automatic correction of recognition errors is crucial not only for improving the performance of an ASR system but also for avoiding the propagation of errors to downstream processes (e.g. machine translation, natural language processing). @cite_24 presents an overview of previous work on error correction for ASR. However, most research was limited to error detection @cite_4 @cite_2 @cite_26 , and only a few studies addressed the correction of ASR errors. @cite_28 built an ASR error detector and corrector using co-occurrence analysis. @cite_23 proposed a post-editing ASR error correction method based on the Microsoft N-Gram dataset. More recently, @cite_20 @cite_18 proposed to use attention-based sequence-to-sequence models to automatically correct ASR errors, which is similar to our work.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_28", "@cite_24", "@cite_23", "@cite_2", "@cite_20" ], "mid": [ "2949174760", "1571874193", "", "2041670430", "2797303992", "1633171621", "21942408", "2745596852" ], "abstract": [ "Attention-based sequence-to-sequence models for speech recognition jointly train an acoustic model, language model (LM), and alignment mechanism using a single neural network and require only parallel audio-text pairs. Thus, the language model component of the end-to-end model is only trained on transcribed audio-text pairs, which leads to performance degradation especially on rare words. While there have been a variety of works that look at incorporating an external LM trained on text-only data into the end-to-end framework, none of them have taken into account the characteristic error distribution made by the model. In this paper, we propose a novel approach to utilizing text-only data, by training a spelling correction (SC) model to explicitly correct those errors. On the LibriSpeech dataset, we demonstrate that the proposed model results in an 18.6% relative improvement in WER over the baseline model when directly correcting the top ASR hypothesis, and a 29.0% relative improvement when further rescoring an expanded n-best list using an external LM.", "This article addresses error detection in broadcast news automatic transcription, as a post-processing stage. Based on the observation that many errors appear in bursts, we investigated the use of Markov Chains (MC) for their temporal modelling capabilities. Experiments were conducted on a large American English broadcast news corpus from NIST. Common features in error detection were used, all decoder-based. MC classification performance was compared with a discriminative maximum entropy model (Maxent), currently used in our in-house decoder to estimate confidence measures, and also with Gaussian Mixture Models (GMM). 
The MC classifier obtained the best results, detecting 16.2% of the errors with the lowest classification error rate of 16.7%. Compared with the GMM classifier, MC lowered the number of false detections by 23.5% relative. The Maxent system achieved the same CER, but detected only 7.2% of the errors.", "", "In this paper we present preliminary results of a novel unsupervised approach for high-precision detection and correction of errors in the output of automatic speech recognition systems. We model the likely contexts of all words in an ASR system vocabulary by performing a lexical co-occurrence analysis using a large corpus of output from the speech system. We then identify regions in the data that contain likely contexts for a given query word. Finally, we detect words or sequences of words in the contextual regions that are unlikely to appear in the context and that are phonetically similar to the query word. Initial experiments indicate that this technique can produce high-precision targeted detection and correction of misrecognized query words.", "Abstract Even though Automatic Speech Recognition (ASR) has matured to the point of commercial applications, the high error rate in some speech recognition domains remains one of the main impediments to the wide adoption of speech technology, especially for continuous large vocabulary speech recognition applications. The persistent presence of ASR errors has intensified the need to find alternative techniques to automatically detect and correct such errors. The correction of transcription errors is crucial not only to improve the speech recognition accuracy, but also to avoid the propagation of errors to subsequent language processing modules such as machine translation. In this paper, basic principles of ASR evaluation are first summarized, and then the state of current ASR error detection and correction research is reviewed. 
We focus on emerging techniques using the word error rate metric.", "At the present time, computers are employed to solve complex tasks and problems ranging from simple calculations to intensive digital image processing and intricate algorithmic optimization problems to computationally-demanding weather forecasting problems. ASR, short for Automatic Speech Recognition, is yet another type of computational problem whose purpose is to recognize human spoken speech and convert it into text that can be processed by a computer. Despite that ASR has many versatile and pervasive real-world applications, it is still relatively erroneous and not perfectly solved as it is prone to produce spelling errors in the recognized text, especially if the ASR system is operating in a noisy environment, its vocabulary size is limited, and its input speech is of bad or low quality. This paper proposes a post-editing ASR error correction method based on the Microsoft N-Gram dataset for detecting and correcting spelling errors generated by ASR systems. The proposed method comprises an error detection algorithm for detecting word errors; a candidate corrections generation algorithm for generating correction suggestions for the detected word errors; and a context-sensitive error correction algorithm for selecting the best candidate for correction. The virtue of using the Microsoft N-Gram dataset is that it contains real-world data and word sequences extracted from the web which can mimic a comprehensive dictionary of words having a large and all-inclusive vocabulary. Experiments conducted on numerous speeches, performed by different speakers, showed a remarkable reduction in ASR errors. 
Future research can improve upon the proposed algorithm so much so that it can be parallelized to take advantage of multiprocessor and distributed systems.", "", "Connectionist Temporal Classification has recently attracted a lot of interest as it offers an elegant approach to building acoustic models (AMs) for speech recognition. The CTC loss function maps an input sequence of observable feature vectors to an output sequence of symbols. Output symbols are conditionally independent of each other under CTC loss, so a language model (LM) can be incorporated conveniently during decoding, retaining the traditional separation of acoustic and linguistic components in ASR. For fixed vocabularies, Weighted Finite State Transducers provide a strong baseline for efficient integration of CTC AMs with n-gram LMs. Character-based neural LMs provide a straight forward solution for open vocabulary speech recognition and all-neural models, and can be decoded with beam search. Finally, sequence-to-sequence models can be used to translate a sequence of individual sounds into a word string. We compare the performance of these three approaches, and analyze their error patterns, which provides insightful guidance for future research and development in this important area." ] }
1904.09856
2939084915
This paper presents a new deep-learning based method to simultaneously calibrate the intrinsic parameters of the fisheye lens and rectify the distorted images. Assuming that the distorted lines generated by fisheye projection should be straight after rectification, we propose a novel deep neural network to impose explicit geometry constraints onto the processes of fisheye lens calibration and distorted image rectification. In addition, considering the nonlinearity of the distortion distribution in fisheye images, the proposed network fully exploits multi-scale perception to equalize the rectification effects on the whole image. To train and evaluate the proposed model, we also create a new large-scale dataset labeled with corresponding distortion parameters and well-annotated distorted lines. Compared with the state-of-the-art methods, our model achieves the best published rectification quality and the most accurate estimation of distortion parameters on a large set of synthetic and real fisheye images.
To mitigate the difficulty of detecting geometric objects in distorted images, deep learning methods were proposed @cite_20 @cite_13 that apply the representational features learned by CNNs to the processes of fisheye calibration and image rectification. Among them, FishEyeRecNet @cite_13 proposed an end-to-end CNN that introduces scene-parsing semantics into the rectification of fisheye images. It reported promising results, but it is still not clear which kinds of high-level geometric information learned by their networks are important for fisheye image rectification. Moreover, the works @cite_15 @cite_31 @cite_35 show that explicit geometry such as plumb lines is very effective for distortion correction, but how to encode it with CNNs in an effective way is still an open problem.
{ "cite_N": [ "@cite_35", "@cite_15", "@cite_31", "@cite_13", "@cite_20" ], "mid": [ "2021581587", "1910129379", "", "2796773185", "2592756114" ], "abstract": [ "We present a method to automatically correct the radial distortion caused by wide-angle lenses using the distorted lines generated by the projection of 3D straight lines onto the image. Lens distortion is estimated by the division model using one parameter, which allows to state the problem into the Hough transform scheme by adding a distortion parameter to better extract straight lines from the image. This paper describes an algorithm which applies this technique, providing all the details of the design of an improved Hough transform. We perform experiments using calibration patterns and real scenes showing a strong distortion to illustrate the performance of the proposed method. Source Code The source code, the code documentation, and the online demo are accessible at the IPOL web page of this article 1 . In this page, an implementation is available for download. Compilation and usage instructions are included in the README.txt file of the archive.", "Fisheye image rectification and estimation of intrinsic parameters for real scenes have been addressed in the literature by using line information on the distorted images. In this paper, we propose an easily implemented fisheye image rectification algorithm with line constrains in the undistorted perspective image plane. A novel Multi-Label Energy Optimization (MLEO) method is adopted to merge short circular arcs sharing the same or the approximately same circular parameters and select long circular arcs for camera rectification. Further we propose an efficient method to estimate intrinsic parameters of the fisheye camera by automatically selecting three properly arranged long circular arcs from previously obtained circular arcs in the calibration procedure. 
Experimental results on a number of real images and simulated data show that the proposed method can achieve good results and outperforms the existing approaches and the commercial software in most cases.", "", "Images captured by fisheye lenses violate the pinhole camera assumption and suffer from distortions. Rectification of fisheye images is therefore a crucial preprocessing step for many computer vision applications. In this paper, we propose an end-to-end multi-context collaborative deep network for removing distortions from single fisheye images. In contrast to conventional approaches, which focus on extracting hand-crafted features from input images, our method learns high-level semantics and low-level appearance features simultaneously to estimate the distortion parameters. To facilitate training, we construct a synthesized dataset that covers various scenes and distortion parameter settings. Experiments on both synthesized and real-world datasets show that the proposed model significantly outperforms current state-of-the-art methods. Our code and synthesized dataset will be made publicly available.", "Radial lens distortion often exists in images taken by common cameras, which violates the assumption of the pinhole camera model. Estimating the radial lens distortion of an image is an important preprocessing step for many vision applications. This paper intends to employ CNNs (Convolutional Neural Networks) to achieve radial distortion correction. However, the main issue hindering its progress is the scarcity of training data with radial distortion annotations. Inspired by the growing availability of image datasets with non-radial distortion, we propose a framework to address the issue by synthesizing images with radial distortion for CNNs. We believe that a large number of images with high variation of radial distortion are generated, which can be well exploited by a deep CNN with a high learning capacity. 
We present quantitative results that demonstrate the ability of our technique to estimate the radial distortion with comparisons against several baseline methods, including an automatic method based on Hough transforms of distorted line images." ] }
1904.09856
2939084915
This paper presents a new deep-learning based method to simultaneously calibrate the intrinsic parameters of the fisheye lens and rectify the distorted images. Assuming that the distorted lines generated by fisheye projection should be straight after rectification, we propose a novel deep neural network to impose explicit geometry constraints onto the processes of fisheye lens calibration and distorted image rectification. In addition, considering the nonlinearity of the distortion distribution in fisheye images, the proposed network fully exploits multi-scale perception to equalize the rectification effects on the whole image. To train and evaluate the proposed model, we also create a new large-scale dataset labeled with corresponding distortion parameters and well-annotated distorted lines. Compared with the state-of-the-art methods, our model achieves the best published rectification quality and the most accurate estimation of distortion parameters on a large set of synthetic and real fisheye images.
Another topic closely related to our work is distorted line extraction in fisheye images. Various arc detection methods and optimization strategies have been utilized in the calibration process @cite_25 @cite_27 @cite_22 @cite_35 @cite_15 , but they are not robust for detecting arcs, especially in environments with noise or absent texture. Although recent deep learning based methods @cite_26 @cite_3 @cite_34 @cite_12 @cite_0 show promising performance on edge detection, none of them is well qualified to deal with distorted lines in fisheye images.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_22", "@cite_3", "@cite_0", "@cite_27", "@cite_15", "@cite_34", "@cite_25", "@cite_12" ], "mid": [ "2021581587", "1976047850", "2075544016", "2560622558", "", "2151941159", "1910129379", "1930528368", "2105060035", "2963357722" ], "abstract": [ "We present a method to automatically correct the radial distortion caused by wide-angle lenses using the distorted lines generated by the projection of 3D straight lines onto the image. Lens distortion is estimated by the division model using one parameter, which allows to state the problem into the Hough transform scheme by adding a distortion parameter to better extract straight lines from the image. This paper describes an algorithm which applies this technique, providing all the details of the design of an improved Hough transform. We perform experiments using calibration patterns and real scenes showing a strong distortion to illustrate the performance of the proposed method. Source Code The source code, the code documentation, and the online demo are accessible at the IPOL web page of this article 1 . In this page, an implementation is available for download. Compilation and usage instructions are included in the README.txt file of the archive.", "Edge detection is a critical component of many vision systems, including object detectors and image segmentation algorithms. Patches of edges exhibit well-known forms of local structure, such as straight lines or T-junctions. In this paper we take advantage of the structure present in local image patches to learn both an accurate and computationally efficient edge detector. We formulate the problem of predicting local edge masks in a structured learning framework applied to random decision forests. Our novel approach to learning decision trees robustly maps the structured labels to a discrete space on which standard information gain measures may be evaluated. 
The result is an approach that obtains realtime performance that is orders of magnitude faster than many competing state-of-the-art approaches, while also achieving state-of-the-art edge detection results on the BSDS500 Segmentation dataset and NYU Depth dataset. Finally, we show the potential of our approach as a general purpose edge detector by showing our learned edge models generalize well across datasets.", "Many computer vision algorithms rely on the assumptions of the pinhole camera model, but lens distortion with off-the-shelf cameras is usually significant enough to violate this assumption. Many methods for radial distortion estimation have been proposed, but they all have limitations. Robust automatic radial distortion estimation from a single natural image would be extremely useful for many applications, particularly those in human-made environments containing abundant lines. For example, it could be used in place of an extensive calibration procedure to get a mobile robot or quadrotor experiment up and running quickly in an indoor environment. We propose a new method for automatic radial distortion estimation based on the plumb-line approach. The method works from a single image and does not require a special calibration pattern. It is based on Fitzgibbon's division model, robust estimation of circular arcs, and robust estimation of distortion parameters. We perform an extensive empirical study of the method on synthetic images. We include a comparative statistical analysis of how different circle fitting methods contribute to accurate distortion parameter estimation. We finally provide qualitative results on a wide variety of challenging real images. The experiments demonstrate the method's ability to accurately identify distortion parameters and remove distortion from images.", "Edge detection is a fundamental problem in computer vision. Recently, convolutional neural networks (CNNs) have pushed forward this field significantly. 
Existing methods which adopt specific layers of deep CNNs may fail to capture complex data structures caused by variations of scales and aspect ratios. In this paper, we propose an accurate edge detector using richer convolutional features (RCF). RCF encapsulates all convolutional features into more discriminative representation, which makes good usage of rich feature hierarchies, and is amenable to training via backpropagation. RCF fully exploits multiscale and multilevel information of objects to perform the image-to-image prediction holistically. Using VGG16 network, we achieve state-of-the-art performance on several available datasets. When evaluating on the well-known BSDS500 benchmark, we achieve ODS F-measure of 0.811 while retaining a fast speed (8 FPS). Besides, our fast version of RCF achieves ODS F-measure of 0.806 with 30 FPS. We also demonstrate the versatility of the proposed method by applying RCF edges for classical image segmentation.", "", "Nowadays, robotic systems are more and more equipped with catadioptric cameras. However several problems associated to catadioptric vision have been studied only slightly. Especially algorithms for detecting rectangles in catadioptric images have not yet been developed whereas it is required in diverse applications such as building extraction in aerial images. We show that working in the equivalent sphere provides an appropriate framework to detect lines, parallelism, orthogonality and therefore rectangles. Finally, we present experimental results on synthesized and real data.", "Fisheye image rectification and estimation of intrinsic parameters for real scenes have been addressed in the literature by using line information on the distorted images. In this paper, we propose an easily implemented fisheye image rectification algorithm with line constrains in the undistorted perspective image plane. 
A novel Multi-Label Energy Optimization (MLEO) method is adopted to merge short circular arcs sharing the same or the approximately same circular parameters and select long circular arcs for camera rectification. Further we propose an efficient method to estimate intrinsic parameters of the fisheye camera by automatically selecting three properly arranged long circular arcs from previously obtained circular arcs in the calibration procedure. Experimental results on a number of real images and simulated data show that the proposed method can achieve good results and outperforms the existing approaches and the commercial software in most cases.", "Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then use them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection.", "The use of super-wide angle and fish-eye lenses causes strong distortions in the resulting images. A methodology for the correction of distortions in these cases using only single images and linearity of imaged objects is presented. Contrary to most former algorithms, the algorithm discussed here does not depend on information about the real world co-ordinates of matching points. Moreover reference points determination and camera calibration is not required in this case. The algorithm is based on circle fitting. It requires only the possibility of the extraction of distorted image points from straight lines in the 3D scene. 
Further, the actual distortion must approximately fit the chosen distortion model. For most fish-eye lenses appropriate distortion correction results can be obtained.", "" ] }
1904.10151
2942354372
One of the long-term challenges of robotics is to enable humans to communicate with robots about the world. It is essential if they are to collaborate. Humans are visual animals, and we communicate primarily through language, so human-robot communication is inevitably at least partly a vision-and-language problem. This has motivated both Referring Expression datasets, and Vision and Language Navigation datasets. These partition the problem into that of identifying an object of interest, or navigating to another location. Many of the most appealing uses of robots, however, require communication about remote objects and thus do not reflect the dichotomy in the datasets. We thus propose the first Remote Embodied Referring Expression dataset of natural language references to remote objects in real images. Success requires navigating through a previously unseen environment to select an object identified through general natural language. This represents a complex challenge, but one that closely reflects one of the core visual problems in robotics. A Navigator-Pointer model which provides a strong baseline on the task is also proposed.
The referring expression task aims to localise an object in an image given a natural language description. Recent works cast this task as looking for the object that can generate its paired expressions @cite_7 @cite_31 @cite_12 or as jointly embedding the image and expression for matching estimation @cite_32 @cite_9 @cite_5 @cite_13 . Yu et al. @cite_12 propose to compute the appearance difference between objects of the same category to enhance the visual features for expression generation. Instead of treating expressions as a single unit, @cite_13 learns a language attention model to decompose expressions into appearance, location, and object-relationship components.
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_32", "@cite_5", "@cite_31", "@cite_13", "@cite_12" ], "mid": [ "2963735856", "2558535589", "2745166287", "2779827764", "2583360688", "2784458614", "2949107813" ], "abstract": [ "In this paper, we address the task of natural language object retrieval, to localize a target object within a given image based on a natural language query of the object. Natural language object retrieval differs from text-based image retrieval task as it involves spatial information about objects within the scene and global scene context. To address this issue, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as scoring function on candidate boxes for object retrieval, integrating spatial configurations and global scene-level contextual information into the network. Our model processes query text, local image descriptors, spatial configurations and global context features through a recurrent network, outputs the probability of the query text conditioned on each candidate box as a score for the box, and can transfer visual-linguistic knowledge from image captioning domain to our task. Experimental results demonstrate that our method effectively utilizes both local and global information, outperforming previous baseline methods significantly on different datasets and scenarios, and can exploit large scale vision and language datasets for knowledge transfer.", "People often refer to entities in an image in terms of their relationships with other entities. For example, the black cat sitting under the table refers to both a black cat entity and its relationship with another table entity. Understanding these relationships is essential for interpreting and grounding such natural language expressions. Most prior work focuses on either grounding entire referential expressions holistically to one region, or localizing relationships based on a fixed set of categories. 
In this paper we instead present a modular deep architecture capable of analyzing referential expressions into their component parts, identifying entities and relationships mentioned in the input expression and grounding them all in the scene. We call this approach Compositional Modular Networks (CMNs): a novel architecture that learns linguistic analysis and visual inference end-to-end. Our approach is built around two types of neural modules that inspect local regions and pairwise interactions between regions. We evaluate CMNs on multiple referential expression datasets, outperforming state-of-the-art approaches on all tasks.", "Given a textual description of an image, phrase grounding localizes objects in the image referred to by query phrases in the description. State-of-the-art methods address the problem by ranking a set of proposals based on the relevance to each query, which are limited by the performance of independent proposal generation systems and ignore useful cues from context in the description. In this paper, we adopt a spatial regression method to break the performance limit, and introduce reinforcement learning techniques to further leverage semantic context information. We propose a novel Query-guided Regression network with Context policy (QRC Net) which jointly learns a Proposal Generation Network (PGN), a Query-guided Regression Network (QRN) and a Context Policy Network (CPN). Experiments show QRC Net provides a significant improvement in accuracy on two popular datasets: Flickr30K Entities and Referit Game, with 14.25% and 17.14% increases over the state-of-the-art, respectively.
We first train an attribute learning model from visual objects and their paired descriptions. Then in the generation task, we take the learned attributes as the input into the generation model, thus the expressions are generated driven by both attributes and the previous words. For comprehension, we embed the learned attributes with visual features and semantics into the common space model, then the target object is retrieved based on its ranking distance in the common space. Experimental results on the three standard datasets, RefCOCO, RefCOCO+, and RefCOCOg show significant improvements over the baseline model, demonstrating that our method is effective for both tasks.", "We consider generation and comprehension of natural language referring expression for objects in an image. Unlike generic image captioning which lacks natural standard evaluation criteria, quality of a referring expression may be measured by the receivers ability to correctly infer which object is being described. Following this intuition, we propose two approaches to utilize models trained for comprehension task to generate better expressions. First, we use a comprehension module trained on human-generated expressions, as a critic of referring expression generator. The comprehension module serves as a differentiable proxy of human evaluation, providing training signal to the generation module. Second, we use the comprehension model in a generate-and-rerank pipeline, which chooses from candidate expressions generated by a model according to their performance on the comprehension task. We show that both approaches lead to improved referring expression generation on multiple benchmark datasets.", "In this paper, we address referring expression comprehension: localizing an image region described by a natural language expression. 
While most recent work treats expressions as a single unit, we propose to decompose them into three modular components related to subject appearance, location, and relationship to other objects. This allows us to flexibly adapt to expressions containing different types of information in an end-to-end framework. In our model, which we call the Modular Attention Network (MAttNet), two types of attention are utilized: language-based attention that learns the module weights as well as the word phrase attention that each module should focus on; and visual attention that allows the subject and relationship modules to focus on relevant image components. Module weights combine scores from all three modules dynamically to output an overall score. Experiments show that MAttNet outperforms previous state-of-the-art methods by a large margin on both bounding-box-level and pixel-level comprehension tasks.", "Humans refer to objects in their environments all the time, especially in dialogue with other people. We explore generating and comprehending natural language referring expressions for objects in images. In particular, we focus on incorporating better measures of visual context into referring expression models and find that visual comparison to other objects within an image helps improve performance significantly. We also develop methods to tie the language generation process together, so that we generate expressions for all objects of a particular category jointly. Evaluation on three recent datasets - RefCOCO, RefCOCO+, and RefCOCOg, shows the advantages of our methods for both referring expression generation and comprehension." ] }
1904.10151
2942354372
One of the long-term challenges of robotics is to enable humans to communicate with robots about the world. It is essential if they are to collaborate. Humans are visual animals, and we communicate primarily through language, so human-robot communication is inevitably at least partly a vision-and-language problem. This has motivated both Referring Expression datasets, and Vision and Language Navigation datasets. These partition the problem into that of identifying an object of interest, or navigating to another location. Many of the most appealing uses of robots, however, require communication about remote objects and thus do not reflect the dichotomy in the datasets. We thus propose the first Remote Embodied Referring Expression dataset of natural language references to remote objects in real images. Success requires navigating through a previously unseen environment to select an object identified through general natural language. This represents a complex challenge, but one that closely reflects one of the core visual problems in robotics. A Navigator-Pointer model which provides a strong baseline on the task is also proposed.
Embodied question answering (EQA) @cite_29 requires an agent to answer a question about an object or a room, such as 'What colour is the car?' and 'What room is the @math OBJ @math in?'. The subject of the question may be invisible from the agent's initial location, so navigation may be required. The House3D EQA dataset @cite_29 is based on synthetic images and includes nine kinds of predefined question templates. Gordon et al. @cite_6 introduce an interactive version of the EQA task, in which the agent may need to interact with objects in the environment to answer questions correctly. Our Remote Embodied Referring Expression task also falls within the area of embodied vision-and-language, but unlike previous works that only output a simple answer or a series of actions, we ask the agent to put a bounding box around the target object. This is a more challenging but more realistic setting: if we want a robot to manipulate an object in an environment, we need its precise location and little more.
{ "cite_N": [ "@cite_29", "@cite_6" ], "mid": [ "2950697717", "2772262724" ], "abstract": [ "We present a new AI task -- Embodied Question Answering (EmbodiedQA) -- where an agent is spawned at a random location in a 3D environment and asked a natural language question (\"What color is the car?\"). In order to answer, the agent must first intelligently navigate to explore the environment, gather information through first-person (egocentric) vision, and then answer the question (\"orange\"). This challenging task requires a range of AI skills -- active perception, language understanding, goal-driven navigation, commonsense reasoning, and grounding of language into actions. In this work, we develop the environments, end-to-end-trained reinforcement learning agents, and evaluation protocols for EmbodiedQA.", "We introduce Interactive Question Answering (IQA), the task of answering questions that require an autonomous agent to interact with a dynamic visual environment. IQA presents the agent with a scene and a question, like: \"Are there any apples in the fridge?\" The agent must navigate around the scene, acquire visual understanding of scene elements, interact with objects (e.g. open refrigerators) and plan for a series of actions conditioned on the question. Popular reinforcement learning approaches with a single controller perform poorly on IQA owing to the large and diverse state space. We propose the Hierarchical Interactive Memory Network (HIMN) consisting of a factorized set of controllers, allowing the system to operate at multiple levels of temporal abstraction, reducing the diversity of the action space available to each controller and enabling an easier training paradigm. We introduce IQADATA, a new Interactive Question Answering dataset built upon AI2-THOR, a simulated photo-realistic environment of configurable indoor scenes with interactive objects. IQADATA has 75,000 questions, each paired with a unique scene configuration. 
Our experiments show that our proposed model outperforms popular single controller based methods on IQADATA." ] }
1904.10056
2941138665
We investigate learning feature-to-feature translator networks by alternating back-propagation as a general-purpose solution to zero-shot learning (ZSL) problems. Our method can be categorized as a generative model-based ZSL method. In contrast to GANs or VAEs, which require auxiliary networks to assist training, our model consists of a single conditional generator that maps the class feature and a latent vector accounting for randomness in the output to the image feature, and is trained by maximum likelihood estimation. The training process is a simple yet effective EM-like process that iterates the following two steps: (i) the inferential back-propagation to infer the latent noise vector of each observed data point, and (ii) the learning back-propagation to update the parameters of the model. With slight modifications of our model, we also provide a solution to learning from incomplete visual features for ZSL. We conduct extensive comparisons with existing generative ZSL methods on five benchmarks, demonstrating the superiority of our method in not only performance but also convergence speed and computational cost. Specifically, our model outperforms the existing state-of-the-art methods by a remarkable margin of up to @math and @math in ZSL and generalized ZSL settings, respectively.
We are not the first to apply conditional generators to learn X-to-Y mappings. A text-to-image translation model was proposed for image synthesis from text descriptions @cite_36 . Zhu et al. @cite_41 studied the image-to-image translation problem for different types of image processing tasks, including synthesizing photos from label maps or edge maps and colorizing grey-scale images. Recently, the video-to-video translation problem has been tackled by learning a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to a target realistic video @cite_44 . Our work learns a feature-to-feature mapping for ZSL. Additionally, all the works mentioned above are based on the framework of GANs, which means that a well-designed discriminator network is required for training. Our framework differs in that it is trained by alternating back-propagation without incorporating any extra networks. This makes our setup considerably simpler and more efficient than most others.
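The EM-like alternating back-propagation training described above can be sketched for a toy linear generator x = W z + noise. This is a hypothetical minimal illustration, not the paper's conditional network: step (i) performs gradient ascent on each latent code's log posterior, and step (ii) takes a gradient step on the generator parameters.

```python
import numpy as np

def alternating_backprop(X, latent_dim=2, steps=100, lr_z=0.05, lr_w=0.005, sigma=1.0):
    """Alternating back-propagation for a toy linear generator x = W z + noise.
    (i) inferential back-propagation: update each latent z_i by gradient ascent
        on log p(x_i | z_i) + log p(z_i);
    (ii) learning back-propagation: gradient step on W for the reconstruction term."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    W = 0.1 * rng.standard_normal((d, latent_dim))
    Z = np.zeros((n, latent_dim))
    for _ in range(steps):
        residual = X - Z @ W.T
        # (i) grad of -||x - W z||^2 / (2 sigma^2) - ||z||^2 / 2 w.r.t. z
        Z += lr_z * (residual @ W / sigma**2 - Z)
        # (ii) grad of -||x - W z||^2 / (2 sigma^2) w.r.t. W
        W += lr_w * ((X - Z @ W.T).T @ Z / sigma**2)
    return W, Z

# toy data generated from a ground-truth linear model
rng = np.random.default_rng(1)
Z_true = rng.standard_normal((50, 2))
W_true = rng.standard_normal((4, 2))
X = Z_true @ W_true.T + 0.1 * rng.standard_normal((50, 4))
W, Z = alternating_backprop(X)
print(np.linalg.norm(X - Z @ W.T))  # reconstruction error after training
```

Note that no discriminator or inference network appears anywhere: the latent codes are recovered by gradient steps through the generator itself, which is the simplification the related-work paragraph highlights.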
{ "cite_N": [ "@cite_36", "@cite_41", "@cite_44" ], "mid": [ "2949999304", "1840847274", "" ], "abstract": [ "Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image model- ing, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.", "Hamiltonian dynamics can be used to produce distant proposals for the Metropolis algorithm, thereby avoiding the slow exploration of the state space that results from the diffusive behaviour of simple random-walk proposals. Though originating in physics, Hamiltonian dynamics can be applied to most problems with continuous state spaces by simply introducing fictitious \"momentum\" variables. A key to its usefulness is that Hamiltonian dynamics preserves volume, and its trajectories can thus be used to define complex mappings without the need to account for a hard-to-compute Jacobian factor - a property that can be exactly maintained even when the dynamics is approximated by discretizing time. 
In this review, I discuss theoretical and practical aspects of Hamiltonian Monte Carlo, and present some of its variations, including using windows of states for deciding on acceptance or rejection, computing trajectories using fast approximations, tempering during the course of a trajectory to handle isolated modes, and short-cut methods that prevent useless trajectories from taking much computation time.", "" ] }
1904.10117
2950488159
Modeling relation between actors is important for recognizing group activity in a multi-person scene. This paper aims at learning discriminative relation between actors efficiently using deep models. To this end, we propose to build a flexible and efficient Actor Relation Graph (ARG) to simultaneously capture the appearance and position relation between actors. Thanks to the Graph Convolutional Network, the connections in ARG could be automatically learned from group activity videos in an end-to-end manner, and the inference on ARG could be efficiently performed with standard matrix operations. Furthermore, in practice, we come up with two variants to sparsify ARG for more effective modeling in videos: spatially localized ARG and temporal randomized ARG. We perform extensive experiments on two standard group activity recognition datasets: the Volleyball dataset and the Collective Activity dataset, where state-of-the-art performance is achieved on both datasets. We also visualize the learned actor graphs and relation features, which demonstrate that the proposed ARG is able to capture the discriminative relation information for group activity recognition.
Neural networks on graphs. Recently, integrating graphical models with deep neural networks has become an emerging topic in deep learning research. A considerable number of models have arisen for reasoning on graph-structured data in various tasks, such as classification of graphs @cite_8 @cite_68 @cite_2 @cite_51 @cite_26 , classification of nodes in graphs @cite_63 @cite_39 @cite_16 , and modeling multi-agent interacting physical systems @cite_4 @cite_40 @cite_55 @cite_44 . In our work, we apply the Graph Convolutional Network (GCN) @cite_63 , which was originally proposed for semi-supervised classification of nodes in a graph. There are also applications of GCNs to single-human action recognition problems @cite_71 @cite_45 . However, it would be inefficient to compute all pairwise relations across all video frames to build the video as a fully-connected graph. Therefore, we build the multi-person scene as a sparse graph according to relative location. Meanwhile, we propose to combine GCN with a sparse temporal sampling strategy @cite_46 for more efficient learning.
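The spatially localized sparsification mentioned above (connecting actors only by relative location rather than all pairs) can be sketched as follows; the function name and the radius threshold are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def localized_adjacency(positions, radius=1.5):
    """Build a sparse actor graph: connect two actors only if the distance
    between their centers is below `radius`; no self-loops."""
    diff = positions[:, None, :] - positions[None, :, :]   # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)                   # pairwise distances
    A = (dist < radius).astype(float)
    np.fill_diagonal(A, 0.0)                               # drop self-connections
    return A

# three actors: the first two are close, the third is far away
pos = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
A = localized_adjacency(pos)
print(A)  # actors 0 and 1 connected; actor 2 isolated
```

The resulting sparse adjacency can then be plugged into standard GCN matrix operations, avoiding the quadratic cost of a fully-connected graph over all actors in all frames.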
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_8", "@cite_55", "@cite_16", "@cite_39", "@cite_44", "@cite_40", "@cite_45", "@cite_63", "@cite_2", "@cite_71", "@cite_46", "@cite_68", "@cite_51" ], "mid": [ "", "2787337315", "", "", "", "2962767366", "2658702172", "2402402867", "2951656361", "", "2406128552", "2784435047", "", "2950191616", "" ], "abstract": [ "", "Interacting systems are prevalent in nature, from dynamical systems in physics to complex societal dynamics. The interplay of components can give rise to complex behavior, which can often be explained using a simple model of the system's constituent parts. In this work, we introduce the neural relational inference (NRI) model: an unsupervised model that learns to infer interactions while simultaneously learning the dynamics purely from observational data. Our model takes the form of a variational auto-encoder, in which the latent code represents the underlying interaction graph and the reconstruction is based on graph neural networks. In experiments on simulated physical systems, we show that our NRI model can accurately recover ground-truth interactions in an unsupervised manner. We further demonstrate that we can find an interpretable structure and predict complex dynamics in real motion capture and sports tracking data.", "", "", "", "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. 
Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.", "Multi-agent predictive modeling is an essential step for understanding physical, social and team-play systems. Recently, Interaction Networks (INs) were proposed for the task of modeling multi-agent physical systems, INs scale with the number of interactions in the system (typically quadratic or higher order in the number of agents). In this paper we introduce VAIN, a novel attentional architecture for multi-agent predictive modeling that scales linearly with the number of agents. We show that VAIN is effective for multi-agent predictive modeling. Our method is evaluated on tasks from challenging multi-agent prediction domains: chess and soccer, and outperforms competing multi-agent approaches.", "Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNet, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. 
In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.", "How do humans recognize the action \"opening a book\" ? We argue that there are two important cues: modeling temporal shape dynamics and modeling functional relationships between humans and objects. In this paper, we propose to represent videos as space-time region graphs which capture these two important cues. Our graph nodes are defined by the object region proposals from different frames in a long range video. These nodes are connected by two types of relations: (i) similarity relations capturing the long range dependencies between correlated objects and (ii) spatial-temporal relations capturing the interactions between nearby objects. We perform reasoning on this graph representation via Graph Convolutional Networks. We achieve state-of-the-art results on both Charades and Something-Something datasets. Especially for Charades, we obtain a huge 4.4 gain when our model is applied in complex environments.", "", "Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.", "Dynamics of human body skeletons convey significant information for human action recognition. 
Conventional approaches for modeling skeletons usually rely on hand-crafted parts or traversal rules, thus resulting in limited expressive power and difficulties of generalization. In this work, we propose a novel model of dynamic skeletons called Spatial-Temporal Graph Convolutional Networks (ST-GCN), which moves beyond the limitations of previous methods by automatically learning both the spatial and temporal patterns from data. This formulation not only leads to greater expressive power but also stronger generalization capability. On two large datasets, Kinetics and NTU-RGBD, it achieves substantial improvements over mainstream methods.", "", "Kernel classifiers and regressors designed for structured data, such as sequences, trees and graphs, have significantly advanced a number of interdisciplinary areas such as computational biology and drug design. Typically, kernels are designed beforehand for a data type which either exploit statistics of the structures or make use of probabilistic generative models, and then a discriminative classifier is learned based on the kernels via convex optimization. However, such an elegant two-stage approach also limited kernel methods from scaling up to millions of data points, and exploiting discriminative information to learn feature representations. We propose, structure2vec, an effective and scalable approach for structured data representation based on the idea of embedding latent variable models into feature spaces, and learning such feature spaces using discriminative information. Interestingly, structure2vec extracts features by performing a sequence of function mappings in a way similar to graphical model inference procedures, such as mean field and belief propagation. In applications involving millions of data points, we showed that structure2vec runs 2 times faster, produces models which are @math times smaller, while at the same time achieving the state-of-the-art predictive performance.", "" ] }
1904.09981
2940562175
Graph Neural Networks (GNNs) have been popularly used for analyzing non-Euclidean data such as social network data and biological data. Despite their success, the design of graph neural networks requires a lot of manual work and domain knowledge. In this paper, we propose a Graph Neural Architecture Search method (GraphNAS for short) that enables automatic search of the best graph neural architecture based on reinforcement learning. Specifically, GraphNAS first uses a recurrent network to generate variable-length strings that describe the architectures of graph neural networks, and then trains the recurrent network with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation data set. Extensive experimental results on node classification tasks in both transductive and inductive learning settings demonstrate that GraphNAS can achieve consistently better performance on the Cora, Citeseer, Pubmed citation network, and protein-protein interaction network. On node classification tasks, GraphNAS can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy.
The notion of graph neural networks was first outlined in @cite_9 . Inspired by convolutional networks in computer vision, a large number of methods that re-define the notion of the convolution filter for graph data have been proposed recently. Convolution filters for graph data fall into two categories: spectral-based and spatial-based.
{ "cite_N": [ "@cite_9" ], "mid": [ "1501856433" ], "abstract": [ "In several applications the information is naturally represented by graphs. Traditional approaches cope with graphical data structures using a preprocessing phase which transforms the graphs into a set of flat vectors. However, in this way, important topological information may be lost and the achieved results may heavily depend on the preprocessing stage. This paper presents a new neural model, called graph neural network (GNN), capable of directly processing graphs. GNNs extends recursive neural networks and can be applied on most of the practically useful kinds of graphs, including directed, undirected, labelled and cyclic graphs. A learning algorithm for GNNs is proposed and some experiments are discussed which assess the properties of the model." ] }
1904.09981
2940562175
Graph Neural Networks (GNNs) have been popularly used for analyzing non-Euclidean data such as social network data and biological data. Despite their success, the design of graph neural networks requires a lot of manual work and domain knowledge. In this paper, we propose a Graph Neural Architecture Search method (GraphNAS for short) that enables automatic search of the best graph neural architecture based on reinforcement learning. Specifically, GraphNAS first uses a recurrent network to generate variable-length strings that describe the architectures of graph neural networks, and then trains the recurrent network with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation data set. Extensive experimental results on node classification tasks in both transductive and inductive learning settings demonstrate that GraphNAS can achieve consistently better performance on the Cora, Citeseer, Pubmed citation network, and protein-protein interaction network. On node classification tasks, GraphNAS can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy.
As spectral methods usually handle the whole graph simultaneously and are difficult to parallelize or scale to large graphs, spatial-based graph convolutional networks have developed rapidly in recent years @cite_0 @cite_1 @cite_7 @cite_14 @cite_3 . These methods perform the convolution directly in the graph domain by aggregating information from neighboring nodes. Together with sampling strategies, the computation can be performed on a batch of nodes instead of the whole graph @cite_0 @cite_14 .
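A spatial-style layer of this kind, combining neighbor sampling with mean aggregation in the spirit of GraphSAGE @cite_0 , might look like the following toy numpy sketch; the fixed sample size, weight shapes, and function name are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_mean_layer(features, neighbors, W_self, W_neigh, sample_size=2):
    """One spatial graph-conv layer: for each node, sample at most `sample_size`
    neighbors, mean-aggregate their features, combine with the node's own
    representation, and apply ReLU. Works on a batch-sized subset of nodes."""
    out = []
    for v, neigh in enumerate(neighbors):
        if len(neigh) > sample_size:
            neigh = rng.choice(neigh, size=sample_size, replace=False)
        agg = (features[list(neigh)].mean(axis=0)
               if len(neigh) else np.zeros(features.shape[1]))
        out.append(np.maximum(features[v] @ W_self + agg @ W_neigh, 0.0))
    return np.stack(out)

# toy 4-node graph with edges 0-1, 0-2, 2-3
neighbors = [[1, 2], [0], [0, 3], [2]]
X = rng.standard_normal((4, 3))
W_self = rng.standard_normal((3, 2))
W_neigh = rng.standard_normal((3, 2))
H = sampled_mean_layer(X, neighbors, W_self, W_neigh)
print(H.shape)  # (4, 2)
```

Because each node only touches its sampled neighborhood, the same function can be evaluated on a mini-batch of nodes without loading the whole graph, which is the scalability point made above.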
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_1", "@cite_3", "@cite_0" ], "mid": [ "2809418595", "2964145825", "2558460151", "", "2962767366" ], "abstract": [ "Convolutional neural networks (CNNs) have achieved great success on grid-like data such as images, but face tremendous challenges in learning from more generic data such as graphs. In CNNs, the trainable local filters enable the automatic extraction of high-level features. The computation with filters requires a fixed number of ordered units in the receptive fields. However, the number of neighboring units is neither fixed nor are they ordered in generic graphs, thereby hindering the applications of convolutional operations. Here, we address these challenges by proposing the learnable graph convolutional layer (LGCL). LGCL automatically selects a fixed number of neighboring nodes for each feature based on value ranking in order to transform graph data into grid-like structures in 1-D format, thereby enabling the use of regular convolutional operations on generic graphs. To enable model training on large-scale graphs, we propose a sub-graph training method to reduce the excessive memory and computational resource requirements suffered by prior methods on graph convolutions. Our experimental results on node classification tasks in both transductive and inductive learning settings demonstrate that our methods can achieve consistently better performance on the Cora, Citeseer, Pubmed citation network, and protein-protein interaction network datasets. Our results also indicate that the proposed methods using sub-graph training strategy are more efficient as compared to prior approaches.", "Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. 
Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.", "Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most of deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, or computer graphics. In this paper, we propose a unified framework allowing to generalize CNN architectures to non-Euclidean domains (graphs and manifolds) and learn local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image-, graph-and 3D shape analysis and show that it consistently outperforms previous approaches.", "", "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. 
However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions." ] }
1904.09981
2940562175
Graph Neural Networks (GNNs) have been popularly used for analyzing non-Euclidean data such as social network data and biological data. Despite their success, the design of graph neural networks requires a lot of manual work and domain knowledge. In this paper, we propose a Graph Neural Architecture Search method (GraphNAS for short) that enables automatic search of the best graph neural architecture based on reinforcement learning. Specifically, GraphNAS first uses a recurrent network to generate variable-length strings that describe the architectures of graph neural networks, and then trains the recurrent network with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation data set. Extensive experimental results on node classification tasks in both transductive and inductive learning settings demonstrate that GraphNAS can achieve consistently better performance on the Cora, Citeseer, Pubmed citation network, and protein-protein interaction network. On node classification tasks, GraphNAS can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy.
Recent graph neural architectures follow a neighborhood aggregation scheme that consists of three types of functions: neighbor sampling, correlation measurement, and information aggregation. Each GNN layer is a combination of these three types of functions. For example, each layer of semi-GCN @cite_4 consists of first-order neighbor sampling, correlation measured by node degree, and an aggregation function.
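For instance, a semi-GCN layer of the kind described above can be sketched in a few lines of numpy. This is a minimal illustration of the propagation rule (first-order neighbors via self-loops, degree-based symmetric normalization, then aggregation), not the authors' code:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).
    A + I adds self-loops (first-order neighbor sampling); the D^{-1/2}
    factors weight each edge by node degree (correlation measurement);
    the matrix products aggregate and transform (information aggregation)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

# 3-node path graph 0-1-2 with identity features
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.eye(3)
W = np.ones((3, 2))
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

Swapping out any of the three functions (e.g., attention-based correlation instead of degree normalization) yields a different architecture, which is exactly the search space GraphNAS explores.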
{ "cite_N": [ "@cite_4" ], "mid": [ "2964015378" ], "abstract": [ "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin." ] }
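The three-function decomposition above (neighbor sampling, correlation measurement, aggregation) can be made concrete with a minimal NumPy sketch of one semi-GCN-style layer: self-loops plus first-order neighbors, symmetric degree-based normalization as the correlation measure, and a matrix product as the aggregation. This is an illustrative toy, not the cited implementation; all names are ours.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One semi-GCN-style layer: add self-loops, normalize by degree
    (D^-1/2 (A+I) D^-1/2), then aggregate neighbor features and apply
    a linear transform (bias and nonlinearity omitted for brevity)."""
    a_hat = adj + np.eye(adj.shape[0])           # first-order neighbors + self
    deg = a_hat.sum(axis=1)                      # node degrees
    d_inv_sqrt = np.diag(deg ** -0.5)
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt   # degree-based correlation
    return norm_adj @ features @ weight          # aggregate, then transform

# toy graph: 3 nodes on a path 0-1-2, one-hot features, identity weights
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = np.eye(3)
w = np.eye(3)
h = gcn_layer(adj, x, w)
```

With identity features and weights, `h` equals the normalized adjacency itself, which makes the degree weighting easy to inspect.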
1904.09981
2940562175
Graph Neural Networks (GNNs) have been popularly used for analyzing non-Euclidean data such as social network data and biological data. Despite their success, the design of graph neural networks requires a lot of manual work and domain knowledge. In this paper, we propose a Graph Neural Architecture Search method (GraphNAS for short) that enables automatic search of the best graph neural architecture based on reinforcement learning. Specifically, GraphNAS first uses a recurrent network to generate variable-length strings that describe the architectures of graph neural networks, and then trains the recurrent network with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation data set. Extensive experimental results on node classification tasks in both transductive and inductive learning settings demonstrate that GraphNAS can achieve consistently better performance on the Cora, Citeseer, Pubmed citation network, and protein-protein interaction network. On node classification tasks, GraphNAS can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy.
. Neural architecture search (NAS) has been widely used to design convolutional architectures for classification tasks that take images and text as input @cite_10 @cite_6 @cite_17 @cite_12 @cite_19 .
{ "cite_N": [ "@cite_6", "@cite_19", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2785366763", "2962716258", "2553303224", "2964081807", "2962847160" ], "abstract": [ "We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. Thanks to parameter sharing between child models, ENAS is fast: it delivers strong empirical performances using much fewer GPU-hours than all existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state-of-the-art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS designs novel architectures that achieve a test error of 2.89 , which is on par with NASNet (, 2018), whose test error is 2.65 .", "We present an approach to automate the process of discovering optimization methods, with a focus on deep learning architectures. We train a Recurrent Neural Network controller to generate a string in a specific domain language that describes a mathematical update equation based on a list of primitive functions, such as the gradient, running average of the gradient, etc. The controller is trained with Reinforcement Learning to maximize the performance of a model after a few epochs. On CIFAR-10, our method discovers several update rules that are better than many commonly used optimizers, such as Adam, RMSProp, or SGD with and without Momentum on a ConvNet model. 
These optimizers can also be transferred to perform well on different neural network architectures, including Google's neural machine translation system.", "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.", "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. 
In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset.", "" ] }
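The controller-with-REINFORCE recipe described in these works (sample an architecture string from a policy, score it on validation data, push the policy toward higher-reward samples) can be sketched on a toy search space. The slot names, the reward function, and the hyperparameters below are hypothetical stand-ins for a real controller and validation-accuracy signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical three-slot search space (names are illustrative only)
choices = [["gcn", "gat", "sage"], ["sum", "mean", "max"], ["relu", "tanh", "id"]]
logits = [np.zeros(len(c)) for c in choices]   # one softmax policy per slot

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_arch():
    """Controller: sample one option per slot from the current policy."""
    return [int(rng.choice(len(c), p=softmax(l)))
            for c, l in zip(choices, logits)]

def toy_reward(idx):
    """Stand-in for validation accuracy of the sampled architecture."""
    return sum(1 for i in idx if i == 0) / len(idx)

baseline, lr = 0.0, 0.2
for _ in range(300):
    idx = sample_arch()
    r = toy_reward(idx)
    baseline = 0.9 * baseline + 0.1 * r        # moving-average baseline
    adv = r - baseline
    for slot, i in enumerate(idx):
        p = softmax(logits[slot])
        grad = -p
        grad[i] += 1.0                          # d log pi(i) / d logits
        logits[slot] += lr * adv * grad         # REINFORCE update
```

Over training, the logits of options that correlate with higher reward tend to rise, which is the mechanism the cited NAS methods scale up with recurrent controllers.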
1904.09853
2937120648
Global Average Pooling (GAP) is used by default in channel-wise attention mechanisms to extract channel descriptors. However, GAP's simple global aggregation tends to make the channel descriptors homogeneous, which weakens the detail distinction between feature maps and thus degrades the performance of the attention mechanism. In this work, we propose a novel method for channel-wise attention networks, called Stochastic Region Pooling (SRP), which makes the channel descriptors more representative and diverse by encouraging the feature map to have more or wider important feature responses. SRP is a general method for attention mechanisms that adds no parameters or computation, and it can be widely applied to attention networks without modifying the network structure. Experimental results on image recognition datasets including CIFAR-10/100, ImageNet and three fine-grained datasets (CUB-200-2011, Stanford Cars and Stanford Dogs) show that SRP brings significant performance improvements over efficient CNNs and achieves state-of-the-art results.
. @math A common way to obtain high-quality feature maps is to find efficient network structures, such as @cite_17 @cite_36 @cite_22 @cite_48 @cite_23 , to extract more and better features. However, the feature maps learned by these networks are still not diverse enough. Another way is regularization. Some channel regularizations maintain high-quality channels by removing or retraining inefficient ones @cite_11 @cite_31 @cite_52 @cite_7 . The methods of @cite_11 @cite_31 change the network structure, and for @cite_52 @cite_7 we have found no evidence that they are suitable for channel-wise attention networks. Other regularizations such as dropout @cite_53 , drop-path @cite_40 , DropBlock @cite_49 , and cutout @cite_20 enhance feature robustness by introducing randomness. Our method is closely related to DropBlock @cite_49 , which drops spatially correlated information to encourage the network to reconstruct important features from their surroundings. However, our method aims to solve the problem that the descriptor in a channel-wise attention network carries little detailed information about the feature map, e.g., by encouraging the feature map to have more or wider important feature responses.
{ "cite_N": [ "@cite_11", "@cite_22", "@cite_7", "@cite_36", "@cite_48", "@cite_53", "@cite_52", "@cite_40", "@cite_23", "@cite_49", "@cite_31", "@cite_20", "@cite_17" ], "mid": [ "2963363373", "", "2905016238", "", "2511730936", "2095705004", "2896006880", "2963975324", "2963977677", "2890166761", "2479109623", "2746314669", "2962835968" ], "abstract": [ "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhance the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3 increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4 , 1.0 accuracy loss under 2× speedup respectively, which is significant.", "", "", "", "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. 
DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL .", "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources. 
In this paper, we reduce this cost by exploiting the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time. FBS introduces small auxiliary connections to existing convolutional layers. In contrast to channel pruning methods which permanently remove channels, it preserves the full network structures and accelerates convolution by dynamically skipping unimportant input and output channels. FBS-augmented networks are trained with conventional stochastic gradient descent, making it readily available for many state-of-the-art CNNs. We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification. Experiments show that FBS can respectively provide @math and @math savings in compute on VGG-16 and ResNet-18, both with less than @math top-5 accuracy loss.", "We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. 
We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.", "Improving information flow in deep networks helps to ease the training difficulties and utilize parameters more efficiently. Here we propose a new convolutional neural network architecture with alternately updated clique (CliqueNet). In contrast to prior networks, there are both forward and backward connections between any two layers in the same block. The layers are constructed as a loop and are updated alternately. The CliqueNet has some unique properties. For each layer, it is both the input and output of any other layer in the same block, so that the information flow among layers is maximized. During propagation, the newly updated layers are concatenated to re-update previously updated layer, and parameters are reused for multiple times. This recurrent feedback structure is able to bring higher level visual information back to refine low-level filters and achieve spatial attention. We analyze the features generated at different stages and observe that using refined features leads to a better result. We adopt a multiscale feature strategy that effectively avoids the progressive growth of parameters. Experiments on image recognition datasets including CIFAR-10, CIFAR-100, SVHN and ImageNet show that our proposed models achieve the state-of-the-art performance with fewer parameters1.", "Deep neural networks often work well when they are over-parameterized and trained with a massive amount of noise and regularization, such as weight decay and dropout. 
Although dropout is widely used as a regularization technique for fully connected layers, it is often less effective for convolutional layers. This lack of success of dropout for convolutional layers is perhaps due to the fact that neurons in a contiguous region in convolutional layers are strongly correlated so information can still flow through convolutional networks despite dropout. Thus a structured form of dropout is needed to regularize convolutional networks. In this paper, we introduce DropBlock, a form of structured dropout, where neurons in a contiguous region of a feature map are dropped together. Extensive experiments show that DropBlock works much better than dropout in regularizing convolutional networks. On ImageNet, DropBlock with ResNet-50 architecture achieves 77.65 accuracy, which is more than 1 improvement on the previous result of this architecture.", "Recognizing fine-grained sub-categories such as birds and dogs is extremely challenging due to the highly localized and subtle dierences in some specific parts. Most previous works rely on object part level annotations to build part-based representation, which is demanding in practical applications. This paper proposes an automatic finegrained recognition approach which is free of any object part annotation at both training and testing stages. Our method explores a unified framework based on two steps of deep filter response picking. The first picking step is to find distinctive filters which respond to specific patterns significantly and consistently, and learn a set of part detectors via iteratively alternating between new positive sample mining and part model retraining. The second picking step is to pool deep filter responses via spatially weighted combination of Fisher Vectors. We conditionally pick deep filter responses to encode them into the final representation, which considers the importance of filter responses themselves. 
Integrating all these techniques produces a much more powerful framework, and experiments conducted on CUB-200- 2011 and Stanford Dogs demonstrate the superiority of our proposed algorithm over the existing methods.", "Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR-10, CIFAR-100, and SVHN datasets, yielding new state-of-the-art results of 2.56 , 15.20 , and 1.30 test error respectively. Code is available at this https URL", "Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. 
We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision." ] }
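The key difference between unit-wise dropout and DropBlock discussed above, dropping a contiguous region so the signal cannot be recovered from immediate neighbors, can be sketched in a few lines. This is a simplified toy of the block-mask idea (the seed-rate formula and rescaling are our simplifications, not the paper's exact scheme).

```python
import numpy as np

def dropblock(feature_map, block_size=3, drop_prob=0.1, rng=None):
    """Simplified DropBlock-style sketch: zero out contiguous square
    regions of a 2-D feature map, then rescale surviving activations
    to preserve the expected magnitude."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = feature_map.shape
    gamma = drop_prob / (block_size ** 2)     # seed-drop rate (simplified)
    seeds = rng.random((h, w)) < gamma
    mask = np.ones((h, w))
    half = block_size // 2
    for i, j in zip(*np.nonzero(seeds)):      # expand each seed to a block
        mask[max(0, i - half):i + half + 1,
             max(0, j - half):j + half + 1] = 0.0
    kept = mask.mean()
    return feature_map * mask / max(kept, 1e-8), mask

x = np.ones((8, 8))
y, mask = dropblock(x, block_size=3, drop_prob=0.2,
                    rng=np.random.default_rng(1))
```

Because whole neighborhoods are zeroed together, nearby correlated activations cannot carry the dropped information through, which is the structured behavior the passage contrasts with plain dropout.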
1904.09853
2937120648
Global Average Pooling (GAP) is used by default in channel-wise attention mechanisms to extract channel descriptors. However, GAP's simple global aggregation tends to make the channel descriptors homogeneous, which weakens the detail distinction between feature maps and thus degrades the performance of the attention mechanism. In this work, we propose a novel method for channel-wise attention networks, called Stochastic Region Pooling (SRP), which makes the channel descriptors more representative and diverse by encouraging the feature map to have more or wider important feature responses. SRP is a general method for attention mechanisms that adds no parameters or computation, and it can be widely applied to attention networks without modifying the network structure. Experimental results on image recognition datasets including CIFAR-10/100, ImageNet and three fine-grained datasets (CUB-200-2011, Stanford Cars and Stanford Dogs) show that SRP brings significant performance improvements over efficient CNNs and achieves state-of-the-art results.
. @math Channel-wise attention mechanisms have developed rapidly in recent years @cite_42 @cite_4 @cite_6 @cite_14 @cite_27 @cite_51 , and channel descriptors are crucial to them. To extract more representative descriptors, CBAM @cite_51 combines the outputs of global max pooling and global average pooling, and GENet @cite_27 replaces GAP with a depth-wise convolution with a large kernel. However, the global max pooling in CBAM is prone to overfitting @cite_21 , and CBAM also cannot enrich the channel descriptor with more spatial details. Besides, GENet introduces many additional parameters and much extra computation. Different from them, SRP is a training method that encourages the descriptors to carry more information about feature-map details. SRP uses GAP instead of the above methods to extract the descriptor not only because GAP is simple and introduces no additional parameters, but also because GAP is widely used in attention networks. This enables SRP to be conveniently applied to these networks without modifying the network structure.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_42", "@cite_21", "@cite_6", "@cite_27", "@cite_51" ], "mid": [ "2963420686", "2550553598", "2756464018", "1907282891", "2962926870", "2963984455", "2884585870" ], "abstract": [ "Convolutional neural networks are built upon the convolution operation, which extracts informative features by fusing spatial and channel-wise information together within local receptive fields. In order to boost the representational power of a network, several recent approaches have shown the benefit of enhancing spatial encoding. In this work, we focus on the channel relationship and propose a novel architectural unit, which we term the \"Squeeze-and-Excitation\" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We demonstrate that by stacking these blocks together, we can construct SENet architectures that generalise extremely well across challenging datasets. Crucially, we find that SE blocks produce significant performance improvements for existing state-of-the-art deep architectures at minimal additional computational cost. SENets formed the foundation of our ILSVRC 2017 classification submission which won first place and significantly reduced the top-5 error to 2.251 , achieving a 25 relative improvement over the winning entry of 2016. Code and models are available at https: github.com hujie-frank SENet.", "Visual attention has been successfully applied in structural prediction tasks such as visual captioning and question answering. Existing visual attention models are generally spatial, i.e., the attention is modeled as spatial probabilities that re-weight the last conv-layer feature map of a CNN encoding an input image. 
However, we argue that such spatial attention does not necessarily conform to the attention mechanism — a dynamic feature extractor that combines contextual fixations over time, as CNN features are naturally spatial, channel-wise and multi-layer. In this paper, we introduce a novel convolutional neural network dubbed SCA-CNN that incorporates Spatial and Channel-wise Attentions in a CNN. In the task of image captioning, SCA-CNN dynamically modulates the sentence generation context in multi-layer feature maps, encoding where (i.e., attentive spatial locations at multiple layers) and what (i.e., attentive channels) the visual attention is. We evaluate the proposed SCA-CNN architecture on three benchmark image captioning datasets: Flickr8K, Flickr30K, and MSCOCO. It is consistently observed that SCA-CNN significantly outperforms state-of-the-art visual attention-based image captioning methods.", "Visual saliency analysis detects salient regions objects that attract human attention in natural scenes. It has attracted intensive research in different fields such as computer vision, computer graphics, and multimedia. While many such computational models exist, the focused study of what and how applications can be beneficial is still lacking. In this article, our ultimate goal is thus to provide a comprehensive review of the applications using saliency cues, the so-called attentive systems. We would like to provide a broad vision about saliency applications and what visual saliency can do. We categorize the vast amount of applications into different areas such as computer vision, computer graphics, and multimedia. Intensively covering 200+ publications we survey (1) key application trends, (2) the role of visual saliency, and (3) the usability of saliency into different tasks.", "We introduce a simple and effective method for regularizing large convolutional neural networks. 
We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation.", "Existing person re-identification (re-id) methods either assume the availability of well-aligned person bounding box images as model input or rely on constrained attention selection mechanisms to calibrate misaligned images. They are therefore sub-optimal for re-id matching in arbitrarily aligned person images potentially with large human pose variations and unconstrained auto-detection errors. In this work, we show the advantages of jointly learning attention selection and feature representation in a Convolutional Neural Network (CNN) by maximising the complementary information of different levels of visual attention subject to re-id discriminative learning constraints. Specifically, we formulate a novel Harmonious Attention CNN (HA-CNN) model for joint learning of soft pixel attention and hard regional attention along with simultaneous optimisation of feature representations, dedicated to optimise person re-id in uncontrolled (misaligned) images. Extensive comparative evaluations validate the superiority of this new HA-CNN model for person re-id over a wide variety of state-of-the-art methods on three large-scale benchmarks including CUHK03, Market-1501, and DukeMTMC-ReID.", "While the use of bottom-up local operators in convolutional neural networks (CNNs) matches well some of the statistics of natural images, it may also prevent such models from capturing contextual long-range feature interactions. 
In this work, we propose a simple, lightweight approach for better context exploitation in CNNs. We do so by introducing a pair of operators: gather, which efficiently aggregates feature responses from a large spatial extent, and excite, which redistributes the pooled information to local features. The operators are cheap, both in terms of number of added parameters and computational complexity, and can be integrated directly in existing architectures to improve their performance. Experiments on several datasets show that gather-excite can bring benefits comparable to increasing the depth of a CNN at a fraction of the cost. For example, we find ResNet-50 with gather-excite operators is able to outperform its 101-layer counterpart on ImageNet with no additional learnable parameters. We also propose a parametric gather-excite operator pair which yields further performance gains, relate it to the recently-introduced Squeeze-and-Excitation Networks, and analyse the effects of these changes to the CNN feature activation statistics.", "We propose Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks. Given an intermediate feature map, our module sequentially infers attention maps along two separate dimensions, channel and spatial, then the attention maps are multiplied to the input feature map for adaptive feature refinement. Because CBAM is a lightweight and general module, it can be integrated into any CNN architectures seamlessly with negligible overheads and is end-to-end trainable along with base CNNs. We validate our CBAM through extensive experiments on ImageNet-1K, MS COCO detection, and VOC 2007 detection datasets. Our experiments show consistent improvements in classification and detection performances with various models, demonstrating the wide applicability of CBAM. The code and models will be publicly available." ] }
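How a GAP channel descriptor feeds a channel-wise attention gate can be sketched with a squeeze-and-excitation-style block: GAP squeezes each channel to one scalar, a small bottleneck MLP with a sigmoid produces per-channel gates, and the input channels are rescaled by those gates. The weight shapes and reduction ratio below are arbitrary illustrations, not any cited network's configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(feature_maps, w1, w2):
    """Minimal squeeze-and-excitation sketch: GAP -> bottleneck MLP
    -> sigmoid gates -> channel-wise rescaling of the input."""
    desc = feature_maps.mean(axis=(1, 2))              # GAP descriptors, one per channel
    gates = sigmoid(w2 @ np.maximum(w1 @ desc, 0.0))   # excite: ReLU bottleneck + sigmoid
    return feature_maps * gates[:, None, None], gates  # rescale channels

rng = np.random.default_rng(0)
x = rng.random((4, 5, 5))          # 4 channels of 5x5 feature maps
w1 = rng.standard_normal((2, 4))   # reduction ratio 2 (illustrative)
w2 = rng.standard_normal((4, 2))
y, gates = se_block(x, w1, w2)
```

Because the whole attention signal flows through the GAP scalars `desc`, any homogeneity in those descriptors limits the gates, which is exactly the bottleneck the passage says SRP targets during training.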
1904.09936
2939519298
Localizing moments in untrimmed videos via language queries is a new and interesting task that requires the ability to accurately ground language into video. Previous works have approached this task by processing the entire video, often more than once, to localize relevant activities. In the real world applications that this task lends itself to, such as surveillance, efficiency is a pivotal trait of a system. In this paper, we present TripNet, an end-to-end system that uses a gated attention architecture to model fine-grained textual and visual representations in order to align text and video content. Furthermore, TripNet uses reinforcement learning to efficiently localize relevant activity clips in long videos, by learning how to intelligently skip around the video. It extracts visual features for fewer frames to perform activity classification. In our evaluation over Charades-STA, ActivityNet Captions and the TACoS dataset, we find that TripNet achieves high accuracy and saves processing time by only looking at 32-41% of the entire video.
In reinforcement learning, an agent learns how to act through trial-and-error interactions with an environment. Among the many reinforcement learning approaches, we choose a model-free actor-critic framework because it estimates both a value function, for being in a certain state, and a policy function, which maps the state directly to an action. Named for their roles, the value estimate is called the critic and is used to update the policy, which is called the actor. We specifically use the Asynchronous Advantage Actor-Critic method (A3C) @cite_36 , which deploys multiple workers in parallel, each with its own network parameters. At the end of each episode the workers update a global set of parameters.
{ "cite_N": [ "@cite_36" ], "mid": [ "2964043796" ], "abstract": [ "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input." ] }
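The actor-critic idea behind A3C — a critic's value estimate yielding an advantage that updates a softmax policy — can be shown with a toy, single-worker tabular sketch. The two-state reward table, learning rates, and iteration count below are invented for illustration, and A3C's asynchronous multi-worker machinery is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step advantage actor-critic on a 2-state, 2-action bandit-like task.
n_states, n_actions = 2, 2
V = np.zeros(n_states)                    # critic: state-value estimates
logits = np.zeros((n_states, n_actions))  # actor: softmax policy parameters

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical reward table: action 1 is better in both states.
R = np.array([[0.0, 1.0], [0.0, 1.0]])
alpha_v, alpha_pi = 0.1, 0.5

for _ in range(500):
    s = rng.integers(n_states)
    pi = softmax(logits[s])
    a = rng.choice(n_actions, p=pi)
    r = R[s, a]
    advantage = r - V[s]                  # one-step TD error from the critic
    V[s] += alpha_v * advantage           # critic update
    grad = -pi
    grad[a] += 1.0                        # d log pi(a|s) / d logits
    logits[s] += alpha_pi * advantage * grad  # actor update
```

After training, the policy should strongly prefer the rewarded action in both states; A3C runs many such learners in parallel and merges their gradients into shared parameters.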
1904.09939
2949402734
Facial action unit (AU) recognition is a crucial task for facial expressions analysis and has attracted extensive attention in the field of artificial intelligence and computer vision. Existing works have either focused on designing or learning complex regional feature representations, or delved into various types of AU relationship modeling. Albeit with varying degrees of progress, it is still arduous for existing methods to handle complex situations. In this paper, we investigate how to integrate the semantic relationship propagation between AUs in a deep neural network framework to enhance the feature representation of facial regions, and propose an AU semantic relationship embedded representation learning (SRERL) framework. Specifically, by analyzing the symbiosis and mutual exclusion of AUs in various facial expressions, we organize the facial AUs in the form of structured knowledge-graph and integrate a Gated Graph Neural Network (GGNN) in a multi-scale CNN framework to propagate node information through the graph for generating enhanced AU representation. As the learned feature involves both the appearance characteristics and the AU relationship reasoning, the proposed model is more robust and can cope with more challenging cases, e.g., illumination change and partial occlusion. Extensive experiments on the two public benchmarks demonstrate that our method outperforms the previous work and achieves state of the art performance.
Traditional feature-representation-based AU recognition methods focus on the design of more discriminative hand-crafted features. For instance, @cite_2 proposed to analyze the temporal behavior of action units in video and used individual feature GentleBoost templates built from Gabor wavelet features for SVM-based AU classification. @cite_24 designed a facial AU intensity estimation and occurrence detection system based on the fusion of appearance and geometry features. However, these hand-crafted low-level feature based algorithms are relatively fragile and struggle to cope with various types of complex situations. In recent years, deep convolutional neural networks have been widely used in a variety of computer vision tasks and have achieved unprecedented progress @cite_1 @cite_8 @cite_0 @cite_14 . There are also attempts to apply deep CNNs to facial AU recognition @cite_12 @cite_11 @cite_23 @cite_4 . @cite_12 proposed a unified architecture for facial AU detection which incorporates deep regional feature learning and multi-label learning modules. @cite_11 proposed an ROI framework for AU detection by cropping CNN feature maps with facial landmark information. However, none of these methods explicitly takes into consideration the linkage relationships between different AUs.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_8", "@cite_1", "@cite_24", "@cite_0", "@cite_23", "@cite_2", "@cite_12", "@cite_11" ], "mid": [ "2894961485", "2483843242", "2963951674", "2194775991", "1628791547", "2605929543", "2726020179", "2101965618", "2421475762", "2612215259" ], "abstract": [ "Facial landmark localization plays a critical role in face recognition and analysis. In this paper, we propose a novel cascaded Backbone-Branches Fully Convolutional Neural Network (BB-FCN) for rapidly and accurately localizing facial landmarks in unconstrained and cluttered settings. Our proposed BB-FCN generates facial landmark response maps directly from raw images without any pre-processing. It follows a coarse-to-fine cascaded pipeline, which consists of a backbone network for roughly detecting the locations of all facial landmarks and one branch network for each type of detected landmarks for further refining their locations. Extensive experimental evaluations demonstrate that our proposed BB-FCN can significantly outperform the state of the art under both constrained (i.e. within detected facial regions only) and unconstrained settings.", "Facial action units (AUs) are essential to decode human facial expressions. Researchers have focused on training AU detectors with a variety of features and classifiers. However, several issues remain. These are spatial representation, temporal modeling, and AU correlation. Unlike most studies that tackle these issues separately, we propose a hybrid network architecture to jointly address them. Specifically, spatial representations are extracted by a Convolutional Neural Network (CNN), which, as analyzed in this paper, is able to reduce person-specific biases caused by hand-crafted features (eg, SIFT and Gabor). To model temporal dependencies, Long Short-Term Memory (LSTMs) are stacked on top of these representations, regardless of the lengths of input videos. 
The outputs of CNNs and LSTMs are further aggregated into a fusion network to produce per-frame predictions of 12 AUs. Our network naturally addresses the three issues, and leads to superior performance compared to existing methods that consider these issues independently. Extensive experiments were conducted on two large spontaneous datasets, GFT and BP4D, containing more than 400,000 frames coded with 12 AUs. On both datasets, we report significant improvement over a standard multi-label CNN and feature-based state-of-the-art. Finally, we provide visualization of the learned AU models, which, to our best knowledge, reveal how machines see facial AUs for the first time.", "Deep convolutional neural networks (CNNs) have become a key element in the recent breakthrough of salient object detection. However, existing CNN-based methods are based on either patchwise (regionwise) training and inference or fully convolutional networks. Methods in the former category are generally time-consuming due to severe storage and computational redundancies among overlapping patches. To overcome this deficiency, methods in the second category attempt to directly map a raw input image to a predicted dense saliency map in a single network forward pass. Though being very efficient, it is arduous for these methods to detect salient objects of different scales or salient regions with weak semantic information. In this paper, we develop hybrid contrast-oriented deep neural networks to overcome the aforementioned limitations. Each of our deep networks is composed of two complementary components, including a fully convolutional stream for dense prediction and a segment-level spatial pooling stream for sparse saliency inference. We further propose an attentional module that learns weight maps for fusing the two saliency predictions from these two streams. A tailored alternate scheme is designed to train these deep networks by fine-tuning pretrained baseline models. 
Finally, a customized fully connected conditional random field model incorporating a salient contour feature embedding can be optionally applied as a postprocessing step to improve spatial coherence and contour positioning in the fused result from these two streams. Extensive experiments on six benchmark data sets demonstrate that our proposed model can significantly outperform the state of the art in terms of all popular evaluation metrics.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Automatic detection of Facial Action Units (AUs) is crucial for facial analysis systems.
Due to the large individual differences, performance of AU classifiers depends largely on training data and the ability to estimate facial expressions of a neutral face. In this paper, we present a real-time Facial Action Unit intensity estimation and occurrence detection system based on appearance (Histograms of Oriented Gradients) and geometry features (shape parameters and landmark locations). Our experiments show the benefits of using additional labelled data from different datasets, which demonstrates the generalisability of our approach. This holds both when training for a specific dataset or when a generic model is needed. We also demonstrate the benefits of using a simple and efficient median based feature normalisation technique that accounts for person-specific neutral expressions. Finally, we show that our results outperform the FERA 2015 baselines in all three challenge tasks - AU occurrence detection, fully automatic AU intensity and pre-segmented AU intensity estimation.", "Image saliency detection has recently witnessed rapid progress due to deep convolutional neural networks. However, none of the existing methods is able to identify object instances in the detected salient regions. In this paper, we present a salient instance segmentation method that produces a saliency mask with distinct object instance labels for an input image. Our method consists of three steps, estimating saliency map, detecting salient object contours and identifying salient object instances. For the first two steps, we propose a multiscale saliency refinement network, which generates high-quality salient region masks and salient object contours. Once integrated with multiscale combinatorial grouping and a MAP-based subset optimization framework, our method can generate very promising salient object instance segmentation results. 
To promote further research and evaluation of salient instance segmentation, we also construct a new database of 1000 images and their pixelwise salient instance annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks for salient region detection as well as on our new dataset for salient instance segmentation.", "The automatic detection of the activation of facial muscles, i.e. the detection of the so called facial Action Units (AUs), has received significant attention due to the application of facial expression analysis/recognition in areas such as affect recognition or behavior analysis. However, the recognition of subtle expressions is a challenging task that requires a multimodal approach where several sources of information are used. In this paper, we follow such an approach and propose a novel Deep Learning architecture that fuses information from several specialized Deep Neural Networks (DNNs) each of which models a different aspect of the problem in question. At the core of our approach is a novel dynamic adaptation of the Deep Network cost function so as to deal with the data imbalances that are inherent in multilabel classification problems - this allows cross-database training. We show the benefits of the proposed training approach and how different architectures are more suitable for particular AUs. Extensive experimental results show that our multi-modal approach outperform the state of the art by a considerable margin.", "In this work we report on the progress of building a system that enables fully automated fast and robust facial expression recognition from face video. We analyse subtle changes in facial expression by recognizing facial muscle action units (AUs) and analysing their temporal behavior. By detecting AUs from face video we enable the analysis of various facial communicative signals including facial expressions of emotion, attitude and mood.
For an input video picturing a facial expression we detect per frame whether any of 15 different AUs is activated, whether that facial action is in the onset, apex, or offset phase, and what the total duration of the activation in question is. We base this process upon a set of spatio-temporal features calculated from tracking data for 20 facial fiducial points. To detect these 20 points of interest in the first frame of an input face video, we utilize a fully automatic, facial point localization method that uses individual feature GentleBoost templates built from Gabor wavelet features. Then, we exploit a particle filtering scheme that uses factorized likelihoods and a novel observation model that combines a rigid and a morphological model to track the facial points. The AUs displayed in the input video and their temporal segments are recognized finally by Support Vector Machines trained on a subset of most informative spatio-temporal features selected by AdaBoost. For Cohn-Kanade and MMI databases, the proposed system classifies 15 AUs occurring alone or in combination with other AUs with a mean agreement rate of 90.2% with human FACS coders.
Our region layer serves as an alternative design between locally connected layers (i.e., confined kernels to individual pixels) and conventional convolution layers (i.e., shared kernels across an entire image). Unlike previous studies that solve RL and ML alternately, DRML by construction addresses both problems, allowing the two seemingly irrelevant problems to interact more directly. The complete network is end-to-end trainable, and automatically learns representations robust to variations inherent within a local region. Experiments on BP4D and DISFA benchmarks show that DRML performs the highest average F1-score and AUC within and across datasets in comparison with alternative methods.", "Action Unit (AU) detection becomes essential for facial analysis. Many proposed approaches face challenging problems in dealing with the alignments of different face regions, in the effective fusion of temporal information, and in training a model for multiple AU labels. To better address these problems, we propose a deep learning framework for AU detection with region of interest (ROI) adaptation, integrated multi-label learning, and optimal LSTM-based temporal fusing. First, an ROI cropping net is designed to make sure specific interested regions of faces are learned independently, each sub-region has a local convolutional neural network (CNN) whose convolutional filters will only be trained for the corresponding region. Second, multi-label learning is employed to integrate the outputs of those individual ROI cropping nets, which learns the inter-relationships of various AUs and acquires global features across sub-regions for AU detection. Finally, the optimal selection of multiple LSTM layers are carried out to best fuse temporal features, in order to make the AU prediction the most accurate. 
The proposed approach is evaluated on two popular AU detection datasets, BP4D and DISFA, outperforming the state of the art significantly, with an average improvement of around 13% in BP4D and 25% in DISFA, respectively." ] }
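The landmark-guided ROI cropping that the ROI framework above applies to CNN feature maps can be pictured with a minimal helper. The function `roi_crop`, the patch size, and the coordinate handling below are hypothetical simplifications; real systems scale landmark coordinates from image space to feature-map resolution and learn per-region convolutions on each patch afterwards.

```python
import numpy as np

def roi_crop(fmap, center, size=3):
    """Crop a size x size region of a feature map around a landmark.

    fmap: (C, H, W) feature map; center: (row, col) in feature-map coordinates.
    The crop window is clipped so it always lies fully inside the map.
    (A hypothetical helper illustrating ROI-based AU feature extraction.)
    """
    C, H, W = fmap.shape
    r0 = int(np.clip(center[0] - size // 2, 0, H - size))
    c0 = int(np.clip(center[1] - size // 2, 0, W - size))
    return fmap[:, r0:r0 + size, c0:c0 + size]

fmap = np.arange(2 * 6 * 6, dtype=float).reshape(2, 6, 6)
patch = roi_crop(fmap, (1, 5))   # landmark near the top-right corner; window is clipped
assert patch.shape == (2, 3, 3)
```

Each AU-specific landmark yields one such patch, so subsequent layers see a small, anatomically aligned region instead of the whole face.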
1904.09939
2949402734
Facial action unit (AU) recognition is a crucial task for facial expressions analysis and has attracted extensive attention in the field of artificial intelligence and computer vision. Existing works have either focused on designing or learning complex regional feature representations, or delved into various types of AU relationship modeling. Albeit with varying degrees of progress, it is still arduous for existing methods to handle complex situations. In this paper, we investigate how to integrate the semantic relationship propagation between AUs in a deep neural network framework to enhance the feature representation of facial regions, and propose an AU semantic relationship embedded representation learning (SRERL) framework. Specifically, by analyzing the symbiosis and mutual exclusion of AUs in various facial expressions, we organize the facial AUs in the form of structured knowledge-graph and integrate a Gated Graph Neural Network (GGNN) in a multi-scale CNN framework to propagate node information through the graph for generating enhanced AU representation. As the learned feature involves both the appearance characteristics and the AU relationship reasoning, the proposed model is more robust and can cope with more challenging cases, e.g., illumination change and partial occlusion. Extensive experiments on the two public benchmarks demonstrate that our method outperforms the previous work and achieves state of the art performance.
Considering the linkage effect of facial expressions on various AUs and the anatomical attributes of facial regions, a number of works rely on action-unit relationship modeling to improve recognition accuracy. @cite_18 proposed a dynamic Bayesian network (DBN) to model relationships between AUs. @cite_7 further developed a three-layer Restricted Boltzmann Machine (RBM) to exploit the global relationships between AUs. However, these early works model the AU relations only from the target labels, independently of the feature representation. @cite_10 proposed a 4-layer RBM to simultaneously capture both feature-level and label-level dependencies for AU detection. As it is built on handcrafted low-level features, the whole framework cannot be trained end-to-end, which greatly restricts its efficiency and performance. Recently, @cite_13 proposed a deep structured inference network (DSIN) for AU recognition, which uses deep learning to extract image features and structure inference to capture AU relations by explicitly passing information between predictions. However, the relationship inference part of DSIN also works as a label-level post-processing step and is isolated from the feature representation.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_10", "@cite_7" ], "mid": [ "2792605498", "2108445559", "2609450230", "2138648333" ], "abstract": [ "Facial expressions are combinations of basic components called Action Units (AU). Recognizing AUs is key for developing general facial expression analysis. In recent years, most efforts in automatic AU recognition have been dedicated to learning combinations of local features and to exploiting correlations between Action Units. In this paper, we propose a deep neural architecture that tackles both problems by combining learned local and global features in its initial stages and replicating a message passing algorithm between classes similar to a graphical model inference approach in later stages. We show that by training the model end-to-end with increased supervision we improve state-of-the-art by 5.3% and 8.2% performance on BP4D and DISFA datasets, respectively.", "A system that could automatically analyze the facial actions in real time has applications in a wide range of different fields. However, developing such a system is always challenging due to the richness, ambiguity, and dynamic nature of facial actions. Although a number of research groups attempt to recognize facial action units (AUs) by improving either the facial feature extraction techniques or the AU classification techniques, these methods often recognize AUs or certain AU combinations individually and statically, ignoring the semantic relationships among AUs and the dynamics of AUs. Hence, these approaches cannot always recognize AUs reliably, robustly, and consistently. In this paper, we propose a novel approach that systematically accounts for the relationships among AUs and their temporal evolutions for AU recognition. Specifically, we use a dynamic Bayesian network (DBN) to model the relationships among different AUs.
The DBN provides a coherent and unified hierarchical probabilistic framework to represent probabilistic relationships among various AUs and to account for the temporal changes in facial action development. Within our system, robust computer vision techniques are used to obtain AU measurements. Such AU measurements are then applied as evidence to the DBN for inferring various AUs. The experiments show that the integration of AU relationships and AU dynamics with AU measurements yields significant improvement of AU recognition, especially for spontaneous facial expressions and under more realistic environment including illumination variation, face pose variation, and occlusion.", "Although both feature dependencies and label dependencies are crucial for facial action unit (AU) recognition, little work addresses them simultaneously till now. To address this limitation, we propose a 4-layer Restricted Boltzmann Machine (RBM) to simultaneously capture feature level and AU level dependencies to recognize multiple AUs. Specifically, the bottom two layers of the RBM model capture dependencies among image features, while the top two layers capture the high order dependencies among AU labels. An efficient learning algorithm is introduced to jointly learn all layers to leverage the interactions among different layers. Experiments on two benchmark databases demonstrate the effectiveness of the proposed approach in modelling complex AU relationships from both features and labels jointly, and its improved performance over the existing methods.", "In this paper we tackle the problem of facial action unit (AU) recognition by exploiting the complex semantic relationships among AUs, which carry crucial top-down information yet have not been thoroughly exploited. Towards this goal, we build a hierarchical model that combines the bottom-level image features and the top-level AU relationships to jointly recognize AUs in a principled manner.
The proposed model has two major advantages over existing methods. 1) Unlike methods that can only capture local pair-wise AU dependencies, our model is developed upon the restricted Boltzmann machine and therefore can exploit the global relationships among AUs. 2) Although AU relationships are influenced by many related factors such as facial expressions, these factors are generally ignored by the current methods. Our model, however, can successfully capture them to more accurately characterize the AU relationships. Efficient learning and inference algorithms of the proposed model are also developed. Experimental results on benchmark databases demonstrate the effectiveness of the proposed approach in modelling complex AU relationships as well as its superior AU recognition performance over existing approaches." ] }
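The GGNN-style propagation that SRERL runs over the AU knowledge graph reduces to repeated neighbor aggregation plus a gated node update. The sketch below keeps only the aggregation step and substitutes a tanh update for the GRU gate, so it should be read as an illustration of message passing over an AU relation graph, not the paper's exact operator; the adjacency matrix and shapes are invented for the example.

```python
import numpy as np

def propagate(h, A, W, steps=2):
    """Simplified graph message passing over an AU relation graph.

    h: (N, D) node features (one row per AU); A: (N, N) adjacency from the
    knowledge graph; W: (D, D) shared message transform. A real GGNN would
    apply a gated (GRU) update; a tanh update keeps the idea visible.
    """
    for _ in range(steps):
        m = A @ h @ W          # aggregate transformed messages from related AUs
        h = np.tanh(h + m)     # simplified (ungated) node update
    return h

# Toy 3-node graph: node 1 co-occurs with nodes 0 and 2 (e.g. AU pairs that
# tend to fire together in the same expression).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
h = np.random.default_rng(1).normal(size=(3, 4))
W = 0.1 * np.eye(4)
out = propagate(h, A, W)
assert out.shape == (3, 4)
```

After a few steps, each AU's representation mixes in evidence from its related AUs, which is the mechanism SRERL uses to make individual AU features relation-aware.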
1904.09882
2937778953
The body pose of a person wearing a camera is of great interest for applications in augmented reality, healthcare, and robotics, yet much of the person's body is out of view for a typical wearable camera. We propose a learning-based approach to estimate the camera wearer's 3D body pose from egocentric video sequences. Our key insight is to leverage interactions with another person---whose body pose we can directly observe---as a signal inherently linked to the body pose of the first-person subject. We show that since interactions between individuals often induce a well-ordered series of back-and-forth responses, it is possible to learn a temporal model of the interlinked poses even though one party is largely out of view. We demonstrate our idea on a variety of domains with dyadic interaction and show the substantial impact on egocentric body pose estimation, which improves the state of the art. Video results are available at this http URL
Recent egocentric vision work focuses primarily on recognizing objects @cite_26 , activities @cite_19 @cite_21 @cite_4 @cite_13 @cite_45 @cite_59 @cite_22 @cite_46 @cite_61 , visible hand and arm poses @cite_24 @cite_47 @cite_35 @cite_37 , eye gaze @cite_56 , or anticipating future camera trajectories @cite_52 @cite_33 . In contrast, we explore 3D pose estimation for the camera wearer's full body, and unlike any of the above methods, we show that the inferred body pose of another individual during an interaction directly benefits the pose estimates.
{ "cite_N": [ "@cite_61", "@cite_35", "@cite_47", "@cite_26", "@cite_4", "@cite_22", "@cite_37", "@cite_33", "@cite_21", "@cite_52", "@cite_56", "@cite_24", "@cite_19", "@cite_45", "@cite_59", "@cite_46", "@cite_13" ], "mid": [ "2461911683", "", "1970629627", "2198667788", "2742737904", "1940585053", "1906662973", "2963675021", "2149276562", "2443846596", "2136668269", "2204609240", "2165605600", "854053868", "2558630670", "2167626157", "2162762857" ], "abstract": [ "We aim to understand the dynamics of social interactions between two people by recognizing their actions and reactions using a head-mounted camera. Our work will impact several first-person vision tasks that need the detailed understanding of social interactions, such as automatic video summarization of group events and assistive systems. To recognize micro-level actions and reactions, such as slight shifts in attention, subtle nodding, or small hand actions, where only subtle body motion is apparent, we propose to use paired egocentric videos recorded by two interacting people. We show that the first-person and second-person points-of-view features of two people, enabled by paired egocentric videos, are complementary and essential for reliably recognizing micro-actions and reactions. We also build a new dataset of dyadic (two-persons) interactions that comprises more than 1000 pairs of egocentric videos to enable systematic evaluations on the task of micro-action and reaction recognition.", "", "Egocentric cameras can be used to benefit such tasks as analyzing fine motor skills, recognizing gestures and learning about hand-object manipulation. To enable such technology, we believe that the hands must be detected on the pixel-level to gain important information about the shape of the hands and fingers. We show that the problem of pixel-wise hand detection can be effectively solved, by posing the problem as a model recommendation task.
As such, the goal of a recommendation system is to recommend the n-best hand detectors based on the probe set - a small amount of labeled data from the test distribution. This requirement of a probe set is a serious limitation in many applications, such as ego-centric hand detection, where the test distribution may be continually changing. To address this limitation, we propose the use of virtual probes which can be automatically extracted from the test distribution. The key idea is that many features, such as the color distribution or relative performance between two detectors, can be used as a proxy to the probe set. In our experiments we show that the recommendation paradigm is well-equipped to handle complex changes in the appearance of the hands in first-person vision. In particular, we show how our system is able to generalize to new scenarios by testing our model across multiple users.", "We present a fully unsupervised approach for the discovery of i) task relevant objects and ii) how these objects have been used. A Task Relevant Object (TRO) is an object, or part of an object, with which a person interacts during task performance. Given egocentric video from multiple operators, the approach can discover objects with which the users interact, both static objects such as a coffee machine as well as movable ones such as a cup. Importantly, we also introduce the term Mode of Interaction (MOI) to refer to the different ways in which TROs are used. Say, a cup can be lifted, washed, or poured into. When harvesting interactions with the same object from multiple operators, common MOIs can be found. Setup and Dataset: Using a wearable camera and gaze tracker (Mobile Eye-XG from ASL), egocentric video is collected of users performing tasks, along with their gaze in pixel coordinates. Six locations were chosen: kitchen, workspace, laser printer, corridor with a locked door, cardiac gym and weight-lifting machine. 
The Bristol Egocentric Object Interactions Dataset is publicly available.", "Articulated human pose estimation is a fundamental yet challenging task in computer vision. The difficulty is particularly pronounced in scale variations of human body parts when camera view changes or severe foreshortening happens. Although pyramid methods are widely used to handle scale changes at inference time, learning feature pyramids in deep convolutional neural networks (DCNNs) is still not well explored. In this work, we design a Pyramid Residual Module (PRMs) to enhance the invariance in scales of DCNNs. Given input features, the PRMs learn convolutional filters on various scales of input features, which are obtained with different subsampling ratios in a multibranch network. Moreover, we observe that it is inappropriate to adopt existing methods to initialize the weights of multi-branch networks, which achieve superior performance than plain networks in many tasks recently. Therefore, we provide theoretic derivation to extend the current weight initialization scheme to multi-branch network structures. We investigate our method on two standard benchmarks for human pose estimation. Our approach obtains state-of-the-art results on both benchmarks. Code is available at https://github.com/bearpaw/PyraNet.", "In this paper, we present a new feature representation for first-person videos. In first-person video understanding (e.g., activity recognition), it is very important to capture both entire scene dynamics (i.e., egomotion) and salient local motion observed in videos. We describe a representation framework based on time series pooling, which is designed to abstract short-term/long-term changes in feature descriptor elements. The idea is to keep track of how descriptor values are changing over time and summarize them to represent motion in the activity video.
The framework is general, handling any type of per-frame feature descriptors including conventional motion descriptors like histogram of optical flows (HOF) as well as appearance descriptors from more recent convolutional neural networks (CNN). We experimentally confirm that our approach clearly outperforms previous feature representations including bag-of-visual-words and improved Fisher vector (IFV) when using identical underlying feature descriptors. We also confirm that our feature representation has superior performance to existing state-of-the-art features like local spatio-temporal features and Improved Trajectory Features (originally developed for 3rd-person videos) when handling first-person videos. Multiple first-person activity datasets were tested under various settings to confirm these findings.", "We tackle the problem of estimating the 3D pose of an individual's upper limbs (arms+hands) from a chest-mounted depth camera. Importantly, we consider pose estimation during everyday interactions with objects. Past work shows that strong pose+viewpoint priors and depth-based features are crucial for robust performance. In egocentric views, hands and arms are observable within a well-defined volume in front of the camera. We call this volume an egocentric workspace. A notable property is that hand appearance correlates with workspace location. To exploit this correlation, we classify arm+hand configurations in a global egocentric coordinate frame, rather than a local scanning window. This greatly simplifies the architecture and improves performance. We propose an efficient pipeline which 1) generates synthetic workspace exemplars for training using a virtual chest-mounted camera whose intrinsic parameters match our physical camera, 2) computes perspective-aware depth features on this entire volume and 3) recognizes discrete arm+hand pose classes through a sparse multi-class SVM. 
We achieve state-of-the-art hand pose recognition performance from egocentric RGB-D images in real-time.", "We present a model that uses a single first-person image to generate an egocentric basketball motion sequence in the form of a 12D camera configuration trajectory, which encodes a player's 3D location and 3D head orientation throughout the sequence. To do this, we first introduce a future convolutional neural network (CNN) that predicts an initial sequence of 12D camera configurations, aiming to capture how real players move during a one-on-one basketball game. We also introduce a goal verifier network, which is trained to verify that a given camera configuration is consistent with the final goals of real one-on-one basketball players. Next, we propose an inverse synthesis procedure to synthesize a refined sequence of 12D camera configurations that (1) sufficiently matches the initial configurations predicted by the future CNN, while (2) maximizing the output of the goal verifier network. Finally, by following the trajectory resulting from the refined camera configuration sequence, we obtain the complete 12D motion sequence. Our model generates realistic basketball motion sequences that capture the goals of real players, outperforming standard deep learning approaches such as recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and generative adversarial networks (GANs).", "We present a method to analyze daily activities, such as meal preparation, using video from an egocentric camera. Our method performs inference about activities, actions, hands, and objects. Daily activities are a challenging domain for activity recognition which are well-suited to an egocentric approach. In contrast to previous activity recognition methods, our approach does not require pre-trained detectors for objects and hands. 
Instead, we demonstrate the ability to learn a hierarchical model of an activity by exploiting the consistent appearance of objects, hands, and actions that results from the egocentric context. We show that joint modeling of activities, actions, and objects leads to superior performance in comparison to the case where they are considered independently. We introduce a novel representation of actions based on object-hand interactions and experimentally demonstrate the superior performance of our representation in comparison to standard activity representations such as bag of words.", "We present a method for future localization: to predict plausible future trajectories of ego-motion in egocentric stereo images. Our paths avoid obstacles, move between objects, even turn around a corner into space behind objects. As a byproduct of the predicted trajectories, we discover the empty space occluded by foreground objects. One key innovation is the creation of an EgoRetinal map, akin to an illustrated tourist map, that 'rearranges' pixels taking into account depth information, the ground plane, and body motion direction, so that it allows motion planning and perception of objects in one image space. We learn to plan trajectories directly on this EgoRetinal map using first-person experience of walking around in a variety of scenes. In a testing phase, given a novel scene, we find multiple hypotheses of future trajectories from the learned experience. We refine them by minimizing a cost function that describes compatibility between the obstacles in the EgoRetinal map and trajectories. We quantitatively evaluate our method to show predictive validity and apply it to various real-world daily activities including walking, shopping, and social interactions.", "We present a model for gaze prediction in egocentric video by leveraging the implicit cues that exist in the camera wearer's behaviors. 
Specifically, we compute the camera wearer's head motion and hand location from the video and combine them to estimate where the eyes look. We further model the dynamic behavior of the gaze, in particular fixations, as latent variables to improve the gaze prediction. Our gaze prediction results outperform the state-of-the-art algorithms by a large margin on publicly available egocentric vision datasets. In addition, we demonstrate that we get a significant performance boost in recognizing daily actions and segmenting foreground objects by plugging our gaze predictions into state-of-the-art methods.", "Hands appear very often in egocentric video, and their appearance and pose give important cues about what people are doing and what they are paying attention to. But existing work in hand detection has made strong assumptions that work well in only simple scenarios, such as with limited interaction with other people or in lab settings. We develop methods to locate and distinguish between hands in egocentric video using strong appearance models with Convolutional Neural Networks, and introduce a simple candidate region generation approach that outperforms existing techniques at a fraction of the computational cost. We show how these high-quality bounding boxes can be used to create accurate pixelwise hand regions, and as an application, we investigate the extent to which hand segmentation alone can distinguish between different activities. We evaluate these techniques on a new dataset of 48 first-person videos of people interacting in realistic environments, with pixel-level ground truth for over 15,000 hand instances.", "We present a novel dataset and novel algorithms for the problem of detecting activities of daily living (ADL) in first-person camera views. We have collected a dataset of 1 million frames of dozens of people performing unscripted, everyday activities. The dataset is annotated with activities, object tracks, hand positions, and interaction events. 
ADLs differ from typical actions in that they can involve long-scale temporal structure (making tea can take a few minutes) and complex object interactions (a fridge looks different when its door is open). We develop novel representations including (1) temporal pyramids, which generalize the well-known spatial pyramid to approximate temporal correspondence when scoring a model and (2) composite object models that exploit the fact that objects look different when being interacted with. We perform an extensive empirical evaluation and demonstrate that our novel representations produce a two-fold improvement over traditional approaches. Our analysis suggests that real-world ADL recognition is “all about the objects,” and in particular, “all about the objects being interacted with.”", "While egocentric video is becoming increasingly popular, browsing it is very difficult. In this paper we present a compact 3D Convolutional Neural Network (CNN) architecture for long-term activity recognition in egocentric videos. Recognizing long-term activities enables us to temporally segment (index) long and unstructured egocentric videos. Existing methods for this task are based on hand-tuned features derived from visible objects, location of hands, as well as optical flow. Given a sparse optical flow volume as input, our CNN classifies the camera wearer's activity. We obtain classification accuracy of 89%, which outperforms the current state-of-the-art by 19%. Additional evaluation is performed on an extended egocentric video dataset, classifying twice as many categories as the current state-of-the-art. Furthermore, our CNN is able to recognize whether a video is egocentric or not with 99.2% accuracy, up by 24% from the current state-of-the-art. To better understand what the network actually learns, we propose a novel visualization of CNN kernels as flow fields.", "We present a unified framework for understanding human social behaviors in raw image sequences. 
Our model jointly detects multiple individuals, infers their social actions, and estimates the collective actions with a single feed-forward pass through a neural network. We propose a single architecture that does not rely on external detection algorithms but rather is trained end-to-end to generate dense proposal maps that are refined via a novel inference scheme. The temporal consistency is handled via a person-level matching Recurrent Neural Network. The complete model takes as input a sequence of frames and outputs detections along with the estimates of individual actions and collective activities. We demonstrate state-of-the-art performance of our algorithm on multiple publicly available benchmarks.", "This paper discusses the problem of recognizing interaction-level human activities from a first-person viewpoint. The goal is to enable an observer (e.g., a robot or a wearable camera) to understand 'what activity others are performing to it' from continuous video inputs. These include friendly interactions such as 'a person hugging the observer' as well as hostile interactions like 'punching the observer' or 'throwing objects to the observer', whose videos involve a large amount of camera ego-motion caused by physical interactions. The paper investigates multi-channel kernels to integrate global and local motion information, and presents a new activity learning recognition methodology that explicitly considers temporal structures displayed in first-person activity videos. In our experiments, we not only show classification results with segmented videos, but also confirm that our new approach is able to detect activities from continuous videos reliably.", "Temporal segmentation of human motion into actions is central to the understanding and building of computational models of human motion and activity recognition. Several issues contribute to the challenge of temporal segmentation and classification of human motion. 
These include the large variability in the temporal scale and periodicity of human actions, the complexity of representing articulated motion, and the exponential nature of all possible movement combinations. We provide initial results from investigating two distinct problems: classification of the overall task being performed, and the more difficult problem of classifying individual frames over time into specific actions. We explore first-person sensing through a wearable camera and inertial measurement units (IMUs) for temporally segmenting human motion into actions and performing activity classification in the context of cooking and recipe preparation in a natural environment. We present baseline results for supervised and unsupervised temporal segmentation, and recipe recognition in the CMU-multimodal activity database (CMU-MMAC)." ] }
1904.09882
2937778953
The body pose of a person wearing a camera is of great interest for applications in augmented reality, healthcare, and robotics, yet much of the person's body is out of view for a typical wearable camera. We propose a learning-based approach to estimate the camera wearer's 3D body pose from egocentric video sequences. Our key insight is to leverage interactions with another person---whose body pose we can directly observe---as a signal inherently linked to the body pose of the first-person subject. We show that since interactions between individuals often induce a well-ordered series of back-and-forth responses, it is possible to learn a temporal model of the interlinked poses even though one party is largely out of view. We demonstrate our idea on a variety of domains with dyadic interaction and show the substantial impact on egocentric body pose estimation, which improves the state of the art. Video results are available at this http URL
Egocentric 3D full-body pose estimation has received only limited attention @cite_63 @cite_57 @cite_58 . The first attempt at the problem is the geometry-based "inside-out mocap" approach @cite_57 , which uses structure from motion (SfM) to reconstruct the 3D locations of 16 body-mounted cameras placed on a person's joints. In contrast, we propose a learning-based solution that requires only a single chest-mounted camera, which makes it more suitable and comfortable for daily activities.
{ "cite_N": [ "@cite_57", "@cite_58", "@cite_63" ], "mid": [ "2004447433", "2895641024", "2963791050" ], "abstract": [ "Motion capture technology generally requires that recordings be performed in a laboratory or closed stage setting with controlled lighting. This restriction precludes the capture of motions that require an outdoor setting or the traversal of large areas. In this paper, we present the theory and practice of using body-mounted cameras to reconstruct the motion of a subject. Outward-looking cameras are attached to the limbs of the subject, and the joint angles and root pose are estimated through non-linear optimization. The optimization objective function incorporates terms for image matching error and temporal continuity of motion. Structure-from-motion is used to estimate the skeleton structure and to provide initialization for the non-linear optimization procedure. Global motion is estimated and drift is controlled by matching the captured set of videos to reference imagery. We show results in settings where capture would be difficult or impossible with traditional motion capture systems, including walking outside and swinging on monkey bars. The quality of the motion reconstruction is evaluated by comparing our results against motion capture data produced by a commercially available optical system.", "Ego-pose estimation, i.e., estimating a person’s 3D pose with a single wearable camera, has many potential applications in activity monitoring. For these applications, both accurate and physically plausible estimates are desired, with the latter often overlooked by existing work. Traditional computer vision-based approaches using temporal smoothing only take into account the kinematics of the motion without considering the physics that underlies the dynamics of motion, which leads to pose estimates that are physically invalid. 
Motivated by this, we propose a novel control-based approach to model human motion with physics simulation and use imitation learning to learn a video-conditioned control policy for ego-pose estimation. Our imitation learning framework allows us to perform domain adaptation to transfer our policy trained on simulation data to real-world data. Our experiments with real egocentric videos show that our method can estimate both accurate and physically plausible 3D ego-pose sequences without observing the camera wearer's body.", "Understanding the camera wearer's activity is central to egocentric vision, yet one key facet of that activity is inherently invisible to the camera—the wearer's body pose. Prior work focuses on estimating the pose of hands and arms when they come into view, but this 1) gives an incomplete view of the full body posture, and 2) prevents any pose estimate at all in many frames, since the hands are only visible in a fraction of daily life activities. We propose to infer the invisible pose of a person behind the egocentric camera. Given a single video, our efficient learning-based approach returns the full body 3D joint positions for each frame. Our method exploits cues from the dynamic motion signatures of the surrounding scene—which change predictably as a function of body pose—as well as static scene structures that reveal the viewpoint (e.g., sitting vs. standing). We further introduce a novel energy minimization scheme to infer the pose sequence. It uses soft predictions of the poses per time instant together with a non-parametric model of human pose dynamics over longer windows. Our method outperforms an array of possible alternatives, including typical deep learning approaches for direct pose regression from images." ] }
1904.09882
2937778953
The body pose of a person wearing a camera is of great interest for applications in augmented reality, healthcare, and robotics, yet much of the person's body is out of view for a typical wearable camera. We propose a learning-based approach to estimate the camera wearer's 3D body pose from egocentric video sequences. Our key insight is to leverage interactions with another person---whose body pose we can directly observe---as a signal inherently linked to the body pose of the first-person subject. We show that since interactions between individuals often induce a well-ordered series of back-and-forth responses, it is possible to learn a temporal model of the interlinked poses even though one party is largely out of view. We demonstrate our idea on a variety of domains with dyadic interaction and show the substantial impact on egocentric body pose estimation, which improves the state of the art. Video results are available at this http URL
More recently, two methods based on monocular first-person video have been proposed @cite_63 @cite_58 . The method in @cite_63 infers the poses of a camera wearer by using both homographies and static visual cues to optimize an implicit motion graph. The method in @cite_58 uses a humanoid simulator in a control-based approach to recover the sequence of actions affecting pose, and it is evaluated quantitatively only on synthetic sequences. Whereas both prior learning-based methods focus on sweeping motions that induce notable camera movements (like bending, sitting, walking, running), our approach improves the prediction of upper-body joint locations during sequences when the camera remains relatively still (like handshakes and other conversational gestures). Furthermore, unlike @cite_58 , our method does not require a simulator and does all its learning directly from video accompanied by ground truth ego-poses. Most importantly, unlike any of the existing methods @cite_63 @cite_57 @cite_58 , our approach discovers the connection between the dynamics in inter-person interactions and egocentric body poses.
{ "cite_N": [ "@cite_57", "@cite_58", "@cite_63" ], "mid": [ "2004447433", "2895641024", "2963791050" ], "abstract": [ "Motion capture technology generally requires that recordings be performed in a laboratory or closed stage setting with controlled lighting. This restriction precludes the capture of motions that require an outdoor setting or the traversal of large areas. In this paper, we present the theory and practice of using body-mounted cameras to reconstruct the motion of a subject. Outward-looking cameras are attached to the limbs of the subject, and the joint angles and root pose are estimated through non-linear optimization. The optimization objective function incorporates terms for image matching error and temporal continuity of motion. Structure-from-motion is used to estimate the skeleton structure and to provide initialization for the non-linear optimization procedure. Global motion is estimated and drift is controlled by matching the captured set of videos to reference imagery. We show results in settings where capture would be difficult or impossible with traditional motion capture systems, including walking outside and swinging on monkey bars. The quality of the motion reconstruction is evaluated by comparing our results against motion capture data produced by a commercially available optical system.", "Ego-pose estimation, i.e., estimating a person’s 3D pose with a single wearable camera, has many potential applications in activity monitoring. For these applications, both accurate and physically plausible estimates are desired, with the latter often overlooked by existing work. Traditional computer vision-based approaches using temporal smoothing only take into account the kinematics of the motion without considering the physics that underlies the dynamics of motion, which leads to pose estimates that are physically invalid. 
Motivated by this, we propose a novel control-based approach to model human motion with physics simulation and use imitation learning to learn a video-conditioned control policy for ego-pose estimation. Our imitation learning framework allows us to perform domain adaptation to transfer our policy trained on simulation data to real-world data. Our experiments with real egocentric videos show that our method can estimate both accurate and physically plausible 3D ego-pose sequences without observing the camera wearer's body.", "Understanding the camera wearer's activity is central to egocentric vision, yet one key facet of that activity is inherently invisible to the camera—the wearer's body pose. Prior work focuses on estimating the pose of hands and arms when they come into view, but this 1) gives an incomplete view of the full body posture, and 2) prevents any pose estimate at all in many frames, since the hands are only visible in a fraction of daily life activities. We propose to infer the invisible pose of a person behind the egocentric camera. Given a single video, our efficient learning-based approach returns the full body 3D joint positions for each frame. Our method exploits cues from the dynamic motion signatures of the surrounding scene—which change predictably as a function of body pose—as well as static scene structures that reveal the viewpoint (e.g., sitting vs. standing). We further introduce a novel energy minimization scheme to infer the pose sequence. It uses soft predictions of the poses per time instant together with a non-parametric model of human pose dynamics over longer windows. Our method outperforms an array of possible alternatives, including typical deep learning approaches for direct pose regression from images." ] }
1904.10128
2941201280
In this paper, we investigate the impacts of three main aspects of visual tracking, i.e., the backbone network, the attentional mechanism and the detection component, and propose a Siamese Attentional Keypoint Network, dubbed SATIN, to achieve efficient tracking and accurate localization. Firstly, a new Siamese lightweight hourglass network is specifically designed for visual tracking. It takes advantage of the benefits of the repeated bottom-up and top-down inference to capture more global and local contextual information at multiple scales. Secondly, a novel cross-attentional module is utilized to leverage both channel-wise and spatial intermediate attentional information, which enhances both the discriminative and localization capabilities of feature maps. Thirdly, a keypoint detection approach is introduced to track any target object by detecting the top-left corner point, the centroid point and the bottom-right corner point of its bounding box. To the best of our knowledge, we are the first to propose this approach. Therefore, our SATIN tracker not only has a strong capability to learn more effective object representations, but also offers computational and memory-storage efficiency during both the training and testing stages. Without bells and whistles, experimental results demonstrate that our approach achieves state-of-the-art performance on several recent benchmark datasets, at speeds far exceeding the frame-rate requirement.
In recent years, CNNs have made great progress in a wide range of computer vision applications due to their impressive representation abilities. Because of the surprisingly good performance of CNNs on object classification and detection tasks, researchers have been encouraged to either combine existing CNNs with DCF or design deep networks in a Siamese framework for high-performance visual tracking. The most popular backbone networks utilized in recent trackers @cite_33 @cite_56 @cite_30 @cite_36 @cite_32 @cite_53 are AlexNet @cite_4 , VGGNet @cite_42 and ResNet @cite_37 . AlexNet @cite_4 consists of several convolutional and pooling layers, and it was the first large-scale CNN to win the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) @cite_31 . VGGNet @cite_42 stacks numerous small convolutional kernels without using pooling operations, which increases the representation power of the network while reducing the number of parameters. ResNet @cite_37 introduces the skip connection to learn residual information, which makes it more efficient and simpler to design deeper architectures.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_4", "@cite_33", "@cite_36", "@cite_53", "@cite_42", "@cite_32", "@cite_56", "@cite_31" ], "mid": [ "", "2949650786", "", "", "2951584184", "2797693453", "1686810756", "2799058067", "2887556118", "2117539524" ], "abstract": [ "", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "", "", "The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object's appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. 
However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks.", "Compared with visible object tracking, thermal infrared (TIR) object tracking can track an arbitrary target in total darkness since it cannot be influenced by illumination variations. However, there are many unwanted attributes that constrain the potentials of TIR tracking, such as the absence of visual color patterns and low resolutions. Recently, structured output support vector machine (SOSVM) and discriminative correlation filter (DCF) have been successfully applied to visible object tracking, respectively. Motivated by these, in this paper, we propose a large margin structured convolution operator (LMSCO) to achieve efficient TIR object tracking. To improve the tracking performance, we employ the spatial regularization and implicit interpolation to obtain continuous deep feature maps, including deep appearance features and deep motion features, of the TIR targets. Finally, a collaborative optimization strategy is exploited to significantly update the operators. Our approach not only inherits the advantage of the strong discriminative capability of SOSVM but also achieves accurate and robust tracking with higher-dimensional features and more dense samples. To the best of our knowledge, we are the first to incorporate the advantages of DCF and SOSVM for TIR object tracking. Comprehensive evaluations on two thermal infrared tracking benchmarks, i.e. 
VOT-TIR2015 and VOT-TIR2016, clearly demonstrate that our LMSCO tracker achieves impressive results and outperforms most state-of-the-art trackers in terms of accuracy and robustness with sufficient frame rate.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Visual object tracking has been a fundamental topic in recent years and many deep learning based trackers have achieved state-of-the-art performance on multiple benchmarks. However, most of these trackers can hardly get top performance with real-time speed. In this paper, we propose the Siamese region proposal network (Siamese-RPN) which is end-to-end trained off-line with large-scale image pairs. Specifically, it consists of Siamese subnetwork for feature extraction and region proposal subnetwork including the classification branch and regression branch. In the inference phase, the proposed framework is formulated as a local one-shot detection task. We can pre-compute the template branch of the Siamese subnetwork and formulate the correlation layers as trivial convolution layers to perform online tracking. 
Benefiting from the proposal refinement, the traditional multi-scale test and online fine-tuning can be discarded. The Siamese-RPN runs at 160 FPS while achieving leading performance in the VOT2015, VOT2016 and VOT2017 real-time challenges.", "Visual tracking algorithms based on structured output support vector machines (SOSVM) have demonstrated excellent performance. However, the sampling methods and optimization strategies of SOSVM undesirably increase the computational overhead, which hinders the real-time application of these algorithms. Moreover, due to the lack of high-dimensional features and dense training samples, SOSVM-based algorithms are unstable when dealing with various challenging scenarios, such as occlusions and scale variations. Recently, visual tracking algorithms based on discriminative correlation filters (DCF), especially the combination of DCF and features from deep convolutional neural networks (CNN), have been successfully applied to visual tracking and attain surprisingly good performance on recent benchmarks. The success is mainly attributed to two aspects: the circular correlation properties of DCF and the powerful representation capabilities of CNN features. Nevertheless, compared with SOSVM, DCF-based algorithms are restricted to simple ridge regression, which has a weaker discriminative ability. In this paper, a novel circular and structural operator tracker (CSOT) is proposed for high-performance visual tracking; it not only possesses the powerful discriminative capability of SOSVM but also efficiently inherits the superior computational efficiency of DCF. Based on the proposed circular and structural operators, a set of primal confidence score maps can be obtained by circularly correlating feature maps with their corresponding structural correlation filters. 
Furthermore, an implicit interpolation is applied to convert the multi-resolution feature maps to the continuous domain and make all primal confidence score maps have the same spatial resolution. Then, we exploit an efficient ensemble post-processor based on relative entropy, which can coalesce primal confidence score maps and create an optimal confidence score map for more accurate localization. The target is localized on the peak of the optimal confidence score map. Besides, we introduce a collaborative optimization strategy to update circular and structural operators by iteratively training structural correlation filters, which significantly reduces computational complexity and improves robustness. Experimental results demonstrate that our approach achieves state-of-the-art performance in mean AUC scores of 71.5 and 69.4 on the OTB2013 and OTB2015 benchmarks respectively, and obtains a third-best expected average overlap (EAO) score of 29.8 on the VOT2017 benchmark.", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements." ] }
1904.10128
2941201280
In this paper, we investigate the impact of three main aspects of visual tracking, i.e., the backbone network, the attentional mechanism and the detection component, and propose a Siamese Attentional Keypoint Network, dubbed SATIN, to achieve efficient tracking and accurate localization. Firstly, a new Siamese lightweight hourglass network is specifically designed for visual tracking. It takes advantage of the benefits of the repeated bottom-up and top-down inference to capture more global and local contextual information at multiple scales. Secondly, a novel cross-attentional module is utilized to leverage both channel-wise and spatial intermediate attentional information, which enhances both the discriminative and localization capabilities of feature maps. Thirdly, a keypoint detection approach is devised to track any target object by detecting the top-left corner point, the centroid point and the bottom-right corner point of its bounding box. To the best of our knowledge, we are the first to propose this approach. Therefore, our SATIN tracker not only has a strong capability to learn more effective object representations, but also offers computational and memory-storage efficiency, during both the training and testing stages. Without bells and whistles, experimental results demonstrate that our approach achieves state-of-the-art performance on several recent benchmark datasets, at speeds far exceeding the frame-rate requirement.
Moreover, several efficient backbone networks have been introduced for other vision tasks, such as hourglass networks @cite_58 and FlowNet @cite_35 . However, these networks are sometimes too computationally and memory expensive for practical computer vision applications. Meanwhile, some lightweight networks @cite_60 @cite_47 @cite_25 focus on designing more efficient architectures that reduce network computation while maintaining excellent performance. Unfortunately, all of the networks mentioned above are pre-trained for object classification and detection, so trackers that employ them may obtain suboptimal tracking results. The recent trend in visual tracking @cite_39 @cite_63 @cite_36 is to design suitable networks for learning object- or domain-specific representations and to enhance generalization to new video sequences. Different from these tracking methods, we aim to design a lightweight CNN as the backbone network that can learn richer contextual features at multiple scales with a simple architecture and fewer parameters.
{ "cite_N": [ "@cite_35", "@cite_60", "@cite_36", "@cite_39", "@cite_63", "@cite_47", "@cite_58", "@cite_25" ], "mid": [ "", "2612445135", "2951584184", "1857884451", "", "2279098554", "2950762923", "2951583185" ], "abstract": [ "", "We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.", "The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object's appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. 
Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks.", "We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking groundtruths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify target in each domain. We train each domain in the network iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm illustrates outstanding performance in existing tracking benchmarks.", "", "Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. 
Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).", "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a \"stacked hourglass\" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters." ] }
1904.09527
2937602321
Greyscale image colorization for applications in image restoration has seen significant improvements in recent years. Many of these techniques that use learning-based methods struggle to effectively colorize sparse inputs. With the consistent growth of the anime industry, the ability to colorize sparse input such as line art can reduce significant cost and redundant work for production studios by eliminating the in-between frame colorization process. Simply using existing methods yields inconsistent colors between related frames resulting in a flicker effect in the final video. In order to successfully automate key areas of large-scale anime production, the colorization of line arts must be temporally consistent between frames. This paper proposes a method to colorize line art frames in an adversarial setting, to create temporally coherent video of large anime by improving existing image to image translation methods. We show that by adding an extra condition to the generator and discriminator, we can effectively create temporally consistent video sequences from anime line arts. Code and models available at: this https URL
Image-to-image translation using conditional GANs @cite_7 @cite_26 is especially effective for colorization tasks in comparison to CNN-based models @cite_2 . This model successfully maps a high-dimensional input to a high-dimensional output using a U-Net @cite_22 based generator and a patch-based discriminator @cite_9 . The closer the input image is to the target, the better the learned mapping is; as a result, this technique is particularly well suited to colorization. The U-Net architecture acts as an encoder-decoder that produces images conditioned on some input. Its limitation is the information bottleneck that results from downsampling and then upsampling an input image: skip connections copy mirrored encoder layers to the decoder, but downsampling to 2x2 can still lose information. This is especially relevant given the sparse nature of line art compared with greyscale images. Downsampling input data that is already sparse to that extent should be avoided in anime line art colorization due to the risk of data loss.
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_7", "@cite_9", "@cite_2" ], "mid": [ "2788549480", "1901129140", "2099471712", "", "2792021479" ], "abstract": [ "Recent work (, 2017) suggests that controlling the entire distribution of Jacobian singular values is an important design consideration in deep learning. Motivated by this, we study the distribution of singular values of the Jacobian of the generator in Generative Adversarial Networks (GANs). We find that this Jacobian generally becomes ill-conditioned at the beginning of training. Moreover, we find that the average (with z from p(z)) conditioning of the generator is highly predictive of two other ad-hoc metrics for measuring the 'quality' of trained GANs: the Inception Score and the Frechet Inception Distance (FID). We test the hypothesis that this relationship is causal by proposing a 'regularization' technique (called Jacobian Clamping) that softly penalizes the condition number of the generator Jacobian. Jacobian Clamping improves the mean Inception Score and the mean FID for GANs trained on several datasets. It also greatly reduces inter-run variance of the aforementioned scores, addressing (at least partially) one of the main criticisms of GANs.", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. 
Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "", "Over the last decade, the process of automatic image colorization has been of significant interest for several application areas including restoration of aged or degraded images. This problem is highly ill-posed due to the large degrees of freedom during the assignment of color information. Many of the recent developments in automatic colorization involve images that contain a common theme or require highly processed data such as semantic maps as input. 
In our approach, we attempt to fully generalize the colorization procedure using a conditional Deep Convolutional Generative Adversarial Network (DCGAN). The network is trained over datasets that are publicly available such as CIFAR-10 and Places365. The results between the generative model and traditional deep neural networks are compared." ] }
1904.09527
2937602321
Greyscale image colorization for applications in image restoration has seen significant improvements in recent years. Many of these techniques that use learning-based methods struggle to effectively colorize sparse inputs. With the consistent growth of the anime industry, the ability to colorize sparse input such as line art can reduce significant cost and redundant work for production studios by eliminating the in-between frame colorization process. Simply using existing methods yields inconsistent colors between related frames resulting in a flicker effect in the final video. In order to successfully automate key areas of large-scale anime production, the colorization of line arts must be temporally consistent between frames. This paper proposes a method to colorize line art frames in an adversarial setting, to create temporally coherent video of large anime by improving existing image to image translation methods. We show that by adding an extra condition to the generator and discriminator, we can effectively create temporally consistent video sequences from anime line arts. Code and models available at: this https URL
The neural algorithm of artistic style presented by Gatys et al. @cite_8 provides a method for creating artistic imagery. It is highly relevant here because it demonstrates a way to learn representations of both content and style from two images using the pretrained VGG network @cite_6 , and then transfer the learned representation to a target image through iterative updates. Johnson et al. @cite_17 showed that this model is well suited to transferring a learned style representation from a painting, including its encoded texture information, to an input photo. Although this method alone is not effective for transferring color to inputs such as line art in our specific task, the ability to learn and differentiate style and content using a pretrained network is highly useful @cite_11 .
{ "cite_N": [ "@cite_11", "@cite_17", "@cite_6", "@cite_8" ], "mid": [ "2885325429", "2331128040", "2962835968", "2567130809" ], "abstract": [ "While implicit generative models such as GANs have shown impressive results in high quality image reconstruction and manipulation using a combination of various losses, we consider a simpler approach leading to surprisingly strong results. We show that texture loss [1] alone allows the generation of perceptually high quality images. We provide a better understanding of texture constraining mechanism and develop a novel semantically guided texture constraining method for further improvement. Using a recently developed perceptual metric employing “deep features” and termed LPIPS [2], the method obtains state-of-the-art results. Moreover, we show that a texture representation of those deep features better capture the perceptual quality of an image than the original deep features. Using texture information, off-the-shelf deep classification networks (without training) perform as well as the best performing (tuned and calibrated) LPIPS metrics.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. 
We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "" ] }
1904.09568
2940343960
Image based modeling and laser scanning are two commonly used approaches in large-scale architectural scene reconstruction nowadays. In order to generate a complete scene reconstruction, an effective way is to completely cover the scene using ground and aerial images, supplemented by laser scanning on certain regions with low texture and complicated structure. Thus, the key issue is to accurately calibrate cameras and register laser scans in a unified framework. To this end, we propose a three-step pipeline for complete scene reconstruction by merging images and laser scans. First, images are captured around the architecture in a multi-view and multi-scale way and are fed into a structure-from-motion (SfM) pipeline to generate SfM points. Then, based on the SfM result, the laser scanning locations are automatically planned by considering textural richness, structural complexity of the scene and spatial layout of the laser scans. Finally, the images and laser scans are accurately merged in a coarse-to-fine manner. Experimental evaluations on two ancient Chinese architecture datasets demonstrate the effectiveness of our proposed complete scene reconstruction pipeline.
The pipeline of image-based reconstruction goes as follows. First, feature detection is performed on each individual image and feature matching is performed on image pairs @cite_42 . During feature matching, a vocabulary tree @cite_0 @cite_16 is usually used to index target images with high similarity, and the fast library for approximate nearest neighbors (FLANN) @cite_46 is employed to search for approximate nearest feature neighbors; in this way, the efficiency of the image matching procedure is greatly improved. Then, an SfM procedure @cite_25 @cite_12 @cite_34 @cite_4 is performed on the pair-wise point matches to estimate the camera poses and triangulate the sparse scene points. Next, multi-view stereo (MVS) @cite_44 @cite_3 @cite_8 is performed based on the registered cameras to obtain a dense point cloud. Finally, image-based surface reconstruction @cite_28 @cite_22 is performed on the point cloud to obtain a detailed surface mesh. Despite its many advantages, the image-based method is vulnerable to illumination variation, low texture and complicated structures. Moreover, inevitable mismatching and error accumulation usually lead to scene drift.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_8", "@cite_28", "@cite_42", "@cite_34", "@cite_3", "@cite_0", "@cite_44", "@cite_46", "@cite_16", "@cite_25", "@cite_12" ], "mid": [ "", "2519267527", "2069293739", "2621352645", "2151103935", "2471962767", "2074378519", "2128017662", "2129404737", "2086504823", "2592811474", "2199898507", "1499715726" ], "abstract": [ "", "We propose a robust uncalibrated multiview photometric stereo method for high quality 3D shape reconstruction. In our method, a coarse initial 3D mesh obtained using a multiview stereo method is projected onto a 2D planar domain using a planar mesh parameterization technique. We describe methods for surface normal estimation that work in the parameterized 2D space that jointly incorporates all geometric and photometric cues from multiple viewpoints. Using an estimated surface normal map, a refined 3D mesh is then recovered by computing an optimal displacement map in the same 2D planar domain. Our method avoids the need of merging view-dependent surface normal maps that is often required in conventional methods. We conduct evaluation on various real-world objects containing surfaces with specular reflections, multiple albedos, and complex topologies in both controlled and uncontrolled settings and demonstrate that accurate 3D meshes with fine geometric details can be recovered by our method.", "Depth-map merging based 3D modeling is an effective approach for reconstructing large-scale scenes from multiple images. In addition to generating high quality depth maps at each image, how to select suitable neighboring images for each image is also an important step in the reconstruction pipeline, unfortunately to which little attention has been paid in the literature until now. This paper is intended to tackle this issue for large scale scene reconstruction where many unordered images are captured and used with substantially varying scale and view-angle changes.
We formulate the neighboring image selection as a combinatorial optimization problem and use the quantum-inspired evolutionary algorithm to seek its optimal solution. Experimental results on the ground truth data set show that our approach can significantly improve the quality of the depth-maps as well as final 3D reconstruction results with high computational efficiency.", "We present a variational approach for surface reconstruction from a set of oriented points with scale information. We focus particularly on scenarios with nonuniform point densities due to images taken from different distances. In contrast to previous methods, we integrate the scale information in the objective and globally optimize the signed distance function of the surface on a balanced octree grid. We use a finite element discretization on the dual structure of the octree minimizing the number of variables. The tetrahedral mesh is generated efficiently with a lookup table which allows to map octree cells to the nodes of the finite elements. We optimize memory efficiency by data aggregation, such that robust data terms can be used even on very large scenes. The surface normals are explicitly optimized and used for surface extraction to improve the reconstruction at edges and corners.", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. 
The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "Incremental Structure-from-Motion is a prevalent strategy for 3D reconstruction from unordered image collections. While incremental reconstruction systems have tremendously advanced in all regards, robustness, accuracy, completeness, and scalability remain the key problems towards building a truly general-purpose pipeline. We propose a new SfM technique that improves upon the state of the art to make a further step towards this ultimate goal. The full reconstruction pipeline is released to the public as an open-source implementation.", "In this paper, we propose a depth-map merging based multiple view stereo method for large-scale scenes which takes both accuracy and efficiency into account. In the proposed method, an efficient patch-based stereo matching process is used to generate depth-map at each image with acceptable errors, followed by a depth-map refinement process to enforce consistency over neighboring views. Compared to state-of-the-art methods, the proposed method can reconstruct quite accurate and dense point clouds with high computational efficiency. Besides, the proposed method could be easily parallelized at image level, i.e., each depth-map is computed individually, which makes it suitable for large-scale scene reconstruction with high resolution images. The accuracy and efficiency of the proposed method are evaluated quantitatively on benchmark data and qualitatively on large data sets.", "A recognition scheme that scales efficiently to a large number of objects is presented. 
The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.", "This paper proposes a novel algorithm for multiview stereopsis that outputs a dense set of small rectangular patches covering the surfaces visible in the images. Stereopsis is implemented as a match, expand, and filter procedure, starting from a sparse set of matched keypoints, and repeatedly expanding these before using visibility constraints to filter away false matches. The keys to the performance of the proposed algorithm are effective techniques for enforcing local photometric consistency and global visibility constraints. Simple but effective methods are also proposed to turn the resulting patch model into a mesh which can be further refined by an algorithm that enforces both photometric consistency and regularization constraints. The proposed approach automatically detects and discards outliers and obstacles and does not require any initialization in the form of a visual hull, a bounding box, or valid depth ranges. 
We have tested our algorithm on various data sets including objects with fine surface details, deep concavities, and thin structures, outdoor scenes observed from a restricted set of viewpoints, and \"crowded\" scenes where moving obstacles appear in front of a static structure of interest. A quantitative evaluation on the Middlebury benchmark [1] shows that the proposed method outperforms all others submitted so far for four out of the six data sets.", "For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. 
All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.", "Spatial verification is a crucial part of every image retrieval system, as it accounts for the fact that geometric feature configurations are typically ignored by the Bag-of-Words representation. Since spatial verification quickly becomes the bottleneck of the retrieval process, runtime efficiency is extremely important. At the same time, spatial verification should be able to reliably distinguish between related and unrelated images. While methods based on RANSAC’s hypothesize-and-verify framework achieve high accuracy, they are not particularly efficient. Conversely, verification approaches based on Hough voting are extremely efficient but not as accurate. In this paper, we develop a novel spatial verification approach that uses an efficient voting scheme to identify promising transformation hypotheses that are subsequently verified and refined. Through comprehensive experiments, we show that our method is able to achieve a verification accuracy similar to state-of-the-art hypothesize-and-verify approaches while providing faster runtimes than state-of-the-art voting-based methods.", "Global structure-from-motion (SfM) methods solve all cameras simultaneously from all available relative motions. It has better potential in both reconstruction accuracy and computation efficiency than incremental methods. However, global SfM is challenging, mainly because of two reasons. Firstly, translation averaging is difficult, since an essential matrix only tells the direction of relative translation. Secondly, it is also hard to filter out bad essential matrices due to feature matching failures. We propose to compute a sparse depth image at each camera to solve both problems. 
Depth images help to upgrade an essential matrix to a similarity transformation, which can determine the scale of relative translation. Thus, camera registration is formulated as a well-posed similarity averaging problem. Depth images also make the filtering of essential matrices simple and effective. In this way, translation averaging can be solved robustly in two convex L1 optimization problems, which reach the global optimum rapidly. We demonstrate this method in various examples including sequential data, Internet data, and ambiguous data with repetitive scene structures.", "One of the potentially effective means for large-scale 3D scene reconstruction is to reconstruct the scene in a global manner, rather than incrementally, by fully exploiting available auxiliary information on the imaging condition, such as camera location by Global Positioning System (GPS), orientation by inertial measurement unit (or compass), focal length from EXIF, and so on. However, such auxiliary information, though informative and valuable, is usually too noisy to be directly usable. In this paper, we present an approach by taking advantage of such noisy auxiliary information to improve structure from motion solving. More specifically, we introduce two effective iterative global optimization algorithms initiated with such noisy auxiliary information. One is a robust rotation averaging algorithm to deal with contaminated epipolar graph, the other is a robust scene reconstruction algorithm to deal with noisy GPS data for camera centers initialization. We found that by exclusively focusing on the estimated inliers at the current iteration, the optimization process initialized by such noisy auxiliary information could converge well and efficiently. Our proposed method is evaluated on real images captured by unmanned aerial vehicle, StreetView car, and conventional digital cameras. 
Extensive experimental results show that our method performs similarly or better than many of the state-of-art reconstruction approaches, in terms of reconstruction accuracy and completeness, but is more efficient and scalable for large-scale image data sets." ] }
1904.09568
2940343960
Image-based modeling and laser scanning are two commonly used approaches for large-scale architectural scene reconstruction nowadays. In order to generate a complete scene reconstruction, an effective way is to completely cover the scene using ground and aerial images, supplemented by laser scanning of certain regions with low texture and complicated structure. Thus, the key issue is to accurately calibrate the cameras and register the laser scans in a unified framework. To this end, we propose a three-step pipeline for complete scene reconstruction by merging images and laser scans. First, images are captured around the architecture in a multi-view and multi-scale way and are fed into a structure-from-motion (SfM) pipeline to generate SfM points. Then, based on the SfM result, the laser scanning locations are automatically planned by considering the textural richness, the structural complexity of the scene, and the spatial layout of the laser scans. Finally, the images and laser scans are accurately merged in a coarse-to-fine manner. Experimental evaluations on two ancient Chinese architecture datasets demonstrate the effectiveness of our proposed complete scene reconstruction pipeline.
In addition, several methods have been proposed for planning the camera network in either an off-line @cite_26 @cite_35 or an on-line @cite_43 scheme; these are mainly used for aerial image capturing. Such methods focus on completely covering the scene with minimum image overlap and flight time. However, in this paper, we do not seek the optimal image capturing locations but only try to properly cover the scene with ground and aerial images.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_43" ], "mid": [ "2963054668", "2971669705", "2963052560" ], "abstract": [ "Drones equipped with cameras are emerging as a powerful tool for large-scale aerial 3D scanning, but existing automatic flight planners do not exploit all available information about the scene, and can therefore produce inaccurate and incomplete 3D models. We present an automatic method to generate drone trajectories, such that the imagery acquired during the flight will later produce a high-fidelity 3D model. Our method uses a coarse estimate of the scene geometry to plan camera trajectories that: (1) cover the scene as thoroughly as possible; (2) encourage observations of scene geometry from a diverse set of viewing angles; (3) avoid obstacles; and (4) respect a user-specified flight time budget. Our method relies on a mathematical model of scene coverage that exhibits an intuitive diminishing returns property known as submodularity. We leverage this property extensively to design a trajectory planning algorithm that reasons globally about the non-additive coverage reward obtained across a trajectory, jointly with the cost of traveling between views. We evaluate our method by using it to scan three large outdoor scenes, and we perform a quantitative evaluation using a photorealistic video game simulator.", "", "Image-based modeling techniques [1]–[3] can now generate photo-realistic 3D models from images. But it is up to users to provide high quality images with good coverage and view overlap, which makes the data capturing process tedious and time consuming. We seek to automate data capturing for image-based modeling. The core of our system is an iterative linear method to solve the multi-view stereo (MVS) problem quickly and plan the Next-Best-View (NBV) effectively. Our fast MVS algorithm enables online model reconstruction and quality assessment to determine the NBVs on the fly. 
We test our system with a toy unmanned aerial vehicle (UAV) in simulated, indoor and outdoor experiments. Results show that our system improves the efficiency of data acquisition and ensures the completeness of the final model." ] }
1904.09568
2940343960
Image-based modeling and laser scanning are two commonly used approaches for large-scale architectural scene reconstruction nowadays. In order to generate a complete scene reconstruction, an effective way is to completely cover the scene using ground and aerial images, supplemented by laser scanning of certain regions with low texture and complicated structure. Thus, the key issue is to accurately calibrate the cameras and register the laser scans in a unified framework. To this end, we propose a three-step pipeline for complete scene reconstruction by merging images and laser scans. First, images are captured around the architecture in a multi-view and multi-scale way and are fed into a structure-from-motion (SfM) pipeline to generate SfM points. Then, based on the SfM result, the laser scanning locations are automatically planned by considering the textural richness, the structural complexity of the scene, and the spatial layout of the laser scans. Finally, the images and laser scans are accurately merged in a coarse-to-fine manner. Experimental evaluations on two ancient Chinese architecture datasets demonstrate the effectiveness of our proposed complete scene reconstruction pipeline.
However, due to limitations in scanning viewpoints and the inconvenience of data collection, the completeness of scene coverage is hard to guarantee for laser scanning based methods. As a result, several methods have been proposed to achieve complete scene reconstruction from laser point clouds. Self-similar structures @cite_30 or simple building blocks @cite_48 are exploited to reconstruct complete scenes (buildings or facades) from incomplete laser scans. Other methods @cite_37 @cite_18 reconstruct scenes from laser scans based on the Manhattan-world assumption. Though the methods above can achieve quite impressive reconstructions, they either require user interaction @cite_30 @cite_48 or rely on strong assumptions @cite_37 @cite_18 , which limits their scalability.
{ "cite_N": [ "@cite_30", "@cite_48", "@cite_37", "@cite_18" ], "mid": [ "1995050439", "2050195189", "1976952084", "2519823314" ], "abstract": [ "Recent advances in scanning technologies, in particular devices that extract depth through active sensing, allow fast scanning of urban scenes. Such rapid acquisition incurs imperfections: large regions remain missing, significant variation in sampling density is common, and the data is often corrupted with noise and outliers. However, buildings often exhibit large scale repetitions and self-similarities. Detecting, extracting, and utilizing such large scale repetitions provide powerful means to consolidate the imperfect data. Our key observation is that the same geometry, when scanned multiple times over reoccurrences of instances, allow application of a simple yet effective non-local filtering. The multiplicity of the geometry is fused together and projected to a base-geometry defined by clustering corresponding surfaces. Denoising is applied by separating the process into off-plane and in-plane phases. We show that the consolidation of the reoccurrences provides robust denoising and allow reliable completion of missing parts. We present evaluation results of the algorithm on several LiDAR scans of buildings of varying complexity and styles.", "We introduce an interactive tool which enables a user to quickly assemble an architectural model directly over a 3D point cloud acquired from large-scale scanning of an urban scene. The user loosely defines and manipulates simple building blocks, which we call SmartBoxes, over the point samples. These boxes quickly snap to their proper locations to conform to common architectural structures. The key idea is that the building blocks are smart in the sense that their locations and sizes are automatically adjusted on-the-fly to fit well to the point data, while at the same time respecting contextual relations with nearby similar blocks. 
SmartBoxes are assembled through a discrete optimization to balance between two snapping forces defined respectively by a data-fitting term and a contextual term, which together assist the user in reconstructing the architectural model from a sparse and noisy point cloud. We show that a combination of the user's interactive guidance and high-level knowledge about the semantics of the underlying model, together with the snapping forces, allows the reconstruction of structures which are partially or even completely missing from the input.", "We propose a novel approach for the reconstruction of urban structures from 3D point clouds with an assumption of Manhattan World (MW) building geometry; i.e., the predominance of three mutually orthogonal directions in the scene. Our approach works in two steps. First, the input points are classified according to the MW assumption into four local shape types: walls, edges, corners, and edge corners. The classified points are organized into a connected set of clusters from which a volume description is extracted. The MW assumption allows us to robustly identify the fundamental shape types, describe the volumes within the bounding box, and reconstruct visible and occluded parts of the sampled structure. We show results of our reconstruction that has been applied to several synthetic and real-world 3D point data sets of various densities and from multiple viewpoints. Our method automatically reconstructs 3D building models from up to 10 million points in 10 to 60 seconds.", "Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene using a set of well-aligned boxes. We first extract plane hypothesis from the points followed by an iterative refinement step. Then, candidate boxes are obtained by partitioning the space of the point cloud into a non-uniform grid. 
After that, we choose an optimal subset of the candidate boxes to approximate the geometry of the buildings. The contribution of our work is that we transform scene reconstruction into a labeling problem that is solved based on a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method can obtain faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods." ] }
1904.09568
2940343960
Image-based modeling and laser scanning are two commonly used approaches for large-scale architectural scene reconstruction nowadays. In order to generate a complete scene reconstruction, an effective way is to completely cover the scene using ground and aerial images, supplemented by laser scanning of certain regions with low texture and complicated structure. Thus, the key issue is to accurately calibrate the cameras and register the laser scans in a unified framework. To this end, we propose a three-step pipeline for complete scene reconstruction by merging images and laser scans. First, images are captured around the architecture in a multi-view and multi-scale way and are fed into a structure-from-motion (SfM) pipeline to generate SfM points. Then, based on the SfM result, the laser scanning locations are automatically planned by considering the textural richness, the structural complexity of the scene, and the spatial layout of the laser scans. Finally, the images and laser scans are accurately merged in a coarse-to-fine manner. Experimental evaluations on two ancient Chinese architecture datasets demonstrate the effectiveness of our proposed complete scene reconstruction pipeline.
In addition, in order to completely cover the scene with as few laser scans as possible, several methods have been proposed to deal with the issue of optimal terrestrial laser scanner network design @cite_53 @cite_39 @cite_55 @cite_29 . These methods rely on an existing 2D building map @cite_53 @cite_55 @cite_29 or a 3D object model @cite_39 . Several factors are considered during optimization, for example range and incidence angle constraints @cite_53 , sufficient overlap and surface topography between laser scans @cite_39 , or multi-scale and hierarchical viewpoint planning @cite_29 . In this paper, however, the laser scans serve as supplements to the images, and their locations are planned based on the SfM result; accordingly, the textural richness and structural complexity of the scene are considered when planning the laser scanning locations. In this way, accurate and complete reconstruction can be achieved.
{ "cite_N": [ "@cite_55", "@cite_53", "@cite_29", "@cite_39" ], "mid": [ "2756075896", "2097397777", "2806791586", "2434639067" ], "abstract": [ "Abstract. The optimal network design problem has been well addressed in geodesy and photogrammetry but has not received the same attention for terrestrial laser scanner (TLS) networks. The goal of this research is to develop a complete design system that can automatically provide an optimal plan for high-accuracy, large-volume scanning networks. The aim in this paper is to use three heuristic optimization methods, simulated annealing (SA), genetic algorithm (GA) and particle swarm optimization (PSO), to solve the first-order design (FOD) problem for a small-volume indoor network and make a comparison of their performances. The room is simplified as discretized wall segments and possible viewpoints. Each possible viewpoint is evaluated with a score table representing the wall segments visible from each viewpoint based on scanning geometry constraints. The goal is to find a minimum number of viewpoints that can obtain complete coverage of all wall segments with a minimal sum of incidence angles. The different methods have been implemented and compared in terms of the quality of the solutions, runtime and repeatability. The experiment environment was simulated from a room located on University of Calgary campus where multiple scans are required due to occlusions from interior walls. The results obtained in this research show that PSO and GA provide similar solutions while SA doesn’t guarantee an optimal solution within limited iterations. Overall, GA is considered as the best choice for this problem based on its capability of providing an optimal solution and fewer parameters to tune.", "One of the main applications of the terrestrial laser scanner is the visualization, modeling and monitoring of man-made structures like buildings. 
Especially surveying applications require on one hand a quickly obtainable, high resolution point cloud but also need observations with a known and well described quality. To obtain a 3D point cloud, the scene is scanned from different positions around the considered object. The scanning geometry plays an important role in the quality of the resulting point cloud. The ideal set-up for scanning a surface of an object is to position the laser scanner in such a way that the laser beam is near perpendicular to the surface. Due to scanning conditions, such an ideal set-up is in practice not possible. The different incidence angles and ranges of the laser beam on the surface result in 3D points of varying quality. The stand-point of the scanner that gives the best accuracy is generally not known. Using an optimal stand-point of the laser scanner on a scene will improve the quality of individual point measurements and results in a more uniform registered point cloud. The design of an optimum measurement setup is defined such that the optimum stand-points are identified to fulfill predefined quality requirements and to ensure a complete spatial coverage. The additional incidence angle and range constraints on the visibility from a view point ensure that individual scans are not affected by bad scanning geometry effects. A complex and large room that would normally require five view point to be fully covered, would require nineteen view points to obtain full coverage under the range and incidence angle constraints.", "Terrestrial laser scanner (TLS) techniques have been widely adopted in a variety of applications. However, unlike in geodesy or photogrammetry, insufficient attention has been paid to the optimal TLS network design. It is valuable to develop a complete design system that can automatically provide an optimal plan, especially for high-accuracy, large-volume scanning networks. 
To achieve this goal, one should look at the “optimality” of the solution as well as the computational complexity in reaching it. In this paper, a hierarchical TLS viewpoint planning strategy is developed to solve the optimal scanner placement problems. If one targeted object to be scanned is simplified as discretized wall segments, any possible viewpoint can be evaluated by a score table representing its visible segments under certain scanning geometry constraints. Thus, the design goal is to find a minimum number of viewpoints that achieves complete coverage of all wall segments. The efficiency is improved by densifying viewpoints hierarchically, instead of a “brute force” search within the entire workspace. The experiment environments in this paper were simulated from two buildings located on University of Calgary campus. Compared with the “brute force” strategy in terms of the quality of the solutions and the runtime, it is shown that the proposed strategy can provide a scanning network with a compatible quality but with more than a 70 time saving.", "Abstract. Despite the enormous popularity of terrestrial laser scanners in the field of Geodesy, economic aspects in the context of data acquisition are mostly considered intuitively. In contrast to established acquisition techniques, such as tacheometry and photogrammetry, optimisation of the acquisition configuration cannot be conducted based on assumed object coordinates, as these would change in dependence to the chosen viewpoint. Instead, a combinatorial viewpoint planning algorithm is proposed that uses a given 3D-model as an input and simulates laser scans based on predefined viewpoints. The method determines a suitably small subset of viewpoints from which the sampled object surface is preferably large. An extension of the basic algorithm is proposed that only considers subsets of viewpoints that can be registered to a common dataset. 
After exemplification of the method, the expected acquisition time in the field is estimated based on computed viewpoint plans." ] }
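The scanner placement problem surveyed in the related-work passage above is, at its core, a set-cover optimization: choose as few viewpoints as possible whose visibility sets (segments surviving range and incidence-angle constraints) jointly cover all wall segments. The cited papers solve it with heuristics such as GA, PSO, or hierarchical search; the sketch below instead shows the simplest greedy baseline, with hypothetical viewpoint ids and visibility sets, and is not any cited paper's actual algorithm.

```python
# Greedy set-cover sketch for terrestrial laser scanner viewpoint planning.
# All names and data here are illustrative assumptions.

def plan_viewpoints(visibility, n_segments):
    """Greedily pick viewpoints until every wall segment is covered.

    visibility: dict mapping viewpoint id -> set of visible segment ids
                (i.e. segments passing range / incidence-angle checks).
    n_segments: total number of wall segments to cover.
    Returns the list of chosen viewpoint ids, in selection order.
    """
    uncovered = set(range(n_segments))
    chosen = []
    while uncovered:
        # Pick the viewpoint that newly covers the most segments.
        best = max(visibility, key=lambda v: len(visibility[v] & uncovered))
        gain = visibility[best] & uncovered
        if not gain:  # remaining segments are invisible from every viewpoint
            break
        chosen.append(best)
        uncovered -= gain
    return chosen

# Toy example: 6 wall segments, 4 candidate scanner positions.
vis = {
    "A": {0, 1, 2},
    "B": {2, 3},
    "C": {3, 4, 5},
    "D": {1, 4},
}
print(plan_viewpoints(vis, 6))  # -> ['A', 'C'] covers all six segments
```

A real planner would add the pairwise-overlap constraint needed for scan registration and weight each pick by incidence-angle quality rather than raw coverage count.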
1904.09568
2940343960
Image-based modeling and laser scanning are two commonly used approaches for large-scale architectural scene reconstruction nowadays. In order to generate a complete scene reconstruction, an effective way is to completely cover the scene using ground and aerial images, supplemented by laser scanning of certain regions with low texture and complicated structure. Thus, the key issue is to accurately calibrate the cameras and register the laser scans in a unified framework. To this end, we propose a three-step pipeline for complete scene reconstruction by merging images and laser scans. First, images are captured around the architecture in a multi-view and multi-scale way and are fed into a structure-from-motion (SfM) pipeline to generate SfM points. Then, based on the SfM result, the laser scanning locations are automatically planned by considering the textural richness, the structural complexity of the scene, and the spatial layout of the laser scans. Finally, the images and laser scans are accurately merged in a coarse-to-fine manner. Experimental evaluations on two ancient Chinese architecture datasets demonstrate the effectiveness of our proposed complete scene reconstruction pipeline.
Some works propose registering 2D images with 3D laser scans by utilizing low-level (point or line) @cite_1 @cite_15 @cite_47 or high-level (plane) @cite_10 features, by which the 3D laser points can be textured from the registered 2D images. Based on the registered 2D images and 3D laser scans, Li et al. @cite_40 propose fusing images and laser points by leveraging their respective advantages to obtain a complete, textured, and regularized urban facade reconstruction. In addition, in the photogrammetry @cite_33 , computer vision @cite_14 @cite_7 , and computer graphics @cite_50 communities, several benchmarks containing both images and laser scans have been proposed for evaluating reconstruction methods. However, the laser scans mostly serve as ground truth that is relatively independent of the images. Several methods @cite_20 @cite_17 @cite_11 share a similar motivation with ours, integrating images and laser scans for complete scene reconstruction. These methods are based on 3D-3D registration, performed using either GCPs @cite_20 or the ICP algorithm @cite_17 @cite_11 . In comparison, our approach is based on image synthesis and matching, so there is no large 3D-level dissimilarity in density, accuracy, or completeness, and a more accurate merging is achieved.
{ "cite_N": [ "@cite_47", "@cite_14", "@cite_11", "@cite_33", "@cite_7", "@cite_1", "@cite_40", "@cite_50", "@cite_15", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "2066886972", "2115362620", "2028525317", "2090020933", "2741885505", "2126960283", "2108841317", "2738551266", "2014137629", "2136755135", "2062045664", "" ], "abstract": [ "Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model up-dating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two different data from different sensor sources. As we use iPhone camera images which are taken in front of the interested urban structure by the application user and the high resolution LIDAR point clouds of the acquired by an airborne laser sensor. After finding the photo capturing position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image which has only grayscale intensity levels according to the distance from the image acquisition position. We benefit from local features for registering the iPhone image to the generated range image. In this article, we have applied the registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh which is generated from the LIDAR point cloud. Our experimental results indicate possible usage of the proposed algorithm framework for 3D urban map updating and enhancing purposes.", "In this paper we want to start the discussion on whether image based 3D modelling techniques can possibly be used to replace LIDAR systems for outdoor 3D data acquisition. 
Two main issues have to be addressed in this context: (i) camera calibration (internal and external) and (ii) dense multi-view stereo. To investigate both, we have acquired test data from outdoor scenes both with LIDAR and cameras. Using the LIDAR data as reference we estimated the ground-truth for several scenes. Evaluation sets are prepared to evaluate different aspects of 3D model building. These are: (i) pose estimation and multi-view stereo with known internal camera parameters; (ii) camera calibration and multi-view stereo with the raw images as the only input and (iii) multi-view stereo.", "Abstract. Three-dimensional (3D) models of historical buildings are created for documentation and virtual realization of them. Laser scanning and photogrammetry are extensively used to perform for these aims. The selection of the method that will be used in threedimensional modelling study depends on the scale and shape of the object, and also applicability of the method. Laser scanners are high cost instruments. However, the cameras are low cost instruments. The off-the-shelf cameras are used for taking the photogrammetric images. The camera is imaging the object details by carrying on hand while the laser scanner makes ground based measurement. Laser scanner collect high density spatial data in a short time from the measurement area. On the other hand, image based 3D (IB3D) measurement uses images to create 3D point cloud data. The image matching and the creation of the point cloud can be done automatically. Historical buildings include more complex details. Thus, all details cannot be measured by terrestrial laser scanner (TLS) due to the blocking the details with each others. Especially, the artefacts which have complex shapes cannot be measured in full details. They cause occlusion on the point cloud model. However it is possible to record photogrammetric images and creation IB3D point cloud for these areas. 
Thus the occlusion free 3D model is created by the integration of point clouds originated from the TLS and photogrammetric images. In this study, usability of laser scanning in conjunction with image based modelling for creation occlusion free three-dimensional point cloud model of historical building was evaluated. The IB3D point cloud was created in the areas that could not been measured by TLS. Then laser scanning and IB3D point clouds were integrated in the common coordinate system. The registration point clouds were performed with the iterative closest point (ICP) and georeferencing methods. Accuracy of the registration was evaluated by convergency and its standard deviations for the ICP and residuals on the control points for the georeferencing method.", "Airborne high resolution oblique imagery systems and RPAS UAVs are very promising technologies that will keep on influencing the development of geomatics in the future years closing the gap between terrestrial and classical aerial acquisitions. These two platforms are also a promising solution for National Mapping and Cartographic Agencies (NMCA) as they allow deriving complementary mapping information. Although the interest for the registration and integration of aerial and terrestrial data is constantly increasing, only limited work has been truly performed on this topic. Several investigations still need to be undertaken concerning algorithms ability for automatic co-registration, accurate point cloud generation and feature extraction from multiplatform image data. One of the biggest obstacles is the non-availability of reliable and free datasets to test and compare new algorithms and procedures. The Scientific Initiative “ISPRS benchmark for multi-platform photogrammetry”, run in collaboration with EuroSDR, aims at collecting and sharing state-of-the-art multi-sensor data (oblique airborne, UAV-based and terrestrial images) over an urban area. 
These datasets are used to assess different algorithms and methodologies for image orientation and dense matching. As ground truth, Terrestrial Laser Scanning (TLS), Aerial Laser Scanning (ALS) as well as topographic networks and GNSS points were acquired to compare 3D coordinates on check points (CPs) and evaluate cross sections and residuals on generated point cloud surfaces. In this paper, the acquired data, the pre-processing steps, the evaluation procedures as well as some preliminary results achieved with commercial software will be presented.", "Motivated by the limitations of existing multi-view stereo benchmarks, we present a novel dataset for this task. Towards this goal, we recorded a variety of indoor and outdoor scenes using a high-precision laser scanner and captured both high-resolution DSLR imagery as well as synchronized low-resolution stereo videos with varying fields-of-view. To align the images with the laser scans, we propose a robust technique which minimizes photometric errors conditioned on the geometry. In contrast to previous datasets, our benchmark provides novel challenges and covers a diverse set of viewpoints and scene types, ranging from natural scenes to man-made indoor and outdoor environments. Furthermore, we provide data at significantly higher temporal and spatial resolution. Our benchmark is the first to cover the important use case of hand-held mobile devices while also providing high-resolution DSLR camera images. We make our datasets and an online evaluation server available at http: www.eth3d.net.", "The photorealistic modeling of large-scale objects, such as urban scenes, requires the combination of range sensing technology and digital photography. In this paper, we attack the key problem of camera pose estimation, in an automatic and efficient way. First, the camera orientation is recovered by matching vanishing points (extracted from 2D images) with 3D directions (derived from a 3D range model). 
Then, a hypothesis-and-test algorithm computes the camera positions with respect to the 3D range model by matching corresponding 2D and 3D linear features. The camera positions are further optimized by minimizing a line-to-line distance. The advantage of our method over earlier work has to do with the fact we do not need to rely on extracted planar facades, or other higher-order features; we are utilizing low- level linear features. That makes this method more general, robust, and efficient. Our method can also be enhanced by the incorporation of traditional structure-from-motion algorithms. We have also developed a user-interface for allowing users to accurately texture-map 2D images onto 3D range models at interactive rates. We have tested our system in a large variety of urban scenes.", "We present a method for fusing two acquisition modes, 2D photographs and 3D LiDAR scans, for depth-layer decomposition of urban facades. The two modes have complementary characteristics: point cloud scans are coherent and inherently 3D, but are often sparse, noisy, and incomplete; photographs, on the other hand, are of high resolution, easy to acquire, and dense, but view-dependent and inherently 2D, lacking critical depth information. In this paper we use photographs to enhance the acquired LiDAR data. Our key observation is that with an initial registration of the 2D and 3D datasets we can decompose the input photographs into rectified depth layers. We decompose the input photographs into rectangular planar fragments and diffuse depth information from the corresponding 3D scan onto the fragments by solving a multi-label assignment problem. Our layer decomposition enables accurate repetition detection in each planar layer, using which we propagate geometry, remove outliers and enhance the 3D scan. Finally, the algorithm produces an enhanced, layered, textured model. 
We evaluate our algorithm on complex multi-planar building facades, where direct autocorrelation methods for repetition detection fail. We demonstrate how 2D photographs help improve the 3D scans by exploiting data redundancy, and transferring high level structural information to (plausibly) complete large missing regions.", "We present a benchmark for image-based 3D reconstruction. The benchmark sequences were acquired outside the lab, in realistic conditions. Ground-truth data was captured using an industrial laser scanner. The benchmark includes both outdoor scenes and indoor environments. High-resolution video sequences are provided as input, supporting the development of novel pipelines that take advantage of video input to increase reconstruction fidelity. We report the performance of many image-based 3D reconstruction pipelines on the new benchmark. The results point to exciting challenges and opportunities for future work.", "This paper deals with a fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused resulting dataset, called \"textured range image\", provides more reliable information about the investigated object for conservators and historians, than using both datasets separately. A simple example of fusion of a range and panoramic images, both obtained in St. Francis Xavier Church in town Opařany, is given here. Firstly, we describe the process of data acquisition, then the processing of both datasets into a proper format for following fusion and the process of fusion. The process of fusion can be divided into a two main parts: transformation and remapping. In the first, transformation, part, both images are related by matching similar features detected on both images with a proper detector, which results in transformation matrix enabling transformation of the range image onto a panoramic image. 
Then, the range data are remapped from the range image space into a panoramic image space and stored as an additional \"range\" channel. The process of image fusion is validated by comparing similar features extracted on both datasets.", "We are building a system that can automatically acquire 3D range scans and 2D images to build geometrically correct, texture mapped 3D models of urban environments. This paper deals with the problem of automatically registering the 3D range scans with images acquired at other times and with unknown camera calibration and location. The method involves the utilization of parallelism and orthogonality constraints that naturally exist in urban environments. We present results for building a texture mapped 3-D model of an urban building.", "Abstract. Recognizing the various advantages offered by 3D new metric survey technologies in the Cultural Heritage documentation phase, this paper presents some tests of 3D model generation, using different methods, and their possible fusion. With the aim to define potentialities and problems deriving from integration or fusion of metric data acquired with different survey techniques, the elected test case is an outstanding Cultural Heritage item, presenting both widespread and specific complexities connected to the conservation of historical buildings. The site is the Staffarda Abbey, the most relevant evidence of medieval architecture in Piedmont. This application faced one of the most topical architectural issues consisting in the opportunity to study and analyze an object as a whole, from twice location of acquisition sensors, both the terrestrial and the aerial one. In particular, the work consists in the evaluation of chances deriving from a simple union or from the fusion of different 3D cloudmodels of the abbey, achieved by multi-sensor techniques. 
The aerial survey is based on a photogrammetric RPAS (Remotely Piloted Aircraft System) flight, while the terrestrial acquisition was carried out by a laser scanning survey. Both techniques allowed us to extract and process different point clouds and to generate the consequent continuous 3D models, which are characterized by different scales, that is to say different resolutions and diverse levels of detail and precision. Starting from these models, the proposed process, applied to a sample area of the building, aimed to test the generation of a unique 3D model through a fusion of the different sensor point clouds. Surely, the descriptive potential and the metric and thematic gains achievable with the final model exceeded those offered by the two detached models.", "" ] }
1904.09843
2939666461
Hand gestures form an intuitive means of interaction in Mixed Reality (MR) applications. However, accurate gesture recognition can be achieved only through state-of-the-art deep learning models or with the use of expensive sensors. Despite the robustness of these deep learning models, they are generally computationally expensive and obtaining real-time performance on-device is still a challenge. To this end, we propose a novel lightweight hand gesture recognition framework that works in First Person View for wearable devices. The models are trained on a GPU machine and ported on an Android smartphone for its use with frugal wearable devices such as the Google Cardboard and VR Box. The proposed hand gesture recognition framework is driven by a cascade of state-of-the-art deep learning models: MobileNetV2 for hand localisation, our custom fingertip regression architecture followed by a Bi-LSTM model for gesture classification. We extensively evaluate the framework on our EgoGestAR dataset. The overall framework works in real-time on mobile devices and achieves a classification accuracy of 80 on EgoGestAR video dataset with an average latency of only 0.12 s.
The efficacy of hand gestures as an interaction modality for MR applications on smartphones and HMDs has been extensively explored in the past @cite_4 . Marker-based finger tracking @cite_23 has been established as an effective way of directly manipulating objects in MR applications. However, most of this work has relied either on skin colour or on hand-crafted features for hand segmentation and interest point detection, followed by optical flow for tracking.
{ "cite_N": [ "@cite_4", "@cite_23" ], "mid": [ "2051495970", "2058352122" ], "abstract": [ "The goal of this research is to explore new interaction metaphors for augmented reality on mobile phones, i.e. applications where users look at the live image of the device’s video camera and 3D virtual objects enrich the scene that they see. Common interaction concepts for such applications are often limited to pure 2D pointing and clicking on the device’s touch screen. Such an interaction with virtual objects is not only restrictive but also difficult, for example, due to the small form factor. In this article, we investigate the potential of finger tracking for gesture-based interaction. We present two experiments evaluating canonical operations such as translation, rotation, and scaling of virtual objects with respect to performance (time and accuracy) and engagement (subjective user feedback). Our results indicate a high entertainment value, but low accuracy if objects are manipulated in midair, suggesting great possibilities for leisure applications but limited usage for serious tasks.", "This paper presents a technique for natural, fingertip-based interaction with virtual objects in Augmented Reality (AR) environments. We use image processing software and finger- and hand-based fiducial markers to track gestures from the user, stencil buffering to enable the user to see their fingers at all times, and fingertip-based haptic feedback devices to enable the user to feel virtual objects. Unlike previous AR interfaces, this approach allows users to interact with virtual content using natural hand gestures. The paper describes how these techniques were applied in an urban planning interface, and also presents preliminary informal usability results." ] }
1904.09843
2939666461
Hand gestures form an intuitive means of interaction in Mixed Reality (MR) applications. However, accurate gesture recognition can be achieved only through state-of-the-art deep learning models or with the use of expensive sensors. Despite the robustness of these deep learning models, they are generally computationally expensive and obtaining real-time performance on-device is still a challenge. To this end, we propose a novel lightweight hand gesture recognition framework that works in First Person View for wearable devices. The models are trained on a GPU machine and ported on an Android smartphone for its use with frugal wearable devices such as the Google Cardboard and VR Box. The proposed hand gesture recognition framework is driven by a cascade of state-of-the-art deep learning models: MobileNetV2 for hand localisation, our custom fingertip regression architecture followed by a Bi-LSTM model for gesture classification. We extensively evaluate the framework on our EgoGestAR dataset. The overall framework works in real-time on mobile devices and achieves a classification accuracy of 80 on EgoGestAR video dataset with an average latency of only 0.12 s.
Accurate hand segmentation is very important in all First-Person View (FPV) gesture recognition applications. In early attempts, it was observed that the @math colour space allows better clustering of hand skin pixel data @cite_14 . In @cite_17 , super-pixels with several features are extracted using the SLIC algorithm to compute hand segmentation masks. The work in @cite_7 examined the response of filters to local appearance features in skin-colour regions. Most of these approaches face the following challenges: (i) movement of the hand relative to the HMD renders the hand blurry, which makes it difficult to detect and track, thereby impeding classification accuracy; (ii) sudden changes in illumination conditions and the presence of skin-like colours and texture in the background cause algorithms that depend on skin features to fail.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_17" ], "mid": [ "1539870437", "2063362452", "2050709211" ], "abstract": [ "The emergence of new pervasive wearable technologies (e.g. action cameras and smart glasses) calls attention to the so called First Person Vision (FPV). In the future, more and more everyday-life videos will be shot from a first-person point of view, overturning the classical fixed-camera understanding of Vision, specializing the existing knowledge of moving cameras and bringing new challenges in the field of video processing. The trend in research is going to be oriented towards a new type of computer vision, centred on moving sensors and driven by the need for new applications for wearable devices. We identify hand tracking and gesture recognition as an essential topic in this field, motivated by the simple realization that we often look at our hand, even while performing the simplest tasks in everyday life. In addition, the next frontier in user interfaces are hands-free devices. In this work we argue that applications based on FPV may involve information fusion at various complexity and abstraction levels, ranging from pure image processing to inference over patterns. We address the lowest, by proposing a first investigation on hand detection from a first-person point of view sensor and some preliminary results obtained fusing colour and optic flow information.", "We address the task of pixel-level hand detection in the context of ego-centric cameras. Extracting hand regions in ego-centric videos is a critical step for understanding hand-object manipulation and analyzing hand-eye coordination. However, in contrast to traditional applications of hand detection, such as gesture interfaces or sign-language recognition, ego-centric videos present new challenges such as rapid changes in illuminations, significant camera motion and complex hand-object manipulations. 
To quantify the challenges and performance in this new domain, we present a fully labeled indoor outdoor ego-centric hand detection benchmark dataset containing over 200 million labeled pixels, which contains hand images taken under various illumination conditions. Using both our dataset and a publicly available ego-centric indoors dataset, we give extensive analysis of detection performance using a wide range of local appearance features. Our analysis highlights the effectiveness of sparse features and the importance of modeling global illumination. We propose a modeling strategy based on our findings and show that our model outperforms several baseline approaches.", "We present a novel method for monocular hand gesture recognition in ego-vision scenarios that deals with static and dynamic gestures and can achieve high accuracy results using a few positive samples. Specifically, we use and extend the dense trajectories approach that has been successfully introduced for action recognition. Dense features are extracted around regions selected by a new hand segmentation technique that integrates superpixel classification, temporal and spatial coherence. We extensively testour gesture recognition and segmentation algorithms on public datasets and propose a new dataset shot with a wearable camera. In addition, we demonstrate that our solution can work in near real-time on a wearable device." ] }
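As a concrete illustration of the skin-colour segmentation these works build on, the sketch below converts RGB to YCbCr (BT.601) and thresholds the chroma channels. The source elides the exact colour space (`@math`), so both the choice of YCbCr and the Cb/Cr ranges here are illustrative assumptions, not values taken from the cited papers.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an H x W x 3 uint8 RGB image to float YCbCr (ITU-R BT.601)."""
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of 'skin-like' pixels; the Cb/Cr ranges are illustrative."""
    ycbcr = rgb_to_ycbcr(img)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Thresholding chroma rather than raw RGB is what gives such methods partial robustness to brightness changes, but, as noted above, skin-like background colours still defeat it.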
1904.09843
2939666461
Hand gestures form an intuitive means of interaction in Mixed Reality (MR) applications. However, accurate gesture recognition can be achieved only through state-of-the-art deep learning models or with the use of expensive sensors. Despite the robustness of these deep learning models, they are generally computationally expensive and obtaining real-time performance on-device is still a challenge. To this end, we propose a novel lightweight hand gesture recognition framework that works in First Person View for wearable devices. The models are trained on a GPU machine and ported on an Android smartphone for its use with frugal wearable devices such as the Google Cardboard and VR Box. The proposed hand gesture recognition framework is driven by a cascade of state-of-the-art deep learning models: MobileNetV2 for hand localisation, our custom fingertip regression architecture followed by a Bi-LSTM model for gesture classification. We extensively evaluate the framework on our EgoGestAR dataset. The overall framework works in real-time on mobile devices and achieves a classification accuracy of 80 on EgoGestAR video dataset with an average latency of only 0.12 s.
To this end, we look at utilising current state-of-the-art object detection architectures such as MobileNetV2 @cite_16 , YOLOv2 @cite_27 and Faster R-CNN @cite_2 for hand detection. Recently, a Faster R-CNN based hand detector was proposed in @cite_20 , which uses a cascaded CNN approach for jointly detecting the hand and the keypoint using colour space information. A dual-target CNN takes input from the Faster R-CNN and localises the index fingertip and the finger joint.
{ "cite_N": [ "@cite_27", "@cite_16", "@cite_20", "@cite_2" ], "mid": [ "2951433694", "2963163009", "2513258067", "2613718673" ], "abstract": [ "We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that don't have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. But YOLO can detect more than just 200 classes; it predicts detections for more than 9000 different object categories. And it still runs in real-time.", "In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. 
The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], VOC image segmentation [3]. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as actual latency, and the number of parameters.", "With the heated trend of augmented reality (AR) and popularity of smart head-mounted devices, the development of natural human device interaction is important, especially the hand gesture based interaction. This paper presents a solution for the point gesture based interaction in the egocentric vision and its application. Firstly, a dataset named EgoFinger is established focusing on the pointing gesture for the egocentric vision. We discuss the dataset collection detail and as well the comprehensive analysis of this dataset, including background and foreground color distribution, hand occurrence likelihood, scale and pointing angle distribution of hand and finger, and the manual labeling error analysis. The analysis shows that the dataset covers substantial data samples in various environments and dynamic hand shapes. Furthermore, we propose a two-stage Faster R-CNN based hand detection and dual-target fingertip detection framework. Comparing with state-of-art tracking and detection algorithm, it performs the best in both hand and fingertip detection. With the large-scale dataset, we achieve fingertip detection error at about 12.22 pixels in 640px × 480px video frame. 
Finally, using the fingertip detection result, we design and implement an input system for the egocentric vision, i.e., Ego-Air-Writing. By considering the fingertip as a pen, the user with wearable glass can write character in the air and interact with system using simple hand gestures.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn." ] }
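Detectors such as those named above are conventionally matched to ground-truth hand boxes by intersection-over-union (IoU); a minimal sketch of that standard metric (not code from any of the cited papers) is:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes are disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is typically counted as correct when its IoU with the ground truth exceeds a threshold (0.5 is the common PASCAL VOC convention).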
1904.09843
2939666461
Hand gestures form an intuitive means of interaction in Mixed Reality (MR) applications. However, accurate gesture recognition can be achieved only through state-of-the-art deep learning models or with the use of expensive sensors. Despite the robustness of these deep learning models, they are generally computationally expensive and obtaining real-time performance on-device is still a challenge. To this end, we propose a novel lightweight hand gesture recognition framework that works in First Person View for wearable devices. The models are trained on a GPU machine and ported on an Android smartphone for its use with frugal wearable devices such as the Google Cardboard and VR Box. The proposed hand gesture recognition framework is driven by a cascade of state-of-the-art deep learning models: MobileNetV2 for hand localisation, our custom fingertip regression architecture followed by a Bi-LSTM model for gesture classification. We extensively evaluate the framework on our EgoGestAR dataset. The overall framework works in real-time on mobile devices and achieves a classification accuracy of 80 on EgoGestAR video dataset with an average latency of only 0.12 s.
There are many classification approaches proposed in the context of hand gesture recognition. DTW- and HMM-based classifiers @cite_9 have been used with a stereo camera setup to recognise third-person view gestures. Support Vector Machines have also been explored for hand gesture recognition via bag-of-features @cite_22 . All such classifiers work well given a small set of sufficiently distinct gestures but fail to extract discriminative features as one scales up to large datasets containing gestures with high inter-class similarity.
{ "cite_N": [ "@cite_9", "@cite_22" ], "mid": [ "2086223995", "2119656522" ], "abstract": [ "This paper presents a comparison of two real-time hand gesture recognition systems. One system utilizes a binocular stereo camera set-up while the other system utilizes a combination of a depth camera and an inertial sensor. The latter system is a dual-modality system as it utilizes two different types of sensors. These systems have been previously developed in the Signal and Image Processing Laboratory at the University of Texas at Dallas and the details of the algorithms deployed in these systems are reported in previous papers. In this paper, a comparison is carried out between these two real-time systems in order to examine which system performs better for the same set of hand gestures under realistic conditions.", "This paper presents a novel and real-time system for interaction with an application or video game via hand gestures. Our system includes detecting and tracking bare hand in cluttered background using skin detection and hand posture contour comparison algorithm after face subtraction, recognizing hand gestures via bag-of-features and multiclass support vector machine (SVM) and building a grammar that generates gesture commands to control an application. In the training stage, after extracting the keypoints for every training image using the scale invariance feature transform (SIFT), a vector quantization technique will map keypoints from every training image into a unified dimensional histogram vector (bag-of-words) after K-means clustering. This histogram is treated as an input vector for a multiclass SVM to build the training classifier. 
In the testing stage, for every frame captured from a webcam, the hand is detected using our algorithm, then, the keypoints are extracted for every small image that contains the detected hand gesture only and fed into the cluster model to map them into a bag-of-words vector, which is finally fed into the multiclass SVM training classifier to recognize the hand gesture." ] }
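As a reference point for the DTW-based matching mentioned above, the sketch below computes the classic dynamic-time-warping distance between two 2D point sequences (e.g., fingertip trajectories). It is a didactic O(n·m) dynamic programme, not the cited systems' implementation.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two sequences of 2D points,
    with Euclidean local cost and the standard step pattern."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(seq_a[i - 1], seq_b[j - 1])
            # Extend the cheapest of the three admissible alignments.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

A nearest-neighbour classifier over such distances handles variable gesture speed well, which is precisely why DTW suits small gesture vocabularies but scales poorly to large, similar-looking gesture sets.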
1904.09843
2939666461
Hand gestures form an intuitive means of interaction in Mixed Reality (MR) applications. However, accurate gesture recognition can be achieved only through state-of-the-art deep learning models or with the use of expensive sensors. Despite the robustness of these deep learning models, they are generally computationally expensive and obtaining real-time performance on-device is still a challenge. To this end, we propose a novel lightweight hand gesture recognition framework that works in First Person View for wearable devices. The models are trained on a GPU machine and ported on an Android smartphone for its use with frugal wearable devices such as the Google Cardboard and VR Box. The proposed hand gesture recognition framework is driven by a cascade of state-of-the-art deep learning models: MobileNetV2 for hand localisation, our custom fingertip regression architecture followed by a Bi-LSTM model for gesture classification. We extensively evaluate the framework on our EgoGestAR dataset. The overall framework works in real-time on mobile devices and achieves a classification accuracy of 80 on EgoGestAR video dataset with an average latency of only 0.12 s.
Several works @cite_25 use depth information from sensors such as the Microsoft Kinect, which restricts their application in head-mounted devices. Moreover, most depth sensors perform poorly in the presence of strong specular reflection, direct sunlight and incandescent light, and in outdoor environments, due to the presence of infrared radiation @cite_24 . On-device gesture recognition is especially challenging due to the limited sensors present on a smartphone. A recent work @cite_5 uses a thermographic camera for detecting the infrared radiation from the hand. This method needs additional hardware in the form of an infrared transducer. To the best of our knowledge, ours is the first attempt to build an on-device gesture classification framework for mobile devices.
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_25" ], "mid": [ "1913661415", "2805869227", "2075156252" ], "abstract": [ "With the introduction of the Microsoft Kinect for Windows v2 (Kinect v2), an exciting new sensor is available to robotics and computer vision researchers. Similar to the original Kinect, the sensor is capable of acquiring accurate depth images at high rates. This is useful for robot navigation as dense and robust maps of the environment can be created. Opposed to the original Kinect working with the structured light technology, the Kinect v2 is based on the time-of-flight measurement principle and might also be used outdoors in sunlight. In this paper, we evaluate the application of the Kinect v2 depth sensor for mobile robot navigation. The results of calibrating the intrinsic camera parameters are presented and the minimal range of the depth sensor is examined. We analyze the data quality of the measurements for indoors and outdoors in overcast and direct sunlight situations. To this end, we introduce empirically derived noise models for the Kinect v2 sensor in both axial and lateral directions. The noise models take the measurement distance, the angle of the observed surface, and the sunlight incidence angle into account. These models can be used in post-processing to filter the Kinect v2 depth images for a variety of applications.", "Gesture recognition systems for detecting gesture commands in light conditions and in dark conditions including a computing system having a processor and a thermographic camera configured to detect infrared radiation from a gesture made by a user and communicate gesture image information to the processor for carrying out a computer-readable gesture command are shown and described. In some examples, the computing system and the thermographic camera are supported on an eyewear article frame. In some other examples, the computing system and the thermographic camera are components of a mobile device. 
In even other examples, the computing system and the thermographic camera are components of a desk top computer or a laptop computer.", "We present a novel method for real-time continuous pose recovery of markerless complex articulable objects from a single depth image. Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction, and finally an inverse kinematics stage for stable real-time pose recovery. As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model." ] }
1904.09757
2938908220
This paper proposes a novel Non-Local Attention Optimized Deep Image Compression (NLAIC) framework, which is built on top of the popular variational auto-encoder (VAE) structure. Our NLAIC framework embeds non-local operations in the encoders and decoders for both image and latent feature probability information (known as hyperprior) to capture both local and global correlations, and apply attention mechanism to generate masks that are used to weigh the features for the image and hyperprior, which implicitly adapt bit allocation for different features based on their importance. Furthermore, both hyperpriors and spatial-channel neighbors of the latent features are used to improve entropy coding. The proposed model outperforms the existing methods on Kodak dataset, including learned (e.g., Balle2019, Balle2018) and conventional (e.g., BPG, JPEG2000, JPEG) image compression methods, for both PSNR and MS-SSIM distortion metrics.
Non-local Operations. Most traditional filters (such as Gaussian and mean) process the data locally, using a weighted average of spatially neighboring pixels, which usually produces over-smoothed reconstructions. Classical non-local methods for image restoration problems (e.g., low-rank modeling @cite_23 , joint sparsity @cite_9 and non-local means @cite_7 ) have shown superior efficiency for quality improvement by exploiting non-local correlations. Recently, non-local operations have been included in deep neural networks (DNNs) for video classification @cite_15 and image restoration (e.g., denoising, artifact removal and super-resolution) @cite_19 @cite_10 , with significant performance improvements reported. It is also worth pointing out that non-local operations have been applied in other scenarios, such as intra block copy in the screen content extension of High-Efficiency Video Coding (HEVC) @cite_18 .
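To make the non-local idea concrete, the sketch below is a toy version of the classical non-local means filter @cite_7 : each pixel becomes a patch-similarity-weighted average over a search window, rather than a purely local average. The patch size, window size, and decay parameter h are illustrative; this is a didactic O(N·search²·patch²) loop, not an optimised implementation.

```python
import numpy as np

def nl_means(img, patch=1, search=3, h=10.0):
    """Toy non-local means on a 2D grayscale image. Each pixel is averaged
    with pixels whose surrounding (2*patch+1)^2 patches look similar;
    h controls how fast the weight decays with patch dissimilarity."""
    img = img.astype(np.float64)
    rows, cols = img.shape
    pad = patch + search
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    for y in range(rows):
        for x in range(cols):
            cy, cx = y + pad, x + pad
            ref = padded[cy - patch:cy + patch + 1, cx - patch:cx + patch + 1]
            num, den = 0.0, 0.0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ny, nx = cy + dy, cx + dx
                    cand = padded[ny - patch:ny + patch + 1,
                                  nx - patch:nx + patch + 1]
                    # Weight falls off with squared patch distance.
                    w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                    num += w * padded[ny, nx]
                    den += w
            out[y, x] = num / den
    return out
```

Because the weights depend on patch content rather than spatial distance alone, repeated structures far from the pixel can contribute to its estimate, which is the same intuition the non-local neural modules above build into learned features.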
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_9", "@cite_19", "@cite_23", "@cite_15", "@cite_10" ], "mid": [ "2509267076", "2097073572", "2536599074", "2962737939", "2048695508", "2101700394", "" ], "abstract": [ "With the emerging applications such as online gaming and Wi-Fi display, screen content video, including computer generated text, graphics and animations, becomes more popular than ever. Traditional video coding technologies typically were developed based on models that fit into natural, camera-captured video. The distinct characteristics exhibited between these two types of contents necessitate the exploration of coding efficiency improvement given that new tools can be developed specially for screen content video. The HEVC Screen Content Coding Extensions (HEVC SCC) have been developed to incorporate such new coding tools in order to achieve better compression efficiency. In this paper, intra block copy (IBC, or intra picture block compensation), also named current picture referencing (CPR) in HEVC SCC, is introduced and discussed. This tool is very efficient for coding of screen content video in that repeated patterns in text and graphics rich content occur frequently within the same picture. Having a previously reconstructed block with equal or similar pattern as a predictor can effectively reduce the prediction error and therefore improve coding efficiency. Simulation results show that up to 50 BD rate reduction in all intra coding can be achieved with intra block copy enabled, compared to the HEVC reference encoder without this tool. Significant BD rate reductions for other coding configurations are also observed.", "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. 
Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.", "We propose in this paper to unify two different approaches to image restoration: On the one hand, learning a basis set (dictionary) adapted to sparse signal descriptions has proven to be very effective in image reconstruction and classification tasks. On the other hand, explicitly exploiting the self-similarities of natural images has led to the successful non-local means approach to image restoration. We propose simultaneous sparse coding as a framework for combining these two approaches in a natural manner. This is achieved by jointly decomposing groups of similar signals on subsets of the learned dictionary. Experimental results in image denoising and demosaicking tasks with synthetic and real noise show that the proposed method outperforms the state of the art, making it possible to effectively restore raw images from digital cameras at a reasonable speed and memory cost.", "Many classic methods have shown non-local self-similarity in natural images to be an effective prior for image restoration. However, it remains unclear and challenging to make use of this intrinsic property via deep networks. In this paper, we propose a non-local recurrent network (NLRN) as the first attempt to incorporate non-local operations into a recurrent neural network (RNN) for image restoration. The main contributions of this work are: (1) Unlike existing methods that measure self-similarity in an isolated manner, the proposed non-local module can be flexibly integrated into existing deep networks for end-to-end training to capture deep feature correlation between each location and its neighborhood. (2) We fully employ the RNN structure for its parameter efficiency and allow deep feature correlation to be propagated along adjacent recurrent states. 
This new design boosts robustness against inaccurate correlation estimation due to severely degraded images. (3) We show that it is essential to maintain a confined neighborhood for computing deep feature correlation given degraded images. This is in contrast to existing practice that deploys the whole image. Extensive experiments on both image denoising and super-resolution tasks are conducted. Thanks to the recurrent non-local operations and correlation propagation, the proposed NLRN achieves superior results to state-of-the-art methods with many fewer parameters.", "As a convex relaxation of the low rank matrix factorization problem, the nuclear norm minimization has been attracting significant research interest in recent years. The standard nuclear norm minimization regularizes each singular value equally to pursue the convexity of the objective function. However, this greatly restricts its capability and flexibility in dealing with many practical problems (e.g., denoising), where the singular values have clear physical meanings and should be treated differently. In this paper we study the weighted nuclear norm minimization (WNNM) problem, where the singular values are assigned different weights. The solutions of the WNNM problem are analyzed under different weighting conditions. We then apply the proposed WNNM algorithm to image denoising by exploiting the image nonlocal self-similarity. Experimental results clearly show that the proposed WNNM algorithm outperforms many state-of-the-art denoising algorithms such as BM3D in terms of both quantitative measure and visual perception quality.", "", "" ] }
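The related-work passage above contrasts local filtering with non-local operations, where every position attends to all other positions via softmax-normalized pairwise similarity plus a residual connection. As a minimal sketch of that idea (not the NLAIC implementation itself; the function name, toy shapes, and plain-Python representation are illustrative assumptions), a dot-product non-local block over a flattened feature map can be written as:

```python
import math

def nonlocal_block(x):
    """Simplified non-local (self-attention) operation.

    x: list of N feature vectors (each a list of C floats), i.e. a
    flattened feature map with N spatial positions and C channels.
    Each output position is a softmax-weighted average over ALL
    positions (dot-product similarity), added back to the input as a
    residual -- the basic pattern behind non-local network blocks.
    """
    n = len(x)
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    out = []
    for i in range(n):
        sims = [dot(x[i], x[j]) for j in range(n)]
        m = max(sims)                          # subtract max for numerical stability
        w = [math.exp(s - m) for s in sims]
        z = sum(w)
        w = [wi / z for wi in w]               # softmax over all N positions
        agg = [sum(w[j] * x[j][c] for j in range(n))
               for c in range(len(x[i]))]      # global weighted average per channel
        out.append([xi + ai for xi, ai in zip(x[i], agg)])  # residual connection
    return out

# Toy usage: 3 positions, 2 channels
y = nonlocal_block([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(len(y), len(y[0]))  # 3 2
```

Because every position aggregates information from the whole map rather than a local window, this operation captures the long-range correlations that local Gaussian or mean filters miss, at O(N^2) cost in the number of positions.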