aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1705.00609 | 2611292810 | In domain adaptation, maximum mean discrepancy (MMD) has been widely adopted as a discrepancy metric between the distributions of source and target domains. However, existing MMD-based domain adaptation methods generally ignore the changes of class prior distributions, i.e., class weight bias across domains. This remains an open problem but ubiquitous for domain adaptation, which can be caused by changes in sample selection criteria and application scenarios. We show that MMD cannot account for class weight bias and results in degraded domain adaptation performance. To address this issue, a weighted MMD model is proposed in this paper. Specifically, we introduce class-specific auxiliary weights into the original MMD for exploiting the class prior probability on source and target domains, whose challenge lies in the fact that the class label in target domain is unavailable. To account for it, our proposed weighted MMD model is defined by introducing an auxiliary weight for each class in the source domain, and a classification EM algorithm is suggested by alternating between assigning the pseudo-labels, estimating auxiliary weights and updating model parameters. Extensive experiments demonstrate the superiority of our weighted MMD over conventional MMD for domain adaptation. | Due to the unavailability of labels in the target domain, a commonly used strategy for UDA is to learn a domain-invariant representation by minimizing the discrepancy between the domain distributions. Maximum Mean Discrepancy (MMD) is an effective non-parametric metric for comparing distributions based on two sets of data @cite_2 . Given two distributions @math and @math , by mapping the data to a reproducing kernel Hilbert space (RKHS) using function @math , the MMD between @math and @math is defined as, where @math denotes the expectation with regard to the distribution, and @math defines a set of functions in the unit ball of a RKHS @math . 
Based on the statistical tests defined by MMD, we have @math iff @math . Denote by @math and @math two sets of samples drawn from the distributions @math and @math , respectively. An empirical estimate of MMD can be given by @cite_33 , where @math denotes the feature map associated with the kernel map @math . @math is usually defined as the convex combination of @math basis kernels @math @cite_9 , | {
"cite_N": [
"@cite_9",
"@cite_33",
"@cite_2"
],
"mid": [
"2951670162",
"2212660284",
"2164943005"
],
"abstract": [
"Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.",
"We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD).We present two distribution free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.",
"Motivation: Many problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions. Consequently we use the maximum discrepancy between function means as the basis of a test statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but strings, sequences, graphs, and other common structured data types arising in molecular biology. Results: We study the practical feasibility of an MMD-based test on three central data integration tasks: Testing cross-platform comparability of microarray data, cancer diagnosis, and data-content based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate in finding samples that were generated from the same distribution, and outperforms its best competitors. Conclusions: We have defined a novel statistical test of whether two samples are from the same distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments. Availability: Contact: [email protected]"
]
} |
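The empirical MMD estimate sketched in the related-work text above can be written in a few lines. The following is an illustrative NumPy sketch, not the cited authors' code: it uses the biased V-statistic with a single RBF kernel, whereas the text notes that the kernel is usually a convex combination of several basis kernels.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel values k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    # Biased empirical estimate of the squared MMD between samples X ~ p and Y ~ q:
    # mean(k(X, X)) + mean(k(Y, Y)) - 2 * mean(k(X, Y))
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())
```

The estimate is exactly zero when the two samples coincide, and grows as the empirical distributions separate.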
1705.00609 | 2611292810 | In domain adaptation, maximum mean discrepancy (MMD) has been widely adopted as a discrepancy metric between the distributions of source and target domains. However, existing MMD-based domain adaptation methods generally ignore the changes of class prior distributions, i.e., class weight bias across domains. This remains an open problem but ubiquitous for domain adaptation, which can be caused by changes in sample selection criteria and application scenarios. We show that MMD cannot account for class weight bias and results in degraded domain adaptation performance. To address this issue, a weighted MMD model is proposed in this paper. Specifically, we introduce class-specific auxiliary weights into the original MMD for exploiting the class prior probability on source and target domains, whose challenge lies in the fact that the class label in target domain is unavailable. To account for it, our proposed weighted MMD model is defined by introducing an auxiliary weight for each class in the source domain, and a classification EM algorithm is suggested by alternating between assigning the pseudo-labels, estimating auxiliary weights and updating model parameters. Extensive experiments demonstrate the superiority of our weighted MMD over conventional MMD for domain adaptation. | Besides MMD, there are several other metrics for measuring domain discrepancy. Baktashmotlagh et al. @cite_38 propose a distribution-matching embedding (DME) approach for UDA, where both MMD and the Hellinger distance are adopted to measure the discrepancy between the source and target distributions. Instead of embedding distributions, discriminative methods such as domain classification @cite_21 and domain confusion @cite_31 have also been introduced to learn domain-invariant representations. However, class weight bias is not considered in these methods either. | {
"cite_N": [
"@cite_38",
"@cite_31",
"@cite_21"
],
"mid": [
"2514499445",
"2953226914",
"1882958252"
],
"abstract": [
"Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Distribution-Matching Embedding approach: An unsupervised domain adaptation method that overcomes this issue by mapping the data to a latent space where the distance between the empirical distributions of the source and target examples is minimized. In other words, we seek to extract the information that is invariant across the source and target data. In particular, we study two different distances to compare the source and target distributions: the Maximum Mean Discrepancy and the Hellinger distance. Furthermore, we show that our approach allows us to learn either a linear embedding, or a nonlinear one. We demonstrate the benefits of our approach on the tasks of visual object recognition, text categorization, and WiFi localization.",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.",
"Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets."
]
} |
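The row above mentions the Hellinger distance as an alternative discrepancy metric adopted by the DME approach. For discrete distributions it has a simple closed form, bounded in [0, 1]; a minimal sketch:

```python
import numpy as np

def hellinger(p, q):
    # Hellinger distance between two discrete distributions p and q:
    # H(p, q) = sqrt(0.5 * sum((sqrt(p_i) - sqrt(q_i))^2)), in [0, 1]
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())
```

It is 0 for identical distributions and 1 for distributions with disjoint support.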
1705.00609 | 2611292810 | In domain adaptation, maximum mean discrepancy (MMD) has been widely adopted as a discrepancy metric between the distributions of source and target domains. However, existing MMD-based domain adaptation methods generally ignore the changes of class prior distributions, i.e., class weight bias across domains. This remains an open problem but ubiquitous for domain adaptation, which can be caused by changes in sample selection criteria and application scenarios. We show that MMD cannot account for class weight bias and results in degraded domain adaptation performance. To address this issue, a weighted MMD model is proposed in this paper. Specifically, we introduce class-specific auxiliary weights into the original MMD for exploiting the class prior probability on source and target domains, whose challenge lies in the fact that the class label in target domain is unavailable. To account for it, our proposed weighted MMD model is defined by introducing an auxiliary weight for each class in the source domain, and a classification EM algorithm is suggested by alternating between assigning the pseudo-labels, estimating auxiliary weights and updating model parameters. Extensive experiments demonstrate the superiority of our weighted MMD over conventional MMD for domain adaptation. | Several sample reweighting or selection methods @cite_41 @cite_23 , which are similar in spirit to our weighted MMD, have been proposed to match the source and target distributions. These methods aim to learn sample-specific weights or select appropriate source samples for target data. Different from them, our proposed weighted MMD alleviates class weight bias by assigning class-specific weights to source data. | {
"cite_N": [
"@cite_41",
"@cite_23"
],
"mid": [
"2158815628",
"2811380766"
],
"abstract": [
"Learning domain-invariant features is of vital importance to unsupervised domain adaptation, where classifiers trained on the source domain need to be adapted to a different target domain for which no labeled examples are available. In this paper, we propose a novel approach for learning such features. The central idea is to exploit the existence of landmarks, which are a subset of labeled data instances in the source domain that are distributed most similarly to the target domain. Our approach automatically discovers the landmarks and use them to bridge the source to the target by constructing provably easier auxiliary domain adaptation tasks. The solutions of those auxiliary tasks form the basis to compose invariant features for the original task. We show how this composition can be optimized discriminatively without requiring labels from the target domain. We validate the method on standard benchmark datasets for visual object recognition and sentiment analysis of text. Empirical results show the proposed method outperforms the state-of-the-art significantly.",
"We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice."
]
} |
1705.00634 | 2611928039 | The problem of measuring the true incremental effectiveness of a digital advertising campaign is of increasing importance to marketers. With a large and increasing percentage of digital advertising delivered via Demand-Side-Platforms (DSPs) executing campaigns via Real-Time-Bidding (RTB) auctions and programmatic approaches, a measurement solution that satisfies both advertiser concerns and the constraints of a DSP is of particular interest. MediaMath (a DSP) has developed the first practical, statistically sound randomization-based methodology for causal ad effectiveness (or Ad Lift) measurement by a DSP (or similar digital advertising execution system that may not have full control over the advertising transaction mechanisms). We describe our solution and establish its soundness within the causal framework of counterfactuals and potential outcomes, and present a Gibbs-sampling procedure for estimating confidence intervals around the estimated Ad Lift. We also address practical complications (unique to the digital advertising setting) that stem from the fact that digital advertising is targeted and measured via identifiers (e.g., cookies, mobile advertising IDs) that may not be stable over time. One such complication is the repeated occurrence of identifiers, leading to interference among observations. Another is due to the possibility of multiple identifiers being associated with the same consumer, leading to "contamination" with some of their identifiers being assigned to the Treatment group and others to the Control group. Complications such as these have severely impaired previous efforts to derive accurate measurements of lift in practice. In contrast to a few other papers on the subject, this paper has an expository aim as well, and provides a rigorous, self-contained, and readily-implementable treatment of all relevant concepts. 
| We note that in the category of observational approaches, there has been recent interest in applying Machine Learning techniques to estimate the @math (or Individual Treatment Effect, @math ) as a function of covariates (or features) of an individual entity. For example, @cite_2 @cite_8 present a decision-tree based approach to estimate the @math ; @cite_20 show how counterfactuals can be estimated by learning representations (using linear models or deep neural networks) that encourage similarity (or balance) between Test and Control populations; @cite_9 introduce the concept of "Deep Instrumental Variables Networks" which are able to predict the counterfactual and hence estimate causal effects. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_20",
"@cite_2"
],
"mid": [
"",
"2567931609",
"2389937032",
"2148771208"
],
"abstract": [
"",
"We are in the middle of a remarkable rise in the use and capability of artificial intelligence. Much of this growth has been fueled by the success of deep learning architectures: models that map from observables to outputs via multiple layers of latent representations. These deep learning algorithms are effective tools for unstructured prediction, and they can be combined in AI systems to solve complex automated reasoning problems. This paper provides a recipe for combining ML algorithms to solve for causal effects in the presence of instrumental variables -- sources of treatment randomization that are conditionally independent from the response. We show that a flexible IV specification resolves into two prediction tasks that can be solved with deep neural nets: a first-stage network for treatment prediction and a second-stage network whose loss function involves integration over the conditional treatment distribution. This Deep IV framework imposes some specific structure on the stochastic gradient descent routine used for training, but it is general enough that we can take advantage of off-the-shelf ML capabilities and avoid extensive algorithm customization. We outline how to obtain out-of-sample causal validation in order to avoid over-fit. We also introduce schemes for both Bayesian and frequentist inference: the former via a novel adaptation of dropout training, and the latter via a data splitting routine.",
"Observational studies are rising in importance due to the widespread accumulation of data in fields such as healthcare, education, employment and ecology. We consider the task of answering counterfactual questions such as, \"Would this patient have lower blood sugar had she received a different medication?\". We propose a new algorithmic framework for counterfactual inference which brings together ideas from domain adaptation and representation learning. In addition to a theoretical justification, we perform an empirical comparison with previous approaches to causal inference from observational data. Our deep learning algorithm significantly outperforms the previous state-of-the-art.",
"In this paper we study the problems of estimating heterogeneity in causal eects in experimental or observational studies and conducting inference about the magnitude of the dierences in treatment eects across subsets of the population. In applications, our method provides a data-driven approach to determine which subpopulations have large or small treatment eects and to test hypotheses about the dierences in these eects. For experiments, our method allows researchers to identify heterogeneity in treatment eects that was not specied in a pre-analysis plan, without concern about invalidating inference due to multiple testing. In most of the literature on supervised machine learning (e.g. regression trees, random forests, LASSO, etc.), the goal is to build a model of the relationship between a unit’s attributes and an observed outcome. A prominent role in these methods is played by cross-validation which compares predictions to actual outcomes in test samples, in order to select the level of complexity of the model that provides the best predictive power. Our method is closely related, but it diers in that it is tailored for predicting causal eects of a treatment rather than a unit’s outcome. The challenge is that the truth\" for a causal eect is not observed for any individual unit: we observe the unit with the treatment,"
]
} |
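The ITE-estimation idea discussed above can be illustrated with a deliberately simple "two-model" sketch: fit separate outcome models on treated and control units and take their difference as the estimated individual treatment effect. Plain linear least squares stands in for the tree-based and deep-network learners of the cited papers; all names here are illustrative.

```python
import numpy as np

def t_learner_ite(X, t, y):
    # Minimal two-model ("T-learner"-style) sketch: fit one linear outcome
    # model on treated units (t == 1) and one on control units (t == 0),
    # then estimate ITE(x) as the difference of their predictions.
    def fit(A, b):
        A1 = np.c_[np.ones(len(A)), A]          # add intercept column
        coef, *_ = np.linalg.lstsq(A1, b, rcond=None)
        return lambda Z: np.c_[np.ones(len(Z)), Z] @ coef
    f_treated = fit(X[t == 1], y[t == 1])
    f_control = fit(X[t == 0], y[t == 0])
    return f_treated(X) - f_control(X)
```

On noise-free data with a constant additive effect, the sketch recovers that effect exactly for every unit.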
1705.00634 | 2611928039 | The problem of measuring the true incremental effectiveness of a digital advertising campaign is of increasing importance to marketers. With a large and increasing percentage of digital advertising delivered via Demand-Side-Platforms (DSPs) executing campaigns via Real-Time-Bidding (RTB) auctions and programmatic approaches, a measurement solution that satisfies both advertiser concerns and the constraints of a DSP is of particular interest. MediaMath (a DSP) has developed the first practical, statistically sound randomization-based methodology for causal ad effectiveness (or Ad Lift) measurement by a DSP (or similar digital advertising execution system that may not have full control over the advertising transaction mechanisms). We describe our solution and establish its soundness within the causal framework of counterfactuals and potential outcomes, and present a Gibbs-sampling procedure for estimating confidence intervals around the estimated Ad Lift. We also address practical complications (unique to the digital advertising setting) that stem from the fact that digital advertising is targeted and measured via identifiers (e.g., cookies, mobile advertising IDs) that may not be stable over time. One such complication is the repeated occurrence of identifiers, leading to interference among observations. Another is due to the possibility of multiple identifiers being associated with the same consumer, leading to "contamination" with some of their identifiers being assigned to the Treatment group and others to the Control group. Complications such as these have severely impaired previous efforts to derive accurate measurements of lift in practice. In contrast to a few other papers on the subject, this paper has an expository aim as well, and provides a rigorous, self-contained, and readily-implementable treatment of all relevant concepts. 
| Two papers present the idea of using Gibbs Sampling in the context of causal analysis: @cite_16 outline a method to apply Gibbs sampling to causal inference in the presence of non-compliance, and @cite_19 present a similar method, applied to advertising campaign effectiveness measurement. Neither of these papers gives a complete specification that can be easily implemented. Our Gibbs-sampling procedure is similar to theirs, but we introduce important simplifications and specify the approach in a manner that is self-contained and easy to implement. | {
"cite_N": [
"@cite_19",
"@cite_16"
],
"mid": [
"2047272082",
"1741592568"
],
"abstract": [
"In this paper, we develop an experimental analysis to estimate the causal effect of online marketing campaigns as a whole, and not just the media ad design. We analyze the causal effects on user conversion probability. We run experiments based on A B testing to perform this evaluation. We also estimate the causal effect of the media ad design given this randomization approach. We discuss the framework of a marketing campaign in the context of targeted display advertising, and incorporate the main elements of this framework in the evaluation. We consider budget constraints, the auction process, and the targeting engine in the analysis and the experimental set up. For the effects of this evaluation, we assume the targeting engine to be a black box that incorporates the impression delivery policy, the budget constraints, and the bidding process. Our method to disaggregate the campaign causal analysis is inspired on randomized experiments with imperfect compliance and the intention-to-treat (ITT) analysis. In this framework, individuals assigned randomly to the study group might refuse to take the treatment. For estimation, we present a Bayesian approach and provide credible intervals for the causal estimates. We analyze the effects of 2 independent campaigns for different products from the Advertising.com ad network for 20M+ users each campaign.",
"We describe a computer program to assist a clinician with assessing the efficacy of treatments in experimental studies for which treatment assignment is random but subject compliance is imperfect. The major difficulty in such studies is that treatment efficacy is not \"identifiable\", that is, it cannot be estimated from the data, even when the number of subjects is infinite, unless additional knowledge is provided. Our system combines Bayesian learning with Gibbs sampling using two inputs: (1) the investigator's prior probabilities of the relative sizes of subpopulations and (2) the observed data from the experiment. The system outputs a histogram depicting the posterior distribution of the average treatment effect, that is, the probability that the average outcome (e.g., survival) would attain a given level, had the treatment been taken uniformly by the entire population. This paper describes the theoretical basis for the proposed approach and presents experimental results on both simulated and real data, showing agreement with the theoretical asymptotic bounds."
]
} |
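The Gibbs-sampling theme above can be illustrated in its simplest conjugate form. This sketch is not the paper's procedure (which handles non-compliance, repeated identifiers, and contamination): with perfect compliance and Beta priors the posterior factorizes, so sampling reduces to independent Beta draws, which is enough to show how a credible interval around the estimated lift is obtained.

```python
import numpy as np

def lift_posterior_samples(conv_t, n_t, conv_c, n_c, n_draws=10000, seed=0):
    # Conjugate Beta(1, 1) posteriors for the treatment and control
    # conversion rates; draws of the additive lift p_T - p_C can be
    # summarized into a posterior mean and credible interval.
    rng = np.random.default_rng(seed)
    p_t = rng.beta(1 + conv_t, 1 + n_t - conv_t, n_draws)
    p_c = rng.beta(1 + conv_c, 1 + n_c - conv_c, n_draws)
    return p_t - p_c
```

For example, 1200/10000 treatment conversions against 1000/10000 control conversions yields draws centered near a lift of 0.02, and `np.quantile(samples, [0.025, 0.975])` gives a 95% credible interval.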
1705.00717 | 2156821272 | Facebook is today the most popular social network with more than one billion subscribers worldwide. To provide good quality of service (e.g., low access delay) to their clients, FB relies on Akamai, which provides a worldwide content distribution network with a large number of edge servers that are much closer to FB subscribers. In this article we aim to depict a global picture of the current FB network infrastructure deployment taking into account both native FB servers and Akamai nodes. Toward this end, we have performed a measurement-based analysis during a period of two weeks using 463 PlanetLab nodes distributed across 41 countries. Based on the obtained data we compare the average access delay that nodes in different countries experience accessing both native FB servers and Akamai nodes. In addition, we obtain a wide view of the deployment of Akamai nodes serving FB users worldwide. Finally, we analyze the geographical coverage of those nodes, and demonstrate that in most of the cases Akamai nodes located in a particular country service not only local FB subscribers, but also FB users located in nearby countries. | Several other research works have carried out performance analyses of Facebook services. The authors of @cite_7 examine the connections established when FB users log into the system. In particular, they identify different sections of a user's FB wall page, and analyze how the information that fills those sections is retrieved. An earlier work from 2010 @cite_4 identified performance degradation (e.g., delay, packet losses) for users accessing FB from outside the US. Finally, another study @cite_9 states that photo viewing is the most critical service for FB, and presents a detailed description of how FB photos are distributed to Akamai CDN servers. 
However, unlike our article, it does not perform a geographical analysis to understand how different regions of the world are served. | {
"cite_N": [
"@cite_9",
"@cite_4",
"@cite_7"
],
"mid": [
"1236737278",
"2149965524",
"2061433860"
],
"abstract": [
"This paper describes Haystack, an object storage system optimized for Facebook's Photos application. Facebook currently stores over 260 billion images, which translates to over 20 petabytes of data. Users upload one billion new photos (∼60 terabytes) each week and Facebook serves over one million images per second at peak. Haystack provides a less expensive and higher performing solution than our previous approach, which leveraged network attached storage appliances over NFS. Our key observation is that this traditional design incurs an excessive number of disk operations because of metadata lookups. We carefully reduce this per photo metadata so that Haystack storage machines can perform all metadata lookups in main memory. This choice conserves disk operations for reading actual data and thus increases overall throughput.",
"Online Social Networks (OSN) are fun, popular, and socially significant. An integral part of their success is the immense size of their global user base. To provide a consistent service to all users, Facebook, the world's largest OSN, is heavily dependent on centralized U.S. data centers, which renders service outside of the U.S. sluggish and wasteful of Internet bandwidth. In this paper, we investigate the detailed causes of these two problems and identify mitigation opportunities. Because details of Facebook's service remain proprietary, we treat the OSN as a black box and reverse engineer its operation from publicly available traces. We find that contrary to current wisdom, OSN state is amenable to partitioning and that its fine grained distribution and processing can significantly improve performance without loss in service consistency. Through simulations of reconstructed Facebook traffic over measured Internet paths, we show that user requests can be processed 79 faster and use 91 less bandwidth. We conclude that the partitioning of OSN state is an attractive scaling strategy for Facebook and other OSN services.",
"Facebook is one of the most famous social network sites hosting a number of users approaching a billion [1]. Facebook is a cloud-based web site which contains a number of advanced technologies behind the scene. Despite the fact that it is the web site that most users open for all-day and, in some places, all-night long, and its traffic is generally part of all types of networks — wired and wireless, PAN, LAN, MAN, and WAN —, there is virtually no research work or technical paper that report on client's perspective of what happen when users login onto their Facebook homepages. How many and in what order components loaded are, which and how many servers those components loaded are, or how many TCP streams used, are examples of questions which users and network administrators have a little knowledge of. Our work tries to answer the questions by examining every of over 2,000 packets per a single Facebook homepage retrieval using the Wireshark packet capturing software. From the captured packets, we thoroughly examined all objects that Facebook retrieved and categorized them into groups based on the characteristics. Our investigations found that Facebook retrieves these objects from both international and domestic servers, and over a number of parallel TCP streams. The results also showed that the Facebook traffic exhibited a two-spike bursty pattern, regardless of the loading time."
]
} |
1705.00615 | 2610326870 | Energy-efficiency is highly desirable for sensing systems in the Internet of Things (IoT). A common approach to achieve low-power systems is duty-cycling, where components in a system are turned off periodically to meet an energy budget. However, this work shows that such an approach is not necessarily optimal in energy-efficiency, and proposes guided processing as a fundamentally better alternative. The proposed approach offers 1) explicit modeling of performance uncertainties in system internals, 2) a realistic resource consumption model, and 3) a key insight into the superiority of guided-processing over duty-cycling. Generalization from the cascade structure to the more general graph-based one is also presented. Once applied to optimize a large-scale audio sensing system with a practical detection application, empirical results show that the proposed approach significantly improves the detection performance (up to @math and @math reduction in false-alarm and miss rate, respectively) for the same energy consumption, when compared to the duty-cycling approach. | It is worthwhile to note that the cascade detection system of interest here is different from the serial detector network in the distributed detection literature @cite_3 @cite_1 @cite_24 , in which the decision of the current module is treated as an extra observation, rather than as a control signal used to activate subsequent modules and conserve resources. | {
"cite_N": [
"@cite_24",
"@cite_1",
"@cite_3"
],
"mid": [
"2069595316",
"2042286431",
"2162408301"
],
"abstract": [
"The problem of distributed detection involving N sensors is considered. The configuration of sensors is serial in the sense that the Jth sensor decides using the decision it receives along with its own observation. When each sensor uses the Neyman-Pearson test, the probability of detection is maximized for a given probability of false alarm, at the Nth stage. With two sensors, the serial scheme has a performance better than or equal to the parallel fusion scheme analyzed in the literature. Numerical examples illustrate the global optimization by the selection of operating thresholds at the sensors.",
"Distributed detection on the serial (or tandem) topology is considered with the probability of error performance criterion. Previously published efforts, while presenting probability of detection versus false alarm results, limited the number of array elements to two or three. For the detection of known, equally likely signals in additive, symmetric noise, the author presents simple recursive expressions for the threshold values and the performance of the system. Examples for known signals in Gaussian and Laplacian noise show the degradation in performance due to the array structure.",
"A distributed binary detection problem with binary communications is considered, wherein the nodes (sensors and decision-makers) of the system are organized in a series configuration. It is shown that this problem can be reformulated as a deterministic multistage nonlinear optimal control problem. The necessary and sufficient conditions of optimality using Bayes' risk as the optimization criterion are then derived, and a physical interpretation of how the costates relate to the decision threshold at each node is provided. Using the min-H method of optimal control theory, a computationally efficient algorithm with linear complexity in the number of nodes per iteration is proposed to solve for the optimal decision strategy. The algorithm is then extended to solve a Neyman-Pearson version of the problem to obtain the optimal team (network) receiver operating characteristic curve. Two easily implemented suboptimal decision rules termed the asymptotic decision strategy and the constant control strategy are proposed and their properties are investigated."
]
} |
1705.00615 | 2610326870 | Energy-efficiency is highly desirable for sensing systems in the Internet of Things (IoT). A common approach to achieve low-power systems is duty-cycling, where components in a system are turned off periodically to meet an energy budget. However, this work shows that such an approach is not necessarily optimal in energy-efficiency, and proposes guided processing as a fundamentally better alternative. The proposed approach offers 1) explicit modeling of performance uncertainties in system internals, 2) a realistic resource consumption model, and 3) a key insight into the superiority of guided-processing over duty-cycling. Generalization from the cascade structure to the more general graph-based one is also presented. Once applied to optimize a large-scale audio sensing system with a practical detection application, empirical results show that the proposed approach significantly improves the detection performance (up to @math and @math reduction in false-alarm and miss rate, respectively) for the same energy consumption, when compared to the duty-cycling approach. | The cascade architecture is prevalent in many inference applications, with the most widely-known example being the seminal work in face detection by Viola and Jones @cite_10 . In @cite_10 , the system of cascaded detection modules is used to quickly discard many negative sub-images typically observed in face-detection applications, thus significantly speeding up the detection process. However, the cascade is not optimized in @cite_10 , leaving the classifiers' optimal parameters, both thresholds and weights, undetermined. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2164598857"
],
"abstract": [
"This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \"integral image\" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a \"cascade\" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection."
]
} |
1705.00615 | 2610326870 | Energy-efficiency is highly desirable for sensing systems in the Internet of Things (IoT). A common approach to achieve low-power systems is duty-cycling, where components in a system are turned off periodically to meet an energy budget. However, this work shows that such an approach is not necessarily optimal in energy-efficiency, and proposes guided processing as a fundamentally better alternative. The proposed approach offers 1) explicit modeling of performance uncertainties in system internals, 2) a realistic resource consumption model, and 3) a key insight into the superiority of guided-processing over duty-cycling. Generalization from the cascade structure to the more general graph-based one is also presented. Once applied to optimize a large-scale audio sensing system with a practical detection application, empirical results show that the proposed approach significantly improves the detection performance (up to @math and @math reduction in false-alarm and miss rate, respectively) for the same energy consumption, when compared to the duty-cycling approach. | In stream mining, @cite_17 employed a cascade of Gaussian mixture model (GMM)-based classifiers and formulated a problem to find both the number of mixture components and the threshold in each classifier that maximize the system detection rate subject to constraints on false alarm, memory and CPU. The solution in @cite_17 takes a person-by-person approach where it iterates between 1) finding optimal numbers of mixture components, i.e. resource allocation, for all classifiers given thresholds, and 2) finding optimal thresholds for a given resource allocation. However, this approach fails to capture the direct dependence of the cascade's resource consumption on its thresholds, and is therefore inherently suboptimal. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2158368213"
],
"abstract": [
"Networks of classifiers are capturing the attention of system and algorithmic researchers because they offer improved accuracy over single model classifiers, can be distributed over a network of servers for improved scalability, and can be adapted to available system resources. This work provides a principled approach for the optimized allocation of system resources across a networked chain of classifiers. We begin with an illustrative example of how complex classification tasks can be decomposed into a network of binary classifiers. We formally define a global performance metric by recursively collapsing the chain of classifiers into one combined classifier. The performance metric trades off the end-to-end probabilities of detection and false alarm, both of which depend on the resources allocated to each individual classifier. We formulate the optimization problem and present optimal resource allocation results for both simulated and state-of-the-art classifier chains operating on telephony data."
]
} |
1705.00615 | 2610326870 | Energy-efficiency is highly desirable for sensing systems in the Internet of Things (IoT). A common approach to achieve low-power systems is duty-cycling, where components in a system are turned off periodically to meet an energy budget. However, this work shows that such an approach is not necessarily optimal in energy-efficiency, and proposes guided processing as a fundamentally better alternative. The proposed approach offers 1) explicit modeling of performance uncertainties in system internals, 2) a realistic resource consumption model, and 3) a key insight into the superiority of guided-processing over duty-cycling. Generalization from the cascade structure to the more general graph-based one is also presented. Once applied to optimize a large-scale audio sensing system with a practical detection application, empirical results show that the proposed approach significantly improves the detection performance (up to @math and @math reduction in false-alarm and miss rate, respectively) for the same energy consumption, when compared to the duty-cycling approach. | On the other hand, a resource-consumption model closely related to ours was considered in @cite_20 . The minor difference is that, instead of being simply proposed, our model is derived from first principles. However, @cite_20 formulated the problem using the empirical risk minimization framework, since it was assumed that probabilistic models of high-dimensional features cannot be estimated. We take a different approach, assuming that probabilistic models of the features can be estimated after first reducing their dimensionality. In other words, the inputs to our algorithm are (probabilistic) models, not a dataset as in @cite_20 . In addition, the solution proposed in @cite_20 is a convex linear program, which requires a convex relaxation (with an upper-bounding convex surrogate function) of the true objective function. 
In contrast, our solution is a dynamic program and requires no relaxation. | {
"cite_N": [
"@cite_20"
],
"mid": [
"118463236"
],
"abstract": [
"We present a convex framework to learn sequential decisions and apply it to the problem of learning under a budget. We consider the structure proposed in [1], where sensor measurements are acquired in a sequence. The goal after acquiring each new measurement is to make a decision whether to stop and classify or to pay the cost of using the next sensor in the sequence. We introduce a novel formulation of an empirical risk objective for the multi stage sequential decision problem. This objective naturally lends itself to a non-convex multilinear formulation. Nevertheless, we derive a novel perspective that leads to a tight convex objective. This is accomplished by expressing the empirical risk in terms of linear superposition of indicator functions. We then derive an LP formulation by utilizing hinge loss surrogates. Our LP achieves or exceeds the empirical performance of the nonconvex alternating algorithm that requires a large number of random initializations. Consequently, the LP has the advantage of guaranteed convergence, global optimality, repeatability and computation eciency."
]
} |
1705.00487 | 2610510746 | Most recent CNN architectures use average pooling as a final feature encoding step. In the field of fine-grained recognition, however, recent global representations like bilinear pooling offer improved performance. In this paper, we generalize average and bilinear pooling to "alpha-pooling", allowing for learning the pooling strategy during training. In addition, we present a novel way to visualize decisions made by these approaches. We identify parts of training images having the highest influence on the prediction of a given test image. It allows for justifying decisions to users and also for analyzing the influence of semantic parts. For example, we can show that the higher capacity VGG16 model focuses much more on the bird's head than, e.g., the lower-capacity VGG-M model when recognizing fine-grained bird categories. Both contributions allow us to analyze the difference when moving between average and bilinear pooling. In addition, experiments show that our generalized approach can outperform both across a variety of standard datasets. | The presented @math -pooling is related to other pooling techniques, which aggregate a set of local features into a single feature vector. Besides the commonly used average pooling, fully-connected layers, and maximum pooling, several new approaches have been developed in recent years. Zeiler et al. @cite_9 randomly pick an element in each channel according to a multinomial distribution defined by the activations themselves. Motivated by their success with hand-crafted features, Fisher vector @cite_21 @cite_13 and VLAD encoding @cite_11 applied on top of the last convolutional layer have been evaluated as well. The idea of spatial pyramids was used by He et al. @cite_10 in order to improve recognition performance. In contrast to these techniques, feature encoding based on @math -pooling shows significantly higher accuracy in fine-grained applications. 
Lin et al. @cite_13 @cite_1 present bilinear pooling, which is a special case of average pooling. It offers its largest benefits in fine-grained tasks. As shown in the experiments, learning the right mix of average and bilinear pooling improves results, especially in tasks beyond the fine-grained setting. | {
"cite_N": [
"@cite_13",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_10",
"@cite_11"
],
"mid": [
"",
"1907282891",
"1898560071",
"2244861693",
"2179352600",
"1524680991"
],
"abstract": [
"",
"We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation.",
"Scaling up fine-grained recognition to all domains of fine-grained objects is a challenge the computer vision community will need to face in order to realize its goal of recognizing all object categories. Current state-of-the-art techniques rely heavily upon the use of keypoint or part annotations, but scaling up to hundreds or thousands of domains renders this annotation cost-prohibitive for all but the most important categories. In this work we propose a method for fine-grained recognition that uses no part annotations. Our method is based on generating parts using co-segmentation and alignment, which we combine in a discriminative mixture. Experimental results show its efficacy, demonstrating state-of-the-art results even when compared to methods that use part annotations during training.",
"A number of recent approaches have used deep convolutional neural networks (CNNs) to build texture representations. Nevertheless, it is still unclear how these models represent texture and invariances to categorical variations. This work conducts a systematic evaluation of recent CNN-based texture descriptors for recognition and attempts to understand the nature of invariances captured by these representations. First we show that the recently proposed bilinear CNN model [25] is an excellent general-purpose texture descriptor and compares favorably to other CNN-based descriptors on various texture and scene recognition benchmarks. The model is translationally invariant and obtains better accuracy on the ImageNet dataset without requiring spatial jittering of data compared to corresponding models trained with spatial jittering. Based on recent work [13, 28] we propose a technique to visualize pre-images, providing a means for understanding categorical properties that are captured by these representations. Finally, we show preliminary results on how a unified parametric model of texture analysis and synthesis can be used for attribute-based image manipulation, e.g. to make an image more swirly, honeycombed, or knitted. The source code and additional visualizations are available at this http URL",
"Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.",
"Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012 2013 classification and INRIA Holidays retrieval datasets."
]
} |
1705.00487 | 2610510746 | Most recent CNN architectures use average pooling as a final feature encoding step. In the field of fine-grained recognition, however, recent global representations like bilinear pooling offer improved performance. In this paper, we generalize average and bilinear pooling to "alpha-pooling", allowing for learning the pooling strategy during training. In addition, we present a novel way to visualize decisions made by these approaches. We identify parts of training images having the highest influence on the prediction of a given test image. It allows for justifying decisions to users and also for analyzing the influence of semantic parts. For example, we can show that the higher capacity VGG16 model focuses much more on the bird's head than, e.g., the lower-capacity VGG-M model when recognizing fine-grained bird categories. Both contributions allow us to analyze the difference when moving between average and bilinear pooling. In addition, experiments show that our generalized approach can outperform both across a variety of standard datasets. | The relationship between average pooling and pairwise matching of local features was presented by Bo et al. @cite_18 as an efficient encoding for matching a set of local features. This formulation was also briefly discussed in @cite_12 and used for deriving an explicit feature transformation, which approximates bilinear pooling. Bilinear encoding was first mentioned by Tenenbaum et al. @cite_7 and used, for example, by Carreira et al. @cite_3 and Lin et al. @cite_13 for image recognition tasks. Furthermore, the recent work of Murray et al. @cite_4 also analyzes orderless pooling approaches and proposes a technique to normalize the contribution of each local descriptor to the resulting kernel values. In contrast, we show how the individual contributions can be used both for visualizing the classification decisions and for understanding the differences between generic and fine-grained tasks. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_3",
"@cite_13",
"@cite_12"
],
"mid": [
"2142422094",
"2528191822",
"2170653751",
"78159342",
"",
"2261271299"
],
"abstract": [
"In visual recognition, the images are frequently modeled as unordered collections of local features (bags). We show that bag-of-words representations commonly used in conjunction with linear classifiers can be viewed as special match kernels, which count 1 if two local features fall into the same regions partitioned by visual words and 0 otherwise. Despite its simplicity, this quantization is too coarse, motivating research into the design of match kernels that more accurately measure the similarity between local features. However, it is impractical to use such kernels for large datasets due to their significant computational cost. To address this problem, we propose efficient match kernels (EMK) that map local features to a low dimensional feature space and average the resulting vectors to form a set-level feature. The local feature maps are learned so their inner products preserve, to the best possible, the values of the specified kernel function. Classifiers based on EMK are linear both in the number of images and in the number of local features. We demonstrate that EMK are extremely efficient and achieve the current state of the art in three difficult computer vision datasets: Scene-15, Caltech-101 and Caltech-256.",
"We consider the design of an image representation that embeds and aggregates a set of local descriptors into a single vector. Popular representations of this kind include the bag-of-visual-words, the Fisher vector and the VLAD. When two such image representations are compared with the dot-product, the image-to-image similarity can be interpreted as a match kernel. In match kernels, one has to deal with interference , i.e., with the fact that even if two descriptors are unrelated, their matching score may contribute to the overall similarity. We formalise this problem and propose two related solutions, both aimed at equalising the individual contributions of the local descriptors in the final representation. These methods modify the aggregation stage by including a set of per-descriptor weights. They differ by the objective function that is optimised to compute those weights. The first is a “democratisation” strategy that aims at equalising the relative importance of each descriptor in the set comparison metric. The second one involves equalising the match of a single descriptor to the aggregated vector. These concurrent methods give a substantial performance boost over the state of the art in image search with short or mid-size vectors, as demonstrated by our experiments on standard public image retrieval benchmarks.",
"Perceptual systems routinely separate “content” from “style,” classifying familiar words spoken in an unfamiliar accent, identifying a font or handwriting style across letters, or recognizing a familiar face or object seen under unfamiliar viewing conditions. Yet a general and tractable computational model of this ability to untangle the underlying factors of perceptual observations remains elusive (Hofstadter, 1985). Existing factor models (Mardia, Kent, & Bibby, 1979; Hinton & Zemel, 1994; Ghahramani, 1995; Bell & Sejnowski, 1995; Hinton, Dayan, Frey, & Neal, 1995; Dayan, Hinton, Neal, & Zemel, 1995; Hinton & Ghahramani, 1997) are either insufficiently rich to capture the complex interactions of perceptually meaningful factors such as phoneme and speaker accent or letter and font, or do not allow efficient learning algorithms. We present a general framework for learning to solve two-factor tasks using bilinear models, which provide sufficiently expressive representations of factor interactions but can nonetheless be fit to data using efficient algorithms based on the singular value decomposition and expectation-maximization. We report promising results on three different tasks in three different perceptual domains: spoken vowel classification with a benchmark multi-speaker database, extrapolation of fonts to unseen letters, and translation of faces to novel illuminants.",
"Feature extraction, coding and pooling, are important components on many contemporary object recognition paradigms. In this paper we explore novel pooling techniques that encode the second-order statistics of local descriptors inside a region. To achieve this effect, we introduce multiplicative second-order analogues of average and max-pooling that together with appropriate non-linearities lead to state-of-the-art performance on free-form region recognition, without any type of feature coding. Instead of coding, we found that enriching local descriptors with additional image information leads to large performance gains, especially in conjunction with the proposed pooling methodology. We show that second-order pooling over free-form regions produces results superior to those of the winning systems in the Pascal VOC 2011 semantic segmentation challenge, with models that are 20,000 times faster.",
"",
"Bilinear models has been shown to achieve impressive performance on a wide range of visual tasks, such as semantic segmentation, fine grained recognition and face recognition. However, bilinear features are high dimensional, typically on the order of hundreds of thousands to a few million, which makes them impractical for subsequent analysis. We propose two compact bilinear representations with the same discriminative power as the full bilinear representation but with only a few thousand dimensions. Our compact representations allow back-propagation of classification errors enabling an end-to-end optimization of the visual recognition system. The compact bilinear representations are derived through a novel kernelized analysis of bilinear pooling which provide insights into the discriminative power of bilinear pooling, and a platform for further research in compact pooling methods. Experimentation illustrate the utility of the proposed representations for image classification and few-shot learning across several datasets."
]
} |
1705.00487 | 2610510746 | Most recent CNN architectures use average pooling as a final feature encoding step. In the field of fine-grained recognition, however, recent global representations like bilinear pooling offer improved performance. In this paper, we generalize average and bilinear pooling to "alpha-pooling", allowing for learning the pooling strategy during training. In addition, we present a novel way to visualize decisions made by these approaches. We identify parts of training images having the highest influence on the prediction of a given test image. It allows for justifying decisions to users and also for analyzing the influence of semantic parts. For example, we can show that the higher capacity VGG16 model focuses much more on the bird's head than, e.g., the lower-capacity VGG-M model when recognizing fine-grained bird categories. Both contributions allow us to analyze the difference when moving between average and bilinear pooling. In addition, experiments show that our generalized approach can outperform both across a variety of standard datasets. | We propose a novel generalization of the common average and bilinear pooling as used in deep networks, which we call @math -pooling. Let @math denote a classification model. @math denotes a local feature descriptor mapping from input image @math and location @math to a vector with length @math , which describes this region. @math is a pooling scheme which aggregates @math local features to a single global image description of length @math . In our case, @math and is compressed using @cite_12 . Finally, @math is a classifier. In a common CNN like VGG16, @math corresponds to the first part of a CNN up to the last convolutional layer, @math are two fully connected layers and @math is the final classifier. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2261271299"
],
"abstract": [
"Bilinear models has been shown to achieve impressive performance on a wide range of visual tasks, such as semantic segmentation, fine grained recognition and face recognition. However, bilinear features are high dimensional, typically on the order of hundreds of thousands to a few million, which makes them impractical for subsequent analysis. We propose two compact bilinear representations with the same discriminative power as the full bilinear representation but with only a few thousand dimensions. Our compact representations allow back-propagation of classification errors enabling an end-to-end optimization of the visual recognition system. The compact bilinear representations are derived through a novel kernelized analysis of bilinear pooling which provide insights into the discriminative power of bilinear pooling, and a platform for further research in compact pooling methods. Experimentation illustrate the utility of the proposed representations for image classification and few-shot learning across several datasets."
]
} |
1705.00487 | 2610510746 | Most recent CNN architectures use average pooling as a final feature encoding step. In the field of fine-grained recognition, however, recent global representations like bilinear pooling offer improved performance. In this paper, we generalize average and bilinear pooling to "alpha-pooling", allowing for learning the pooling strategy during training. In addition, we present a novel way to visualize decisions made by these approaches. We identify parts of training images having the highest influence on the prediction of a given test image. It allows for justifying decisions to users and also for analyzing the influence of semantic parts. For example, we can show that the higher capacity VGG16 model focuses much more on the bird's head than, e.g., the lower-capacity VGG-M model when recognizing fine-grained bird categories. Both contributions allow us to analyze the difference when moving between average and bilinear pooling. In addition, experiments show that our generalized approach can outperform both across a variety of standard datasets. | Average pooling is a common final feature encoding step in most state-of-the-art CNN architectures like ResNet @cite_23 or Inception @cite_17 . The combination @cite_13 of CNN feature maps and bilinear pooling @cite_7 @cite_3 is one of the current state-of-the-art approaches in the fine-grained area. Both approaches are special cases of @math -pooling. | {
"cite_N": [
"@cite_7",
"@cite_3",
"@cite_23",
"@cite_13",
"@cite_17"
],
"mid": [
"2170653751",
"78159342",
"2949650786",
"",
"2274287116"
],
"abstract": [
"Perceptual systems routinely separate “content” from “style,” classifying familiar words spoken in an unfamiliar accent, identifying a font or handwriting style across letters, or recognizing a familiar face or object seen under unfamiliar viewing conditions. Yet a general and tractable computational model of this ability to untangle the underlying factors of perceptual observations remains elusive (Hofstadter, 1985). Existing factor models (Mardia, Kent, & Bibby, 1979; Hinton & Zemel, 1994; Ghahramani, 1995; Bell & Sejnowski, 1995; Hinton, Dayan, Frey, & Neal, 1995; Dayan, Hinton, Neal, & Zemel, 1995; Hinton & Ghahramani, 1997) are either insufficiently rich to capture the complex interactions of perceptually meaningful factors such as phoneme and speaker accent or letter and font, or do not allow efficient learning algorithms. We present a general framework for learning to solve two-factor tasks using bilinear models, which provide sufficiently expressive representations of factor interactions but can nonetheless be fit to data using efficient algorithms based on the singular value decomposition and expectation-maximization. We report promising results on three different tasks in three different perceptual domains: spoken vowel classification with a benchmark multi-speaker database, extrapolation of fonts to unseen letters, and translation of faces to novel illuminants.",
"Feature extraction, coding and pooling, are important components on many contemporary object recognition paradigms. In this paper we explore novel pooling techniques that encode the second-order statistics of local descriptors inside a region. To achieve this effect, we introduce multiplicative second-order analogues of average and max-pooling that together with appropriate non-linearities lead to state-of-the-art performance on free-form region recognition, without any type of feature coding. Instead of coding, we found that enriching local descriptors with additional image information leads to large performance gains, especially in conjunction with the proposed pooling methodology. We show that second-order pooling over free-form regions produces results superior to those of the winning systems in the Pascal VOC 2011 semantic segmentation challenge, with models that are 20,000 times faster.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"",
"Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge"
]
} |
1705.00753 | 2610245951 | While end-to-end neural machine translation (NMT) has made remarkable progress recently, it still suffers from the data scarcity problem for low-resource language pairs and domains. In this paper, we propose a method for zero-resource NMT by assuming that parallel sentences have close probabilities of generating a sentence in a third language. Based on this assumption, our method is able to train a source-to-target NMT model ("student") without parallel corpora available, guided by an existing pivot-to-target NMT model ("teacher") on a source-pivot parallel corpus. Experimental results show that the proposed method significantly improves over a baseline pivot-based model by +3.0 BLEU points across various language pairs. | Training NMT models in a zero-resource scenario by leveraging other languages has attracted intensive attention in recent years. proposed an approach which delivers the multi-way, multilingual NMT model proposed by @cite_21 for zero-resource translation. They used the multi-way NMT model trained by other language pairs to generate a pseudo parallel corpus and fine-tuned the attention mechanism of the multi-way NMT model to enable zero-resource translation. Several authors proposed a universal encoder-decoder network in multilingual scenarios to perform zero-shot learning @cite_4 @cite_10 . This universal model extracts translation knowledge from multiple different languages, making zero-resource translation feasible without direct training. | {
"cite_N": [
"@cite_10",
"@cite_21",
"@cite_4"
],
"mid": [
"2555745756",
"",
"2550821151"
],
"abstract": [
"In this paper, we present our first attempts in building a multilingual Neural Machine Translation framework under a unified approach. We are then able to employ attention-based NMT for many-to-many multilingual translation tasks. Our approach does not require any special treatment on the network architecture and it allows us to learn minimal number of free parameters in a standard way of training. Our approach has shown its effectiveness in an under-resourced translation scenario with considerable improvements up to 2.6 BLEU points. In addition, the approach has achieved interesting and promising results when applied in the translation task that there is no direct parallel corpus between source and target languages.",
"",
"We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT using a single model. On the WMT’14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-the-art results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT’14 and WMT’15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. Our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hints at a universal interlingua representation in our models and show some interesting examples when mixing languages."
]
} |
1705.00366 | 2610620097 | We propose the ambiguity problem for the foreground object segmentation task and motivate the importance of estimating and accounting for this ambiguity when designing vision systems. Specifically, we distinguish between images which lead multiple annotators to segment different foreground objects (ambiguous) versus minor inter-annotator differences of the same object. Taking images from eight widely used datasets, we crowdsource labeling the images as "ambiguous" or "not ambiguous" to segment in order to construct a new dataset we call STATIC. Using STATIC, we develop a system that automatically predicts which images are ambiguous. Experiments demonstrate the advantage of our prediction system over existing saliency-based methods on images from vision benchmarks and images taken by blind people who are trying to recognize objects in their environment. Finally, we introduce a crowdsourcing system to achieve cost savings for collecting the diversity of all valid "ground truth" foreground object segmentations by collecting extra segmentations only when ambiguity is expected. Experiments show our system eliminates up to 47% of human effort compared to existing crowdsourcing methods with no loss in capturing the diversity of ground truths. | A related, yet distinct problem in modern computer vision literature is predicting segmentation difficulty, where difficulty is commonly defined by the extent to which algorithms can produce segmentations similar to the ground truth for a given image @cite_19 @cite_17 @cite_44 or the time a person takes to segment an image @cite_34 . However, what can be deemed a successful method for predicting segmentation difficulty may be a "moving target", given the development of better algorithms and easier-to-use segmentation annotation systems.
In contrast, since we aim to capture human-perceived ambiguity, our method to estimate ambiguity directly measures an intrinsic property about an image and so leads to a static "yes" or "no" outcome (rather than an evolving ranking based on the chosen algorithm or annotation system). | {
"cite_N": [
"@cite_44",
"@cite_19",
"@cite_34",
"@cite_17"
],
"mid": [
"",
"2006104434",
"2037820867",
"130387664"
],
"abstract": [
"",
"The mode of manual annotation used in an interactive segmentation algorithm affects both its accuracy and ease-of-use. For example, bounding boxes are fast to supply, yet may be too coarse to get good results on difficult images, freehand outlines are slower to supply and more specific, yet they may be overkill for simple images. Whereas existing methods assume a fixed form of input no matter the image, we propose to predict the tradeoff between accuracy and effort. Our approach learns whether a graph cuts segmentation will succeed if initialized with a given annotation mode, based on the image's visual separability and foreground uncertainty. Using these predictions, we optimize the mode of input requested on new images a user wants segmented. Whether given a single image that should be segmented as quickly as possible, or a batch of images that must be segmented within a specified time budget, we show how to select the easiest modality that will be sufficiently strong to yield high quality segmentations. Extensive results with real users and three datasets demonstrate the impact.",
"We present an active learning framework that predicts the tradeoff between the effort and information gain associated with a candidate image annotation, thereby ranking unlabeled and partially labeled images according to their expected \"net worth\" to an object recognition system. We develop a multi-label multiple-instance approach that accommodates realistic images containing multiple objects and allows the category-learner to strategically choose what annotations it receives from a mixture of strong and weak labels. Since the annotation cost can vary depending on an image's complexity, we show how to improve the active selection by directly predicting the time required to segment an unlabeled image. Our approach accounts for the fact that the optimal use of manual effort may call for a combination of labels at multiple levels of granularity, as well as accurate prediction of manual effort. As a result, it is possible to learn more accurate category models with a lower total expenditure of annotation effort. Given a small initial pool of labeled data, the proposed method actively improves the category models with minimal manual intervention.",
"The automatic delineation of the boundaries of organs and other anatomical structures is a key component of many medical image processing systems. In this paper we present a generic learning approach based on a novel space of segmentation features, which can be trained to predict the overlap error and Dice coefficient of an arbitrary organ segmentation without knowing the ground truth delineation. We show the regressor to be much stronger a predictor of these error metrics than the responses of Probabilistic Boosting Classifiers trained on the segmentation boundary. The presented approach not only allows us to build reliable confidence measures and fidelity checks, but also to rank several segmentation hypotheses against each other during online usage of the segmentation algorithm in clinical practice."
]
} |
1705.00366 | 2610620097 | We propose the ambiguity problem for the foreground object segmentation task and motivate the importance of estimating and accounting for this ambiguity when designing vision systems. Specifically, we distinguish between images which lead multiple annotators to segment different foreground objects (ambiguous) versus minor inter-annotator differences of the same object. Taking images from eight widely used datasets, we crowdsource labeling the images as "ambiguous" or "not ambiguous" to segment in order to construct a new dataset we call STATIC. Using STATIC, we develop a system that automatically predicts which images are ambiguous. Experiments demonstrate the advantage of our prediction system over existing saliency-based methods on images from vision benchmarks and images taken by blind people who are trying to recognize objects in their environment. Finally, we introduce a crowdsourcing system to achieve cost savings for collecting the diversity of all valid "ground truth" foreground object segmentations by collecting extra segmentations only when ambiguity is expected. Experiments show our system eliminates up to 47% of human effort compared to existing crowdsourcing methods with no loss in capturing the diversity of ground truths. | Our work more closely relates to the pioneering segmentation collection work by @cite_23 , who collected multiple segmentations of natural scenes from independent annotators, motivated by the belief that segmentation tasks can afford multiple correct answers. Whereas these authors gathered a fixed number of annotations for each image from known in-house annotators to provide a soft ground truth for image contours, we show both how to predict which images offer multiple interpretations for the foreground object segmentation problem and how to more economically collect redundant annotations from an anonymous on-line crowd. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2121927366"
],
"abstract": [
"This paper presents a database containing 'ground truth' segmentations produced by humans for images of a wide variety of natural scenes. We define an error measure which quantifies the consistency between segmentations of differing granularities and find that different human segmentations of the same image are highly consistent. Use of this dataset is demonstrated in two applications: (1) evaluating the performance of segmentation algorithms and (2) measuring probability distributions associated with Gestalt grouping factors as well as statistics of image region properties."
]
} |
1705.00366 | 2610620097 | We propose the ambiguity problem for the foreground object segmentation task and motivate the importance of estimating and accounting for this ambiguity when designing vision systems. Specifically, we distinguish between images which lead multiple annotators to segment different foreground objects (ambiguous) versus minor inter-annotator differences of the same object. Taking images from eight widely used datasets, we crowdsource labeling the images as "ambiguous" or "not ambiguous" to segment in order to construct a new dataset we call STATIC. Using STATIC, we develop a system that automatically predicts which images are ambiguous. Experiments demonstrate the advantage of our prediction system over existing saliency-based methods on images from vision benchmarks and images taken by blind people who are trying to recognize objects in their environment. Finally, we introduce a crowdsourcing system to achieve cost savings for collecting the diversity of all valid "ground truth" foreground object segmentations by collecting extra segmentations only when ambiguity is expected. Experiments show our system eliminates up to 47% of human effort compared to existing crowdsourcing methods with no loss in capturing the diversity of ground truths. | Numerous systems already collect object segmentations from online crowds, including LabelMe @cite_21 and the MSCOCO crowdsourcing pipeline @cite_24 . These systems instruct the worker to segment as many objects as (s)he chooses @cite_21 or as many instances of a given object category (s)he observes @cite_24 . In both cases, the aim is to efficiently segment and name objects in a given multi-object scene image. In contrast, the goal of our system is to efficiently capture the diversity of human opinions on the foreground object for a given image. Consequently, as commonly done in human computation systems @cite_25 @cite_23 , we collect annotations from multiple, independent annotators to avoid biasing workers.
However, in contrast to these human computation systems, we automatically predict how many independent annotators to recruit to efficiently complete the task. | {
"cite_N": [
"@cite_24",
"@cite_21",
"@cite_25",
"@cite_23"
],
"mid": [
"2952122856",
"2110764733",
"1996326832",
"2121927366"
],
"abstract": [
"We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.",
"We seek to build a large collection of images with ground truth labels to be used for object detection and recognition research. Such data is useful for supervised learning and quantitative evaluation. To achieve this, we developed a web-based tool that allows easy image annotation and instant sharing of such annotations. Using this annotation tool, we have collected a large dataset that spans many object categories, often containing multiple instances over a wide variety of images. We quantify the contents of the dataset and compare against existing state of the art datasets used for object recognition and detection. Also, we show how to extend the dataset to automatically enhance object labels with WordNet, discover object parts, recover a depth ordering of objects in a scene, and increase the number of labels using minimal user supervision and images from the web.",
"In this paper, we study the salient object detection problem for images. We formulate this problem as a binary labeling task where we separate the salient object from the background. We propose a set of novel features, including multiscale contrast, center-surround histogram, and color spatial distribution, to describe a salient object locally, regionally, and globally. A conditional random field is learned to effectively combine these features for salient object detection. Further, we extend the proposed approach to detect a salient object from sequential images by introducing the dynamic salient features. We collected a large image database containing tens of thousands of carefully labeled images by multiple users and a video segment database, and conducted a set of experiments over them to demonstrate the effectiveness of the proposed approach.",
"This paper presents a database containing 'ground truth' segmentations produced by humans for images of a wide variety of natural scenes. We define an error measure which quantifies the consistency between segmentations of differing granularities and find that different human segmentations of the same image are highly consistent. Use of this dataset is demonstrated in two applications: (1) evaluating the performance of segmentation algorithms and (2) measuring probability distributions associated with Gestalt grouping factors as well as statistics of image region properties."
]
} |
1705.00366 | 2610620097 | We propose the ambiguity problem for the foreground object segmentation task and motivate the importance of estimating and accounting for this ambiguity when designing vision systems. Specifically, we distinguish between images which lead multiple annotators to segment different foreground objects (ambiguous) versus minor inter-annotator differences of the same object. Taking images from eight widely used datasets, we crowdsource labeling the images as "ambiguous" or "not ambiguous" to segment in order to construct a new dataset we call STATIC. Using STATIC, we develop a system that automatically predicts which images are ambiguous. Experiments demonstrate the advantage of our prediction system over existing saliency-based methods on images from vision benchmarks and images taken by blind people who are trying to recognize objects in their environment. Finally, we introduce a crowdsourcing system to achieve cost savings for collecting the diversity of all valid "ground truth" foreground object segmentations by collecting extra segmentations only when ambiguity is expected. Experiments show our system eliminates up to 47% of human effort compared to existing crowdsourcing methods with no loss in capturing the diversity of ground truths. | Numerous systems have been proposed to assist blind people to take a high quality picture of an object with a mobile phone camera @cite_27 @cite_45 @cite_20 @cite_16 @cite_28 . Unfortunately, such systems assume a user can localize the desired object and only help the user to improve the image focus @cite_27 , lighting @cite_45 , or composition @cite_20 @cite_16 @cite_28 . Unlike prior work, we do not assume a user can localize the object of interest. Rather, we propose a method that can be employed to automatically alert a blind user whether an image shows a single, unambiguous object.
We demonstrate the predictive advantage of our system for this task over relying on saliency-based methods @cite_2 @cite_8 . | {
"cite_N": [
"@cite_8",
"@cite_28",
"@cite_27",
"@cite_45",
"@cite_2",
"@cite_16",
"@cite_20"
],
"mid": [
"2952588030",
"1981633181",
"",
"2058556535",
"2090463878",
"2022450171",
"1997141290"
],
"abstract": [
"We study the problem of Salient Object Subitizing, i.e. predicting the existence and the number of salient objects in an image using holistic cues. This task is inspired by the ability of people to quickly and accurately identify the number of items within the subitizing range (1-4). To this end, we present a salient object subitizing image dataset of about 14K everyday images which are annotated using an online crowdsourcing marketplace. We show that using an end-to-end trained Convolutional Neural Network (CNN) model, we achieve prediction accuracy comparable to human performance in identifying images with zero or one salient object. For images with multiple salient objects, our model also provides significantly better than chance performance without requiring any localization process. Moreover, we propose a method to improve the training of the CNN subitizing model by leveraging synthetic images. In experiments, we demonstrate the accuracy and generalizability of our CNN subitizing model and its applications in salient object detection and image retrieval.",
"Computer vision and human-powered services can provide blind people access to visual information in the world around them, but their efficacy is dependent on high-quality photo inputs. Blind people often have difficulty capturing the information necessary for these applications to work because they cannot see what they are taking a picture of. In this paper, we present Scan Search, a mobile application that offers a new way for blind people to take high-quality photos to support recognition tasks. To support realtime scanning of objects, we developed a key frame extraction algorithm that automatically retrieves high-quality frames from continuous camera video stream of mobile phones. Those key frames are streamed to a cloud-based recognition engine that identifies the most significant object inside the picture. This way, blind users can scan for objects of interest and hear potential results in real time. We also present a study exploring the tradeoffs in how many photos are sent, and conduct a user study with 8 blind participants that compares Scan Search with a standard photo-snapping interface. Our results show that Scan Search allows users to capture objects of interest more efficiently and is preferred by users to the standard interface.",
"",
"Visual information pervades our environment. Vision is used to decide everything from what we want to eat at a restaurant and which bus route to take to whether our clothes match and how long until the milk expires. Individually, the inability to interpret such visual information is a nuisance for blind people who often have effective, if inefficient, work-arounds to overcome them. Collectively, however, they can make blind people less independent. Specialized technology addresses some problems in this space, but automatic approaches cannot yet answer the vast majority of visual questions that blind people may have. VizWiz addresses this shortcoming by using the Internet connections and cameras on existing smartphones to connect blind people and their questions to remote paid workers' answers. VizWiz is designed to have low latency and low cost, making it both competitive with expensive automatic solutions and much more versatile.",
"Conventional saliency analysis methods measure the saliency of individual pixels. The resulting saliency map inevitably loses information in the original image and finding salient objects in it is difficult. We propose to detect salient objects by directly measuring the saliency of an image window in the original image and adopt the well established sliding window based object detection paradigm.",
"We propose an assisted photography framework to help visually impaired users properly aim a camera and evaluate our implementation in the context of documenting public transportation accessibility. Our framework integrates user interaction during the image capturing process to help users take better pictures in real time. We use an image composition model to evaluate picture quality and suggest providing audiovisual feedback to improve users’ aiming position. With our particular framework implementation, blind participants were able to take pictures of similar quality to those taken by low vision participants without assistance. Likewise, our system helped low vision participants take pictures as good as those taken by fully sighted users. Our results also show a positive trend in favor of spoken directions to assist visually impaired users in comparison to tone and silent feedback. Positive usefulness ratings provided by full vision users further suggest that assisted photography has universal appeal.",
"Blind people want to take photographs for the same reasons as others -- to record important events, to share experiences, and as an outlet for artistic expression. Furthermore, both automatic computer vision technology and human-powered services can be used to give blind people feedback on their environment, but to work their best these systems need high-quality photos as input. In this paper, we present the results of a large survey that shows how blind people are currently using cameras. Next, we introduce EasySnap, an application that provides audio feedback to help blind people take pictures of objects and people and show that blind photographers take better photographs with this feedback. We then discuss how we iterated on the portrait functionality to create a new application called PortraitFramer designed specifically for this function. Finally, we present the results of an in-depth study with 15 blind and low-vision participants, showing that they could pick up how to successfully use the application very quickly."
]
} |
1705.00217 | 2611056830 | This work presents an unsupervised approach for improving WordNet that builds upon recent advances in document and sense representation via distributional semantics. We apply our methods to construct Wordnets in French and Russian, languages which both lack good manual constructions. These are evaluated on two new 600-word test sets for word-to-synset matching and found to improve greatly upon synset recall, outperforming the best automated Wordnets in F-score. Our methods require very few linguistic resources, thus being applicable for Wordnet construction in low-resources languages, and may further be applied to sense clustering and other Wordnet improvements. | There have also been recent vector approaches for Wordnet construction, specifically for an Arabic Wordnet and a Bengali Wordnet . The small size of these Wordnets (below 1000 synsets for high-F-score versions) underscores the difficulty of extracting sense information from unsupervised representations. In particular, we found that sense-induction methods stronger than those presented in @cite_1 , specifically sparse coding, were needed to distinguish word-senses well. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2611507145"
],
"abstract": [
"In natural language processing, lexico-semantic resources have been included in a large number of applications. Manually creating such resources is time-consuming, and their limited coverage does not always meet the needs of applications. This problem is even more pronounced for languages less well resourced than French or English. Word sense induction offers an interesting direction in this setting: given a text corpus, the goal is to infer the possible senses of each of the words it contains. In this article we study an approach based on a vector representation of each occurrence of a word, built from its neighbors. Using this representation, constructed on a Bengali corpus, we compare several approaches for clustering the occurrences of a word (k-means, hierarchical clustering and expectation-maximization) to determine the different senses it can take. We compare our results to the Bangla WordNet as well as to a reference built for this purpose. We show that this method can find senses that are not present in the Bangla WordNet."
]
} |
1705.00217 | 2611056830 | This work presents an unsupervised approach for improving WordNet that builds upon recent advances in document and sense representation via distributional semantics. We apply our methods to construct Wordnets in French and Russian, languages which both lack good manual constructions. These are evaluated on two new 600-word test sets for word-to-synset matching and found to improve greatly upon synset recall, outperforming the best automated Wordnets in F-score. Our methods require very few linguistic resources, thus being applicable for Wordnet construction in low-resource languages, and may further be applied to sense clustering and other Wordnet improvements. | Another approach is to leverage and expand upon existing resources. Two multi-lingual Wordnets thus constructed are the Extended Open Multilingual Wordnet (OMW) @cite_0, which scraped Wiktionary, and the Universal Multilingual Wordnet (UWN), which used multiple translations to match word-senses. Through evaluation we found that this approach leads to high-precision, low-recall Wordnets. This method is also used for BabelNet, which extends Wordnet and Wikipedia. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2120699290"
],
"abstract": [
"We present an automatic approach to the construction of BabelNet, a very large, wide-coverage multilingual semantic network. Key to our approach is the integration of lexicographic and encyclopedic knowledge from WordNet and Wikipedia. In addition, Machine Translation is applied to enrich the resource with lexical information for all languages. We first conduct in vitro experiments on new and existing gold-standard datasets to show the high quality and coverage of BabelNet. We then show that our lexical resource can be used successfully to perform both monolingual and cross-lingual Word Sense Disambiguation: thanks to its wide lexical coverage and novel semantic relations, we are able to achieve state-of the-art results on three different SemEval evaluation tasks."
]
} |
1705.00424 | 2610630117 | Cross-lingual model transfer is a compelling and popular method for predicting annotations in a low-resource language, whereby parallel corpora provide a bridge to a high-resource language and its associated annotated corpora. However, parallel data is not readily available for many languages, limiting the applicability of these approaches. We address these drawbacks in our framework, which takes advantage of cross-lingual word embeddings trained solely on a high-coverage bilingual dictionary. We propose a novel neural network model for joint training from both sources of data based on cross-lingual word embeddings, and show substantial empirical improvements over baseline techniques. We also propose several active learning heuristics, which result in improvements over competitive benchmark methods. | POS tagging has been studied for many years. Traditionally, probabilistic models have been a popular choice, such as Hidden Markov Models (HMM) and Conditional Random Fields (CRF) @cite_1. Recently, neural network models have been developed for POS tagging and achieved good performance, such as RNNs, bidirectional long short-term memory (BiLSTM), and CRF-BiLSTM models @cite_13 @cite_17. For example, the CRF-BiLSTM POS tagger achieved state-of-the-art performance on the Penn Treebank WSJ corpus @cite_17. | {
"cite_N": [
"@cite_13",
"@cite_1",
"@cite_17"
],
"mid": [
"",
"2147880316",
"1940872118"
],
"abstract": [
"",
"We present conditional random fields, a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.",
"In this paper, we propose a variety of Long Short-Term Memory (LSTM) based models for sequence tagging. These models include LSTM networks, bidirectional LSTM (BI-LSTM) networks, LSTM with a Conditional Random Field (CRF) layer (LSTM-CRF) and bidirectional LSTM with a CRF layer (BI-LSTM-CRF). Our work is the first to apply a bidirectional LSTM CRF (denoted as BI-LSTM-CRF) model to NLP benchmark sequence tagging data sets. We show that the BI-LSTM-CRF model can efficiently use both past and future input features thanks to a bidirectional LSTM component. It can also use sentence level tag information thanks to a CRF layer. The BI-LSTM-CRF model can produce state of the art (or close to) accuracy on POS, chunking and NER data sets. In addition, it is robust and has less dependence on word embedding as compared to previous observations."
]
} |
1705.00128 | 2611322782 | In most modern enterprise systems, redundancy configuration is often considered to provide availability while part of such systems is being patched. However, the redundancy may increase the attack surface of the system. In this paper, we model and assess the security and capacity-oriented availability of multiple server redundancy designs when applying security patches to the servers. We construct (1) a graphical security model to evaluate the security under potential attacks before and after applying patches, (2) a stochastic reward net model to assess the capacity-oriented availability of the system with a patch schedule. We present our approach based on a case study and model-based evaluation for multiple design choices. The results show redundancy designs increase capacity-oriented availability but decrease security when applying security patches. We define functions that compare values of security metrics and capacity-oriented availability with chosen upper/lower bounds to find design choices that satisfy both security and availability requirements. | Roy @cite_8 proposed attack countermeasure trees (ACTs) for qualitative and probabilistic security analysis, taking into account defense mechanisms on the nodes. They implemented the ACT in SHARPE @cite_10 and showed the usability of their model in a case study. Albanese @cite_20 used AGs to compute the minimum-cost network hardening solution. The experiments were carried out using synthetic attack graphs, and the results validated the performance of their approach. Hong @cite_17 developed the multi-layered HARM and performed a scalability analysis against single-layer AGs in terms of model construction and evaluation. The simulation results demonstrated that the HARM is more scalable than the single-layer AG. | {
"cite_N": [
"@cite_10",
"@cite_17",
"@cite_20",
"@cite_8"
],
"mid": [
"",
"2515955240",
"2150802402",
"2049973206"
],
"abstract": [
"",
"Security models, such as an attack graph (AG), are widely adopted to assess the security of networked systems, such as utilizing various security metrics and providing a cost-effective network hardening solution. There are various methods of generating these models, but the scalability problem exists for single-layered graph-based security models when analyzing all possible attack paths. To address this problem, we propose to use a multi-layer hierarchical attack representation model (HARM) that models various components in the networked system in different layers to reduce the computational complexity. First, we formulate key questions that need to be answered to assess the scalability of security models. Second, we formally define the multi-layer HARM. Last, we conduct experiments to show the scalability of security models. Our experimental results show that using the HARM can improve the scalability of assessing the security of the networked system significantly in comparison to the single-layered security models in various network scenarios.",
"Attack graph analysis has been established as a powerful tool for analyzing network vulnerability. However, previous approaches to network hardening look for exact solutions and thus do not scale. Further, hardening elements have been treated independently, which is inappropriate for real environments. For example, the cost for patching many systems may be nearly the same as for patching a single one. Or patching a vulnerability may have the same effect as blocking traffic with a firewall, while blocking a port may deny legitimate service. By failing to account for such hardening interdependencies, the resulting recommendations can be unrealistic and far from optimal. Instead, we formalize the notion of hardening strategy in terms of allowable actions, and define a cost model that takes into account the impact of interdependent hardening actions. We also introduce a near-optimal approximation algorithm that scales linearly with the size of the graphs, which we validate experimentally.",
"Attack tree (AT) is one of the widely used non-state-space models for security analysis. The basic formalism of AT does not take into account defense mechanisms. Defense trees (DTs) have been developed to investigate the effect of defense mechanisms using measures such as attack cost, security investment cost, return on attack (ROA), and return on investment (ROI). DT, however, places defense mechanisms only at the leaf nodes and the corresponding ROI/ROA analysis does not incorporate the probabilities of attack. In attack response tree (ART), attack and response are both captured but ART suffers from the problem of state-space explosion, since solution of ART is obtained by means of a state-space model. In this paper, we present a novel attack tree paradigm called attack countermeasure tree (ACT) which avoids the generation and solution of a state-space model and takes into account attacks as well as countermeasures (in the form of detection and mitigation events). In ACT, detection and mitigation are allowed not just at the leaf node but also at the intermediate nodes while at the same time the state-space explosion problem is avoided in its analysis. We study the consequences of incorporating countermeasures in the ACT using three case studies (ACT for BGP attack, ACT for a SCADA attack and ACT for malicious insider attacks). Copyright © 2011 John Wiley & Sons, Ltd."
]
} |
1705.00128 | 2611322782 | In most modern enterprise systems, redundancy configuration is often considered to provide availability while part of such systems is being patched. However, the redundancy may increase the attack surface of the system. In this paper, we model and assess the security and capacity-oriented availability of multiple server redundancy designs when applying security patches to the servers. We construct (1) a graphical security model to evaluate the security under potential attacks before and after applying patches, (2) a stochastic reward net model to assess the capacity-oriented availability of the system with a patch schedule. We present our approach based on a case study and model-based evaluation for multiple design choices. The results show redundancy designs increase capacity-oriented availability but decrease security when applying security patches. We define functions that compare values of security metrics and capacity-oriented availability with chosen upper/lower bounds to find design choices that satisfy both security and availability requirements. | Kim @cite_7 proposed a hierarchical approach to model the availability of non-virtualized and virtualized systems, using a fault tree for the system and CTMCs for the components. Trivedi @cite_3 presented several case studies on assessing the availability of real systems from Motorola, Cisco and Sun Microsystems via stochastic models. | {
"cite_N": [
"@cite_3",
"@cite_7"
],
"mid": [
"1857975650",
"2141740699"
],
"abstract": [
"Availability assessment is of paramount importance to guarantee uninterrupted operation of a variety of commercial-grade information and networked systems. In this paper, we present practical case studies that show how to use stochastic analytic modeling approaches to quantitatively assess the availability of such systems. We present non-state-space models, state-space models and hierarchical models. We describe the details of these modeling approaches to assess system availability. Copyright © 2012 John Wiley & Sons, Ltd.",
"This paper develops an availability model of a virtualized system. We construct non-virtualized and virtualized two-host system models using a two-level hierarchical approach in which fault trees are used in the upper level and homogeneous continuous time Markov chains (CTMC) are used to represent sub-models in the lower level. In the models, we incorporate not only hardware failures (e.g., CPU, memory, power, etc.) but also software failures including Virtual Machine Monitor (VMM), Virtual Machine (VM), and application failures. We also incorporate high availability (HA) service and VM live migration in the virtualized system. Metrics we use are system steady state availability, downtime in minutes per year and capacity oriented availability."
]
} |
1705.00128 | 2611322782 | In most modern enterprise systems, redundancy configuration is often considered to provide availability while part of such systems is being patched. However, the redundancy may increase the attack surface of the system. In this paper, we model and assess the security and capacity-oriented availability of multiple server redundancy designs when applying security patches to the servers. We construct (1) a graphical security model to evaluate the security under potential attacks before and after applying patches, (2) a stochastic reward net model to assess the capacity-oriented availability of the system with a patch schedule. We present our approach based on a case study and model-based evaluation for multiple design choices. The results show redundancy designs increase capacity-oriented availability but decrease security when applying security patches. We define functions that compare values of security metrics and capacity-oriented availability with chosen upper/lower bounds to find design choices that satisfy both security and availability requirements. | There is some work that considers both security and dependability. Trivedi @cite_0 proposed a new classification of dependability and security models and showed case studies for both the individual and composite models. Wang @cite_14 developed an intrusion tolerant architecture for distributed servers based on fault-tolerant computing techniques to mitigate the impact of both known and unknown attacks. Bangalore @cite_6 proposed self-cleansing intrusion tolerance, a virtualization-based method that reduces a server's exposure time to less than a minute. The experimental results showed that a lower exposure time leads to a slightly longer response time but yields a higher security level. Yu @cite_18 evaluated the survivability (both statically and under sustained attacks) and costs of three virtual machine based architectures using analytical methods.
Ramasamy @cite_5 used combinatorial modeling to analyze the impact of virtualization on a single physical node, under the assumption that module failures are independent. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_6",
"@cite_0",
"@cite_5"
],
"mid": [
"1516801632",
"2137650896",
"2131717037",
"1976009916",
""
],
"abstract": [
"Virtual machine based services are becoming predominant in data centers or cloud computing since virtual machines can provide strong isolation and better monitoring for security purposes. While there are many promising security techniques based on virtual machines, it is not clear how significant the difference between various system architectures can be in terms of survivability. In this paper, we analyze the survivability of three virtual machine based architectures — load balancing architecture, isolated service architecture, and BFT architecture. Both the survivability based on the availability and the survivability under sustained attacks for each architecture are analyzed. Furthermore, the costs of each architecture are compared. The results show that even if the same set of commercial off-the-shelf (COTS) software is used, the performance of various service architectures is largely different in surviving attacks. Our results can be used as guidelines in the service architecture design when survivability to attacks is important.",
"This paper presents an intrusion tolerant architecture for distributed services, especially COTS servers. An intrusion tolerant system assumes that attacks will happen, and some will be successful. However, a wide range of mission critical applications need to provide continuous service despite active attacks or partial compromise. The proposed architecture emphasizes continuity of operation. It strives to mitigate the effects of both known and unknown attacks. We make use of fault tolerant computing techniques, specifically redundancy, diversity, acceptance tests, voting, as well as adaptive reconfiguration. Our architecture consists of five functional components that work together to extend the fault tolerance capability of COTS servers. In addition, the architecture provides mechanisms to audit the COTS servers and internal components for signs of compromise. The auditing as well as adaptive reconfiguration components evaluate the environment threats, identify potential sources of compromise and adaptively generate new configurations for the system.",
"The number of malware attacks is increasing. Companies have invested millions of dollars in intrusion detection and intrusion prevention (ID/IP) technologies and products, yet many web servers are hacked every year. The current reactive methods of security have proven to be inadequate because the “bad guys” are always one step ahead of the Intrusion Detection/Intrusion Prevention community. Our research seeks to prove the feasibility of a completely new and innovative theory of server security called “Self-Cleansing Intrusion Tolerance” (SCIT). SCIT shifts the focus from detection and prevention to containing losses. SCIT uses virtualization technology in a new and unique way to make it more difficult for attackers to do damage/acquire data by reducing a server’s exposure time from several months to less than a minute. In this way we increase the dependability of the server and provide a new way to balance the trade-off between security and availability. We have applied SCIT to multiple types of servers (DNS, SSO and Web); in this paper we will focus on securing web servers using SCIT. Based on the results of load testing of a web application for various load scenarios under both SCIT and non-SCIT environments, we will clearly show that SCIT provides a high degree of security with little degradation in overall response time of the application.",
"There is a need to quantify system properties methodically. Dependability and security models have evolved nearly independently. Therefore, it is crucial to develop a classification of dependability and security models which can meet the requirement of professionals in both fault-tolerant computing and security community. In this paper, we present a new classification of dependability and security models. First we present the classification of threats and mitigations in systems and networks. And then we present several individual model types such as availability, confidentiality, integrity, performance, reliability, survivability, safety and maintainability. Finally we show that each model type can be combined and represented by one of the model representation techniques: combinatorial (such as reliability block diagrams (RBD), reliability graphs, fault trees, attack trees), state-space (continuous time Markov chains, stochastic Petri nets, fluid stochastic Petri nets, etc) and hierarchical (e.g., fault trees in the upper level and Markov chains in the lower level). We show case studies for each individual model types as well as composite model types.",
""
]
} |
1705.00382 | 2964125296 | We examine two greedy heuristics — wiring and rewiring — for constructing maximum assortative graphs over all simple connected graphs with a target degree sequence. Counterexamples show that natural greedy rewiring heuristics do not necessarily return a maximum assortative graph, even though it is known that the meta-graph of all simple connected graphs with given degree is connected under rewiring. Counterexamples show an elegant greedy graph wiring heuristic from the literature may fail to achieve the target degree sequence or may fail to wire a maximally assortative graph. | Assortativity. Newman @cite_1 introduced (graph) assortativity, denoted @math. Van Mieghem @cite_14 showed perfect assortativity ( @math ) is only possible in regular graphs, while any complete bipartite graph @math ( @math ) is perfectly disassortative ( @math ). There is a large literature on network degree correlations and assortativity (e.g., @cite_6 ), and on graphs with extremal assortativity within a class (e.g., @cite_12 ). | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_12",
"@cite_6"
],
"mid": [
"2154177180",
"2040956707",
"2338515455",
"1803765581"
],
"abstract": [
"Newman’s measure for (dis)assortativity, the linear degree correlation coefficient ρD, is reformulated in terms of the total number Nk of walks in the graph with k hops. This reformulation allows us to derive a new formula from which a degree-preserving rewiring algorithm is deduced, that, in each rewiring step, either increases or decreases ρD conform our desired objective. Spectral metrics (eigenvalues of graph-related matrices), especially, the largest eigenvalue λ1 of the adjacency matrix and the algebraic connectivity μN−1 (second-smallest eigenvalue of the Laplacian) are powerful characterizers of dynamic processes on networks such as virus spreading and synchronization processes. We present various lower bounds for the largest eigenvalue λ1 of the adjacency matrix and we show, apart from some classes of graphs such as regular graphs or bipartite graphs, that the lower bounds for λ1 increase with ρD. A new upper bound for the algebraic connectivity μN−1 decreases with ρD. Applying the degree-preserving rewiring algorithm to various real-world networks illustrates that (a) assortative degree-preserving rewiring increases λ1, but decreases μN−1, even leading to disconnectivity of the networks in many disjoint clusters and that (b) disassortative degree-preserving rewiring decreases λ1, but increases the algebraic connectivity, at least in the initial rewirings.",
"A network is said to show assortative mixing if the nodes in the network that have many connections tend to be connected to other nodes with many connections. Here we measure mixing patterns in a variety of networks and find that social networks are mostly assortatively mixed, but that technological and biological networks tend to be disassortative. We propose a model of an assortatively mixed network, which we study both analytically and numerically. Within this model we find that networks percolate more easily if they are assortative and that they are also more robust to vertex removal.",
"",
"Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks—the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain—and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs."
]
} |
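As a concrete illustration of the assortativity metric discussed in the row above (a minimal sketch, not code from any cited paper): Newman's degree assortativity is the Pearson correlation between the degrees at the two endpoints of an edge, taken over all edges counted in both directions, and it equals -1 for any complete bipartite graph with unequal part sizes.

```python
from itertools import product

def degree_assortativity(edges):
    # Newman's degree assortativity: Pearson correlation of endpoint
    # degrees over every edge, counted in both directions so the
    # measure is symmetric for undirected graphs. Undefined (0/0)
    # for regular graphs, where all degrees are equal.
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# K_{2,3}: complete bipartite graph with unequal parts -> r = -1
k23 = list(product(["a", "b"], [1, 2, 3]))
print(round(degree_assortativity(k23), 6))  # -1.0
```

Every edge of K_{2,3} joins a degree-3 vertex to a degree-2 vertex, so the endpoint-degree pairs are perfectly anti-correlated, matching the perfectly disassortative case noted in the row above.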
1705.00382 | 2964125296 | We examine two greedy heuristics — wiring and rewiring — for constructing maximum assortative graphs over all simple connected graphs with a target degree sequence. Counterexamples show that natural greedy rewiring heuristics do not necessarily return a maximum assortative graph, even though it is known that the meta-graph of all simple connected graphs with given degree is connected under rewiring. Counterexamples show an elegant greedy graph wiring heuristic from the literature may fail to achieve the target degree sequence or may fail to wire a maximally assortative graph. | Joint Degree Matrix (JDM). The generation of random graphs with a particular JDM (also called a 2K-series) has been the subject of a number of recent papers. Stanton @cite_19 and Orsini @cite_6 have proposed random edge rewiring as a method of sampling graphs with a given JDM, while Gjoka @cite_22 has introduced a random wiring method for constructing these graphs. However, there is no means known to us by which JDMs may be efficiently enumerated, and therefore there is no easy means to maximize assortativity, which is a statistic of the JDM, short of enumerating all (in our case, simple and connected) graphs with a given degree sequence. | {
"cite_N": [
"@cite_19",
"@cite_22",
"@cite_6"
],
"mid": [
"2025598851",
"1564631807",
"1803765581"
],
"abstract": [
"One of the most influential recent results in network analysis is that many natural networks exhibit a power-law or log-normal degree distribution. This has inspired numerous generative models that match this property. However, more recent work has shown that while these generative models do have the right degree distribution, they are not good models for real-life networks due to their differences on other important metrics like conductance. We believe this is, in part, because many of these real-world networks have very different joint degree distributions, that is, the probability that a randomly selected edge will be between nodes of degree k and l. Assortativity is a sufficient statistic of the joint degree distribution, and it has been previously noted that social networks tend to be assortative, while biological and technological networks tend to be disassortative. We suggest understanding the relationship between network structure and the joint degree distribution of graphs is an interesting avenue of further research. An important tool for such studies are algorithms that can generate random instances of graphs with the same joint degree distribution. This is the main topic of this article, and we study the problem from both a theoretical and practical perspective. We provide an algorithm for constructing simple graphs from a given joint degree distribution, and a Monte Carlo Markov chain method for sampling them. We also show that the state space of simple graphs with a fixed degree distribution is connected via endpoint switches. We empirically evaluate the mixing time of this Markov chain by using experiments based on the autocorrelation of each edge. These experiments show that our Markov chain mixes quickly on these real graphs, allowing for utilization of our techniques in practice.",
"In networking research, it is often desirable to generate synthetic graphs with certain properties. In this paper, we present a new algorithm, 2K_Simple, for exact construction of simple graphs with a target joint degree matrix (JDM). We prove that the algorithm constructs exactly the target JDM and that its running time is linear in the number of edges. Furthermore, we show that the algorithm poses less constraints on the graph structure than previous state-of-the-art construction algorithms. We exploit this flexibility to extend 2K_Simple and design two algorithms that achieve additional network properties on top of the exact target JDM. In particular, 2K_Simple_Clustering produces simple graphs with a target JDM and average clustering coefficient close to a target, while 2K_Simple_Attributes produces exactly simple graphs with a target JDM and joint occurrence of node attribute pairs. We exhaustively evaluate our algorithms through simulation for small graphs, and we also demonstrate their benefits in generating graphs that resemble real-world social networks in terms of accuracy and speed; we reduce the running time by orders of magnitude compared to previous approaches that rely on Monte Carlo Markov Chains.",
"Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks—the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain—and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs."
]
} |
1705.00382 | 2964125296 | We examine two greedy heuristics — wiring and rewiring — for constructing maximum assortative graphs over all simple connected graphs with a target degree sequence. Counterexamples show that natural greedy rewiring heuristics do not necessarily return a maximum assortative graph, even though it is known that the meta-graph of all simple connected graphs with given degree is connected under rewiring. Counterexamples show an elegant greedy graph wiring heuristic from the literature may fail to achieve the target degree sequence or may fail to wire a maximally assortative graph. | Rewiring. The meta-graph for a degree sequence, with a vertex for each connected simple graph with that degree sequence and an edge connecting graphs related by rewiring a pair of edges, was studied by Taylor @cite_17; in particular, he showed this meta-graph to be connected (Thm. 3.3), extending an earlier result by Ryser for simple graphs @cite_9. This fact is used in rewiring. | {
"cite_N": [
"@cite_9",
"@cite_17"
],
"mid": [
"2021903957",
"155825010"
],
"abstract": [
"This paper is concerned with a matrix A of m rows and n columns, all of whose entries are 0’s and 1’s. Let the sum of row i of A be denoted by r_i (i = 1, ..., m) and let the sum of column i of A be denoted by s_i (i = 1, ..., n).",
"This paper investigates realizations of a given degree sequence, and the way in which they are related by switchings. The results are given in the context of simple graphs, multigraphs and pseudographs. We show that we can transform any connected graph to any other connected graph of the same degree sequence, by switchings which are constrained to connected graphs. This is done for certain labelled graphs, the result for unlabelled graphs following as a corollary. The results are then extended to infinite degree sequences."
]
} |
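The rewiring move underlying Taylor's meta-graph result (swap the endpoints of two edges while keeping every vertex degree unchanged) can be sketched as below. The adjacency-set representation, function name, and attempt bound are illustrative assumptions, not part of the cited work; note that this basic move alone does not preserve connectivity, which is exactly why Taylor's connectivity-constrained switching result is needed.

```python
import random

def degree_preserving_swap(adj, rng=None, attempts=100):
    """Attempt one rewiring step: replace edges (u, v), (x, y) by (u, x), (v, y).

    adj maps each vertex to its set of neighbours (simple undirected graph).
    Every vertex keeps its degree whether or not a swap succeeds; swaps that
    would create a self-loop or a parallel edge are rejected.
    """
    rng = rng or random.Random(0)
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    for _ in range(attempts):
        (u, v), (x, y) = rng.sample(edges, 2)
        # Reject swaps yielding self-loops (shared endpoints) or parallel edges.
        if len({u, v, x, y}) < 4 or x in adj[u] or y in adj[v]:
            continue
        adj[u].remove(v); adj[v].remove(u)
        adj[x].remove(y); adj[y].remove(x)
        adj[u].add(x); adj[x].add(u)
        adj[v].add(y); adj[y].add(v)
        return True
    return False
```

A rewiring heuristic then amounts to repeating this move, accepting only swaps that improve the objective (and, in the connectivity-constrained variant, only swaps that keep the graph connected).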
1705.00382 | 2964125296 | Abstract We examine two greedy heuristics — wiring and rewiring — for constructing maximum assortative graphs over all simple connected graphs with a target degree sequence. Counterexamples show that natural greedy rewiring heuristics do not necessarily return a maximum assortative graph, even though it is known that the meta-graph of all simple connected graphs with given degree is connected under rewiring. Counterexamples show an elegant greedy graph wiring heuristic from the literature may fail to achieve the target degree sequence or may fail to wire a maximally assortative graph. | Following Ryser's work, rewiring heuristics for sampling graphs with a particular degree sequence (e.g., @cite_21 , @cite_18 , @cite_6 ) have been introduced. Rewiring heuristics have also been proposed by Newman @cite_16 , Xulvi-Brunet @cite_2 , Van Mieghem @cite_14 , and Winterbach @cite_4 , among others, for changing a graph's assortativity. The first three of these algorithms, being purely stochastic, cannot efficiently maximize assortativity. Winterbach's algorithm uses a guided rewiring technique to maximize assortativity. However, this technique does not maintain graph connectivity, as its rewirings are a subset of those explored by rewiring heuristic @math (see grh ), and therefore Winterbach's algorithm does not necessarily maximize assortativity. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_21",
"@cite_6",
"@cite_2",
"@cite_16"
],
"mid": [
"2156737316",
"2154177180",
"2140214099",
"1988307095",
"1803765581",
"1652469167",
"2033193852"
],
"abstract": [
"Molecular networks guide the biochemistry of a living cell on multiple levels: Its metabolic and signaling pathways are shaped by the network of interacting proteins, whose production, in turn, is controlled by the genetic regulatory network. To address topological properties of these two networks, we quantified correlations between connectivities of interacting nodes and compared them to a null model of a network, in which all links were randomly rewired. We found that for both interaction and regulatory networks, links between highly connected proteins are systematically suppressed, whereas those between a highly connected and low-connected pairs of proteins are favored. This effect decreases the likelihood of cross talk between different functional modules of the cell and increases the overall robustness of a network by localizing effects of deleterious perturbations.",
"Newman’s measure for (dis)assortativity, the linear degree correlation coefficient ?D, is reformulated in terms of the total number Nk of walks in the graph with k hops. This reformulation allows us to derive a new formula from which a degree-preserving rewiring algorithm is deduced, that, in each rewiring step, either increases or decreases ?D conform our desired objective. Spectral metrics (eigenvalues of graph-related matrices), especially, the largest eigenvalue ?1 of the adjacency matrix and the algebraic connectivity ?N?1 (second-smallest eigenvalue of the Laplacian) are powerful characterizers of dynamic processes on networks such as virus spreading and synchronization processes. We present various lower bounds for the largest eigenvalue ?1 of the adjacency matrix and we show, apart from some classes of graphs such as regular graphs or bipartite graphs, that the lower bounds for ?1 increase with ?D. A new upper bound for the algebraic connectivity ?N?1 decreases with ?D. Applying the degree-preserving rewiring algorithm to various real-world networks illustrates that (a) assortative degree-preserving rewiring increases ?1, but decreases ?N?1, even leading to disconnectivity of the networks in many disjoint clusters and that (b) disassortative degree-preserving rewiring decreases ?1, but increases the algebraic connectivity, at least in the initial rewirings.",
"We consider algorithms for generating networks that are extremal with respect to degree assortativity. Networks with maximized and minimized assortativities have been studied by other authors. In these cases, networks are rewired whilst maintaining their degree vectors. Although rewiring can be used to create networks with high or low assortativities, it is not known how close the results are to the true maximum or minimum assortativities achievable by networks with the same degree vectors. We introduce the first algorithm for computing a network with maximal or minimal assortativity on a given vector of valid node degrees. We compare the assortativity metrics of networks obtained by this algorithm to assortativity metrics of networks obtained by a greedy assortativity-maximization algorithm. The algorithms are applied to Erdős-Renyi networks, Barabasi-Albert and a sample of real-world networks. We find that the number of rewirings considered by the greedy approach must scale with the number of links in order to ensure a good approximation.",
"We consider two problems: randomly generating labeled bipartite graphs with a given degree sequence and randomly generating labeled tournaments with a given score sequence. We analyze simple Markov chains for both problems. For the first problem, we cannot prove that our chain is rapidly mixing in general, but in the near-regular case, i.e., when all the degrees are almost equal, we give a proof of rapid mixing. Our methods also apply to the corresponding problem for general (nonbipartite) regular graphs, which was studied earlier by several researchers. One significant difference in our approach is that our chain has one state for every graph (or bipartite graph) with the given degree sequence; in particular, there are no auxiliary states as in the chain used by Jerrum and Sinclair. For the problem of generating tournaments, we are able to prove that our Markov chain on tournaments is rapidly mixing, if the score sequence is near-regular. The proof techniques we use for the two problems are similar. ©1999 John Wiley & Sons, Inc. Random Struct. Alg., 14: 293–308, 1999",
"Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks—the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain—and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs.",
"",
"We study assortative mixing in networks, the tendency for vertices in networks to be connected to other vertices that are like (or unlike) them in some way. We consider mixing according to discrete characteristics such as language or race in social networks and scalar characteristics such as age. As a special example of the latter we consider mixing according to vertex degree, i.e., according to the number of connections vertices have to other vertices: do gregarious people tend to associate with other gregarious people? We propose a number of measures of assortative mixing appropriate to the various mixing types, and apply them to a variety of real-world networks, showing that assortative mixing is a pervasive phenomenon found in many networks. We also propose several models of assortatively mixed networks, both analytic ones based on generating function methods, and numerical ones based on Monte Carlo graph generation techniques. We use these models to probe the properties of networks as their level of assortativity is varied. In the particular case of mixing by degree, we find strong variation with assortativity in the connectivity of the network and in the resilience of the network to the removal of vertices."
]
} |
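Newman's degree assortativity coefficient, the objective these rewiring heuristics try to maximize, is the Pearson correlation of the degrees at the two ends of an edge. A minimal sketch (the function name and plain-dict interface are illustrative):

```python
def degree_assortativity(edges, degree):
    """Pearson correlation of the degrees at the two ends of each edge.

    edges: (u, v) pairs of a simple undirected graph; degree: vertex -> degree.
    Each edge contributes both (deg u, deg v) and (deg v, deg u), following
    Newman's definition for undirected graphs, so the two coordinates share
    the same mean and variance.  Undefined for regular graphs (zero variance).
    """
    xs, ys = [], []
    for u, v in edges:
        xs += [degree[u], degree[v]]
        ys += [degree[v], degree[u]]
    n = len(xs)
    mean = sum(xs) / n
    cov = sum((x - mean) * (y - mean) for x, y in zip(xs, ys)) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return cov / var
```

For example, a star graph is maximally disassortative: every edge joins the high-degree centre to a degree-one leaf, giving a coefficient of -1.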
1705.00382 | 2964125296 | Abstract We examine two greedy heuristics — wiring and rewiring — for constructing maximum assortative graphs over all simple connected graphs with a target degree sequence. Counterexamples show that natural greedy rewiring heuristics do not necessarily return a maximum assortative graph, even though it is known that the meta-graph of all simple connected graphs with given degree is connected under rewiring. Counterexamples show an elegant greedy graph wiring heuristic from the literature may fail to achieve the target degree sequence or may fail to wire a maximally assortative graph. | Wiring. Li and Alderson @cite_15 introduced a greedy wiring heuristic for constructing a graph with maximum assortativity over the set of simple connected graphs with a target degree sequence. Kincaid @cite_12 argues that wiring a minimally or maximally assortative connected simple graph is NP-hard and proposes a heuristic that is shown numerically to perform near optimally in minimizing graph assortativity. Winterbach @cite_4 , Zhou @cite_8 , and Meghanathan @cite_7 have also proposed methods, unconstrained by graph connectivity, for wiring maximally assortative graphs. This paper examines Li's heuristic further in wiring . | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_15",
"@cite_12"
],
"mid": [
"2140214099",
"2184540354",
"2000092045",
"1987236914",
"2338515455"
],
"abstract": [
"We consider algorithms for generating networks that are extremal with respect to degree assortativity. Networks with maximized and minimized assortativities have been studied by other authors. In these cases, networks are rewired whilst maintaining their degree vectors. Although rewiring can be used to create networks with high or low assortativities, it is not known how close the results are to the true maximum or minimum assortativities achievable by networks with the same degree vectors. We introduce the first algorithm for computing a network with maximal or minimal assortativity on a given vector of valid node degrees. We compare the assortativity metrics of networks obtained by this algorithm to assortativity metrics of networks obtained by a greedy assortativity-maximization algorithm. The algorithms are applied to Erdős-Renyi networks, Barabasi-Albert and a sample of real-world networks. We find that the number of rewirings considered by the greedy approach must scale with the number of links in order to ensure a good approximation.",
"We define the problem of maximal assortativity matching (MAM) for a complex network graph as the problem of maximizing the similarity of the end vertices (with respect to some measure of node weight) constituting the matching. In this pursuit, we introduce a metric called the assortativity weight of an edge, defined as the product of the number of uncovered adjacent edges and the absolute value of the difference in the weights of the end vertices. The MAM algorithm prefers to include edges that have the smallest assortativity weight in each iteration (one edge per iteration) until all edges are covered. The MAM algorithm can also be adapted to determine a maximal dissortative matching (MDM) to maximize the dissimilarity between the end vertices that are part of a matching as well as to determine a maximal node matching (MNM) that simply maximizes the number of vertices that are part of the matching. We run the MAM, MNM and MDM algorithms on real-world network graphs as well as on the theoretical model-based random network graphs and scale-free network graphs and analyze the tradeoffs between the of node matches and assortativity index (targeted optimal values: 1 for MAM and -1 for MDM).",
"Recently, the assortative mixing of complex networks has received much attention partly because of its significance in various social networks. In this paper, a new scheme to generate an assortative growth network with given degree distribution is presented using a Monte Carlo sampling method. Since the degrees of a great number of real-life networks obey either power-law or Poisson distribution, we employ these two distributions to grow our models. The models generated by this method exhibit interesting characteristics such as high average path length, high clustering coefficient and strong rich-club effects.",
"There is a large, popular, and growing literature on \"scale-free\" networks with the Internet along with metabolic networks representing perhaps the canonical examples. While this has in many ways reinvigorated graph theory, there is unfortunately no consistent, precise definition of scale-free graphs and few rigorous proofs of many of their claimed properties. In fact, it is easily shown that the existing theory has many inherent contradictions and that the most celebrated claims regarding the Internet and biology are verifiably false. In this paper, we introduce a structural metric that allows us to differentiate between all simple, connected graphs having an identical degree sequence, which is of particular interest when that sequence satisfies a power law relationship. We demonstrate that the proposed structural metric yields considerable insight into the claimed properties of SF graphs and provides one possible measure of the extent to which a graph is scale-free. This structural view can be related t...",
""
]
} |
1705.00382 | 2964125296 | Abstract We examine two greedy heuristics — wiring and rewiring — for constructing maximum assortative graphs over all simple connected graphs with a target degree sequence. Counterexamples show that natural greedy rewiring heuristics do not necessarily return a maximum assortative graph, even though it is known that the meta-graph of all simple connected graphs with given degree is connected under rewiring. Counterexamples show an elegant greedy graph wiring heuristic from the literature may fail to achieve the target degree sequence or may fail to wire a maximally assortative graph. | Graph enumeration and generation. The results in this paper were achieved using geng , a tool in the nauty package created by McKay @cite_0 , to generate all simple connected graphs of a given order. | {
"cite_N": [
"@cite_0"
],
"mid": [
"1990600049"
],
"abstract": [
"We report the current state of the graph isomorphism problem from the practical point of view. After describing the general principles of the refinement-individualization paradigm and pro ving its validity, we explain how it is implemented in several of the key implementations. In particular, we bring the description of the best known program nauty up to date and describe an innovative approach called Traces that outperforms the competitors for many difficult graph classes. Detailed comparisons against saucy, Bliss and conauto are presented."
]
} |
1705.00451 | 2610794603 | It has been well recognized that detecting drivable area is central to self-driving cars. Most of existing methods attempt to locate road surface by using lane line, thereby restricting to drivable area on which have a clear lane mark. This paper proposes an unsupervised approach for detecting drivable area utilizing both image data from a monocular camera and point cloud data from a 3D-LIDAR scanner. Our approach locates initial drivable areas based on a "direction ray map" obtained by image-LIDAR data fusion. Besides, a fusion of the feature level is also applied for more robust performance. Once the initial drivable areas are described by different features, the feature fusion problem is formulated as a Markov network and a belief propagation algorithm is developed to perform the model inference. Our approach is unsupervised and avoids common hypothesis, yet gets state-of-the-art results on ROAD-KITTI benchmark. Experiments show that our unsupervised approach is efficient and robust for detecting drivable area for self-driving cars. | Reliably detecting road areas is a key requirement in self-driving cars. In recent years, many approaches have been proposed to address this challenge. The approaches mainly differ in the type of sensor used to acquire data, such as a monocular camera @cite_20 , a binocular camera @cite_2 , LIDAR @cite_8 , or a fusion of multiple sensors @cite_14 . | {
"cite_N": [
"@cite_8",
"@cite_14",
"@cite_20",
"@cite_2"
],
"mid": [
"2537159558",
"1569925765",
"2070708339",
"161578567"
],
"abstract": [
"Obtaining a comprehensive model of large and complex ground typically is crucial for autonomous driving both in urban and countryside environments. This paper presents an improved ground segmentation method for 3D LIDAR point clouds. Our approach builds on a polar grid map, which is divided into some sectors, then 1D Gaussian process (GP) regression model and Incremental Sample Consensus (INSAC) algorithm is used to extract ground for every sector. Experiments are carried out at the autonomous vehicle in different outdoor scenes, and results are compared to those of the existing method. We show that our method can get more promising performance.",
"In this paper, we propose to fuse the LIDAR and monocular image in the framework of conditional random field to detect the road robustly in challenging scenarios. LIDAR points are aligned with pixels in image by cross calibration. Then boosted decision tree based classifiers are trained for image and point cloud respectively. The scores of the two kinds of classifiers are treated as the unary potentials of the corresponding pixel nodes of the random field. The fused conditional random field can be solved efficiently with graph cut. Extensive experiments tested on KITTI-Road benchmark show that our method reaches the state-of-the-art.",
"In order to support driver assistance systems in unconstrained environments, we propose to extend local appearance-based road classification with a spatial feature generation and classification. Therefore, a hierarchical approach consisting of multiple low level base classifiers, the novel spatial feature generation, as well as a final road terrain classification, is used. The system perceives a variety of local properties of the environment by means of base classifiers operating on patches extracted from monocular camera images, each represented in a metric confidence map. The core of the proposed approach is the computation of spatial ray features (SPRAY) from these confidence maps. With this, the road-terrain classifier can decide based on local visual properties and their spatial layout in the scene. In order to show the feasibility of the approach, the extraction and evaluation of the metric ego-lane driving corridor on an inner city stream is demonstrated. This is a challenging task because on a local appearance level, ego-lane is not distinguishable from other asphalt parts on the road. However, by incorporating the proposed SPRAY features the distinction is possible without requiring an explicit lane model. Due to the parallel structure of this bottom-up approach, the implemented system operates in real-time with approximately 25 Hz on a GPU.",
"The computation of free space available in an environment is an essential task for many intelligent automotive and robotic applications. This paper proposes a new approach, which builds a stochastic occupancy grid to address the free space problem as a dynamic programming task. Stereo measurements are integrated over time reducing disparity uncertainty. These integrated measurements are entered into an occupancy grid, taking into account the noise properties of the measurements. In order to cope with real-time requirements of the application, three occupancy grid types are proposed. Their applicabilities and implementations are also discussed. Experimental results with real stereo sequences show the robustness and accuracy of the method. The current implementation of the method runs on off-the-shelf hardware at 20 Hz."
]
} |
1705.00451 | 2610794603 | It has been well recognized that detecting drivable area is central to self-driving cars. Most of existing methods attempt to locate road surface by using lane line, thereby restricting to drivable area on which have a clear lane mark. This paper proposes an unsupervised approach for detecting drivable area utilizing both image data from a monocular camera and point cloud data from a 3D-LIDAR scanner. Our approach locates initial drivable areas based on a "direction ray map" obtained by image-LIDAR data fusion. Besides, a fusion of the feature level is also applied for more robust performance. Once the initial drivable areas are described by different features, the feature fusion problem is formulated as a Markov network and a belief propagation algorithm is developed to perform the model inference. Our approach is unsupervised and avoids common hypothesis, yet gets state-of-the-art results on ROAD-KITTI benchmark. Experiments show that our unsupervised approach is efficient and robust for detecting drivable area for self-driving cars. | For monocular camera based approaches, most road detection algorithms use cues such as color @cite_0 and lane markings @cite_21 . To cope with illumination variations and shadows, different color spaces have been introduced @cite_10 @cite_19 @cite_1 . Moreover, by leveraging deep learning, monocular vision based methods can achieve unprecedented results @cite_3 @cite_5 @cite_17 . However, unlike other visual concepts, such as cat or dog, the concept of a road cannot be defined by appearance alone: whether a region is regarded as a road depends more on its physical attributes. Therefore, approaches relying only on monocular vision are not robust enough for real applications. | {
"cite_N": [
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_10",
"@cite_17"
],
"mid": [
"",
"",
"360623563",
"1518058488",
"2168519618",
"33116912",
"2104497863",
""
],
"abstract": [
"",
"",
"We propose a novel deep architecture, SegNet, for semantic pixel wise image labelling. SegNet has several attractive properties; (i) it only requires forward evaluation of a fully learnt function to obtain smooth label predictions, (ii) with increasing depth, a larger context is considered for pixel labelling which improves accuracy, and (iii) it is easy to visualise the effect of feature activation(s) in the pixel label space at any depth. SegNet is composed of a stack of encoders followed by a corresponding decoder stack which feeds into a soft-max classification layer. The decoders help map low resolution feature maps at the output of the encoder stack to full input image size feature maps. This addresses an important drawback of recent deep learning approaches which have adopted networks designed for object categorization for pixel wise labelling. These methods lack a mechanism to map deep layer feature maps to input dimensions. They resort to ad hoc methods to upsample features, e.g. by replication. This results in noisy predictions and also restricts the number of pooling layers in order to avoid too much upsampling and thus reduces spatial context. SegNet overcomes these problems by learning to map encoder outputs to image pixel labels. We test the performance of SegNet on outdoor RGB scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results show that SegNet achieves state-of-the-art performance even without use of additional cues such as depth, video frames or post-processing with CRF models.",
"The main aim of this work is the development of a vision-based road detection system fast enough to cope with the difficult real-time constraints imposed by moving vehicle applications. The hardware platform, a special-purpose massively parallel system, has been chosen to minimize system production and operational costs. This paper presents a novel approach to expectation-driven low-level image segmentation, which can be mapped naturally onto mesh-connected massively parallel Simd architectures capable of handling hierarchical data structures. The input image is assumed to contain a distorted version of a given template; a multiresolution stretching process is used to reshape the original template in accordance with the acquired image content, minimizing a potential function. The distorted template is the process output.",
"By using an onboard camera, it is possible to detect the free road surface ahead of the ego-vehicle. Road detection is of high relevance for autonomous driving, road departure warning, and supporting driver-assistance systems such as vehicle and pedestrian detection. The key for vision-based road detection is the ability to classify image pixels as belonging or not to the road surface. Identifying road pixels is a major challenge due to the intraclass variability caused by lighting conditions. A particularly difficult scenario appears when the road surface has both shadowed and nonshadowed areas. Accordingly, we propose a novel approach to vision-based road detection that is robust to shadows. The novelty of our approach relies on using a shadow-invariant feature space combined with a model-based classifier. The model is built online to improve the adaptability of the algorithm to the current lighting and the presence of other vehicles in the scene. The proposed algorithm works in still images and does not depend on either road shape or temporal restrictions. Quantitative and qualitative experiments on real-world road sequences with heavy traffic and shadows show that the method is robust to shadows and lighting variations. Moreover, the proposed method provides the highest performance when compared with hue-saturation-intensity (HSI)-based algorithms.",
"Road scene segmentation is important in computer vision for different applications such as autonomous driving and pedestrian detection. Recovering the 3D structure of road scenes provides relevant contextual information to improve their understanding. In this paper, we use a convolutional neural network based algorithm to learn features from noisy labels to recover the 3D scene layout of a road image. The novelty of the algorithm relies on generating training labels by applying an algorithm trained on a general image dataset to classify on–board images. Further, we propose a novel texture descriptor based on a learned color plane fusion to obtain maximal uniformity in road areas. Finally, acquired (off–line) and current (on–line) information are combined to detect road areas in single images. From quantitative and qualitative experiments, conducted on publicly available datasets, it is concluded that convolutional neural networks are suitable for learning 3D scene layout from noisy labels and provides a relative improvement of 7 compared to the baseline. Furthermore, combining color planes provides a statistical description of road areas that exhibits maximal uniformity and provides a relative improvement of 8 compared to the baseline. Finally, the improvement is even bigger when acquired and current information from a single image are combined.",
"Color segmentation has been widely applied in many computer vision systems nowadays. In one example of computer vision systems, the real - time automatic road sign detection and recognition system has mostly applied the RGB (i.e. Red, Green and Blue) and HSI (i.e. Hue, Saturation and Intensity) color model to segment road signs from images that were captured by the censoring device. Thus this paper aims to review the background of the real - time road sign detection based on the color; show the performance of RGB and HSI in identifying color of the road signs under different lighting conditions in a real - time video images through the proposed experiments and finally studies the performance of both RGB and HSI from the experiments for application such as automatic road sign detection system.",
""
]
} |
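The belief propagation inference mentioned in this record's abstract can be illustrated on the simplest case: a chain-structured Markov network with binary labels (e.g. road vs. not-road). This is a generic sum-product sketch under that assumption, not the authors' actual model, which operates on a more general network.

```python
import numpy as np

def chain_bp_marginals(unary, pairwise):
    """Sum-product belief propagation on a chain-structured MRF.

    unary: (n, k) nonnegative per-node potentials (e.g. road / not-road scores);
    pairwise: (k, k) compatibility matrix shared by consecutive node pairs.
    Returns the (n, k) normalised marginals; exact because a chain is a tree.
    """
    n, k = unary.shape
    fwd = np.ones((n, k))                    # messages flowing left-to-right
    bwd = np.ones((n, k))                    # messages flowing right-to-left
    for i in range(1, n):
        fwd[i] = (unary[i - 1] * fwd[i - 1]) @ pairwise
        fwd[i] /= fwd[i].sum()               # rescale for numerical stability
    for i in range(n - 2, -1, -1):
        bwd[i] = pairwise @ (unary[i + 1] * bwd[i + 1])
        bwd[i] /= bwd[i].sum()
    beliefs = unary * fwd * bwd              # combine local and incoming evidence
    return beliefs / beliefs.sum(axis=1, keepdims=True)
```

On graphs with cycles the same message updates are iterated until convergence (loopy belief propagation), which is approximate rather than exact.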
1705.00318 | 2611770526 | Abstract Dominating set is a set of vertices of a graph such that all other vertices have a neighbour in the dominating set. We propose a new order-based randomised local search (RLS o ) algorithm to solve minimum dominating set problem in large graphs. Experimental evaluation is presented for multiple types of problem instances. These instances include unit disk graphs, which represent a model of wireless networks, random scale-free networks, as well as samples from two social networks and real-world graphs studied in network science. Our experiments indicate that RLS o performs better than both a classical greedy approximation algorithm and two metaheuristic algorithms based on ant colony optimisation and local search. The order-based algorithm is able to find small dominating sets for graphs with tens of thousands of vertices. In addition, we propose a multi-start variant of RLS o that is suitable for solving the minimum weight dominating set problem. The application of RLS o in graph mining is also briefly demonstrated. | The problem of finding MDS remains NP-hard even for several very restricted graph classes. For example, NP-hardness of finding MDS for grids is known; the proof is attributed to Leighton @cite_25 . For unit disk graphs, the MDS problem is also NP-hard @cite_48 . Additionally, Chlebík and Chlebíková have shown that for bipartite graphs with maximum degree @math and general graphs with maximum degree @math , it is NP-hard to approximate MDS within ratios @math and @math , respectively @cite_29 . | {
"cite_N": [
"@cite_48",
"@cite_29",
"@cite_25"
],
"mid": [
"",
"2094067618",
"2063572899"
],
"abstract": [
"",
"We study approximation hardness of the Minimum Dominating Set problem and its variants in undirected and directed graphs. Using a similar result obtained by Trevisan for Minimum Set Cover we prove the first explicit approximation lower bounds for various kinds of domination problems (connected, total, independent) in bounded degree graphs. Asymptotically, for degree bound approaching infinity, these bounds almost match the known upper bounds. The results are applied to improve the lower bounds for other related problems such as Maximum Induced Matching and Maximum Leaf Spanning Tree.",
"Unit disk graphs are the intersection graphs of equal sized circles in the plane: they provide a graph-theoretic model for broadcast networks (cellular networks) and for some problems in computational geometry. We show that many standard graph theoretic problems remain NP-complete on unit disk graphs, including coloring, independent set, domination, independent domination, and connected domination; NP-completeness for the domination problem is shown to hold even for grid graphs, a subclass of unit disk graphs. In contrast, we give a polynomial time algorithm for finding cliques when the geometric representation (circles in the plane) is provided."
]
} |
1705.00318 | 2611770526 | Abstract Dominating set is a set of vertices of a graph such that all other vertices have a neighbour in the dominating set. We propose a new order-based randomised local search (RLS o ) algorithm to solve minimum dominating set problem in large graphs. Experimental evaluation is presented for multiple types of problem instances. These instances include unit disk graphs, which represent a model of wireless networks, random scale-free networks, as well as samples from two social networks and real-world graphs studied in network science. Our experiments indicate that RLS o performs better than both a classical greedy approximation algorithm and two metaheuristic algorithms based on ant colony optimisation and local search. The order-based algorithm is able to find small dominating sets for graphs with tens of thousands of vertices. In addition, we propose a multi-start variant of RLS o that is suitable for solving the minimum weight dominating set problem. The application of RLS o in graph mining is also briefly demonstrated. | Regarding the hardness of approximate MDS, approximation ratio @math , where @math is the maximum degree of a vertex, can be achieved by the greedy approximation algorithm. However, Feige showed that the logarithmic approximation is the best possible, unless the class NP contains some slightly superpolynomial algorithms @cite_34 . Recently, it has also been shown that MDS is hard to approximate within a better than logarithmic ratio for certain graphs with power law degree distribution @cite_28 . | {
"cite_N": [
"@cite_28",
"@cite_34"
],
"mid": [
"2964161862",
"2143996311"
],
"abstract": [
"We prove the first logarithmic lower bounds for the approximability of the Minimum Dominating Set problem for the case of connected ( α , β ) -power law graphs for α being a size parameter and β the power law exponent. We give also a best up to now upper approximation bound for this problem in the case of the parameters β 2 . We develop also a new functional method for proving lower approximation bounds and display a sharp approximation phase transition area between approximability and inapproximability of the underlying problems. Our results depend on a method which could be also of independent interest.",
"Given a collection F of subsets of S = {1,…,n}, set cover is the problem of selecting as few as possible subsets from F such that their union covers S, and max k-cover is the problem of selecting k subsets from F such that their union has maximum cardinality. Both these problems are NP-hard. We prove that (1 - o(1)) ln n is a threshold below which set cover cannot be approximated efficiently, unless NP has slightly superpolynomial time algorithms. This closes the gap (up to low-order terms) between the ratio of approximation achievable by the greedy algorithm (which is (1 - o(1)) ln n), and previous results of Lund and Yannakakis, that showed hardness of approximation within a ratio of (log₂ n)/2 ≈ 0.72 ln n. For max k-cover, we show an approximation threshold of (1 - 1/e) (up to low-order terms), under the assumption that P ≠ NP."
]
} |
1705.00318 | 2611770526 | Abstract Dominating set is a set of vertices of a graph such that all other vertices have a neighbour in the dominating set. We propose a new order-based randomised local search (RLS o ) algorithm to solve minimum dominating set problem in large graphs. Experimental evaluation is presented for multiple types of problem instances. These instances include unit disk graphs, which represent a model of wireless networks, random scale-free networks, as well as samples from two social networks and real-world graphs studied in network science. Our experiments indicate that RLS o performs better than both a classical greedy approximation algorithm and two metaheuristic algorithms based on ant colony optimisation and local search. The order-based algorithm is able to find small dominating sets for graphs with tens of thousands of vertices. In addition, we propose a multi-start variant of RLS o that is suitable for solving the minimum weight dominating set problem. The application of RLS o in graph mining is also briefly demonstrated. | Exact algorithms require exponential time to solve the problem. To the best of our knowledge, the currently best exact algorithm was proposed by , and finds MDS in @math time @cite_37 . | {
"cite_N": [
"@cite_37"
],
"mid": [
"2089718638"
],
"abstract": [
"For more than 40 years, Branch & Reduce exponential-time backtracking algorithms have been among the most common tools used for finding exact solutions of NP-hard problems. Despite that, the way to analyze such recursive algorithms is still far from producing tight worst-case running time bounds. Motivated by this, we use an approach, that we call “Measure & Conquer”, as an attempt to step beyond such limitations. The approach is based on the careful design of a nonstandard measure of the subproblem size; this measure is then used to lower bound the progress made by the algorithm at each branching step. The idea is that a smarter measure may capture behaviors of the algorithm that a standard measure might not be able to exploit, and hence lead to a significantly better worst-case time analysis. In order to show the potentialities of Measure & Conquer, we consider two well-studied NP-hard problems: minimum dominating set and maximum independent set. For the first problem, we consider the current best algorithm, and prove (thanks to a better measure) a much tighter running time bound for it. For the second problem, we describe a new, simple algorithm, and show that its running time is competitive with the current best time bounds, achieved with far more complicated algorithms (and standard analysis). Our examples show that a good choice of the measure, made in the very first stages of exact algorithms design, can have a tremendous impact on the running time bounds achievable."
]
} |
1705.00318 | 2611770526 | Abstract Dominating set is a set of vertices of a graph such that all other vertices have a neighbour in the dominating set. We propose a new order-based randomised local search (RLS o ) algorithm to solve minimum dominating set problem in large graphs. Experimental evaluation is presented for multiple types of problem instances. These instances include unit disk graphs, which represent a model of wireless networks, random scale-free networks, as well as samples from two social networks and real-world graphs studied in network science. Our experiments indicate that RLS o performs better than both a classical greedy approximation algorithm and two metaheuristic algorithms based on ant colony optimisation and local search. The order-based algorithm is able to find small dominating sets for graphs with tens of thousands of vertices. In addition, we propose a multi-start variant of RLS o that is suitable for solving the minimum weight dominating set problem. The application of RLS o in graph mining is also briefly demonstrated. | It is known that the greedy approximation algorithm for vertex cover by Chvátal @cite_46 can be used to find small dominating sets in polynomial time. This algorithm achieves approximation ratio @math , where @math is the maximum degree of a vertex and @math is the @math -th harmonic number. As we indicated above, @math . The application of the greedy approximation algorithm to find small dominating sets works as follows. For simplicity, we will say that a vertex @math is if @math . In the algorithm, vertices are ordered based on a value @math , which denotes the . In each iteration, vertex @math with the largest @math in partial dominating set @math is taken and put into @math . The algorithm terminates when @math is a dominating set. | {
"cite_N": [
"@cite_46"
],
"mid": [
"2157054705"
],
"abstract": [
"Let A be a binary matrix of size m × n, let cᵀ be a positive row vector of length n and let e be the column vector, all of whose m components are ones. The set-covering problem is to minimize cᵀx subject to Ax ≥ e and x binary. We compare the value of the objective function at a feasible solution found by a simple greedy heuristic to the true optimum. It turns out that the ratio between the two grows at most logarithmically in the largest column sum of A. When all the components of cᵀ are the same, our result reduces to a theorem established previously by Johnson and Lovász."
]
} |
1705.00318 | 2611770526 | Abstract Dominating set is a set of vertices of a graph such that all other vertices have a neighbour in the dominating set. We propose a new order-based randomised local search (RLS o ) algorithm to solve minimum dominating set problem in large graphs. Experimental evaluation is presented for multiple types of problem instances. These instances include unit disk graphs, which represent a model of wireless networks, random scale-free networks, as well as samples from two social networks and real-world graphs studied in network science. Our experiments indicate that RLS o performs better than both a classical greedy approximation algorithm and two metaheuristic algorithms based on ant colony optimisation and local search. The order-based algorithm is able to find small dominating sets for graphs with tens of thousands of vertices. In addition, we propose a multi-start variant of RLS o that is suitable for solving the minimum weight dominating set problem. The application of RLS o in graph mining is also briefly demonstrated. | In contrast to weighted and connected dominating set problems, greedy and distributed @cite_31 approximation algorithms are still among the most popular in MDS. An experimental study of heuristics for MDS was conducted by Sanchis @cite_45 . Performance of algorithms for MDS in real-world networks has been compared by Nehéz @cite_40 . There has been a recent surge in heuristic algorithms for MWDS @cite_30 @cite_21 @cite_12 . These algorithms are usually applied to classical benchmarks consisting of graphs inspired by the applications in wireless networks, with up to around @math vertices and with strong results in terms of numerical performance. However, the focus on scalability in the current experimental research on MDS is still somewhat limited. | {
"cite_N": [
"@cite_30",
"@cite_21",
"@cite_40",
"@cite_45",
"@cite_31",
"@cite_12"
],
"mid": [
"2191728435",
"2591829782",
"2293431589",
"2084507343",
"2127622665",
"2320875380"
],
"abstract": [
"Abstract Iterated greedy algorithms belong to the class of stochastic local search methods. They are based on the simple and effective principle of generating a sequence of solutions by iterating over a constructive greedy heuristic using destruction and construction phases. This paper, first, presents an efficient randomized iterated greedy approach for the minimum weight dominating set problem, where—given a vertex-weighted graph—the goal is to identify a subset of the graphs’ vertices with minimum total weight such that each vertex of the graph is either in the subset or has a neighbor in the subset. Our proposed approach works on a population of solutions rather than on a single one. Moreover, it is based on a fast randomized construction procedure making use of two different greedy heuristics. Secondly, we present a hybrid algorithmic model in which the proposed iterated greedy algorithm is combined with the mathematical programming solver CPLEX. In particular, we improve the best solution provided by the iterated greedy algorithm with the solution polishing feature of CPLEX. The simulation results obtained on a widely used set of benchmark instances shows that our proposed algorithms outperform current state-of-the-art approaches.",
"The Minimum Weight Dominating Set (MWDS) problem is an important generalization of the Minimum Dominating Set (MDS) problem with extensive applications. This paper proposes a new local search algorithm for the MWDS problem, which is based on two new ideas. The first idea is a heuristic called two-level configuration checking (CC2), which is a new variant of a recent powerful configuration checking strategy (CC) for effectively avoiding the recent search paths. The second idea is a novel scoring function based on the frequency of being uncovered of vertices. Our algorithm is called CC2FS, according to the names of the two ideas. The experimental results show that, CC2FS performs much better than some state-of-the-art algorithms in terms of solution quality on a broad range of MWDS benchmarks.",
"Computation of the minimum dominating set problem is a classical discrete optimization problem in graph theory. Recently, it has found application in the network controlling theory. In this paper, there are compared three approaches of small dominating set computation. The first one is based on the integer linear programming. The second one is the combined hill-climbing algorithm which is based on the randomization and binary searching. Both are compared with a simply greedy algorithm. The experiments were conducted for three different graph classes. The integer linear programming achieved the best performance for the large real-world network's analysis.",
"We say a vertex v in a graph Gcovers a vertex w if v=w or if v and w are adjacent. A subset of vertices of G is a dominating set if it collectively covers all vertices in the graph. The dominating set problem, which is NP-hard, consists of finding a smallest possible dominating set for a graph. The straightforward greedy strategy for finding a small dominating set in a graph consists of successively choosing vertices which cover the largest possible number of previously uncovered vertices. Several variations on this greedy heuristic are described and the results of extensive testing of these variations is presented. A more sophisticated procedure for choosing vertices, which takes into account the number of ways in which an uncovered vertex may be covered, appears to be the most successful of the algorithms which are analyzed. For our experimental testing, we used both random graphs and graphs constructed by test case generators which produce graphs with a given density and a specified size for the smallest dominating set. We found that these generators were able to produce challenging graphs for the algorithms, thus helping to discriminate among them, and allowing a greater variety of graphs to be used in the experiments.",
"Finding a small dominating set is one of the most fundamental problems of traditional graph theory. In this paper, we present a new fully distributed approximation algorithm based on LP relaxation techniques. For an arbitrary parameter k and maximum degree Δ, our algorithm computes a dominating set of expected size O(kΔ2 k log Δ|DSOPT|) in O(k2) rounds where each node has to send O(k2Δ) messages of size O(logΔ). This is the first algorithm which achieves a non-trivial approximation ratio in a constant number of rounds.",
"The minimum weight-dominating set (MWDS) problem is NP-hard and has a lot of applications in the real world. Several metaheuristic methods have been developed for solving the problem effectively, but suffering from high CPU time on large-scale instances. In this paper, we design an effective hybrid memetic algorithm (HMA) for the MWDS problem. First, the MWDS problem is formulated as a constrained 0–1 programming problem and is converted to an equivalent unconstrained 0–1 problem using an adaptive penalty function. Then, we develop a memetic algorithm for the resulting problem, which contains a greedy randomized adaptive construction procedure, a tabu local search procedure, a crossover operator, a population-updating method, and a path-relinking procedure. These strategies make a good tradeoff between intensification and diversification. A number of experiments were carried out on three types of instances from the literature. Compared with existing algorithms, HMA is able to find high-quality solutions in much less CPU time. Specifically, HMA is at least six times faster than existing algorithms on the tested instances. With increasing instance size, the CPU time required by HMA increases much more slowly than required by existing algorithms."
]
} |
1705.00318 | 2611770526 | Abstract Dominating set is a set of vertices of a graph such that all other vertices have a neighbour in the dominating set. We propose a new order-based randomised local search (RLS o ) algorithm to solve minimum dominating set problem in large graphs. Experimental evaluation is presented for multiple types of problem instances. These instances include unit disk graphs, which represent a model of wireless networks, random scale-free networks, as well as samples from two social networks and real-world graphs studied in network science. Our experiments indicate that RLS o performs better than both a classical greedy approximation algorithm and two metaheuristic algorithms based on ant colony optimisation and local search. The order-based algorithm is able to find small dominating sets for graphs with tens of thousands of vertices. In addition, we propose a multi-start variant of RLS o that is suitable for solving the minimum weight dominating set problem. The application of RLS o in graph mining is also briefly demonstrated. | Hedar and Ismail @cite_10 proposed a hybrid of a genetic algorithm and local search for MDS with a specific fitness function. Another hybrid genetic algorithm was proposed by Potluri and Singh @cite_42 . | {
"cite_N": [
"@cite_42",
"@cite_10"
],
"mid": [
"43777471",
"1539453972"
],
"abstract": [
"Minimum dominating set, which is an NP-hard problem, finds many practical uses in diverse domains. A greedy algorithm to compute the minimum dominating set is proven to be the optimal approximate algorithm unless P=NP. Meta-heuristics, generally, find solutions better than simple greedy approximate algorithms as they explore the search space better without incurring the cost of an exponential algorithm. However, there are hardly any studies of application of meta-heuristic techniques for this problem. In some applications it is important to minimize the dominating set as much as possible to reduce cost and or time to perform other operations based on the dominating set. In this paper, we propose a hybrid genetic algorithm and an ant-colony optimization (ACO) algorithm enhanced with local search. We compare the performance of these two hybrid algorithms against the solutions obtained using the greedy heuristic and another hybrid genetic algorithm proposed in literature. We find that the ACO algorithm enhanced with a minimization heuristic performs better than all other algorithms in almost all instances.",
"The minimum dominating set (MDS) problem is one of the central problems of algorithmic graph theory and has numerous applications especially in graph mining. In this paper, we propose a new hybrid method based on genetic algorithm (GA) to solve the MDS problem, called shortly HGA-MDS. The proposed method invokes a new fitness function to effectively measure the solution qualities. The search process in HGA-MDS uses local search and intensification schemes beside the GA search methodology in order to achieve faster performance. Finally, the performance of the HGA-MDS is compared with the standard GA. The new invoked design elements in HGA-MDS show its promising performance compared with standard GA."
]
} |
1705.00318 | 2611770526 | Abstract Dominating set is a set of vertices of a graph such that all other vertices have a neighbour in the dominating set. We propose a new order-based randomised local search (RLS o ) algorithm to solve minimum dominating set problem in large graphs. Experimental evaluation is presented for multiple types of problem instances. These instances include unit disk graphs, which represent a model of wireless networks, random scale-free networks, as well as samples from two social networks and real-world graphs studied in network science. Our experiments indicate that RLS o performs better than both a classical greedy approximation algorithm and two metaheuristic algorithms based on ant colony optimisation and local search. The order-based algorithm is able to find small dominating sets for graphs with tens of thousands of vertices. In addition, we propose a multi-start variant of RLS o that is suitable for solving the minimum weight dominating set problem. The application of RLS o in graph mining is also briefly demonstrated. | An ant colony algorithm for the minimum weight dominating set problem has been proposed by @cite_1 . Another ant colony optimisation algorithm hybridised with local search (ACO-LS) was proposed by Potluri and Singh @cite_42 . ACO-LS represents the basis for some of the very good heuristics currently available for MDS. Therefore, we will now provide more details on how this algorithm works. | {
"cite_N": [
"@cite_42",
"@cite_1"
],
"mid": [
"43777471",
"2249156935"
],
"abstract": [
"Minimum dominating set, which is an NP-hard problem, finds many practical uses in diverse domains. A greedy algorithm to compute the minimum dominating set is proven to be the optimal approximate algorithm unless P=NP. Meta-heuristics, generally, find solutions better than simple greedy approximate algorithms as they explore the search space better without incurring the cost of an exponential algorithm. However, there are hardly any studies of application of meta-heuristic techniques for this problem. In some applications it is important to minimize the dominating set as much as possible to reduce cost and or time to perform other operations based on the dominating set. In this paper, we propose a hybrid genetic algorithm and an ant-colony optimization (ACO) algorithm enhanced with local search. We compare the performance of these two hybrid algorithms against the solutions obtained using the greedy heuristic and another hybrid genetic algorithm proposed in literature. We find that the ACO algorithm enhanced with a minimization heuristic performs better than all other algorithms in almost all instances.",
"In this paper we present an application of ant colony optimization (ACO) to the Minimum Weighted Dominating Set Problem. We introduce a heuristic for this problem that takes into account the weights of vertexes being covered and show that it is more efficient than the greedy algorithm using the standard heuristic. Further we give implementation details of ACO applied to this problem. We tested our algorithm on graphs with different sizes, edge densities, and weight distribution functions and shown that it gives greatly improved results over these acquired by the greedy algorithms."
]
} |
1705.00318 | 2611770526 | Abstract Dominating set is a set of vertices of a graph such that all other vertices have a neighbour in the dominating set. We propose a new order-based randomised local search (RLS o ) algorithm to solve minimum dominating set problem in large graphs. Experimental evaluation is presented for multiple types of problem instances. These instances include unit disk graphs, which represent a model of wireless networks, random scale-free networks, as well as samples from two social networks and real-world graphs studied in network science. Our experiments indicate that RLS o performs better than both a classical greedy approximation algorithm and two metaheuristic algorithms based on ant colony optimisation and local search. The order-based algorithm is able to find small dominating sets for graphs with tens of thousands of vertices. In addition, we propose a multi-start variant of RLS o that is suitable for solving the minimum weight dominating set problem. The application of RLS o in graph mining is also briefly demonstrated. | This pheromone update rule has originally been used in an ant colony optimisation algorithm for the leaf-constrained minimum spanning tree problem @cite_7 . In this formula, @math is the best dominating set size for the current iteration and @math is the best dominating set size found so far. In ACO-LS, the values of parameters @math and @math were @math and @math @cite_42 . Hence, the vertices used in previously constructed dominating sets will be more likely to occur in the next dominating sets. | {
"cite_N": [
"@cite_42",
"@cite_7"
],
"mid": [
"43777471",
"2088797223"
],
"abstract": [
"Minimum dominating set, which is an NP-hard problem, finds many practical uses in diverse domains. A greedy algorithm to compute the minimum dominating set is proven to be the optimal approximate algorithm unless P=NP. Meta-heuristics, generally, find solutions better than simple greedy approximate algorithms as they explore the search space better without incurring the cost of an exponential algorithm. However, there are hardly any studies of application of meta-heuristic techniques for this problem. In some applications it is important to minimize the dominating set as much as possible to reduce cost and or time to perform other operations based on the dominating set. In this paper, we propose a hybrid genetic algorithm and an ant-colony optimization (ACO) algorithm enhanced with local search. We compare the performance of these two hybrid algorithms against the solutions obtained using the greedy heuristic and another hybrid genetic algorithm proposed in literature. We find that the ACO algorithm enhanced with a minimization heuristic performs better than all other algorithms in almost all instances.",
"Given an undirected, connected, weighted graph, the leaf-constrained minimum spanning tree (LCMST) problem seeks a spanning tree of the graph with smallest weight among all spanning trees of the graph, which contains at least l leaves. In this paper we have proposed two new metaheuristic approaches for the LCMST problem. One is an ant-colony optimization (ACO) algorithm, whereas the other is a tabu search based algorithm. Similar to a previously proposed genetic algorithm, these metaheuristic approaches also use the subset coding that represents a leaf-constrained spanning tree by the set of its interior vertices. Our new approaches perform well in comparison with two best heuristics reported in the literature for the problem — the subset-coded genetic algorithm and a greedy heuristic."
]
} |
1705.00318 | 2611770526 | Abstract Dominating set is a set of vertices of a graph such that all other vertices have a neighbour in the dominating set. We propose a new order-based randomised local search (RLS o ) algorithm to solve minimum dominating set problem in large graphs. Experimental evaluation is presented for multiple types of problem instances. These instances include unit disk graphs, which represent a model of wireless networks, random scale-free networks, as well as samples from two social networks and real-world graphs studied in network science. Our experiments indicate that RLS o performs better than both a classical greedy approximation algorithm and two metaheuristic algorithms based on ant colony optimisation and local search. The order-based algorithm is able to find small dominating sets for graphs with tens of thousands of vertices. In addition, we propose a multi-start variant of RLS o that is suitable for solving the minimum weight dominating set problem. The application of RLS o in graph mining is also briefly demonstrated. | In the application of a similar idea to the minimum weight dominating set problem, a preprocessing phase has been added to a similar algorithm called ACO-PP-LS @cite_3 . This algorithm extended ACO-LS by adding a routine of generating @math maximal independent sets using a greedy algorithm and improving the pheromone values of vertices in these independent sets. These independent sets were constructed by using a list of vertices available for adding to the independent set. In the beginning, a random vertex is added to the independent set. This vertex and its neighbours are then excluded from the list of available vertices. This process is iterated until there are no more vertices available. This preprocessing routine helps the algorithm to start with a probabilistic model that is not entirely random, leading to more rapid convergence for some instances. 
In ACO-PP-LS, slightly different values of parameters @math and @math have also been used. These were @math and @math . We will use the original parameterisations of ACO-LS @cite_42 and ACO-PP-LS @cite_3 in our further experimental investigations. | {
"cite_N": [
"@cite_42",
"@cite_3"
],
"mid": [
"43777471",
"2010300491"
],
"abstract": [
"Minimum dominating set, which is an NP-hard problem, finds many practical uses in diverse domains. A greedy algorithm to compute the minimum dominating set is proven to be the optimal approximate algorithm unless P=NP. Meta-heuristics, generally, find solutions better than simple greedy approximate algorithms as they explore the search space better without incurring the cost of an exponential algorithm. However, there are hardly any studies of application of meta-heuristic techniques for this problem. In some applications it is important to minimize the dominating set as much as possible to reduce cost and or time to perform other operations based on the dominating set. In this paper, we propose a hybrid genetic algorithm and an ant-colony optimization (ACO) algorithm enhanced with local search. We compare the performance of these two hybrid algorithms against the solutions obtained using the greedy heuristic and another hybrid genetic algorithm proposed in literature. We find that the ACO algorithm enhanced with a minimization heuristic performs better than all other algorithms in almost all instances.",
"Minimum weight dominating set (MWDS) finds many uses in solving problems as varied as clustering in wireless networks, multi-document summarization in information retrieval and so on. It is proven to be NP-hard, even for unit disk graphs. Many centralized and distributed, greedy and approximation algorithms have been proposed for the MWDS problem. However, all the approximation algorithms are limited to unit disk graphs which are primarily used to model wireless networks. This assumption fails when applied to other domains. In this paper, we present two metaheuristic algorithms - a hybrid genetic algorithm and a hybrid ant colony optimization algorithm - for the problem of computing minimum weight dominating set. We compare our results with that of a greedy heuristic as well as the only other metaheuristic algorithm proposed so far in the literature and show that our algorithms are far better than these algorithms."
]
} |
1705.00318 | 2611770526 | Abstract Dominating set is a set of vertices of a graph such that all other vertices have a neighbour in the dominating set. We propose a new order-based randomised local search (RLS o ) algorithm to solve minimum dominating set problem in large graphs. Experimental evaluation is presented for multiple types of problem instances. These instances include unit disk graphs, which represent a model of wireless networks, random scale-free networks, as well as samples from two social networks and real-world graphs studied in network science. Our experiments indicate that RLS o performs better than both a classical greedy approximation algorithm and two metaheuristic algorithms based on ant colony optimisation and local search. The order-based algorithm is able to find small dominating sets for graphs with tens of thousands of vertices. In addition, we propose a multi-start variant of RLS o that is suitable for solving the minimum weight dominating set problem. The application of RLS o in graph mining is also briefly demonstrated. | It is worth mentioning that algorithms based on ant colony optimisation are popular also in other variants of dominating set problems. A similar approach was proposed for the minimum connected dominating set problem @cite_27 . | {
"cite_N": [
"@cite_27"
],
"mid": [
"1978232925"
],
"abstract": [
"In this paper an ant colony optimization (ACO) algorithm for the minimum connected dominating set problem (MCDSP) is presented. The MCDSP become increasingly important in recent years due to its applicability to the mobile ad hoc networks (MANETs) and sensor grids. We have implemented a one-step ACO algorithm based on a known simple greedy algorithm that has a significant drawback of being easily trapped in local optima. We have shown that by adding a pheromone correction strategy and dedicating special attention to the initial condition of the ACO algorithm this negative effect can be avoided. Using this approach it is possible to achieve good results without using the complex two-step ACO algorithm previously developed. We have tested our method on standard benchmark data and shown that it is competitive to the existing algorithms. [Project of the Ministry of Science of the Republic of Serbia, no. III-44006]"
]
} |
1705.00105 | 2766016831 | In this paper, we propose a novel ranking framework for collaborative filtering with the overall aim of learning user preferences over items by minimizing a pairwise ranking loss. We show the minimization problem involves dependent random variables and provide a theoretical analysis by proving the consistency of the empirical risk minimization in the worst case where all users choose a minimal number of positive and negative items. We further derive a Neural-Network model that jointly learns a new representation of users and items in an embedded space as well as the preference relation of users over the pairs of items. The learning objective is based on three scenarios of ranking losses that control the ability of the model to maintain the ordering over the items induced from the users' preferences, as well as, the capacity of the dot-product defined in the learned embedded space to produce the ordering. The proposed model is by nature suitable for implicit feedback and involves the estimation of only very few parameters. Through extensive experiments on several real-world benchmarks on implicit data, we show the interest of learning the preference and the embedding simultaneously when compared to learning those separately. We also demonstrate that our approach is very competitive with the best state-of-the-art collaborative filtering techniques proposed for implicit feedback. | Motivated by automatically tuning the parameters involved in the combination of different scoring functions, Learning-to-Rank approaches were originally developed for Information Retrieval (IR) tasks and are grouped into three main categories: pointwise, listwise and pairwise @cite_20 . | {
"cite_N": [
"@cite_20"
],
"mid": [
"2149427297"
],
"abstract": [
"Learning to rank for Information Retrieval (IR) is a task to automatically construct a ranking model using training data, such that the model can sort new objects according to their degrees of relevance, preference, or importance. Many IR problems are by nature ranking problems, and many IR technologies can be potentially enhanced by using learning-to-rank techniques. The objective of this tutorial is to give an introduction to this research direction. Specifically, the existing learning-to-rank algorithms are reviewed and categorized into three approaches: the pointwise, pairwise, and listwise approaches. The advantages and disadvantages with each approach are analyzed, and the relationships between the loss functions used in these approaches and IR evaluation measures are discussed. Then the empirical evaluations on typical learning-to-rank methods are shown, with the LETOR collection as a benchmark dataset, which seems to suggest that the listwise approach be the most effective one among all the approaches. After that, a statistical ranking theory is introduced, which can describe different learning-to-rank algorithms, and be used to analyze their query-level generalization abilities. At the end of the tutorial, we provide a summary and discuss potential future work on learning to rank."
]
} |
1705.00105 | 2766016831 | In this paper, we propose a novel ranking framework for collaborative filtering with the overall aim of learning user preferences over items by minimizing a pairwise ranking loss. We show the minimization problem involves dependent random variables and provide a theoretical analysis by proving the consistency of the empirical risk minimization in the worst case where all users choose a minimal number of positive and negative items. We further derive a Neural-Network model that jointly learns a new representation of users and items in an embedded space as well as the preference relation of users over the pairs of items. The learning objective is based on three scenarios of ranking losses that control the ability of the model to maintain the ordering over the items induced from the users' preferences, as well as, the capacity of the dot-product defined in the learned embedded space to produce the ordering. The proposed model is by nature suitable for implicit feedback and involves the estimation of only very few parameters. Through extensive experiments on several real-world benchmarks on implicit data, we show the interest of learning the preference and the embedding simultaneously when compared to learning those separately. We also demonstrate that our approach is very competitive with the best state-of-the-art collaborative filtering techniques proposed for implicit feedback. | Perhaps the first Neural Network model for ranking is RankProp, originally proposed by @cite_16 . RankProp is a pointwise approach that alternates between two phases of learning the desired real outputs by minimizing a Mean Squared Error (MSE) objective, and a modification of the desired values themselves to reflect the current ranking given by the net. Later on, @cite_14 proposed RankNet, a pairwise approach that learns a preference function by minimizing a cross entropy cost over the pairs of relevant and irrelevant examples.
SortNet, proposed by @cite_2 @cite_21 , also learns a preference function by minimizing a ranking loss over pairs of examples that are selected iteratively with the overall aim of maximizing the quality of the ranking. The three approaches above consider the problem of Learning-to-Rank for IR without learning an embedding. | {
"cite_N": [
"@cite_14",
"@cite_16",
"@cite_21",
"@cite_2"
],
"mid": [
"2143331230",
"2145365639",
"",
"1777164583"
],
"abstract": [
"We investigate using gradient descent methods for learning ranking functions; we propose a simple probabilistic cost function, and we introduce RankNet, an implementation of these ideas using a neural network to model the underlying ranking function. We present test results on toy data and on data from a commercial internet search engine.",
"A patient visits the doctor; the doctor reviews the patient's history, asks questions, makes basic measurements (blood pressure, ...), and prescribes tests or treatment. The prescribed course of action is based on an assessment of patient risk--patients at higher risk are given more and faster attention. It is also sequential--it is too expensive to immediately order all tests which might later be of value. This paper presents two methods that together improve the accuracy of backprop nets on a pneumonia risk assessment problem by 10-50%. Rankprop improves on backpropagation with sum of squares error in ranking patients by risk. Multitask learning takes advantage of future lab tests available in the training set, but not available in practice when predictions must be made. Both methods are broadly applicable.",
"",
"In this paper, we present a connectionist approach to preference learning. In particular, a neural network is trained to realize a comparison function, expressing the preference between two objects. Such a \"comparator\" can be subsequently integrated into a general ranking algorithm to provide a total ordering on some collection of objects. We evaluate the accuracy of the proposed approach using the LETOR benchmark, with promising preliminary results."
]
} |
1705.00370 | 2611790720 | Today's routing protocols critically rely on the assumption that the underlying hardware is trusted. Given the increasing number of attacks on network devices, and recent reports on hardware backdoors, this assumption has become questionable. Indeed, with the critical role computer networks play today, the contrast between our security assumptions and reality is problematic. This paper presents Software-Defined Adversarial Trajectory Sampling (SoftATS), an OpenFlow-based mechanism to efficiently monitor packet trajectories, also in the presence of non-cooperating or even adversarial switches or routers, e.g., containing hardware backdoors. Our approach is based on a secure, redundant and adaptive sample distribution scheme which allows us to provably detect adversarial switches or routers trying to reroute, mirror, drop, inject, or modify packets (i.e., header and/or payload). We evaluate the effectiveness of our approach in different adversarial settings, report on a proof-of-concept implementation, and provide a first evaluation of the performance overheads of such a scheme. | Interestingly, while much research has been conducted over the last few years on how to secure routing protocols on the control plane @cite_55 @cite_36 @cite_32 @cite_8 @cite_16 , providing authenticity and correctness of topology propagation and route computation, the important question of how to secure the data plane has received much less attention so far. In fact, until very recently, researchers did not know whether it was possible to build a secure path verification mechanism @cite_43 . Many existing systems like VeriFlow @cite_3 , Anteater @cite_11 and Header Space Analysis @cite_44 rely either on flow rules installed at switches or on data plane configuration information to perform their analysis. This information can easily be manipulated in malicious settings.
Providing fundamental properties like path consent and path compliance @cite_43 usually relies on cryptographic techniques: expensive operations in high-speed networks. Even more challenging than inferring the routes along which certain packets actually travelled is to test where packets @cite_13 . | {
"cite_N": [
"@cite_8",
"@cite_36",
"@cite_55",
"@cite_32",
"@cite_3",
"@cite_44",
"@cite_43",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"2003590000",
"",
"2018505857",
"",
"2122695394",
"1882012874",
"",
"2141544840",
"",
""
],
"abstract": [
"The Border Gateway Protocol (BGP), which is used to distribute routing information between autonomous systems (ASes), is a critical component of the Internet's routing infrastructure. It is highly vulnerable to a variety of malicious attacks, due to the lack of a secure means of verifying the authenticity and legitimacy of BGP control traffic. This paper describes a secure, scalable, deployable architecture (S-BGP) for an authorization and authentication system that addresses most of the security problems associated with BGP. The paper discusses the vulnerabilities and security requirements associated with BGP, describes the S-BGP countermeasures, and explains how they address these vulnerabilities and requirements. In addition, this paper provides a comparison of this architecture to other approaches that have been proposed, analyzes the performance implications of the proposed countermeasures, and addresses operational issues.",
"",
"Attacks against Internet routing are increasing in number and severity. Contributing greatly to these attacks is the absence of origin authentication: there is no way to validate claims of address ownership or location. The lack of such services enables not only attacks by malicious entities, but indirectly allows seemingly inconsequential misconfigurations to disrupt large portions of the Internet. This paper considers the semantics, design, and costs of origin authentication in interdomain routing. We formalize the semantics of address delegation and use on the Internet, and develop and characterize broad classes of origin authentication proof systems. We estimate the address delegation graph representing the current use of IPv4 address space using available routing data. This effort reveals that current address delegation is dense and relatively static: as few as 16 entities perform 80% of the delegation on the Internet. We conclude by evaluating the proposed services via trace-based simulation. Our simulation shows the enhanced proof systems can significantly reduce resource costs associated with origin authentication.",
"",
"Networks are complex and prone to bugs. Existing tools that check configuration files and data-plane state operate offline at timescales of seconds to hours, and cannot detect or prevent bugs as they arise. Is it possible to check network-wide invariants in real time, as the network state evolves? The key challenge here is to achieve extremely low latency during the checks so that network performance is not affected. In this paper, we present a preliminary design, VeriFlow, which suggests that this goal is achievable. VeriFlow is a layer between a software-defined networking controller and network devices that checks for network-wide invariant violations dynamically as each forwarding rule is inserted. Based on an implementation using a Mininet OpenFlow network and Route Views trace data, we find that VeriFlow can perform rigorous checking within hundreds of microseconds per rule insertion.",
"Today's networks typically carry or deploy dozens of protocols and mechanisms simultaneously such as MPLS, NAT, ACLs and route redistribution. Even when individual protocols function correctly, failures can arise from the complex interactions of their aggregate, requiring network administrators to be masters of detail. Our goal is to automatically find an important class of failures, regardless of the protocols running, for both operational and experimental networks. To this end we developed a general and protocol-agnostic framework, called Header Space Analysis (HSA). Our formalism allows us to statically check network specifications and configurations to identify an important class of failures such as Reachability Failures, Forwarding Loops and Traffic Isolation and Leakage problems. In HSA, protocol header fields are not first class entities; instead we look at the entire packet header as a concatenation of bits without any associated meaning. Each packet is a point in the {0,1}^L space where L is the maximum length of a packet header, and networking boxes transform packets from one point in the space to another point or set of points (multicast). We created a library of tools, called Hassel, to implement our framework, and used it to analyze a variety of networks and protocols. Hassel was used to analyze the Stanford University backbone network, and found all the forwarding loops in less than 10 minutes, and verified reachability constraints between two subnets in 13 seconds. It also found a large and complex loop in an experimental loose source routing protocol in 4 minutes.",
"",
"Today's Internet routing protocols are built upon the basic incorrect assumption that routers propagate truthful routing information. As a result, the entire Internet infrastructure is vulnerable to security attacks from routers that propagate incorrect routing information. In fact, a single router is capable of hijacking a significant fraction of routes by launching such an attack. This issue is not just restricted to Internet routing protocols but is widely prevalent in several routing protocols that have been proposed in the research literature. Many existing approaches for addressing the security problems of routing protocols typically assume the existence of a Public-Key Infrastructure (PKI) or some form of prior key distribution mechanism along with a central authority. While a PKI does enable addressing this security threat, building one such key-distribution infrastructure may not always be feasible. One faces serious deployment barriers in building an Internet-wide PKI with a central authority especially given that deploying one such architecture requires approval across political and economic boundaries. Previous efforts for securing Internet routing and the Domain Name System using a PKI have not moved towards adoption. In this dissertation, we address the following question: Using purely decentralized mechanisms (void of a PKI and a central authority), what is the best level of security achievable for a routing protocol in the presence of adversaries? One of the key conclusions that we arrive at is the direct relationship between decentralized security and the reliable communication problem. The reliable communication problem relates to determining the constraints under which a set of good nodes in a network can reliably communicate messages between themselves in the face of adversarial nodes in the network. We show theoretical results on the constraints under which the reliable communication problem is solvable. 
Based on these results, we describe the design of a reliable communication toolkit that implements our algorithms and provides a suite of generic security primitives that can be used to secure a variety of routing protocols. These security mechanisms supported by the toolkit are well suited for Internet routing since they are both easy to deploy as well as offer good security guarantees. We also show that the toolkit has broader applicability beyond routing protocols to: (a) achieve decentralized key distribution; (b) address the data integrity threat to the Domain Name System (DNS) in a decentralized manner.",
"",
""
]
} |
1705.00370 | 2611790720 | Today's routing protocols critically rely on the assumption that the underlying hardware is trusted. Given the increasing number of attacks on network devices, and recent reports on hardware backdoors, this assumption has become questionable. Indeed, with the critical role computer networks play today, the contrast between our security assumptions and reality is problematic. This paper presents Software-Defined Adversarial Trajectory Sampling (SoftATS), an OpenFlow-based mechanism to efficiently monitor packet trajectories, also in the presence of non-cooperating or even adversarial switches or routers, e.g., containing hardware backdoors. Our approach is based on a secure, redundant and adaptive sample distribution scheme which allows us to provably detect adversarial switches or routers trying to reroute, mirror, drop, inject, or modify packets (i.e., header and/or payload). We evaluate the effectiveness of our approach in different adversarial settings, report on a proof-of-concept implementation, and provide a first evaluation of the performance overheads of such a scheme. | @cite_12 propose a hash-based delay sampling technique to detect switches misforwarding packets. Their threat model and objectives are closely related to ours. However, their sampling scheme and detection algorithm are different from ours. They require three switches to sample the same set of packets, and their detection algorithm depends on the state of the switches' buffers for a chosen path. We can incorporate their method of sampling, i.e., choose triplets instead of pairs, but our detection is based on trajectories and not on switch states. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2041800250"
],
"abstract": [
"The next-generation Internet promises to provide a fundamental shift in the underlying architecture to support dynamic deployment of network protocols. With the introduction of programmability and dynamic protocol deployment in routers, potential vulnerabilities and attacks are expected to increase. In this paper, we consider the problem of detecting packet forwarding misbehavior in routers. Specifically, we focus on an attack scenario, where a router selectively drops packets destined for another node. Detecting such an attack is challenging since it requires differentiating malicious packet drops from congestion-based packet losses. We propose a controller-based misbehavior detection technique that effectively detects malicious routers using a hash-based delay sampling and verification. We provide a performance analysis of the detection accuracy and quantify the performance overhead of our system. Our results show that our technique provides accurate detection with low sampling rates."
]
} |
1705.00370 | 2611790720 | Today's routing protocols critically rely on the assumption that the underlying hardware is trusted. Given the increasing number of attacks on network devices, and recent reports on hardware backdoors, this assumption has become questionable. Indeed, with the critical role computer networks play today, the contrast between our security assumptions and reality is problematic. This paper presents Software-Defined Adversarial Trajectory Sampling (SoftATS), an OpenFlow-based mechanism to efficiently monitor packet trajectories, also in the presence of non-cooperating or even adversarial switches or routers, e.g., containing hardware backdoors. Our approach is based on a secure, redundant and adaptive sample distribution scheme which allows us to provably detect adversarial switches or routers trying to reroute, mirror, drop, inject, or modify packets (i.e., header and/or payload). We evaluate the effectiveness of our approach in different adversarial settings, report on a proof-of-concept implementation, and provide a first evaluation of the performance overheads of such a scheme. | @cite_64 describe OpenSketch, a generic and efficient sketch-based measurement framework for SDN data planes. They developed APIs and sketches for generic SDN measurements that alleviate the control-plane programming complexity operators face. Additionally, they present a prototype using NetFPGA to demonstrate the feasibility, applicability and overhead of their approach. Since their framework is designed to run in parallel with the packet processing pipeline, there is no performance impact on forwarding. However, matching OpenFlow flow rules and sending samples to the controller are not feasible. Nonetheless, their framework can be modified to implement our scheme. | {
"cite_N": [
"@cite_64"
],
"mid": [
"1858168446"
],
"abstract": [
"Most network management tasks in software-defined networks (SDN) involve two stages: measurement and control. While many efforts have been focused on network control APIs for SDN, little attention goes into measurement. The key challenge of designing a new measurement API is to strike a careful balance between generality (supporting a wide variety of measurement tasks) and efficiency (enabling high link speed and low cost). We propose a software defined traffic measurement architecture OpenSketch, which separates the measurement data plane from the control plane. In the data plane, OpenSketch provides a simple three-stage pipeline (hashing, filtering, and counting), which can be implemented with commodity switch components and support many measurement tasks. In the control plane, OpenSketch provides a measurement library that automatically configures the pipeline and allocates resources for different measurement tasks. Our evaluations of real-world packet traces, our prototype on NetFPGA, and the implementation of five measurement tasks on top of OpenSketch, demonstrate that OpenSketch is general, efficient and easily programmable."
]
} |
1705.00370 | 2611790720 | Today's routing protocols critically rely on the assumption that the underlying hardware is trusted. Given the increasing number of attacks on network devices, and recent reports on hardware backdoors, this assumption has become questionable. Indeed, with the critical role computer networks play today, the contrast between our security assumptions and reality is problematic. This paper presents Software-Defined Adversarial Trajectory Sampling (SoftATS), an OpenFlow-based mechanism to efficiently monitor packet trajectories, also in the presence of non-cooperating or even adversarial switches or routers, e.g., containing hardware backdoors. Our approach is based on a secure, redundant and adaptive sample distribution scheme which allows us to provably detect adversarial switches or routers trying to reroute, mirror, drop, inject, or modify packets (i.e., header and/or payload). We evaluate the effectiveness of our approach in different adversarial settings, report on a proof-of-concept implementation, and provide a first evaluation of the performance overheads of such a scheme. | The paper most closely related to ours is by @cite_52 . The authors present a smart generalization of trajectory sampling where hash values are shared between a subset of switches, rendering it hard for an adversary to avoid detection. The authors also present a first simulation of the possibility of detecting packet drop, modification and substitution attacks. Our paper builds upon this work in multiple respects. We initiate the study of injection attacks and observe that, if combined with drop attacks, injection attacks introduce a number of more sophisticated attacks. Accordingly, we extend the detection algorithm and analyze its guarantees, both formally and empirically, under various misbehaviors, including injections, mirroring, rerouting, and modifications of headers and/or payloads.
Unlike prior work, our hash assignment algorithm is completely random, eliminating bias for pairs and their assigned hash values. Moreover, our detection algorithm does not rely on an aggregation of trajectories and counters. Instead, we use the collector's (controller's) global view of the network (i.e., the network policy oracle) to compute every sampled packet's trajectory, which gives us per-sample detection accuracy rather than an aggregate. We demonstrate that our detection algorithm can be parallelized using multiple threads (and possibly multiple collectors). | {
"cite_N": [
"@cite_52"
],
"mid": [
"2162618109"
],
"abstract": [
"Routing infrastructure plays a vital role in the Internet, and attacks on routers can be damaging. Compromised routers can drop, modify, mis-forward or reorder valid packets. Existing proposals for secure forwarding require substantial computational overhead and additional capabilities at routers. We propose secure split assignment trajectory sampling (SATS), a system that detects malicious routers on the data plane. SATS locates a set of suspicious routers when packets do not follow their predicted paths. It works with a traffic measurement platform using packet sampling, has low overhead on routers and is applicable to high-speed networks. Different subsets of packets are sampled over different groups of routers to ensure that an attacker cannot completely evade detection. Our evaluation shows that SATS can significantly limit a malicious router's harm to a small portion of traffic in a network"
]
} |
1705.00108 | 2610748790 | Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context sensitive representations is trained on relatively little labeled data. In this paper, we demonstrate a general semi-supervised approach for adding pre-trained context embeddings from bidirectional language models to NLP systems and apply it to sequence labeling tasks. We evaluate our model on two standard datasets for named entity recognition (NER) and chunking, and in both cases achieve state of the art results, surpassing previous systems that use other forms of transfer or joint learning with additional labeled data and task specific gazetteers. | TagLM was inspired by the widespread use of pre-trained word embeddings in supervised sequence tagging models. Besides pre-trained word embeddings, our method is most closely related to . Instead of using an LM, uses a probabilistic generative model to infer context-sensitive latent variables for each token, which are then used as extra features in a supervised CRF tagger . Other semi-supervised learning methods for structured prediction problems include co-training, expectation maximization, structural learning, and maximum discriminant functions. It is easy to combine TagLM with any of the above methods by including LM embeddings as additional features in the discriminative components of the model (except for expectation maximization). A detailed discussion of semi-supervised learning methods in NLP can be found in @cite_1 . | {
"cite_N": [
"@cite_1"
],
"mid": [
"2099153428"
],
"abstract": [
"This book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how the performance of these algorithms can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason for that is data sparsity, i.e., the limited amounts of data we have available in NLP. However, in most real-world NLP applications our labeled data is also heavily biased. This book introduces extensions of supervised learning algorithms to cope with data sparsity and different kinds of sampling bias. This book is intended to be both readable by first-year students and interesting to the expert audience. My intention was to introduce what is necessary to appreciate the major challenges we face in contemporary NLP related to data sparsity and sampling bias, without wasting too much time on details about supervised learning algorithms or particular NLP applications. I use text classification, part-of-speech tagging, and dependency parsing as running examples, and limit myself to a small set of cardinal learning algorithms. I have worried less about theoretical guarantees (\"this algorithm never does too badly\") than about useful rules of thumb (\"in this case this algorithm may perform really well\"). In NLP, data is so noisy, biased, and non-stationary that few theoretical guarantees can be established and we are typically left with our gut feelings and a catalogue of crazy ideas. I hope this book will provide its readers with both. Throughout the book we include snippets of Python code and empirical evaluations, when relevant. Table of Contents: Introduction / Supervised and Unsupervised Prediction / Semi-Supervised Learning / Learning under Bias / Learning under Unknown Bias / Evaluating under Bias"
]
} |
1704.08992 | 2610293285 | In this paper, we introduce robust and synergetic hand-crafted features and a simple but efficient deep feature from a convolutional neural network (CNN) architecture for defocus estimation. This paper systematically analyzes the effectiveness of different features, and shows how each feature can compensate for the weaknesses of other features when they are concatenated. For a full defocus map estimation, we extract image patches on strong edges sparsely, after which we use them for deep and hand-crafted feature extraction. In order to reduce the degree of patch-scale dependency, we also propose a multi-scale patch extraction strategy. A sparse defocus map is generated using a neural network classifier followed by a probability-joint bilateral filter. The final defocus map is obtained from the sparse defocus map with guidance from an edge-preserving filtered input image. Experimental results show that our algorithm is superior to state-of-the-art algorithms in terms of defocus estimation. Our work can be used for applications such as segmentation, blur magnification, all-in-focus image generation, and 3-D estimation. | Neural networks have proved their worth as algorithms superior to their conventional counterparts in many computer vision tasks, such as object and video classification @cite_48 @cite_29 , image restoration @cite_20 , image matting @cite_7 , image deconvolution @cite_21 , motion blur estimation @cite_36 , blur classification @cite_30 @cite_2 , super-resolution @cite_37 , salient region detection @cite_18 and edge-aware filtering @cite_40 . | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_18",
"@cite_7",
"@cite_36",
"@cite_48",
"@cite_29",
"@cite_21",
"@cite_40",
"@cite_2",
"@cite_20"
],
"mid": [
"",
"54257720",
"2953227099",
"2520247582",
"1916935112",
"2308045930",
"",
"2124964692",
"",
"",
"2154815154"
],
"abstract": [
"",
"We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.",
"Recent advances in saliency detection have utilized deep learning to obtain high level features to detect salient regions in a scene. These advances have demonstrated superior results over previous works that utilize hand-crafted low level features for saliency detection. In this paper, we demonstrate that hand-crafted features can provide complementary information to enhance performance of saliency detection that utilizes only high level features. Our method utilizes both high level and low level features for saliency detection under a unified deep learning framework. The high level features are extracted using the VGG-net, and the low level features are compared with other parts of an image to form a low level distance map. The low level distance map is then encoded using a convolutional neural network (CNN) with multiple 1×1 convolutional and ReLU layers. We concatenate the encoded low level distance map and the high level features, and connect them to a fully connected neural network classifier to evaluate the saliency of a query region. Our experiments show that our method can further improve the performance of state-of-the-art deep learning-based saliency detection methods.",
"We propose a deep Convolutional Neural Networks (CNN) method for natural image matting. Our method takes results of the closed form matting, results of the KNN matting and normalized RGB color images as inputs, and directly learns an end-to-end mapping between the inputs, and reconstructed alpha mattes. We analyze pros and cons of the closed form matting, and the KNN matting in terms of local and nonlocal principle, and show that they are complementary to each other. A major benefit of our method is that it can “recognize” different local image structures, and then combine results of local (closed form matting), and nonlocal (KNN matting) matting effectively to achieve higher quality alpha mattes than both of its inputs. Extensive experiments demonstrate that our proposed deep CNN matting produces visually and quantitatively high-quality alpha mattes. In addition, our method has achieved the highest ranking in the public alpha matting evaluation dataset in terms of the sum of absolute differences, mean squared errors, and gradient errors.",
"In this paper, we address the problem of estimating and removing non-uniform motion blur from a single blurry image. We propose a deep learning approach to predicting the probabilistic distribution of motion blur at the patch level using a convolutional neural network (CNN). We further extend the candidate set of motion kernels predicted by the CNN using carefully designed image rotations. A Markov random field model is then used to infer a dense non-uniform motion blur field enforcing motion smoothness. Finally, motion blur is removed by a non-uniform deblurring model using patch-level image prior. Experimental evaluations show that our approach can effectively estimate and remove complex non-uniform motion blur that is not handled well by previous approaches.",
"",
"",
"Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.",
"",
"",
"Photographs taken through a window are often compromised by dirt or rain present on the window surface. Common cases of this include pictures taken from inside a vehicle, or outdoor security cameras mounted inside a protective enclosure. At capture time, defocus can be used to remove the artifacts, but this relies on achieving a shallow depth-of-field and placement of the camera close to the window. Instead, we present a post-capture image processing solution that can remove localized rain and dirt artifacts from a single image. We collect a dataset of clean corrupted image pairs which are then used to train a specialized form of convolutional neural network. This learns how to map corrupted image patches to clean ones, implicitly capturing the characteristic appearance of dirt and water droplets in natural images. Our models demonstrate effective removal of dirt and rain in outdoor test conditions."
]
} |
1704.08992 | 2610293285 | In this paper, we introduce robust and synergetic hand-crafted features and a simple but efficient deep feature from a convolutional neural network (CNN) architecture for defocus estimation. This paper systematically analyzes the effectiveness of different features, and shows how each feature can compensate for the weaknesses of other features when they are concatenated. For a full defocus map estimation, we extract image patches on strong edges sparsely, after which we use them for deep and hand-crafted feature extraction. In order to reduce the degree of patch-scale dependency, we also propose a multi-scale patch extraction strategy. A sparse defocus map is generated using a neural network classifier followed by a probability-joint bilateral filter. The final defocus map is obtained from the sparse defocus map with guidance from an edge-preserving filtered input image. Experimental results show that our algorithm is superior to state-of-the-art algorithms in terms of defocus estimation. Our work can be used for applications such as segmentation, blur magnification, all-in-focus image generation, and 3-D estimation. | Sun et al. @cite_36 focus on motion blur kernel estimation. They use a CNN to estimate pre-defined discretized motion blur kernels. However, their approach requires rotational input augmentation and takes a considerable amount of time during the MRF propagation step. Aizenberg et al. @cite_30 use a multilayer neural network based on multivalued neurons (MVN) for blur identification. The MVN learning step is computationally efficient. However, their neural network structure is quite simple. Yan and Shao @cite_49 adopt two-stage deep belief networks (DBN) to classify blur types and to identify blur parameters, but they only rely on features from the frequency domain.
"cite_N": [
"@cite_36",
"@cite_30",
"@cite_49"
],
"mid": [
"1916935112",
"",
"2131766072"
],
"abstract": [
"In this paper, we address the problem of estimating and removing non-uniform motion blur from a single blurry image. We propose a deep learning approach to predicting the probabilistic distribution of motion blur at the patch level using a convolutional neural network (CNN). We further extend the candidate set of motion kernels predicted by the CNN using carefully designed image rotations. A Markov random field model is then used to infer a dense non-uniform motion blur field enforcing motion smoothness. Finally, motion blur is removed by a non-uniform deblurring model using patch-level image prior. Experimental evaluations show that our approach can effectively estimate and remove complex non-uniform motion blur that is not handled well by previous approaches.",
"",
"This paper proposes a novel statistical approach to formulate image sharpness metric using eigenvalues. Statistical information of image content is represented effectively using a set of eigenvalues which is computed using singular value decomposition (SVD). The approach is started by normalizing the test image with its energy to minimize the effects of image contrast. Covariance matrix which is computed from the normalized image is then diagonalized using SVD to obtain its eigenvalues. Sharpness score of the test image is determined by taking the trace of the first six largest eigenvalues. The performance of the proposed approach is gauged by comparing it with orthogonal moments-based sharpness metrics. Experimental results show the advantages of the proposed approach in terms of providing wider working range and more precise prediction consistency in noisy condition."
]
} |
1704.08798 | 2611755161 | Words often convey affect -- emotions, feelings, and attitudes. Lexicons of word-affect association have applications in automatic emotion analysis and natural language generation. However, existing lexicons indicate only coarse categories of affect association. Here, for the first time, we create an affect intensity lexicon with real-valued scores of association. We use a technique called best-worst scaling that improves annotation consistency and obtains reliable fine-grained scores. The lexicon includes terms common from both general English and terms specific to social media communications. It has close to 6,000 entries for four basic emotions. We will be adding entries for other affect dimensions shortly. | There is a large body of work on creating valence or sentiment lexicons, including the General Inquirer @cite_2 , ANEW @cite_0 @cite_4 , MPQA @cite_11 , and norms lexicon by . The work on creating lexicons for categorical emotions such as joy, sadness, fear, etc., is comparatively small. WordNet Affect Lexicon @cite_18 has a few hundred words annotated with the emotions they evoke. http://wndomains.fbk.eu/wnaffect.html It was created by manually identifying the emotions of a few seed words and then marking all their WordNet synonyms as having the same emotion. The NRC Emotion Lexicon was created by crowdsourcing and it includes entries for about 14,000 words and eight Plutchik emotions @cite_24 @cite_19 . http://www.purl.org/net/saif.mohammad/research It also includes entries for positive and negative sentiment.
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_0",
"@cite_24",
"@cite_19",
"@cite_2",
"@cite_11"
],
"mid": [
"2404480901",
"2151543699",
"2950974174",
"2040467972",
"2162010436",
"2082291422",
"2014902591"
],
"abstract": [
"In this paper we present a linguistic resource for the lexical representation of affective knowledge. This resource (named WORDNETAFFECT) was developed starting from WORDNET, through a selection and tagging of a subset of synsets representing the affective",
"",
"Sentiment analysis of microblogs such as Twitter has recently gained a fair amount of attention. One of the simplest sentiment analysis approaches compares the words of a posting against a labeled word list, where each word has been scored for valence, -- a 'sentiment lexicon' or 'affective word lists'. There exist several affective word lists, e.g., ANEW (Affective Norms for English Words) developed before the advent of microblogging and sentiment analysis. I wanted to examine how well ANEW and other word lists performs for the detection of sentiment strength in microblog posts in comparison with a new word list specifically constructed for microblogs. I used manually labeled postings from Twitter scored for sentiment. Using a simple word matching I show that the new word list may perform better than ANEW, though not as good as the more elaborate approach found in SentiStrength.",
"Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper, we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word–emotion and word–polarity association lexicon quickly and inexpensively. We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them. Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help to identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help to obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotion-annotation questions, and show that asking if a term is associated with an emotion leads to markedly higher interannotator agreement than that obtained by asking if a term evokes an emotion.",
"Even though considerable attention has been given to semantic orientation of words and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper, we show how we create a high-quality, moderate-sized emotion lexicon using Mechanical Turk. In addition to questions about emotions evoked by terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We perform an extensive analysis of the annotations to better understand the distribution of emotions evoked by terms of different parts of speech. We identify which emotions tend to be evoked simultaneously by the same term and show that certain emotions indeed go hand in hand.",
"",
"This paper describes a corpus annotation project to study issues in the manual annotation of opinions, emotions, sentiments, speculations, evaluations and other private states in language. The resulting corpus annotation scheme is described, as well as examples of its use. In addition, the manual annotation process and the results of an inter-annotator agreement study on a 10,000-sentence corpus of articles drawn from the world press are presented."
]
} |
1704.08798 | 2611755161 | Words often convey affect -- emotions, feelings, and attitudes. Lexicons of word-affect association have applications in automatic emotion analysis and natural language generation. However, existing lexicons indicate only coarse categories of affect association. Here, for the first time, we create an affect intensity lexicon with real-valued scores of association. We use a technique called best-worst scaling that improves annotation consistency and obtains reliable fine-grained scores. The lexicon includes terms common from both general English and terms specific to social media communications. It has close to 6,000 entries for four basic emotions. We will be adding entries for other affect dimensions shortly. | Best-Worst Scaling (BWS) was developed by , building on some ground-breaking research in the 1960’s in mathematical psychology and psychophysics by Anthony A. J. Marley and Duncan Luce. However, it is not well known outside the areas of choice modeling and marketing research. Within the NLP community, BWS has thus far been used for creating datasets for relational similarity @cite_15 , word-sense disambiguation @cite_6 , and word--sentiment intensity @cite_1 . In this work we use BWS to annotate words for intensity (or degree) of affect. With BWS we address the challenges of direct scoring, and produce more reliable emotion intensity scores. Further, this will be the first dataset that will also include emotion scores for words common in social media. | {
"cite_N": [
"@cite_15",
"@cite_1",
"@cite_6"
],
"mid": [
"2098801107",
"",
"2250638193"
],
"abstract": [
"Up to now, work on semantic relations has focused on relation classification: recognizing whether a given instance (a word pair such as virus:flu) belongs to a specific relation class (such as CAUSE:EFFECT). However, instances of a single relation class may still have significant variability in how characteristic they are of that class. We present a new SemEval task based on identifying the degree of prototypicality for instances within a given class. As a part of the task, we have assembled the first dataset of graded relational similarity ratings across 79 relation categories. Three teams submitted six systems, which were evaluated using two methods.",
"",
"Word sense disambiguation aims to identify which meaning of a word is present in a given usage. Gathering word sense annotations is a laborious and difficult task. Several methods have been proposed to gather sense annotations using large numbers of untrained annotators, with mixed results. We propose three new annotation methodologies for gathering word senses where untrained annotators are allowed to use multiple labels and weight the senses. Our findings show that given the appropriate annotation task, untrained workers can obtain at least as high agreement as annotators in a controlled setting, and in aggregate generate equally as good of a sense labeling."
]
} |
1704.08798 | 2611755161 | Words often convey affect -- emotions, feelings, and attitudes. Lexicons of word-affect association have applications in automatic emotion analysis and natural language generation. However, existing lexicons indicate only coarse categories of affect association. Here, for the first time, we create an affect intensity lexicon with real-valued scores of association. We use a technique called best-worst scaling that improves annotation consistency and obtains reliable fine-grained scores. The lexicon includes terms common from both general English and terms specific to social media communications. It has close to 6,000 entries for four basic emotions. We will be adding entries for other affect dimensions shortly. | There is growing work on automatically determining word--emotion associations @cite_20 @cite_13 @cite_18 @cite_10 . These automatic methods often assign a real-valued score representing the degree of association. However, they have been evaluated on the class of emotion they assign to each word. With the NRC Affect Intensity Lexicon, one can evaluate how accurately the automatic methods capture affect intensity. | {
"cite_N": [
"@cite_10",
"@cite_18",
"@cite_13",
"@cite_20"
],
"mid": [
"2139517011",
"2404480901",
"2740540254",
""
],
"abstract": [
"An emotion lexicon is an indispensable resource for emotion analysis. This paper aims to mine the relationships between words and emotions using weblog corpora. A collocation model is proposed to learn emotion lexicons from weblog articles. Emotion classification at sentence level is experimented by using the mined lexicons to demonstrate their usefulness.",
"In this paper we present a linguistic resource for the lexical representation of affective knowledge. This resource (named WORDNETAFFECT) was developed starting from WORDNET, through a selection and tagging of a subset of synsets representing the affective",
"This paper examines the task of detecting intensity of emotion from text. We create the first datasets of tweets annotated for anger, fear, joy, and sadness intensities. We use a technique called best--worst scaling (BWS) that improves annotation consistency and obtains reliable fine-grained scores. We show that emotion-word hashtags often impact emotion intensity, usually conveying a more intense emotion. Finally, we create a benchmark regression system and conduct experiments to determine: which features are useful for detecting emotion intensity, and, the extent to which two emotions are similar in terms of how they manifest in language.",
""
]
} |
1704.08792 | 2610817424 | In deep learning, performance is strongly affected by the choice of architecture and hyperparameters. While there has been extensive work on automatic hyperparameter optimization for simple spaces, complex spaces such as the space of deep architectures remain largely unexplored. As a result, the choice of architecture is done manually by the human expert through a slow trial and error process guided mainly by intuition. In this paper we describe a framework for automatically designing and training deep models. We propose an extensible and modular language that allows the human expert to compactly represent complex search spaces over architectures and their hyperparameters. The resulting search spaces are tree-structured and therefore easy to traverse. Models can be automatically compiled to computational graphs once values for all hyperparameters have been chosen. We can leverage the structure of the search space to introduce different model search algorithms, such as random search, Monte Carlo tree search (MCTS), and sequential model-based optimization (SMBO). We present experiments comparing the different algorithms on CIFAR-10 and show that MCTS and SMBO outperform random search. We also present experiments on MNIST, showing that the same search space achieves near state-of-the-art performance with a few samples. These experiments show that our framework can be used effectively for model discovery, as it is possible to describe expressive search spaces and discover competitive models without much effort from the human expert. Code for our framework and experiments has been made publicly available | Architecture search has received renewed interest recently. Wierstra et al. @cite_20 , Floreano et al. @cite_17 , and Real et al. @cite_29 use evolutionary algorithms which start from an initial model and evolve it based on its validation performance. Zoph and Le @cite_24 propose a reinforcement learning procedure based on policy gradient for searching for convolutional and LSTM architectures. Baker et al. @cite_5 propose a reinforcement learning procedure based on Q-learning for searching for convolutional architectures.
"cite_N": [
"@cite_29",
"@cite_24",
"@cite_5",
"@cite_20",
"@cite_17"
],
"mid": [
"2949264490",
"2963374479",
"2556833785",
"2125303539",
"2171658832"
],
"abstract": [
"Neural networks have proven effective at solving difficult problems but designing their architectures can be challenging, even for image classification problems alone. Our goal is to minimize human participation, so we employ evolutionary algorithms to discover such networks automatically. Despite significant computational requirements, we show that it is now possible to evolve models with accuracies within the range of those published in the last year. Specifically, we employ simple evolutionary techniques at unprecedented scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting from trivial initial conditions and reaching accuracies of 94.6 (95.6 for ensemble) and 77.0 , respectively. To do this, we use novel and intuitive mutation operators that navigate large search spaces; we stress that no human participation is required once evolution starts and that the output is a fully-trained model. Throughout this work, we place special emphasis on the repeatability of results, the variability in the outcomes and the computational requirements.",
"Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.",
"At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using @math -learning with an @math -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.",
"Existing Recurrent Neural Networks (RNNs) are limited in their ability to model dynamical systems with nonlinearities and hidden internal states. Here we use our general framework for sequence learning, EVOlution of recurrent systems with LINear Outputs (Evolino), to discover good RNN hidden node weights through evolution, while using linear regression to compute an optimal linear mapping from hidden state to output. Using the Long Short-Term Memory RNN Architecture, Evolino outperforms previous state-of-the-art methods on several tasks: 1) context-sensitive languages, 2) multiple superimposed sine waves.",
"Artificial neural networks (ANNs) are applied to many real-world problems, ranging from pattern clas- sification to robot control. In order to design a neural network for a particular task, the choice of an architecture (including the choice of a neuron model), and the choice of a learning algorithm have to be addressed. Evolutionary search methods can provide an automatic solution to these problems. New insights in both neuroscience and evolu- tionary biology have led to the development of increasingly powerful neuroevolution techniques over the last decade. This paper gives an overview of the most prominent methods for evolving ANNs with a special focus on recent advances in the synthesis of learning architectures."
]
} |
1704.08792 | 2610817424 | In deep learning, performance is strongly affected by the choice of architecture and hyperparameters. While there has been extensive work on automatic hyperparameter optimization for simple spaces, complex spaces such as the space of deep architectures remain largely unexplored. As a result, the choice of architecture is done manually by the human expert through a slow trial and error process guided mainly by intuition. In this paper we describe a framework for automatically designing and training deep models. We propose an extensible and modular language that allows the human expert to compactly represent complex search spaces over architectures and their hyperparameters. The resulting search spaces are tree-structured and therefore easy to traverse. Models can be automatically compiled to computational graphs once values for all hyperparameters have been chosen. We can leverage the structure of the search space to introduce different model search algorithms, such as random search, Monte Carlo tree search (MCTS), and sequential model-based optimization (SMBO). We present experiments comparing the different algorithms on CIFAR-10 and show that MCTS and SMBO outperform random search. We also present experiments on MNIST, showing that the same search space achieves near state-of-the-art performance with a few samples. These experiments show that our framework can be used effectively for model discovery, as it is possible to describe expressive search spaces and discover competitive models without much effort from the human expert. Code for our framework and experiments has been made publicly available | TPE is a general hyperparameter search algorithm, and therefore requires considerable effort to use---for any fixed model search space, using TPE requires the human expert to distill the hyperparameters of the search space, express the search space in Hyperopt @cite_25 (an implementation of TPE), and write the code describing how values of the hyperparameters in the search space compile to a computational graph. In contrast, our language is modular and composable in the sense that: search spaces (defined through modules) are constructed compositionally out of simpler search spaces (i.e., simpler modules); hyperparameters for composite modules are derived automatically from the hyperparameters of simpler modules; once values for all hyperparameters of a module have been chosen, the resulting model can be automatically mapped to a computational graph without the human expert having to write additional code.
"cite_N": [
"@cite_25"
],
"mid": [
"2113207845"
],
"abstract": [
"Many computer vision algorithms depend on configuration settings that are typically hand-tuned in the course of evaluating the algorithm for a particular data set. While such parameter tuning is often presented as being incidental to the algorithm, correctly setting these parameter choices is frequently critical to realizing a method's full potential. Compounding matters, these parameters often must be re-tuned when the algorithm is applied to a new problem domain, and the tuning process itself often depends on personal experience and intuition in ways that are hard to quantify or describe. Since the performance of a given technique depends on both the fundamental quality of the algorithm and the details of its tuning, it is sometimes difficult to know whether a given technique is genuinely better, or simply better tuned. In this work, we propose a meta-modeling approach to support automated hyperparameter optimization, with the goal of providing practical tools that replace hand-tuning with a reproducible and unbiased optimization process. Our approach is to expose the underlying expression graph of how a performance metric (e.g. classification accuracy on validation examples) is computed from hyperparameters that govern not only how individual processing steps are applied, but even which processing steps are included. A hyperparameter optimization algorithm transforms this graph into a program for optimizing that performance metric. Our approach yields state of the art results on three disparate computer vision problems: a face-matching verification task (LFW), a face identification task (PubFig83) and an object recognition task (CIFAR-10), using a single broad class of feed-forward vision architectures."
]
} |
1704.08723 | 2611936121 | Despite the rapid progress, existing works on action understanding focus strictly on one type of action agent, which we call actor---a human adult, ignoring the diversity of actions performed by other actors. To overcome this narrow viewpoint, our paper marks the first effort in the computer vision community to jointly consider algorithmic understanding of various types of actors undergoing various actions. To begin with, we collect a large annotated Actor-Action Dataset (A2D) that consists of 3782 short videos and 31 temporally untrimmed long videos. We formulate the general actor-action understanding problem and instantiate it at various granularities: video-level single- and multiple-label actor-action recognition, and pixel-level actor-action segmentation. We propose and examine a comprehensive set of graphical models that consider the various types of interplay among actors and actions. Our findings have led us to conclusive evidence that the joint modeling of actor and action improves performance over modeling each of them independently, and further improvement can be obtained by considering the multi-scale nature of video understanding. Hence, our paper concludes the argument of the value of explicit consideration of various actors in comprehensive action understanding and provides a dataset and a benchmark for later works exploring this new problem. | Our work explores a new dimension in fine-grained action understanding. A related work is @cite_59 , where they focus on finding different humans in movies, but these are the actor names and not different types of actors as we consider in this paper. In @cite_5 , they study egocentric animal activities, such as turn head, walk, and body shake, from the viewpoint of an animal---a dog. Our work differs from them by explicitly considering the types of actors in modeling actions. Similarly, our work also differs from the existing works on actions and objects, such as @cite_38 @cite_26 , which are strictly focused on interactions between human actors manipulating various physical objects.
"cite_N": [
"@cite_5",
"@cite_26",
"@cite_59",
"@cite_38"
],
"mid": [
"2081490348",
"",
"2119031011",
"2169393274"
],
"abstract": [
"This paper introduces the concept of first-person animal activity recognition, the problem of recognizing activities from a view-point of an animal (e.g., a dog). Similar to first-person activity recognition scenarios where humans wear cameras, our approach estimates activities performed by an animal wearing a camera. This enables monitoring and understanding of natural animal behaviors even when there are no people around them. Its applications include automated logging of animal behaviors for medical biology experiments, monitoring of pets, and investigation of wildlife patterns. In this paper, we construct a new dataset composed of first-person animal videos obtained by mounting a camera on each of the four pet dogs. Our new dataset consists of 10 activities containing a heavy fair amount of ego-motion. We implemented multiple baseline approaches to recognize activities from such videos while utilizing multiple types of global local motion features. Animal ego-actions as well as human-animal interactions are recognized with the baseline approaches, and we discuss experimental results.",
"",
"We address the problem of learning a joint model of actors and actions in movies using weak supervision provided by scripts. Specifically, we extract actor action pairs from the script and use them as constraints in a discriminative clustering framework. The corresponding optimization problem is formulated as a quadratic program under linear constraints. People in video are represented by automatically extracted and tracked faces together with corresponding motion features. First, we apply the proposed framework to the task of learning names of characters in the movie and demonstrate significant improvements over previous methods used for this task. Second, we explore the joint actor action constraint and show its advantage for weakly supervised action learning. We validate our method in the challenging setting of localizing and recognizing characters and their actions in feature length movies Casablanca and American Beauty.",
"Interpretation of images and videos containing humans interacting with different objects is a daunting task. It involves understanding scene or event, analyzing human movements, recognizing manipulable objects, and observing the effect of the human movement on those objects. While each of these perceptual tasks can be conducted independently, recognition rate improves when interactions between them are considered. Motivated by psychological studies of human perception, we present a Bayesian approach which integrates various perceptual tasks involved in understanding human-object interactions. Previous approaches to object and action recognition rely on static shape or appearance feature matching and motion analysis, respectively. Our approach goes beyond these traditional approaches and applies spatial and functional constraints on each of the perceptual elements for coherent semantic interpretation. Such constraints allow us to recognize objects and actions when the appearances are not discriminative enough. We also demonstrate the use of such constraints in recognition of actions from static images without using any motion information."
]
} |
1704.08723 | 2611936121 | Despite the rapid progress, existing works on action understanding focus strictly on one type of action agent, which we call actor---a human adult, ignoring the diversity of actions performed by other actors. To overcome this narrow viewpoint, our paper marks the first effort in the computer vision community to jointly consider algorithmic understanding of various types of actors undergoing various actions. To begin with, we collect a large annotated Actor-Action Dataset (A2D) that consists of 3782 short videos and 31 temporally untrimmed long videos. We formulate the general actor-action understanding problem and instantiate it at various granularities: video-level single- and multiple-label actor-action recognition, and pixel-level actor-action segmentation. We propose and examine a comprehensive set of graphical models that consider the various types of interplay among actors and actions. Our findings have led us to conclusive evidence that the joint modeling of actor and action improves performance over modeling each of them independently, and further improvement can be obtained by considering the multi-scale nature of video understanding. Hence, our paper concludes the argument of the value of explicit consideration of various actors in comprehensive action understanding and provides a dataset and a benchmark for later works exploring this new problem. | In addition to the increased diversity in activities, the literature in action understanding is moving away from simple video classification to fine-grained output. For example, methods like @cite_68 @cite_70 detect human actions with bounding box tubes or 3D volumes in videos, and methods like @cite_60 @cite_54 even provide pixel-level segmentation of human actions. The shift of research interest is also observed in the relaxed assumptions on the input videos.
For instance, temporally untrimmed videos are considered in @cite_61 and @cite_46 , and online streaming videos are explored in @cite_2 and @cite_8 . Our work analyzes actors and actions at various granularities, e.g., both video-level and pixel-level, and we experiment with both short and long videos. | {
"cite_N": [
"@cite_61",
"@cite_60",
"@cite_70",
"@cite_8",
"@cite_54",
"@cite_2",
"@cite_46",
"@cite_68"
],
"mid": [
"2394849137",
"1912148408",
"",
"2951570213",
"",
"2460134573",
"2529163075",
"2095661305"
],
"abstract": [
"We address temporal action localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in temporal action localization via three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments in a long video that may contain actions; (2) a classification network learns one-vs-all action classification model to serve as initialization for the localization network; and (3) a localization network fine-tunes on the learned classification network to localize each action instance. We propose a novel loss function for the localization network to explicitly consider temporal overlap and therefore achieve high temporal localization accuracy. Only the proposal network and the localization network are used during prediction. On two large-scale benchmarks, our approach achieves significantly superior performances compared with other state-of-the-art systems: mAP increases from 1.7 to 7.4 on MEXaction2 and increases from 15.0 to 19.0 on THUMOS 2014, when the overlap threshold for evaluation is set to 0.5.",
"Detailed analysis of human action, such as action classification, detection and localization has received increasing attention from the community; datasets like JHMDB have made it plausible to conduct studies analyzing the impact that such deeper information has on the greater action understanding problem. However, detailed automatic segmentation of human action has comparatively been unexplored. In this paper, we take a step in that direction and propose a hierarchical MRF model to bridge low-level video fragments with high-level human motion and appearance; novel higher-order potentials connect different levels of the supervoxel hierarchy to enforce the consistency of the human segmentation by pulling from different segment-scales. Our single layer model significantly outperforms the current state-of-the-art on actionness, and our full model improves upon the single layer baselines in action segmentation.",
"",
"In online action detection, the goal is to detect the start of an action in a video stream as soon as it happens. For instance, if a child is chasing a ball, an autonomous car should recognize what is going on and respond immediately. This is a very challenging problem for four reasons. First, only partial actions are observed. Second, there is a large variability in negative data. Third, the start of the action is unknown, so it is unclear over what time window the information should be integrated. Finally, in real world data, large within-class variability exists. This problem has been addressed before, but only to some extent. Our contributions to online action detection are threefold. First, we introduce a realistic dataset composed of 27 episodes from 6 popular TV series. The dataset spans over 16 hours of footage annotated with 30 action classes, totaling 6,231 action instances. Second, we analyze and compare various baseline methods, showing this is a challenging problem for which none of the methods provides a good solution. Third, we analyze the change in performance when there is a variation in viewpoint, occlusion, truncation, etc. We introduce an evaluation protocol for fair comparison. The dataset, the baselines and the models will all be made publicly available to encourage (much needed) further research on online action detection on realistic data.",
"",
"This paper proposes a novel approach to tackle the challenging problem of 'online action localization' which entails predicting actions and their locations as they happen in a video. Typically, action localization or recognition is performed in an offline manner where all the frames in the video are processed together and action labels are not predicted for the future. This disallows timely localization of actions - an important consideration for surveillance tasks. In our approach, given a batch of frames from the immediate past in a video, we estimate pose and oversegment the current frame into superpixels. Next, we discriminatively train an actor foreground model on the superpixels using the pose bounding boxes. A Conditional Random Field with superpixels as nodes, and edges connecting spatio-temporal neighbors is used to obtain action segments. The action confidence is predicted using dynamic programming on SVM scores obtained on short segments of the video, thereby capturing sequential information of the actions. The issue of visual drift is handled by updating the appearance model and pose refinement in an online manner. Lastly, we introduce a new measure to quantify the performance of action prediction (i.e. online action localization), which analyzes how the prediction accuracy varies as a function of observed portion of the video. Our experiments suggest that despite using only a few frames to localize actions at each time instant, we are able to predict the action and obtain competitive results to state-of-the-art offline methods.",
"We investigate the feature design and classification architectures in temporal action localization. This application focuses on detecting and labeling actions in untrimmed videos, which brings more challenge than classifying presegmented videos. The major difficulty for action localization is the uncertainty of action occurrence and utilization of information from different scales. Two innovations are proposed to address this issue. First, we propose a Pyramid of Score Distribution Feature (PSDF) to capture the motion information at multiple resolutions centered at each detection window. This novel feature mitigates the influence of unknown action position and duration, and shows significant performance gain over previous detection approaches. Second, inter-frame consistency is further explored by incorporating PSDF into the state-of-the-art Recurrent Neural Networks, which gives additional performance gain in detecting actions in temporally untrimmed videos. We tested our action localization framework on the THUMOS'15 and MPII Cooking Activities Dataset, both of which show a large performance improvement over previous attempts.",
"Deformable part models have achieved impressive performance for object detection, even on difficult image datasets. This paper explores the generalization of deformable part models from 2D images to 3D spatiotemporal volumes to better study their effectiveness for action detection in video. Actions are treated as spatiotemporal patterns and a deformable part model is generated for each action from a collection of examples. For each action model, the most discriminative 3D sub volumes are automatically selected as parts and the spatiotemporal relations between their locations are learned. By focusing on the most distinctive parts of each action, our models adapt to intra-class variation and show robustness to clutter. Extensive experiments on several video datasets demonstrate the strength of spatiotemporal DPMs for classifying and localizing actions."
]
} |
1704.08723 | 2611936121 | Despite the rapid progress, existing works on action understanding focus strictly on one type of action agent, which we call actor---a human adult, ignoring the diversity of actions performed by other actors. To overcome this narrow viewpoint, our paper marks the first effort in the computer vision community to jointly consider algorithmic understanding of various types of actors undergoing various actions. To begin with, we collect a large annotated Actor-Action Dataset (A2D) that consists of 3782 short videos and 31 temporally untrimmed long videos. We formulate the general actor-action understanding problem and instantiate it at various granularities: video-level single- and multiple-label actor-action recognition, and pixel-level actor-action segmentation. We propose and examine a comprehensive set of graphical models that consider the various types of interplay among actors and actions. Our findings have led us to conclusive evidence that the joint modeling of actor and action improves performance over modeling each of them independently, and further improvement can be obtained by considering the multi-scale nature of video understanding. Hence, our paper concludes the argument of the value of explicit consideration of various actors in comprehensive action understanding and provides a dataset and a benchmark for later works exploring this new problem. | We now discuss related work in segmentation, which is a major emphasis of our broader view of action understanding, e.g., the actor-action segmentation task. Semantic segmentation methods can now densely label more than a dozen classes in images @cite_47 @cite_11 and videos @cite_69 @cite_13 undergoing rapid motion; such methods have even been unified with object detectors and scene classifiers @cite_67 , extended to 3D @cite_45 and posed jointly with attributes @cite_44 , stereo @cite_28 and SFM @cite_31 .
Although the underlying optimization problems in these methods tend to be expensive, average per-class accuracy scores have significantly increased. Other related works include video object segmentation @cite_42 @cite_9 and joint temporal segmentation with action recognition @cite_34 . The video object segmentation methods are class-independent and assume a single dominant object (actor) in the video; they are hence not directly comparable to our work, although one can foresee a potential method using video object segmentation as a precursor to the actor-action understanding problem. | {
"cite_N": [
"@cite_69",
"@cite_67",
"@cite_31",
"@cite_28",
"@cite_9",
"@cite_42",
"@cite_44",
"@cite_45",
"@cite_47",
"@cite_34",
"@cite_13",
"@cite_11"
],
"mid": [
"2083597815",
"2137881638",
"1913356549",
"2020045638",
"",
"1989348325",
"2082394523",
"801273237",
"2535516436",
"1978511849",
"",
""
],
"abstract": [
"",
"In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.",
"We propose an algorithm for semantic segmentation based on 3D point clouds derived from ego-motion. We motivate five simple cues designed to model specific patterns of motion and 3D world structure that vary with object category. We introduce features that project the 3D cues back to the 2D image plane while modeling spatial layout and context. A randomized decision forest combines many such features to achieve a coherent 2D segmentation and recognize the object categories present. Our main contribution is to show how semantic segmentation is possible based solely on motion-derived 3D world structure. Our method works well on sparse, noisy point clouds, and unlike existing approaches, does not need appearance-based descriptors. Experiments were performed on a challenging new video database containing sequences filmed from a moving car in daylight and at dusk. The results confirm that indeed, accurate segmentation and recognition are possible using only motion and 3D world structure. Further, we show that the motion-derived information complements an existing state-of-the-art appearance-based method, improving both qualitative and quantitative performance.",
"The problems of dense stereo reconstruction and object class segmentation can both be formulated as Random Field labeling problems, in which every pixel in the image is assigned a label corresponding to either its disparity, or an object class such as road or building. While these two problems are mutually informative, no attempt has been made to jointly optimize their labelings. In this work we provide a flexible framework configured via cross-validation that unifies the two problems and demonstrate that, by resolving ambiguities, which would be present in real world data if the two problems were considered separately, joint optimization of the two problems substantially improves performance. To evaluate our method, we augment the Leuven data set ( http://cms.brookes.ac.uk/research/visiongroup/files/Leuven.zip ), which is a stereo video shot from a car driving around the streets of Leuven, with 70 hand labeled object class and disparity maps. We hope that the release of these annotations will stimulate further work in the challenging domain of street-view analysis. Complete source code is publicly available ( http://cms.brookes.ac.uk/staff/Philip-Torr/ale.htm ).",
"",
"We present an approach to discover and segment foreground object(s) in video. Given an unannotated video sequence, the method first identifies object-like regions in any frame according to both static and dynamic cues. We then compute a series of binary partitions among those candidate “key-segments” to discover hypothesis groups with persistent appearance and motion. Finally, using each ranked hypothesis in turn, we estimate a pixel-level object labeling across all frames, where (a) the foreground likelihood depends on both the hypothesis's appearance as well as a novel localization prior based on partial shape matching, and (b) the background likelihood depends on cues pulled from the key-segments' (possibly diverse) surroundings observed across the sequence. Compared to existing methods, our approach automatically focuses on the persistent foreground regions of interest while resisting oversegmentation. We apply our method to challenging benchmark videos, and show competitive or better results than the state-of-the-art.",
"The concepts of objects and attributes are both important for describing images precisely, since verbal descriptions often contain both adjectives and nouns (e.g. 'I see a shiny red chair'). In this paper, we formulate the problem of joint visual attribute and object class image segmentation as a dense multi-labelling problem, where each pixel in an image can be associated with both an object-class and a set of visual attributes labels. In order to learn the label correlations, we adopt a boosting-based piecewise training approach with respect to the visual appearance and co-occurrence cues. We use a filtering-based mean-field approximation approach for efficient joint inference. Further, we develop a hierarchical model to incorporate region-level object and attribute information. Experiments on the aPASCAL, CORE and attribute augmented NYU indoor scenes datasets show that the proposed approach is able to achieve state-of-the-art results.",
"We present an approach for joint inference of 3D scene structure and semantic labeling for monocular video. Starting with monocular image stream, our framework produces a 3D volumetric semantic + occupancy map, which is much more useful than a series of 2D semantic label images or a sparse point cloud produced by traditional semantic segmentation and Structure from Motion (SfM) pipelines respectively. We derive a Conditional Random Field (CRF) model defined in the 3D space, that jointly infers the semantic category and occupancy for each voxel. Such a joint inference in the 3D CRF paves the way for more informed priors and constraints, which is otherwise not possible if solved separately in their traditional frameworks. We make use of class specific semantic cues that constrain the 3D structure in areas, where multiview constraints are weak. Our model comprises of higher order factors, which helps when the depth is unobservable. We also make use of class specific semantic cues to reduce either the degree of such higher order factors, or to approximately model them with unaries if possible. We demonstrate improved 3D structure and temporally consistent semantic segmentation for difficult, large scale, forward moving monocular image sequences.",
"Most methods for object class segmentation are formulated as a labelling problem over a single choice of quantisation of an image space - pixels, segments or group of segments. It is well known that each quantisation has its fair share of pros and cons; and the existence of a common optimal quantisation level suitable for all object categories is highly unlikely. Motivated by this observation, we propose a hierarchical random field model, that allows integration of features computed at different levels of the quantisation hierarchy. MAP inference in this model can be performed efficiently using powerful graph cut based move making algorithms. Our framework generalises much of the previous work based on pixels or segments. We evaluate its efficiency on some of the most challenging data-sets for object class segmentation, and show it obtains state-of-the-art results.",
"Automatic video segmentation and action recognition has been a long-standing problem in computer vision. Much work in the literature treats video segmentation and action recognition as two independent problems; while segmentation is often done without a temporal model of the activity, action recognition is usually performed on pre-segmented clips. In this paper we propose a novel method that avoids the limitations of the above approaches by jointly performing video segmentation and action recognition. Unlike standard approaches based on extensions of dynamic Bayesian networks, our method is based on a discriminative temporal extension of the spatial bag-of-words model that has been very popular in object recognition. The classification is performed robustly within a multi-class SVM framework whereas the inference over the segments is done efficiently with dynamic programming. Experimental results on honeybee, Weizmann, and Hollywood datasets illustrate the benefits of our approach compared to state-of-the-art methods.",
"",
""
]
} |
1704.08723 | 2611936121 | Despite the rapid progress, existing works on action understanding focus strictly on one type of action agent, which we call actor---a human adult, ignoring the diversity of actions performed by other actors. To overcome this narrow viewpoint, our paper marks the first effort in the computer vision community to jointly consider algorithmic understanding of various types of actors undergoing various actions. To begin with, we collect a large annotated Actor-Action Dataset (A2D) that consists of 3782 short videos and 31 temporally untrimmed long videos. We formulate the general actor-action understanding problem and instantiate it at various granularities: video-level single- and multiple-label actor-action recognition, and pixel-level actor-action segmentation. We propose and examine a comprehensive set of graphical models that consider the various types of interplay among actors and actions. Our findings have led us to conclusive evidence that the joint modeling of actor and action improves performance over modeling each of them independently, and further improvement can be obtained by considering the multi-scale nature of video understanding. Hence, our paper concludes the argument of the value of explicit consideration of various actors in comprehensive action understanding and provides a dataset and a benchmark for later works exploring this new problem. | Multi-task learning has been effective in many applications, such as object detection @cite_7 and classification @cite_19 . The goal is to jointly learn models or shared representations that outperform those learned separately for each task. Recently, multi-task learning has also been adapted to action classification. For example, the authors of @cite_32 classify human actions in videos with shared latent tasks. Our paper differs from them by explicitly modeling the relationship and interactions among actors and actions under a unified graphical model.
Notice that it is possible to use multi-task learning to train a shared deep representation for actors and actions by extending frameworks such as the multi-task network cascades @cite_51 ; however, this has a different focus than our work, so we leave it to future work. | {
"cite_N": [
"@cite_19",
"@cite_51",
"@cite_32",
"@cite_7"
],
"mid": [
"",
"2949295283",
"2156392723",
"2010132303"
],
"abstract": [
"",
"Semantic segmentation research has recently witnessed rapid progress, but many leading methods are unable to identify object instances. In this paper, we present Multi-task Network Cascades for instance-aware semantic segmentation. Our model consists of three networks, respectively differentiating instances, estimating masks, and categorizing objects. These networks form a cascaded structure, and are designed to share their convolutional features. We develop an algorithm for the nontrivial end-to-end training of this causal, cascaded structure. Our solution is a clean, single-step training framework and can be generalized to cascades that have more stages. We demonstrate state-of-the-art instance-aware semantic segmentation accuracy on PASCAL VOC. Meanwhile, our method takes only 360ms testing an image using VGG-16, which is two orders of magnitude faster than previous systems for this challenging problem. As a by product, our method also achieves compelling object detection results which surpass the competitive Fast Faster R-CNN systems. The method described in this paper is the foundation of our submissions to the MS COCO 2015 segmentation competition, where we won the 1st place.",
"Sharing knowledge for multiple related machine learning tasks is an effective strategy to improve the generalization performance. In this paper, we investigate knowledge sharing across categories for action recognition in videos. The motivation is that many action categories are related, where common motion pattern are shared among them (e.g. diving and high jump share the jump motion). We propose a new multi-task learning method to learn latent tasks shared across categories, and reconstruct a classifier for each category from these latent tasks. Compared to previous methods, our approach has two advantages: (1) The learned latent tasks correspond to basic motion patterns instead of full actions, thus enhancing discrimination power of the classifiers. (2) Categories are selected to share information with a sparsity regularizer, avoiding falsely forcing all categories to share knowledge. Experimental results on multiple public data sets show that the proposed approach can effectively transfer knowledge between different action categories to improve the performance of conventional single task learning methods.",
"We present a hierarchical classification model that allows rare objects to borrow statistical strength from related objects that have many training examples. Unlike many of the existing object detection and recognition systems that treat different classes as unrelated entities, our model learns both a hierarchy for sharing visual appearance across 200 object categories and hierarchical parameters. Our experimental results on the challenging object localization and detection task demonstrate that the proposed model substantially improves the accuracy of the standard single object detectors that ignore hierarchical structure altogether."
]
} |
1704.08821 | 2611805410 | A discriminative ensemble tracker employs multiple classifiers, each of which casts a vote on all of the obtained samples. The votes are then aggregated in an attempt to localize the target object. Such a method relies on collective competence and the diversity of the ensemble to approach the target/non-target classification task from different views. However, by updating all of the ensemble using a shared set of samples and their final labels, such diversity is lost or reduced to the diversity provided by the underlying features or internal classifiers' dynamics. Additionally, the classifiers do not exchange information with each other while striving to serve the collective goal, i.e., better classification. In this study, we propose an active collaborative information exchange scheme for ensemble tracking. This not only orchestrates different classifiers towards a common goal but also provides an intelligent update mechanism to keep the diversity of classifiers and to mitigate the shortcomings of one with the others. The data exchange is optimized with regard to an ensemble uncertainty utility function, and the ensemble is updated via co-training. The evaluations demonstrate promising results realized by the proposed algorithm for real-world online tracking. | The use of a (linear) combination of several (weak) classifiers with different associated weights was proposed in a seminal work by Avidan @cite_32 . In line with this study, constructing an ensemble by boosting @cite_7 , online boosting @cite_17 @cite_10 , multi-class boosting @cite_9 and multi-instance boosting @cite_47 @cite_3 has yielded progressively better performance for ensemble trackers. The boosting may or may not be coupled with changes to the ensemble, such as feature adjustment @cite_22 or addition/deletion of the ensemble's members @cite_7 @cite_45 .
To date, boosting has been widely used in self-learning based tracking methods despite its low tolerance to label noise @cite_27 . An alternative way to tune the weights of an ensemble is via a Bayesian treatment @cite_35 . Aside from using different features, the members of an ensemble may be constructed from randomized subsets of training data @cite_36 or from different time snapshots of a classifier evolving over time @cite_38 . | {
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_22",
"@cite_7",
"@cite_36",
"@cite_9",
"@cite_32",
"@cite_3",
"@cite_27",
"@cite_45",
"@cite_47",
"@cite_10",
"@cite_17"
],
"mid": [
"2047237558",
"182940129",
"2125337786",
"2000326692",
"",
"1601728199",
"",
"2009243364",
"2016309246",
"2162152641",
"2167089254",
"",
"1529840045"
],
"abstract": [
"Appearance model is a key component of tracking algorithms. Most existing approaches utilize the object information contained in the current and previous frames to construct the object appearance model and locate the object with the model in frame t + 1. This method may work well if the object appearance just fluctuates in short time intervals. Nevertheless, suboptimal locations will be generated in frame t + 1 if the visual appearance changes substantially from the model. Then, continuous changes would accumulate errors and finally result in a tracking failure. To cope with this problem, in this paper we propose a novel algorithm — online Laplacian ranking support vector tracker (LRSVT) — to robustly locate the object. The LRSVT incorporates the labeled information of the object in the initial and the latest frames to resist the occlusion and adapt to the fluctuation of the visual appearance, and the weakly labeled information from frame t + 1 to adapt to substantial changes of the appearance. Extensive experiments on public benchmark sequences show the superior performance of LRSVT over some state-of-the-art tracking algorithms.",
"We propose a multi-expert restoration scheme to address the model drift problem in online tracking. In the proposed scheme, a tracker and its historical snapshots constitute an expert ensemble, where the best expert is selected to restore the current tracker when needed based on a minimum entropy criterion, so as to correct undesirable model updates. The base tracker in our formulation exploits an online SVM on a budget algorithm and an explicit feature mapping method for efficient model update and inference. In experiments, our tracking method achieves substantially better overall performance than 32 trackers on a benchmark dataset of 50 video sequences under various evaluation settings. In addition, in experiments with a newly collected dataset of challenging sequences, we show that the proposed multi-expert restoration scheme significantly improves the robustness of our base tracker, especially in scenarios with frequent occlusions and repetitive appearance variations.",
"The paper introduces Hough forests, which are random forests adapted to perform a generalized Hough transform in an efficient way. Compared to previous Hough-based systems such as implicit shape models, Hough forests improve the performance of the generalized Hough transform for object detection on a categorical level. At the same time, their flexibility permits extensions of the Hough transform to new domains such as object tracking and action recognition. Hough forests can be regarded as task-adapted codebooks of local appearance that allow fast supervised training and fast matching at test time. They achieve high detection accuracy since the entries of such codebooks are optimized to cast Hough votes with small variance and since their efficiency permits dense sampling of local image patches or video cuboids during detection. The efficacy of Hough forests for a set of computer vision tasks is validated through experiments on a large set of publicly available benchmark data sets and comparisons with the state-of-the-art.",
"Thermally stable light distillate turbine fuel composition containing 1) a substituted carbamate and 2) an aldehyde-amine condensation product, and a method for operating a turbine engine.",
"",
"Many learning tasks for computer vision problems can be described by multiple views or multiple features. These views can be exploited in order to learn from unlabeled data, a.k.a. \"multi-view learning\". In these methods, usually the classifiers iteratively label each other a subset of the unlabeled data and ignore the rest. In this work, we propose a new multi-view boosting algorithm that, unlike other approaches, specifically encodes the uncertainties over the unlabeled samples in terms of given priors. Instead of ignoring the unlabeled samples during the training phase of each view, we use the different views to provide an aggregated prior which is then used as a regularization term inside a semisupervised boosting method. Since we target multi-class applications, we first introduce a multi-class boosting algorithm based on maximizing the multi-class classification margin. Then, we propose our multi-class semisupervised boosting algorithm which is able to use priors as a regularization component over the unlabeled data. Since the priors may contain a significant amount of noise, we introduce a new loss function for the unlabeled regularization which is robust to noisy priors. Experimentally, we show that the multi-class boosting algorithm achieves state-of-the-art results in machine learning benchmarks. We also show that the new proposed loss function is more robust compared to other alternatives. Finally, we demonstrate the advantages of our multi-view boosting approach for object category recognition and visual object tracking tasks, compared to other multi-view learning methods.",
"",
"Adaptive tracking-by-detection methods have been widely studied with promising results. These methods first train a classifier in an online manner. Then, a sliding window is used to extract some samples from the local regions surrounding the former object location at the new frame. The classifier is then applied to these samples where the location of sample with maximum classifier score is the new object location. However, such classifier may be inaccurate when the training samples are imprecise which causes drift. Multiple instance learning (MIL) method is recently introduced into the tracking task, which can alleviate drift to some extent. However, the MIL tracker may detect the positive sample that is less important because it does not discriminatively consider the sample importance in its learning procedure. In this paper, we present a novel online weighted MIL (WMIL) tracker. The WMIL tracker integrates the sample importance into an efficient online learning procedure by assuming the most important sample (i.e., the tracking result in current frame) is known when training the classifier. A new bag probability function combining the weighted instance probability is proposed via which the sample importance is considered. Then, an efficient online approach is proposed to approximately maximize the bag likelihood function, leading to a more robust and much faster tracker. Experimental results on various benchmark video sequences demonstrate the superior performance of our algorithm to state-of-the-art tracking algorithms.",
"Tracking-by-detection is increasingly popular in order to tackle the visual tracking problem. Existing adaptive methods suffer from the drifting problem, since they rely on self-updates of an on-line learning method. In contrast to previous work that tackled this problem by employing semi-supervised or multiple-instance learning, we show that augmenting an on-line learning method with complementary tracking approaches can lead to more stable results. In particular, we use a simple template model as a non-adaptive and thus stable component, a novel optical-flow-based mean-shift tracker as highly adaptive element and an on-line random forest as moderately adaptive appearance-based learner. We combine these three trackers in a cascade. All of our components run on GPUs or similar multi-core systems, which allows for real-time performance. We show the superiority of our system over current state-of-the-art tracking methods in several experiments on publicly available data.",
"Random Forests (RFs) are frequently used in many computer vision and machine learning applications. Their popularity is mainly driven by their high computational efficiency during both training and evaluation while achieving state-of-the-art results. However, in most applications RFs are used off-line. This limits their usability for many practical problems, for instance, when training data arrives sequentially or the underlying distribution is continuously changing. In this paper, we propose a novel on-line random forest algorithm. We combine ideas from on-line bagging, extremely randomized forests and propose an on-line decision tree growing procedure. Additionally, we add a temporal weighting scheme for adaptively discarding some trees based on their out-of-bag-error in given time intervals and consequently growing of new trees. The experiments on common machine learning data sets show that our algorithm converges to the performance of the off-line RF. Additionally, we conduct experiments for visual tracking, where we demonstrate real-time state-of-the-art performance on well-known scenarios and show good performance in case of occlusions and appearance changes where we outperform trackers based on on-line boosting. Finally, we demonstrate the usability of on-line RFs on the task of interactive real-time segmentation.",
"In this paper, we address the problem of learning an adaptive appearance model for object tracking. In particular, a class of tracking techniques called “tracking by detection” have been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrades the classifier and can cause further drift. In this paper we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks. We present a novel online MIL algorithm for object tracking that achieves superior results with real-time performance.",
"",
"Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by describing some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time."
]
} |
1704.08944 | 2588983561 | Color and intensity are two important components in an image. Usually, groups of image pixels, which are similar in color or intensity, are an informative representation for an object. They are therefore particularly suitable for computer vision tasks, such as saliency detection and object proposal generation. However, image pixels, which share a similar real-world color, may be quite different since colors are often distorted by intensity. In this paper, we reinvestigate the affinity matrices originally used in image segmentation methods based on spectral clustering. A new affinity matrix, which is robust to color distortions, is formulated for object discovery. Moreover, a cohesion measurement (CM) for object regions is also derived based on the formulated affinity matrix. Based on the new CM, a novel object discovery method is proposed to discover objects latent in an image by utilizing the eigenvectors of the affinity matrix. Then we apply the proposed method to both saliency detection and object proposal generation. Experimental results on several evaluation benchmarks demonstrate that the proposed CM-based method has achieved promising performance for these two tasks. | An affinity matrix plays an important role in spectral graph theory @cite_14 . For example, @cite_38 propose to solve the perceptual grouping problem using normalized cuts. They set up a weighted graph and set the weight of each edge connecting two nodes to be a measure of the affinity between the two nodes. Image segmentation is then treated as a graph partitioning problem, which amounts to finding the minimum cut of the graph by solving a generalized eigenvalue problem on the constructed Laplacian matrix. The eigenvector with the second smallest eigenvalue turns out to be an indicator vector for partitioning the graph.
@cite_30 consider an asymmetric variant of the cost function in @cite_38 , defining one of the two subsets of the graph to be the foreground and its complement to be the background. Based on the modified normalized cuts, they derive a foreground cut method by performing affinity factorization on the affinity matrix. The eigenvector of the affinity matrix associated with the largest eigenvalue indicates the salient objects. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_14"
],
"mid": [
"2976981929",
"2121947440",
""
],
"abstract": [
"The foreground group in a scene may be 'discovered' and computed as a factorized approximation to the pairwise affinity of the elements in the scene. A pointwise approximation of the pairwise affinity information may in fact be interpreted as a 'saliency' index, and the foreground of the scene may be obtained by thresholding it. An algorithm called 'affinity factorization' is thus obtained which may be used for grouping. The affinity factorization algorithm is demonstrated on displays composed of points, of lines and of brightness values. Its relationship to the Shi-Malik normalized cuts algorithms is explored both analytically and experimentally. The affinity factorization algorithm is shown to be computationally efficient (O(n) floating-point operations for a scene composed of n elements) and to perform well on displays where the background is unstructured. Generalizations to solve more complex problems are also discussed.",
"We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.",
""
]
} |
1704.08713 | 2610965652 | The number of nodes of a network, called its size, is one of the most important network parameters. A radio network is a collection of stations, called nodes, with wireless transmission and receiving capabilities. It is modeled as a simple connected undirected graph whose nodes communicate in synchronous rounds. In each round, a node can either transmit a message to all its neighbors, or stay silent and listen. At the receiving end, a node @math hears a message from a neighbor @math in a given round, if @math listens in this round, and if @math is its only neighbor that transmits in this round. If @math listens in a round, and two or more neighbors of @math transmit in this round, a collision occurs at @math . Two scenarios are considered in the literature: if nodes can distinguish collision from silence (the latter occurs when no neighbor transmits), we say that the network has the collision detection capability, otherwise there is no collision detection. We consider the task of size discovery: finding the size of an unknown radio network with collision detection. All nodes have to output the size of the network, using a deterministic algorithm. Nodes have labels which are (not necessarily distinct) binary strings. The length of a labeling scheme is the largest length of a label. Our main result states that the minimum length of a labeling scheme permitting size discovery in the class of networks of maximum degree Delta is Theta( Delta). | Algorithmic problems in radio networks modeled as graphs were studied for such distributed tasks as broadcasting @cite_3 @cite_5 , gossiping @cite_3 @cite_16 and leader election @cite_15 @cite_11 . In some cases @cite_3 @cite_16 , the model without collision detection was used, in others @cite_19 @cite_11 , the collision detection capability was assumed. | {
"cite_N": [
"@cite_3",
"@cite_19",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_11"
],
"mid": [
"2134516390",
"2048113794",
"2048454711",
"2951342393",
"2127694687",
"2003002196"
],
"abstract": [
"We establish an O(n log^2 n) upper bound on the time for deterministic distributed broadcasting in multi-hop radio networks with unknown topology. This nearly matches the known lower bound of Ω(n log n). The fastest previously known algorithm for this problem works in time O(n^{3/2}). Using our broadcasting algorithm, we develop an O(n^{3/2} log^2 n) algorithm for gossiping in the same network model.",
"We present a randomized distributed algorithm that in radio networks with collision detection broadcasts a single message in O(D + log^6 n) rounds, with high probability. This time complexity is most interesting because of its optimal additive dependence on the network diameter D. It improves over the currently best known O(D log(n/D) + log^2 n) algorithms, due to Czumaj and Rytter [FOCS 2003], and Kowalski and Pelc [PODC 2003]. These algorithms were designed for the model without collision detection and are optimal in that model. However, as explicitly stated by Peleg in his 2007 survey on broadcast in radio networks, it had remained an open question whether the bound can be improved with collision detection. We also study distributed algorithms for broadcasting k messages from a single source to all nodes. This problem is a natural and important generalization of the single-message broadcast problem, but is in fact considerably more challenging and less understood. We show the following results: If the network topology is known to all nodes, then a k-message broadcast can be performed in O(D + k log n + log^2 n) rounds, with high probability. If the topology is not known, but collision detection is available, then a k-message broadcast can be performed in O(D + k log n + log^6 n) rounds, with high probability. The first bound is optimal and the second is optimal modulo the additive O(log^6 n) term.",
"This paper concerns the communication primitives of broadcasting (one-to-all communication) and gossiping (all-to-all communication) in radio networks with known topology, i.e., where for each primitive the schedule of transmissions is precomputed based on full knowledge about the size and the topology of the network. The first part of the paper examines the two communication primitives in general graphs. In particular, it proposes a new (efficiently computable) deterministic schedule that uses O(D + Δ log n) time units to complete the gossiping task in any radio network with size n, diameter D and max-degree Δ. Our new schedule improves and simplifies the currently best known gossiping schedule, requiring time O(D + (DΔ)^{1/(i+2)} log^{i+1} n), for any network with the diameter D = Ω(log^{i+4} n), where i is an arbitrary integer constant i ≥ 0, see [17]. For the broadcast task we deliver two new results: a deterministic efficient algorithm for computing a radio schedule of length D + O(log^3 n), and a randomized algorithm for computing a radio schedule of length D + O(log^2 n). These results improve on the best currently known D + O(log^4 n) time schedule due to Elkin and Kortsarz [12]. The second part of the paper focuses on radio communication in planar graphs, devising a new broadcasting schedule using fewer than 3D time slots. This result improves, for small values of D, on the currently best known D + O(log^3 n) time schedule proposed by Elkin and Kortsarz in [12]. Our new algorithm should also be seen as a separation result between the planar and the general graphs with a small diameter due to the polylogarithmic inapproximability result in general graphs due to Elkin and Kortsarz, see [11].",
"We study two fundamental communication primitives: broadcasting and leader election in the classical model of multi-hop radio networks with unknown topology and without collision detection mechanisms. It has been known for almost 20 years that in undirected networks with n nodes and diameter D, randomized broadcasting requires Ω(D log(n/D) + log^2 n) rounds in expectation, assuming that uninformed nodes are not allowed to communicate (until they are informed). Only very recently, Haeupler and Wajc (PODC'2016) showed that this bound can be slightly improved for the model with spontaneous transmissions, providing an O(D log n loglog n / log D + log^{O(1)} n)-time broadcasting algorithm. In this paper, we give a new and faster algorithm that completes broadcasting in O(D log n / log D + log^{O(1)} n) time, with high probability. This yields the first optimal O(D)-time broadcasting algorithm whenever D is polynomial in n. Furthermore, our approach can be applied to design a new leader election algorithm that matches the performance of our broadcasting algorithm. Previously, all fast randomized leader election algorithms have been using broadcasting as their subroutine and their complexity has been asymptotically strictly bigger than the complexity of broadcasting. In particular, the fastest previously known randomized leader election algorithm of Ghaffari and Haeupler (SODA'2013) requires O(D log(n/D) min{loglog n, log(n/D)} + log^{O(1)} n) time with high probability. Our new algorithm requires O(D log n / log D + log^{O(1)} n) time with high probability, and it achieves the optimal O(D) time whenever D is polynomial in n.",
"We study deterministic gossiping in ad hoc radio networks with large node labels. The labels (identifiers) of the nodes come from a domain of size N which may be much larger than the size n of the network (the number of nodes). Most of the work on deterministic communication has been done for the model with small labels which assumes N = O(n). A notable exception is Peleg's paper, where the problem of deterministic communication in ad hoc radio networks with large labels is raised and a deterministic broadcasting algorithm is proposed, which runs in O(n^2 log n) time for N polynomially large in n. The O(n log^2 n)-time deterministic broadcasting algorithm for networks with small labels given by implies deterministic O(n log N log n)-time broadcasting and O(n^2 log^2 N log n)-time gossiping in networks with large labels. We propose two new deterministic gossiping algorithms for ad hoc radio networks with large labels, which are the first such algorithms with subquadratic time for polynomially large N. More specifically, we propose: a deterministic O(n^{3/2} log^2 N log n)-time gossiping algorithm for directed networks; and a deterministic O(n log^2 N log^2 n)-time gossiping algorithm for undirected networks.",
"We address the fundamental distributed problem of leader election in ad hoc radio networks modeled as undirected graphs. A signal from a transmitting node reaches all neighbors but a message is received successfully by a node, if and only if exactly one of its neighbors transmits in this round. If two neighbors of a node transmit simultaneously in a given round, we say that a collision occurred at this node. Collision detection is the ability of nodes to distinguish a collision from silence. We show that collision detection speeds up leader election in arbitrary radio networks. Our main result is a deterministic leader election algorithm working in time O(n) in all n-node networks, if collision detection is available, while it is known that deterministic leader election requires time Ω(n log n), even for complete networks, if there is no collision detection."
]
} |
1704.08713 | 2610965652 | The number of nodes of a network, called its size, is one of the most important network parameters. A radio network is a collection of stations, called nodes, with wireless transmission and receiving capabilities. It is modeled as a simple connected undirected graph whose nodes communicate in synchronous rounds. In each round, a node can either transmit a message to all its neighbors, or stay silent and listen. At the receiving end, a node @math hears a message from a neighbor @math in a given round, if @math listens in this round, and if @math is its only neighbor that transmits in this round. If @math listens in a round, and two or more neighbors of @math transmit in this round, a collision occurs at @math . Two scenarios are considered in the literature: if nodes can distinguish collision from silence (the latter occurs when no neighbor transmits), we say that the network has the collision detection capability, otherwise there is no collision detection. We consider the task of size discovery: finding the size of an unknown radio network with collision detection. All nodes have to output the size of the network, using a deterministic algorithm. Nodes have labels which are (not necessarily distinct) binary strings. The length of a labeling scheme is the largest length of a label. Our main result states that the minimum length of a labeling scheme permitting size discovery in the class of networks of maximum degree Delta is Theta( Delta). | In @cite_0 , the authors compared the minimum size of advice required to solve two information dissemination problems, using a linear number of messages. In @cite_17 , given a distributed representation of a solution for a problem, the authors investigated the number of bits of communication needed to verify the legality of the represented solution. In @cite_8 , the authors established the size of advice needed to break competitive ratio 2 of an exploration algorithm in trees. 
In @cite_9 , it was shown that advice of constant size permits to carry out the distributed construction of a minimum spanning tree in logarithmic time. In @cite_16 , short labeling schemes were constructed with the aim to answer queries about the distance between any pair of nodes. In @cite_10 , the advice paradigm was used for online problems. In the case of @cite_4 , the issue was not efficiency but feasibility: it was shown that @math is the minimum size of advice required to perform monotone connected graph clearing. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_0",
"@cite_16",
"@cite_10",
"@cite_17"
],
"mid": [
"2174013141",
"2038319432",
"1975595616",
"1971694274",
"2127694687",
"2109659895",
"2056295140"
],
"abstract": [
"[L. Blin, P. Fraigniaud, N. Nisse, S. Vial, Distributed chasing of network intruders, in: 13th Colloquium on Structural Information and Communication Complexity, SIROCCO, in: LNCS, vol. 4056, Springer-Verlag, 2006, pp. 70-84] introduced a new measure of difficulty for a distributed task in a network. The smallest number of bits of advice of a distributed problem is the smallest number of bits of information that has to be available to nodes in order to accomplish the task efficiently. Our paper deals with the number of bits of advice required to perform efficiently the graph searching problem in a distributed setting. In this variant of the problem, all searchers are initially placed at a particular node of the network. The aim of the team of searchers is to clear a contaminated graph in a monotone connected way, i.e., the cleared part of the graph is permanently connected, and never decreases while the search strategy is executed. Moreover, the clearing of the graph must be performed using the optimal number of searchers, i.e. the minimum number of searchers sufficient to clear the graph in a monotone connected way in a centralized setting. We show that the minimum number of bits of advice permitting the monotone connected and optimal clearing of a network in a distributed setting is Θ(n log n), where n is the number of nodes of the network. More precisely, we first provide a labelling of the vertices of any graph G, using a total of O(n log n) bits, and a protocol using this labelling that enables the optimal number of searchers to clear G in a monotone connected distributed way. Then, we show that this number of bits of advice is optimal: any distributed protocol requires Ω(n log n) bits of advice to clear a network in a monotone connected way, using an optimal number of searchers.",
"We study the amount of knowledge about the network that is required in order to efficiently solve a task concerning this network. The impact of available information on the efficiency of solving network problems, such as communication or exploration, has been investigated before but assumptions concerned availability of particular items of information about the network, such as the size, the diameter, or a map of the network. In contrast, our approach is quantitative: we investigate the minimum number of bits of information (bits of advice) that has to be given to an algorithm in order to perform a task with given efficiency. We illustrate this quantitative approach to available knowledge by the task of tree exploration. A mobile entity (robot) has to traverse all edges of an unknown tree, using as few edge traversals as possible. The quality of an exploration algorithm A is measured by its competitive ratio, i.e., by comparing its cost (number of edge traversals) to the length of the shortest path containing all edges of the tree. Depth-First-Search has competitive ratio 2 and, in the absence of any information about the tree, no algorithm can beat this value. We determine the minimum number of bits of advice that has to be given to an exploration algorithm in order to achieve competitive ratio strictly smaller than 2. Our main result establishes an exact threshold number of bits of advice that turns out to be roughly loglogD, where D is the diameter of the tree. More precisely, for any constant c, we construct an exploration algorithm with competitive ratio smaller than 2, using at most loglogD-c bits of advice, and we show that every algorithm using loglogD-g(D) bits of advice, for any function g unbounded from above, has competitive ratio at least 2.",
"We use the recently introduced advising scheme framework for measuring the difficulty of locally distributively computing a Minimum Spanning Tree (MST). An (m,t)-advising scheme for a distributed problem P is a way, for every possible input I of P, to provide an \"advice\" (i.e., a bit string) about I to each node so that: (1) the maximum size of the advices is at most m bits, and (2) the problem P can be solved distributively in at most t rounds using the advices as inputs. In case of MST, the output returned by each node of a weighted graph G is the edge leading to its parent in some rooted MST T of G. Clearly, there is a trivial (log n,0)-advising scheme for MST (each node is given the local port number of the edge leading to the root of some MST T), and it is known that any (0,t)-advising scheme satisfies t ≥ Ω (√n). Our main result is the construction of an (O(1),O(log n))-advising scheme for MST. That is, by only giving a constant number of bits of advice to each node, one can decrease exponentially the distributed computation time of MST in arbitrary graph, compared to algorithms dealing with the problem in absence of any a priori information. We also consider the average size of the advices. On the one hand, we show that any (m,0)-advising scheme for MST gives advices of average size Ω(log n). On the other hand we design an (m,1)-advising scheme for MST with advices of constant average size, that is one round is enough to decrease the average size of the advices from log(n) to constant.",
"We study the amount of knowledge about a communication network that must be given to its nodes in order to efficiently disseminate information. Our approach is quantitative: we investigate the minimum total number of bits of information (minimum size of advice) that has to be available to nodes, regardless of the type of information provided. We compare the size of advice needed to perform broadcast and wakeup (the latter is a broadcast in which nodes can transmit only after getting the source information), both using a linear number of messages (which is optimal). We show that the minimum size of advice permitting the wakeup with a linear number of messages in an n-node network is Θ(n log n), while the broadcast with a linear number of messages can be achieved with advice of size O(n). We also show that the latter size of advice is almost optimal: no advice of size o(n) can permit to broadcast with a linear number of messages. Thus an efficient wakeup requires strictly more information about the network than an efficient broadcast.",
"We study deterministic gossiping in ad hoc radio networks with large node labels. The labels (identifiers) of the nodes come from a domain of size N which may be much larger than the size n of the network (the number of nodes). Most of the work on deterministic communication has been done for the model with small labels which assumes N = O(n). A notable exception is Peleg's paper, where the problem of deterministic communication in ad hoc radio networks with large labels is raised and a deterministic broadcasting algorithm is proposed, which runs in O(n^2 log n) time for N polynomially large in n. The O(n log^2 n)-time deterministic broadcasting algorithm for networks with small labels given by implies deterministic O(n log N log n)-time broadcasting and O(n^2 log^2 N log n)-time gossiping in networks with large labels. We propose two new deterministic gossiping algorithms for ad hoc radio networks with large labels, which are the first such algorithms with subquadratic time for polynomially large N. More specifically, we propose: a deterministic O(n^{3/2} log^2 N log n)-time gossiping algorithm for directed networks; and a deterministic O(n log^2 N log^2 n)-time gossiping algorithm for undirected networks.",
"We consider a model for online computation in which the online algorithm receives, together with each request, some information regarding the future, referred to as advice. The advice is a function, defined by the online algorithm, of the whole request sequence. The advice provided to the online algorithm may allow an improvement in its performance, compared to the classical model of complete lack of information regarding the future. We are interested in the impact of such advice on the competitive ratio, and in particular, in the relation between the size b of the advice, measured in terms of bits of information per request, and the (improved) competitive ratio. Since b = 0 corresponds to the classical online model, and b = ⌈log|A|⌉, where A is the algorithm's action space, corresponds to the optimal (offline) one, our model spans a spectrum of settings ranging from classical online algorithms to offline ones. In this paper we propose the above model and illustrate its applicability by considering two of the most extensively studied online problems, namely, metrical task systems (MTS) and the k-server problem. For MTS we establish tight (up to constant factors) upper and lower bounds on the competitive ratio of deterministic and randomized online algorithms with advice for any choice of 1 ≤ b ≤ Θ(log n), where n is the number of states in the system: we prove that any randomized online algorithm for MTS has competitive ratio Ω(log(n)/b) and we present a deterministic online algorithm for MTS with competitive ratio O(log(n)/b). For the k-server problem we construct a deterministic online algorithm for general metric spaces with competitive ratio k^{O(1/b)} for any choice of Θ(1) ≤ b ≤ log k.",
"This paper addresses the problem of locally verifying global properties. Several natural questions are studied, such as “how expensive is local verification?” and more specifically, “how expensive is local verification compared to computation?” A suitable model is introduced in which these questions are studied in terms of the number of bits a vertex needs to communicate. The model includes the definition of a proof labeling scheme (a pair of algorithms- one to assign the labels, and one to use them to verify that the global property holds). In addition, approaches are presented for the efficient construction of schemes, and upper and lower bounds are established on the bit complexity of schemes for multiple basic problems. The paper also studies the role and cost of unique identities in terms of impossibility and complexity, in the context of proof labeling schemes. Previous studies on related questions deal with distributed algorithms that simultaneously compute a configuration and verify that this configuration has a certain desired property. It turns out that this combined approach enables the verification to be less costly sometimes, since the configuration is typically generated so as to be easily verifiable. In contrast, our approach separates the configuration design from the verification. That is, it first generates the desired configuration without bothering with the need to verify it, and then handles the task of constructing a suitable verification scheme. Our approach thus allows for a more modular design of algorithms, and has the potential to aid in verifying properties even when the original design of the structures for maintaining them was done without verification in mind."
]
} |
1704.08713 | 2610965652 | The number of nodes of a network, called its size, is one of the most important network parameters. A radio network is a collection of stations, called nodes, with wireless transmission and receiving capabilities. It is modeled as a simple connected undirected graph whose nodes communicate in synchronous rounds. In each round, a node can either transmit a message to all its neighbors, or stay silent and listen. At the receiving end, a node @math hears a message from a neighbor @math in a given round, if @math listens in this round, and if @math is its only neighbor that transmits in this round. If @math listens in a round, and two or more neighbors of @math transmit in this round, a collision occurs at @math . Two scenarios are considered in the literature: if nodes can distinguish collision from silence (the latter occurs when no neighbor transmits), we say that the network has the collision detection capability, otherwise there is no collision detection. We consider the task of size discovery: finding the size of an unknown radio network with collision detection. All nodes have to output the size of the network, using a deterministic algorithm. Nodes have labels which are (not necessarily distinct) binary strings. The length of a labeling scheme is the largest length of a label. Our main result states that the minimum length of a labeling scheme permitting size discovery in the class of networks of maximum degree Delta is Theta( Delta). | There are two papers studying the size of advice in the context of radio networks. In @cite_21 , the authors studied radio networks for which it is possible to perform centralized broadcasting in constant time. They proved that @math bits of advice allow to obtain constant time in such networks, while @math bits are not enough. In @cite_12 , the authors considered the problem of topology recognition in wireless trees without collision detection. 
Similarly to the present paper, they investigated short labeling schemes permitting to accomplish this task. It should be noted that the results in @cite_12 and in the present paper are not comparable: @cite_12 studies a harder task (topology recognition) in a weaker model (no collision detection), but restricts attention only to trees, while the present paper studies an easier task (size discovery) in a stronger model (with collision detection) but our results hold for arbitrary networks. | {
"cite_N": [
"@cite_21",
"@cite_12"
],
"mid": [
"1983693678",
"2951371116"
],
"abstract": [
"We study deterministic broadcasting in radio networks in the recently introduced framework of network algorithms with advice. We concentrate on the problem of trade-offs between the number of bits of information (size of advice) available to nodes and the time in which broadcasting can be accomplished. In particular, we ask what is the minimum number of bits of information that must be available to nodes of the network, in order to broadcast very fast. For networks in which constant time broadcast is possible under a complete knowledge of the network we give a tight answer to the above question: O(n) bits of advice are sufficient but o(n) bits are not, in order to achieve constant broadcasting time in all these networks. This is in sharp contrast with geometric radio networks of constant broadcasting time: we show that in these networks a constant number of bits suffices to broadcast in constant time. For arbitrary radio networks we present a broadcasting algorithm whose time is inverse-proportional to the size of the advice.",
"We consider the problem of topology recognition in wireless (radio) networks modeled as undirected graphs. Topology recognition is a fundamental task in which every node of the network has to output a map of the underlying graph i.e., an isomorphic copy of it, and situate itself in this map. In wireless networks, nodes communicate in synchronous rounds. In each round a node can either transmit a message to all its neighbors, or stay silent and listen. At the receiving end, a node @math hears a message from a neighbor @math in a given round, if @math listens in this round, and if @math is its only neighbor that transmits in this round. Nodes have labels which are (not necessarily different) binary strings. The length of a labeling scheme is the largest length of a label. We concentrate on wireless networks modeled by trees, and we investigate two problems. What is the shortest labeling scheme that permits topology recognition in all wireless tree networks of diameter @math and maximum degree @math ? What is the fastest topology recognition algorithm working for all wireless tree networks of diameter @math and maximum degree @math , using such a short labeling scheme? We are interested in deterministic topology recognition algorithms. For the first problem, we show that the minimum length of a labeling scheme allowing topology recognition in all trees of maximum degree @math is @math . For such short schemes, used by an algorithm working for the class of trees of diameter @math and maximum degree @math , we show almost matching bounds on the time of topology recognition: an upper bound @math , and a lower bound @math , for any constant @math ."
]
} |
1704.08628 | 2611536601 | Text line detection and localization is a crucial step for full page document analysis, but still suffers from heterogeneity of real life documents. In this paper, we present a new approach for full page text recognition. Localization of the text lines is based on regressions with Fully Convolutional Neural Networks and Multidimensional Long Short-Term Memory as contextual layers. In order to increase the efficiency of this localization method, only the position of the left side of the text lines are predicted. The text recognizer is then in charge of predicting the end of the text to recognize. This method has shown good results for full page text recognition on the highly heterogeneous Maurdor dataset. | Other methods follow a top-down approach and split the pages into smaller parts. The XY-cut algorithm @cite_19 looks for vertical and horizontal white spaces to successively split the pages in paragraphs, lines and words. Similarly, projection profile algorithms like @cite_7 are aimed at finding the horizontal whiter parts of a paragraph. This technique is extended to non-horizontal texts by methods like @cite_9 that dynamically finds a path between the text lines or by @cite_1 that use a Viterbi algorithm to minimize this path. | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_1",
"@cite_7"
],
"mid": [
"8079469",
"2170192492",
"1997837700",
"1968983105"
],
"abstract": [
"",
"In this paper, we propose a novel technique to segment handwritten document images into text lines by shredding their surface with local minima tracers. Our approach is based on the topological assumption that for each text line, there exists a path from one side of the image to the other that traverses only one text line. We first blur the image and then use tracers to follow the white-most and black-most paths from left to right as well as from right to left in order to shred the image into text line areas. We experimentally tested the proposed methodology and got promising results comparable to state of the art text line segmentation techniques.",
"This paper presents a recognition-based character segmentation method for handwritten Chinese characters. Possible non-linear segmentation paths are initially located using a probabilistic Viterbi algorithm. Candidate segmentation paths are determined by verifying overlapping paths, between-character gaps, and adjacent-path distances. A segmentation graph is then constructed using candidate paths to represent nodes and two nodes with appropriate distances are connected by an arc. The cost in each arc is a function of character recognition distances, squareness of characters and internal gaps in characters. After the shortest path is detected from the segmentation graph, the nodes in the path represent optimal segmentation paths. In addition, 125 text-line images are collected from seven form documents. Cumulatively, these text-lines contain 1132 handwritten Chinese characters. The average segmentation rate in our experiments is 95.58%. Moreover, the probabilistic Viterbi algorithm is modified slightly to extract text-lines from document pages by obtaining non-linear paths while gaps between text-lines are not obvious. This algorithm can also be modified to segment characters from printed text-line images by adjusting parameters used to represent costs of arcs in the segmentation graph.",
"The multi-orientation occurs frequently in ancient handwritten documents, where the writers try to update a document by adding some annotations in the margins. Due to the margin narrowness, this gives rise to lines in different directions and orientations. Document recognition needs to find the lines everywhere they are written whatever their orientation. This is why we propose in this paper a new approach allowing us to extract the multi-oriented lines in scanned documents. Because of the multi-orientation of lines and their dispersion in the page, we use an image meshing allowing us to progressively and locally determine the lines. Once the meshing is established, the orientation is determined using the Wigner–Ville distribution on the projection histogram profile. This local orientation is then enlarged to limit the orientation in the neighborhood. Afterward, the text lines are extracted locally in each zone based on the follow-up of the orientation lines and the proximity of connected components. Finally, the connected components that overlap and touch in adjacent lines are separated. The morphology analysis of the terminal letters of Arabic words is here considered. The proposed approach has been experimented on 100 documents reaching an accuracy of about 98.6%.",
]
} |
1704.08628 | 2611536601 | Text line detection and localization is a crucial step for full page document analysis, but still suffers from heterogeneity of real life documents. In this paper, we present a new approach for full page text recognition. Localization of the text lines is based on regressions with Fully Convolutional Neural Networks and Multidimensional Long Short-Term Memory as contextual layers. In order to increase the efficiency of this localization method, only the position of the left side of the text lines are predicted. The text recognizer is then in charge of predicting the end of the text to recognize. This method has shown good results for full page text recognition on the highly heterogeneous Maurdor dataset. | Techniques like the ones proposed by @cite_26 or @cite_10 classify pixels into text or non-text but need post-processing techniques to constitute text lines. | {
"cite_N": [
"@cite_26",
"@cite_10"
],
"mid": [
"2025882176",
"2126925189"
],
"abstract": [
"In the context of historical collection conservation and worldwide diffusion, this paper presents an automatic approach of historical book page layout segmentation. In this article, we propose to search the homogeneous regions from the content of historical digitized books with little a priori knowledge by extracting and analyzing texture features. The novelty of this work lies in the unsupervised clustering of the extracted texture descriptors to find homogeneous regions, i.e. graphic and textual regions, by performing the clustering approach on an entire book instead of processing each page individually. We propose firstly to characterize the content of an entire book by extracting the texture information of each page, as our goal is to compare and index the content of digitized books. The extraction of texture features, computed without any hypothesis on the document structure, is based on two non-parametric tools: the autocorrelation function and multiresolution analysis. Secondly, we perform an unsupervised clustering approach on the extracted features in order to classify automatically the homogeneous regions of book pages. The clustering results are assessed by internal and external accuracy measures. The overall results are quite satisfying. Such analysis would help to construct a computer-aided categorization tool of pages.",
"In this paper, we present an unsupervised feature learning method for page segmentation of historical handwritten documents available as color images. We consider page segmentation as a pixel labeling problem, i.e., each pixel is classified as either periphery, background, text block, or decoration. Traditional methods in this area rely on carefully hand-crafted features or large amounts of prior knowledge. In contrast, we apply convolutional autoencoders to learn features directly from pixel intensity values. Then, using these features to train an SVM, we achieve high quality segmentation without any assumption of specific topologies and shapes. Experiments on three public datasets demonstrate the effectiveness and superiority of the proposed approach."
]
} |
1704.08628 | 2611536601 | Text line detection and localization is a crucial step for full page document analysis, but still suffers from heterogeneity of real life documents. In this paper, we present a new approach for full page text recognition. Localization of the text lines is based on regressions with Fully Convolutional Neural Networks and Multidimensional Long Short-Term Memory as contextual layers. In order to increase the efficiency of this localization method, only the position of the left side of the text lines are predicted. The text recognizer is then in charge of predicting the end of the text to recognize. This method has shown good results for full page text recognition on the highly heterogeneous Maurdor dataset. | These techniques usually work well on the homogeneous datasets they have been tuned for but need heavy engineering to perform well on heterogeneous datasets like the Maurdor dataset @cite_2 . For this reason, Machine learning has proven to be efficient, in particular deep convolutional networks. Early work from @cite_4 classifies scene text image parts as text and non-text with a Convolutional Neural Network on a sliding window. In @cite_14 , paragraph images are split vertically using a recurrent neural network and CTC alignment. More recently, methods inspired from image object detection techniques like MultiBox @cite_12 , YOLO @cite_23 or Single-Shot Detector (SSD) @cite_8 have arisen. @cite_3 proposed a MultiBox based approach for direct text line bounding boxes detection. Similarly, @cite_5 and @cite_13 use respectively YOLO based and SSD based approach for scene text detection. @cite_28 also propose the separate detection of bottom-left and top-right corners of line bounding boxes. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_28",
"@cite_3",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_13",
"@cite_12"
],
"mid": [
"2131801066",
"107408668",
"2193145675",
"",
"2550475090",
"",
"2069617593",
"2952302849",
"2550687635",
"2949150497"
],
"abstract": [
"The detection of text lines, as a first processing step, is critical in all text recognition systems. State-of-the-art methods to locate lines of text are based on handcrafted heuristics fine-tuned by the image processing community's experience. They succeed under certain constraints; for instance the background has to be roughly uniform. We propose to use more “agnostic” Machine Learning-based approaches to address text line location. The main motivation is to be able to process either damaged documents, or flows of documents with a high variety of layouts and other characteristics. A new method is presented in this work, inspired by the latest generation of optical models used for text recognition, namely Recurrent Neural Networks. As these models are sequential, a column of text lines in our application plays here the same role as a line of characters in more traditional text recognition settings. A key advantage of the proposed method over other data-driven approaches is that compiling a training dataset does not require labeling line boundaries: only the number of lines are required for each paragraph. Experimental results show that our approach gives similar or better results than traditional handcrafted approaches, with little engineering efforts and less hyper-parameter tuning.",
"Text detection is an important preliminary step before text can be recognized in unconstrained image environments. We present an approach based on convolutional neural networks to detect and localize horizontal text lines from raw color pixels. The network learns to extract and combine its own set of features through learning instead of using hand-crafted ones. Learning was also used in order to precisely localize the text lines by simply training the network to reject badly-cut text and without any use of tedious knowledge-based postprocessing. Although the network was trained with synthetic examples, experimental results demonstrated that it can outperform other methods on the real-world test set of ICDAR’03.",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300×300) input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512×512) input, SSD achieves 76.9% mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.",
"",
"The current trend in object detection and localization is to learn predictions with high capacity deep neural networks trained on a very large amount of annotated data and using a high amount of processing power. In this work, we propose a new neural model which directly predicts bounding box coordinates. The particularity of our contribution lies in the local computations of predictions with a new form of local parameter sharing which keeps the overall amount of trainable parameters low. Key components of the model are spatial 2D-LSTM recurrent layers which convey contextual information between the regions of the image. We show that this model is more powerful than the state of the art in applications where training data is not as abundant as in the classical configuration of natural images and Imagenet Pascal VOC tasks. We particularly target the detection of text in document images, but our method is not limited to this setting. The proposed model also facilitates the detection of many objects in a single image and can deal with inputs of variable sizes without resizing.",
"",
"This paper presents the achievements of an experimental project called Maurdor (Moyens AUtomatisés de Reconnaissance de Documents écRits - Automatic Processing of Digital Documents) funded by the French DGA that aims at improving processing technologies for handwritten and typewritten documents in French, English and Arabic. The first part describes the context and objectives of the project. The second part deals with the challenge of creating a realistic corpus of 10,000 annotated documents to support the efficient development and evaluation of processing modules. The third part presents the organisation, metric definition and results of the Maurdor International evaluation campaign. The last part presents the Maurdor demonstrator with a functional and technical perspective.",
"In this paper we introduce a new method for text detection in natural images. The method comprises two contributions: First, a fast and scalable engine to generate synthetic images of text in clutter. This engine overlays synthetic text to existing background images in a natural way, accounting for the local 3D scene geometry. Second, we use the synthetic images to train a Fully-Convolutional Regression Network (FCRN) which efficiently performs text detection and bounding-box regression at all locations and multiple scales in an image. We discuss the relation of FCRN to the recently-introduced YOLO detector, as well as other end-to-end object detection systems based on deep learning. The resulting detection network significantly outperforms current methods for text detection in natural images, achieving an F-measure of 84.2% on the standard ICDAR 2013 benchmark. Furthermore, it can process 15 images per second on a GPU.",
"This paper presents an end-to-end trainable fast scene text detector, named TextBoxes, which detects scene text with both high accuracy and efficiency in a single network forward pass, involving no post-process except for a standard non-maximum suppression. TextBoxes outperforms competing methods in terms of text localization accuracy and is much faster, taking only 0.09s per image in a fast implementation. Furthermore, combined with a text recognizer, TextBoxes significantly outperforms state-of-the-art approaches on word spotting and end-to-end text recognition tasks.",
"Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012). The winning model on the localization sub-task was a network that predicts a single bounding box and a confidence score for each object category in the image. Such a model captures the whole-image context around the objects but cannot handle multiple instances of the same object in the image without naively replicating the number of outputs for each instance. In this work, we propose a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest. The model naturally handles a variable number of instances for each class and allows for cross-class generalization at the highest levels of the network. We are able to obtain competitive recognition performance on VOC2007 and ILSVRC2012, while using only the top few predicted locations in each image and a small number of neural network evaluations."
]
} |
1704.08628 | 2611536601 | Text line detection and localization is a crucial step for full page document analysis, but still suffers from heterogeneity of real life documents. In this paper, we present a new approach for full page text recognition. Localization of the text lines is based on regressions with Fully Convolutional Neural Networks and Multidimensional Long Short-Term Memory as contextual layers. In order to increase the efficiency of this localization method, only the position of the left side of the text lines are predicted. The text recognizer is then in charge of predicting the end of the text to recognize. This method has shown good results for full page text recognition on the highly heterogeneous Maurdor dataset. | Finally, @cite_17 use a hard attention mechanism to directly perform full page text recognition without prior localization. The iterative algorithm finds the next attention point based on the sequence of seen glimpses modeled through the hidden state of a recurrent network. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2343436474"
],
"abstract": [
"Offline handwriting recognition systems require cropped text line images for both training and recognition. On the one hand, the annotation of position and transcript at line level is costly to obtain. On the other hand, automatic line segmentation algorithms are prone to errors, compromising the subsequent recognition. In this paper, we propose a modification of the popular and efficient multi-dimensional long short-term memory recurrent neural networks (MDLSTM-RNNs) to enable end-to-end processing of handwritten paragraphs. More particularly, we replace the collapse layer transforming the two-dimensional representation into a sequence of predictions by a recurrent version which can recognize one line at a time. In the proposed model, a neural network performs a kind of implicit line segmentation by computing attention weights on the image representation. The experiments on paragraphs of Rimes and IAM database yield results that are competitive with those of networks trained at line level, and constitute a significant step towards end-to-end transcription of full documents."
]
} |
1704.08812 | 2611600397 | We in this paper solve the problem of high-quality automatic real-time background cut for 720p portrait videos. We first handle the background ambiguity issue in semantic segmentation by proposing a global background attenuation model. A spatial-temporal refinement network is developed to further refine the segmentation errors in each frame and ensure temporal coherence in the segmentation map. We form an end-to-end network for training and testing. Each module is designed considering efficiency and accuracy. We build a portrait dataset, which includes 8,000 images with high-quality labeled map for training and testing. To further improve the performance, we build a portrait video dataset with 50 sequences to fine-tune video segmentation. Our framework benefits many video processing applications. | Image segmentation can be directly extended to videos by considering the temporal pixel object correspondence. Most of the methods pay attention to how to build graph models @cite_42 @cite_22 @cite_3 . Approaches of @cite_4 @cite_37 @cite_36 @cite_5 @cite_72 @cite_17 @cite_71 @cite_44 @cite_16 @cite_67 introduced different schemes to estimate class object distributions. The geodesic distance @cite_11 was used to model the pixel relation more accurately. To efficiently solve graph based models, the bilateral space is applied in @cite_28 . Energy or feature propagation schemes were also presented in @cite_48 @cite_0 . To reduce user interaction, Nagaraja et al. @cite_51 proposed a framework that only needs a few strokes and Lee et al. @cite_69 found key-segments automatically. | {
"cite_N": [
"@cite_22",
"@cite_36",
"@cite_42",
"@cite_3",
"@cite_44",
"@cite_72",
"@cite_71",
"@cite_5",
"@cite_67",
"@cite_69",
"@cite_4",
"@cite_48",
"@cite_17",
"@cite_37",
"@cite_28",
"@cite_16",
"@cite_0",
"@cite_51",
"@cite_11"
],
"mid": [
"2114316746",
"1943880421",
"2125107413",
"2218234108",
"2022956861",
"2156252543",
"2952323566",
"2126496262",
"2212077366",
"1989348325",
"",
"",
"2952537394",
"",
"2463175074",
"2155598147",
"",
"2200599981",
"1967268147"
],
"abstract": [
"In this work, video segmentation is viewed as an efficient intra-frame grouping temporally reinforced by a strong inter-frame coherence. Traditional approaches simply regard pixel motions as another prior in the MRF-MAP framework. Since pixel pre-grouping is inefficiently performed on every frame, the strong correlation between inter-frame groupings is largely underutilized. We exploit the inter-frame correlation to propagate trustworthy groupings from the previous frame. A preceding graph is constructed and labeled for the previous frame. It is temporally propagated to the current frame and validated by similarity measures. All unlabeled subgraphs are spatially aggregated for the final grouping. Experimental results show that the proposed approach is highly efficient for spatio-temporal segmentation. It makes good use of temporal correlation and produces satisfactory grouping results.",
"In this paper we present an approach for segmenting objects in videos taken in complex scenes with multiple and different targets. The method does not make any specific assumptions about the videos and relies on how objects are perceived by humans according to Gestalt laws. Initially, we rapidly generate a coarse foreground segmentation, which provides predictions about motion regions by analyzing how superpixel segmentation changes in consecutive frames. We then exploit these location priors to refine the initial segmentation by optimizing an energy function based on appearance and perceptual organization, only on regions where motion is observed. We evaluated our method on complex and challenging video sequences and it showed significant performance improvements over recent state-of-the-art methods, being also fast enough to be used for “on-the-fly” processing.",
"This paper proposes a unified framework for spatiotemporal segmentation of video sequences. A Bayesian network is presented to model the interactions among the motion vector field, the intensity segmentation field, and the video segmentation field. The notions of distance transformation and Markov random field are used to express spatiotemporal constraints. Given consecutive frames, an optimization method is proposed to maximize the conditional probability density of the three fields in an iterative way. Experimental results show that the approach is robust and generates spatiotemporally coherent segmentation results.",
"Video segmentation is the task of grouping similar pixels in the spatio-temporal domain, and has become an important preprocessing step for subsequent video analysis. Most video segmentation and supervoxel methods output a hierarchy of segmentations, but while this provides useful multiscale information, it also adds difficulty in selecting the appropriate level for a task. In this work, we propose an efficient and robust video segmentation framework based on parametric graph partitioning (PGP), a fast, almost parameter free graph partitioning method that identifies and removes between-cluster edges to form node clusters. Apart from its computational efficiency, PGP performs clustering of the spatio-temporal volume without requiring a pre-specified cluster number or bandwidth parameters, thus making video segmentation more practical to use in applications. The PGP framework also allows processing sub-volumes, which further improves performance, contrary to other streaming video segmentation methods where sub-volume processing reduces performance. We evaluate the PGP method using the SegTrack v2 and Chen Xiph.org datasets, and show that it outperforms related state-of-the-art algorithms in 3D segmentation metrics and running time.",
"Online foreground extraction is very difficult due to the complexity of real scenes. Almost all the previous methods assume that the background is stationary, which not only incurs unreliable results due to background activities like dynamic shadows and moving background objects, but also makes them hard to extend to the case of non-stationary background. In this paper we assume that the background is continuous instead of stationary, and present a transductive video segmentation method that can handle dynamic scenes captured by a hand-held moving camera. The segmentation is propagated based on local color models and temporal prior, as well as a dynamic global color model (DGKDE) in the case of occlusion. A novel local color modeling method, FLKDE, is proposed to model both local color distribution and temporal prior at each pixel. FLKDE can be learned additively to reach real-time speed. Finally, a very fast geodesic-based method is adopted to solve for the segmentation. Experiments show that our method can generate good quality segmentation for a wide variety of scenes, and can reach 15∼25 fps for 640×480 input image sequences.",
"In this paper, we address the problem of video object segmentation, which is to automatically identify the primary object and segment the object out in every frame. We propose a novel formulation of selecting object region candidates simultaneously in all frames as finding a maximum weight clique in a weighted region graph. The selected regions are expected to have high objectness score (unary potential) as well as share similar appearance (binary potential). Since both unary and binary potentials are unreliable, we introduce two types of mutex (mutual exclusion) constraints on regions in the same clique: intra-frame and inter-frame constraints. Both types of constraints are expressed in a single quadratic form. We propose a novel algorithm to compute the maximal weight cliques that satisfy the constraints. We apply our method to challenging benchmark videos and obtain very competitive results that outperform state-of-the-art methods.",
"Numerous approaches in image processing and computer vision are making use of super-pixels as a pre-processing step. Among the different methods producing such over-segmentation of an image, the graph-based approach of Felzenszwalb and Huttenlocher is broadly employed. One of its interesting properties is that the regions are computed in a greedy manner in quasi-linear time. The algorithm may be trivially extended to video segmentation by considering a video as a 3D volume, however, this can not be the case for causal segmentation, when subsequent frames are unknown. We propose an efficient video segmentation approach that computes temporally consistent pixels in a causal manner, filling the need for causal and real time applications.",
"In this paper we address the problem of fast segmenting moving objects in video acquired by moving camera or more generally with a moving background. We present an approach based on a color segmentation followed by a region-merging on motion through Markov random fields (MRFs). The technique we propose is inspired by the work of Gelgon and Bouthemy (2000), that has been modified to reduce computational cost in order to achieve a fast segmentation (about ten frame per second). To this aim a modified region matching algorithm (namely partitioned region matching) and an innovative arc-based MRF optimization algorithm with a suitable definition of the motion reliability are proposed. Results on both synthetic and real sequences are reported to confirm validity of our solution.",
"We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.",
"We present an approach to discover and segment foreground object(s) in video. Given an unannotated video sequence, the method first identifies object-like regions in any frame according to both static and dynamic cues. We then compute a series of binary partitions among those candidate “key-segments” to discover hypothesis groups with persistent appearance and motion. Finally, using each ranked hypothesis in turn, we estimate a pixel-level object labeling across all frames, where (a) the foreground likelihood depends on both the hypothesis's appearance as well as a novel localization prior based on partial shape matching, and (b) the background likelihood depends on cues pulled from the key-segments' (possibly diverse) surroundings observed across the sequence. Compared to existing methods, our approach automatically focuses on the persistent foreground regions of interest while resisting oversegmentation. We apply our method to challenging benchmark videos, and show competitive or better results than the state-of-the-art.",
"",
"",
"Video segmentation is a stepping stone to understanding video context. Video segmentation enables one to represent a video by decomposing it into coherent regions which comprise whole or parts of objects. However, the challenge originates from the fact that most of the video segmentation algorithms are based on unsupervised learning due to expensive cost of pixelwise video annotation and intra-class variability within similar unconstrained video classes. We propose a Markov Random Field model for unconstrained video segmentation that relies on tight integration of multiple cues: vertices are defined from contour based superpixels, unary potentials from temporal smooth label likelihood and pairwise potentials from global structure of a video. Multi-cue structure is a breakthrough to extracting coherent object regions for unconstrained videos in absence of supervision. Our experiments on VSB100 dataset show that the proposed model significantly outperforms competing state-of-the-art algorithms. Qualitative analysis illustrates that video segmentation result of the proposed model is consistent with human perception of objects.",
"",
"In this work, we propose a novel approach to video segmentation that operates in bilateral space. We design a new energy on the vertices of a regularly sampled spatiotemporal bilateral grid, which can be solved efficiently using a standard graph cut label assignment. Using a bilateral formulation, the energy that we minimize implicitly approximates long-range, spatio-temporal connections between pixels while still containing only a small number of variables and only local graph edges. We compare to a number of recent methods, and show that our approach achieves state-of-the-art results on multiple benchmarks in a fraction of the runtime. Furthermore, our method scales linearly with image size, allowing for interactive feedback on real-world high resolution video.",
"In this paper, we propose a novel approach to extract primary object segments in videos in the object proposal' domain. The extracted primary object regions are then used to build object models for optimized video segmentation. The proposed approach has several contributions: First, a novel layered Directed Acyclic Graph (DAG) based framework is presented for detection and segmentation of the primary object in video. We exploit the fact that, in general, objects are spatially cohesive and characterized by locally smooth motion trajectories, to extract the primary object from the set of all available proposals based on motion, appearance and predicted-shape similarity across frames. Second, the DAG is initialized with an enhanced object proposal set where motion based proposal predictions (from adjacent frames) are used to expand the set of object proposals for a particular frame. Last, the paper presents a motion scoring function for selection of object proposals that emphasizes high optical flow gradients at proposal boundaries to discriminate between moving objects and the background. The proposed approach is evaluated using several challenging benchmark videos and it outperforms both unsupervised and supervised state-of-the-art methods.",
"",
"As the use of videos is becoming more popular in computer vision, the need for annotated video datasets increases. Such datasets are required either as training data or simply as ground truth for benchmark datasets. A particular challenge in video segmentation is due to disocclusions, which hamper frame-to-frame propagation, in conjunction with non-moving objects. We show that a combination of motion from point trajectories, as known from motion segmentation, along with minimal supervision can largely help solve this problem. Moreover, we integrate a new constraint that enforces consistency of the color distribution in successive frames. We quantify user interaction effort with respect to segmentation quality on challenging ego motion videos. We compare our approach to a diverse set of algorithms in terms of user effort and in terms of performance on common video segmentation benchmarks.",
"An interactive framework for soft segmentation and matting of natural images and videos is presented in this paper. The proposed technique is based on the optimal, linear time, computation of weighted geodesic distances to user-provided scribbles, from which the whole data is automatically segmented. The weights are based on spatial and or temporal gradients, considering the statistics of the pixels scribbled by the user, without explicit optical flow or any advanced and often computationally expensive feature detectors. These could be naturally added to the proposed framework as well if desired, in the form of weights in the geodesic distances. An automatic localized refinement step follows this fast segmentation in order to further improve the results and accurately compute the corresponding matte function. Additional constraints into the distance definition permit to efficiently handle occlusions such as people or objects crossing each other in a video sequence. The presentation of the framework is complemented with numerous and diverse examples, including extraction of moving foreground from dynamic background in video, natural and 3D medical images, and comparisons with the recent literature."
]
} |
1704.08812 | 2611600397 | We in this paper solve the problem of high-quality automatic real-time background cut for 720p portrait videos. We first handle the background ambiguity issue in semantic segmentation by proposing a global background attenuation model. A spatial-temporal refinement network is developed to further refine the segmentation errors in each frame and ensure temporal coherence in the segmentation map. We form an end-to-end network for training and testing. Each module is designed considering efficiency and accuracy. We build a portrait dataset, which includes 8,000 images with high-quality labeled map for training and testing. To further improve the performance, we build a portrait video dataset with 50 sequences to fine-tune video segmentation. Our framework benefits many video processing applications. | Temporal coherence is another important issue in video segmentation. Optical flow @cite_23 @cite_63 , object trajectory tracking @cite_8 @cite_56 , parametric contours @cite_19 , and long-term analysis @cite_57 @cite_4 have been applied to address the temporal coherence issue. Many previous methods handle bilayer segmentation @cite_6 . A tree-based classifier was presented in @cite_65 , and locally competing SVMs were designed in @cite_49 , for better bilayer segmentation. To evaluate video segmentation quality, benchmarks @cite_77 @cite_70 were proposed. Compared with these graph-based methods, our method runs in real time and requires no user interaction. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_70",
"@cite_65",
"@cite_6",
"@cite_56",
"@cite_57",
"@cite_19",
"@cite_77",
"@cite_63",
"@cite_23",
"@cite_49"
],
"mid": [
"",
"2167331599",
"2126164636",
"",
"2125480112",
"2163747463",
"2076756823",
"2469578774",
"2470139095",
"2149022377",
"",
"1965190654"
],
"abstract": [
"",
"Our goal is to segment a video sequence into moving objects and the world scene. In recent work, spectral embedding of point trajectories based on 2D motion cues accumulated from their lifespans, has shown to outperform factorization and per frame segmentation methods for video segmentation. The scale and kinematic nature of the moving objects and the background scene determine how close or far apart trajectories are placed in the spectral embedding. Such density variations may confuse clustering algorithms, causing over-fragmentation of object interiors. Therefore, instead of clustering in the spectral embedding, we propose detecting discontinuities of embedding density between spatially neighboring trajectories. Detected discontinuities are strong indicators of object boundaries and thus valuable for video segmentation. We propose a novel embedding discretization process that recovers from over-fragmentations by merging clusters according to discontinuity evidence along inter-cluster boundaries. For segmenting articulated objects, we combine motion grouping cues with a center-surround saliency operation, resulting in “context-aware”, spatially coherent, saliency maps. Figure-ground segmentation obtained from saliency thresholding, provides object connectedness constraints that alter motion based trajectory affinities, by keeping articulated parts together and separating disconnected in time objects. Finally, we introduce Gabriel graphs as effective per frame superpixel maps for converting trajectory clustering to dense image segmentation. Gabriel edges bridge large contour gaps via geometric reasoning without over-segmenting coherent image regions. We present experimental results of our method that outperform the state-of-the-art in challenging motion segmentation datasets.",
"Video segmentation research is currently limited by the lack of a benchmark dataset that covers the large variety of subproblems appearing in video segmentation and that is large enough to avoid overfitting. Consequently, there is little analysis of video segmentation which generalizes across subtasks, and it is not yet clear which and how video segmentation should leverage the information from the still-frames, as previously studied in image segmentation, alongside video specific information, such as temporal volume, motion and occlusion. In this work we provide such an analysis based on annotations of a large video dataset, where each video is manually segmented by multiple persons. Moreover, we introduce a new volume-based metric that includes the important aspect of temporal consistency, that can deal with segmentation hierarchies, and that reflects the tradeoff between over-segmentation and segmentation accuracy.",
"",
"This paper presents an algorithm capable of real-time separation of foreground from background in monocular video sequences. Automatic segmentation of layers from colour contrast or from motion alone is known to be error-prone. Here motion, colour and contrast cues are probabilistically fused together with spatial and temporal priors to infer layers accurately and efficiently. Central to our algorithm is the fact that pixel velocities are not needed, thus removing the need for optical flow estimation, with its tendency to error and computational expense. Instead, an efficient motion vs nonmotion classifier is trained to operate directly and jointly on intensity-change and contrast. Its output is then fused with colour information. The prior on segmentation is represented by a second order, temporal, Hidden Markov Model, together with a spatial MRF favouring coherence except where contrast is high. Finally, accurate layer segmentation and explicit occlusion detection are efficiently achieved by binary graph cut. The segmentation accuracy of the proposed algorithm is quantitatively evaluated with respect to existing groundtruth data and found to be comparable to the accuracy of a state of the art stereo segmentation algorithm. Foreground background segmentation is demonstrated in the application of live background substitution and shown to generate convincingly good quality composite video.",
"This paper presents an approach to unsupervised segmentation of moving and static objects occurring in a video. Objects are, in general, spatially cohesive and characterized by locally smooth motion trajectories. Therefore, they occupy regions within each frame, while the shape and location of these regions vary slowly from frame to frame. Thus, video segmentation can be done by tracking regions across the frames such that the resulting tracks are locally smooth. To this end, we use a low-level segmentation to extract regions in all frames, and then we transitively match and cluster the similar regions across the video. The similarity is defined with respect to the region photometric, geometric, and motion properties. We formulate a new circular dynamic-time warping (CDTW) algorithm that generalizes DTW to match closed boundaries of two regions, without compromising DTW's guarantees of achieving the optimal solution with linear complexity. Our quantitative evaluation and comparison with the state of the art suggest that the proposed approach is a competitive alternative to currently prevailing point-based methods.",
"Motion is a strong cue for unsupervised object-level grouping. In this paper, we demonstrate that motion will be exploited most effectively, if it is regarded over larger time windows. Opposed to classical two-frame optical flow, point trajectories that span hundreds of frames are less susceptible to short-term variations that hinder separating different objects. As a positive side effect, the resulting groupings are temporally consistent over a whole video shot, a property that requires tedious post-processing in the vast majority of existing approaches. We suggest working with a paradigm that starts with semi-dense motion cues first and that fills up textureless areas afterwards based on color. This paper also contributes the Freiburg-Berkeley motion segmentation (FBMS) dataset, a large, heterogeneous benchmark with 59 sequences and pixel-accurate ground truth annotation of moving objects.",
"Interactive video segmentation systems aim at producing sub-pixel-level object boundaries for visual effect applications. Recent approaches mainly focus on using sparse user input (i.e. scribbles) for efficient segmentation, however, the quality of the final object boundaries is not satisfactory for the following reasons: (1) the boundary on each frame is often not accurate, (2) boundaries across adjacent frames wiggle around inconsistently, causing temporal flickering, and (3) there is a lack of direct user control for fine tuning. We propose Coherent Parametric Contours, a novel video segmentation propagation framework that addresses all the above issues. Our approach directly models the object boundary using a set of parametric curves, providing direct user controls for manual adjustment. A spatiotemporal optimization algorithm is employed to produce object boundaries that are spatially accurate and temporally stable. We show that existing evaluation datasets are limited and demonstrate a new set to cover the common cases in professional rotoscoping. A new metric for evaluating temporal consistency is proposed. Results show that our approach generates higher quality, more coherent segmentation results than previous methods.",
"Over the years, datasets and benchmarks have proven their fundamental importance in computer vision research, enabling targeted progress and objective comparisons in many fields. At the same time, legacy datasets may impede the evolution of a field due to saturated algorithm performance and the lack of contemporary, high quality data. In this work we present a new benchmark dataset and evaluation methodology for the area of video object segmentation. The dataset, named DAVIS (Densely Annotated VIdeo Segmentation), consists of fifty high quality, Full HD video sequences, spanning multiple occurrences of common video object segmentation challenges such as occlusions, motion blur and appearance changes. Each video is accompanied by densely annotated, pixel-accurate and per-frame ground truth segmentation. In addition, we provide a comprehensive analysis of several state-of-the-art segmentation approaches using three complementary metrics that measure the spatial extent of the segmentation, the accuracy of the silhouette contours and the temporal coherence. The results uncover strengths and weaknesses of current approaches, opening up promising directions for future works.",
"In extended video sequences, individual frames are grouped into shots which are defined as a sequence taken by a single camera, and related shots are grouped into scenes which are defined as a single dramatic event taken by a small number of related cameras. This hierarchical structure is deliberately constructed, dictated by the limitations and preferences of the human visual and memory systems. We present three novel high-level segmentation results derived from these considerations, some of which are analogous to those involved in the perception of the structure of music. First and primarily, we derive and demonstrate a method for measuring probable scene boundaries, by calculating a short term memory-based model of shot-to-shot \"coherence\". The detection of local minima in this continuous measure permits robust and flexible segmentation of the video into scenes, without the necessity for first aggregating shots into clusters. Second, and independently of the first, we then derive and demonstrate a one-pass on-the-fly shot clustering algorithm. Third, we demonstrate partially successful results on the application of these two new methods to the next higher, \"theme\", level of video structure.",
"",
"The objective of foreground segmentation is to extract the desired foreground object from input videos. Over the years there have been significant amount of efforts on this topic, nevertheless there still lacks a simple yet effective algorithm that can process live videos of objects with fuzzy boundaries captured by freely moving cameras. This paper presents an algorithm toward this goal. The key idea is to train and maintain two competing one-class support vector machines (1SVMs) at each pixel location, which model local color distributions for foreground and background, respectively. We advocate the usage of two competing local classifiers, as it provides higher discriminative power and allows better handling of ambiguities. As a result, our algorithm can deal with a variety of videos with complex backgrounds and freely moving cameras with minimum user interactions. In addition, by introducing novel acceleration techniques and by exploiting the parallel structure of the algorithm, realtime processing speed is achieved for VGA-sized videos."
]
} |
1704.08812 | 2611600397 | We in this paper solve the problem of high-quality automatic real-time background cut for 720p portrait videos. We first handle the background ambiguity issue in semantic segmentation by proposing a global background attenuation model. A spatial-temporal refinement network is developed to further refine the segmentation errors in each frame and ensure temporal coherence in the segmentation map. We form an end-to-end network for training and testing. Each module is designed considering efficiency and accuracy. We build a portrait dataset, which includes 8,000 images with high-quality labeled map for training and testing. To further improve the performance, we build a portrait video dataset with 50 sequences to fine-tune video segmentation. Our framework benefits many video processing applications. | Previous work also focuses in part on learning features for video segmentation. Price et al. @cite_31 learned multiple cues and integrated them into an interactive segmentation system. Tripathi et al. proposed learning early- and mid-level features to improve performance. To handle the shortage of training data, weakly-supervised and unsupervised learning frameworks were developed in @cite_13 , @cite_40 and @cite_75 , respectively. A one-shot learning method, which needs only one annotated example for learning, was proposed in @cite_25 . Drayer et al. proposed a unified framework including object detection, tracking and motion segmentation for object-level segmentation. To reduce errors during propagation, Wang et al. @cite_55 developed segmentation rectification via structured learning. | {
"cite_N": [
"@cite_75",
"@cite_55",
"@cite_40",
"@cite_31",
"@cite_13",
"@cite_25"
],
"mid": [
"2567381148",
"2117435890",
"1920142129",
"",
"2415731916",
"2552531400"
],
"abstract": [
"Video object segmentation is challenging due to factors like fast motion, cluttered backgrounds, arbitrary object appearance variation and shape deformation. Most existing methods only explore appearance information between two consecutive frames, which do not make full use of the useful long-term nonlocal information that is helpful to make the learned appearance stable, and hence they tend to fail when the targets suffer from large viewpoint changes and significant non-rigid deformations. In this paper, we propose a simple yet effective approach to mine the long-term spatio-temporally nonlocal appearance information for unsupervised video segmentation. The motivation of our algorithm comes from the spatio-temporal nonlocality of the region appearance reoccurrence in a video. Specifically, we first generate a set of superpixels to represent the foreground and background, and then update the appearance of each superpixel with its long-term spatio-temporally nonlocal counterparts generated by the approximate nearest neighbor search method with the efficient KD-tree algorithm. Then, with the updated appearances, we formulate a spatio-temporal graphical model comprised of the superpixel label consistency potentials. Finally, we generate the segmentation by optimizing the graphical model via iteratively updating the appearance model and estimating the labels. Extensive evaluations on the SegTrack and Youtube-Objects datasets demonstrate the effectiveness of the proposed method, which performs favorably against some state-of-the-art methods.",
"We present an interactive system for efficiently extracting foreground objects from a video. We extend previous min-cut based image segmentation techniques to the domain of video with four new contributions. We provide a novel painting-based user interface that allows users to easily indicate the foreground object across space and time. We introduce a hierarchical mean-shift preprocess in order to minimize the number of nodes that min-cut must operate on. Within the min-cut we also define new local cost functions to augment the global costs defined in earlier work. Finally, we extend 2D alpha matting methods designed for images to work with 3D video volumes. We demonstrate that our matting approach preserves smoothness across both space and time. Our interactive video cutout system allows users to quickly extract foreground objects from video sequences for use in a variety of applications including compositing onto new backgrounds and NPR cartoon style rendering.",
"Semantic object segmentation in video is an important step for large-scale multimedia analysis. In many cases, however, semantic objects are only tagged at video-level, making them difficult to be located and segmented. To address this problem, this paper proposes an approach to segment semantic objects in weakly labeled video via object detection. In our approach, a novel video segmentation-by-detection framework is proposed, which first incorporates object and region detectors pre-trained on still images to generate a set of detection and segmentation proposals. Based on the noisy proposals, several object tracks are then initialized by solving a joint binary optimization problem with min-cost flow. As such tracks actually provide rough configurations of semantic objects, we thus refine the object segmentation while preserving the spatiotemporal consistency by inferring the shape likelihoods of pixels from the statistical information of tracks. Experimental results on Youtube-Objects dataset and SegTrack v2 dataset demonstrate that our method outperforms state-of-the-arts and shows impressive results.",
"",
"Deep convolutional neural networks (CNNs) have been immensely successful in many high-level computer vision tasks given large labelled datasets. However, for video semantic object segmentation, a domain where labels are scarce, effectively exploiting the representation power of CNN with limited training data remains a challenge. Simply borrowing the existing pre-trained CNN image recognition model for video segmentation task can severely hurt performance. We propose a semi-supervised approach to adapting CNN image recognition model trained from labelled image data to the target domain exploiting both semantic evidence learned from CNN, and the intrinsic structures of video data. By explicitly modelling and compensating for the domain shift from the source domain to the target domain, this proposed approach underpins a robust semantic object segmentation method against the changes in appearance, shape and occlusion in natural videos. We present extensive experiments on challenging datasets that demonstrate the superior performance of our approach compared with the state-of-the-art methods.",
"This paper tackles the task of semi-supervised video object segmentation, i.e., the separation of an object from the background in a video, given the mask of the first frame. We present One-Shot Video Object Segmentation (OSVOS), based on a fully-convolutional neural network architecture that is able to successively transfer generic semantic information, learned on ImageNet, to the task of foreground segmentation, and finally to learning the appearance of a single annotated object of the test sequence (hence one-shot). Although all frames are processed independently, the results are temporally coherent and stable. We perform experiments on two annotated video segmentation databases, which show that OSVOS is fast and improves the state of the art by a significant margin (79.8 vs 68.0 )."
]
} |
1704.08812 | 2611600397 | We in this paper solve the problem of high-quality automatic real-time background cut for 720p portrait videos. We first handle the background ambiguity issue in semantic segmentation by proposing a global background attenuation model. A spatial-temporal refinement network is developed to further refine the segmentation errors in each frame and ensure temporal coherence in the segmentation map. We form an end-to-end network for training and testing. Each module is designed considering efficiency and accuracy. We build a portrait dataset, which includes 8,000 images with high-quality labeled map for training and testing. To further improve the performance, we build a portrait video dataset with 50 sequences to fine-tune video segmentation. Our framework benefits many video processing applications. | In recent years, CNNs have achieved great success in semantic image segmentation. Representative work exploited CNNs in two ways. The first is to learn important features and then apply classification to infer pixel labels @cite_10 @cite_45 @cite_26 . The second way is to directly learn the model from images. Long et al. @cite_30 introduced fully convolutional networks. Following it, DeepLab @cite_34 and CRFasRNN @cite_9 were developed using CRF for label map refinement. The recent PSPNet @cite_54 is based on ResNet @cite_64 , which performs decently. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_64",
"@cite_9",
"@cite_54",
"@cite_45",
"@cite_34",
"@cite_10"
],
"mid": [
"2952632681",
"2022508996",
"2949650786",
"",
"2952596663",
"1938976761",
"2964288706",
"2115150266"
],
"abstract": [
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"",
"Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.",
"We introduce a purely feed-forward architecture for semantic segmentation. We map small image elements (superpixels) to rich feature representations extracted from a sequence of nested regions of increasing extent. These regions are obtained by “zooming out” from the superpixel all the way to scene-level resolution. This approach exploits statistical structure in the image and in the label space without setting up explicit structured prediction mechanisms, and thus avoids complex and expensive inference. Instead superpixels are classified by a feedforward multilayer network. Our architecture achieves 69.6 average accuracy on the PASCAL VOC 2012 test set.",
"Abstract: Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"We address the problem of segmenting and recognizing objects in real world images, focusing on challenging articulated categories such as humans and other animals. For this purpose, we propose a novel design for region-based object detectors that integrates efficiently top-down information from scanning-windows part models and global appearance cues. Our detectors produce class-specific scores for bottom-up regions, and then aggregate the votes of multiple overlapping candidates through pixel classification. We evaluate our approach on the PASCAL segmentation challenge, and report competitive performance with respect to current leading techniques. On VOC2010, our method obtains the best results in 6 20 categories and the highest performance on articulated objects."
]
} |
1704.08812 | 2611600397 | We in this paper solve the problem of high-quality automatic real-time background cut for 720p portrait videos. We first handle the background ambiguity issue in semantic segmentation by proposing a global background attenuation model. A spatial-temporal refinement network is developed to further refine the segmentation errors in each frame and ensure temporal coherence in the segmentation map. We form an end-to-end network for training and testing. Each module is designed considering efficiency and accuracy. We build a portrait dataset, which includes 8,000 images with high-quality labeled map for training and testing. To further improve the performance, we build a portrait video dataset with 50 sequences to fine-tune video segmentation. Our framework benefits many video processing applications. | These frameworks can be directly applied to videos in a frame-by-frame fashion. To additionally deal with temporal coherence, spatial-temporal FCN @cite_73 and recurrent FCN @cite_14 @cite_59 @cite_53 were proposed. Shelhamer et al. @cite_18 proposed Clockwork Convnets driven by fixed or adaptive clock signals that schedule processing of different layers. To use the temporal information, Khoreva et al. @cite_21 predicted per-frame segmentation guided by the output of the previous frame. These approaches aim at general object segmentation. They have difficulty achieving real-time performance for good-quality portrait video segmentation. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_53",
"@cite_21",
"@cite_59",
"@cite_73"
],
"mid": [
"2963866581",
"2417002429",
"2561585794",
"2564998703",
"",
"2517503862"
],
"abstract": [
"Recent years have seen tremendous progress in still-image segmentation; however the naive application of these state-of-the-art algorithms to every video frame requires considerable computation and ignores the temporal continuity inherent in video. We propose a video recognition framework that relies on two key observations: (1) while pixels may change rapidly from frame to frame, the semantic content of a scene evolves more slowly, and (2) execution can be viewed as an aspect of architecture, yielding purpose-fit computation schedules for networks. We define a novel family of “clockwork” convnets driven by fixed or adaptive clock signals that schedule the processing of different layers at different update rates according to their semantic stability. We design a pipeline schedule to reduce latency for real-time recognition and a fixed-rate schedule to reduce overall computation. Finally, we extend clockwork scheduling to adaptive video processing by incorporating data-driven clocks that can be tuned on unlabeled video. The accuracy and efficiency of clockwork convnets are evaluated on the Youtube-Objects, NYUD, and Cityscapes video datasets.",
"Image segmentation is an important step in most visual tasks. While convolutional neural networks have shown to perform well on single image segmentation, to our knowledge, no study has been been done on leveraging recurrent gated architectures for video segmentation. Accordingly, we propose a novel method for online segmentation of video sequences that incorporates temporal data. The network is built from fully convolutional element and recurrent unit that works on a sliding window over the temporal data. We also introduce a novel convolutional gated recurrent unit that preserves the spatial information and reduces the parameters learned. Our method has the advantage that it can work in an online fashion instead of operating over the whole input batch of video frames. The network is tested on the change detection dataset, and proved to have 5.5 improvement in F-measure over a plain fully convolutional network for per frame segmentation. It was also shown to have improvement of 1.4 for the F-measure compared to our baseline network that we call FCN 12s.",
"Semantic video segmentation is challenging due to the sheer amount of data that needs to be processed and labeled in order to construct accurate models. In this paper we present a deep, end-to-end trainable methodology for video segmentation that is capable of leveraging the information present in unlabeled data, besides sparsely labeled frames, in order to improve semantic estimates. Our model combines a convolutional architecture and a spatiotemporal transformer recurrent layer that is able to temporally propagate labeling information by means of optical flow, adaptively gated based on its locally estimated uncertainty. The flow, the recognition and the gated temporal propagation modules can be trained jointly, end-to-end. The temporal, gated recurrent flow propagation component of our model can be plugged into any static semantic segmentation architecture and turn it into a weakly supervised video processing one. Our experiments in the challenging CityScapes and Camvid datasets, and for multiple deep architectures, indicate that the resulting model can leverage unlabeled temporal frames, next to a labeled one, in order to improve both the video segmentation accuracy and the consistency of its temporal labeling, at no additional annotation cost and with little extra computation.",
"Inspired by recent advances of deep learning in instance segmentation and object tracking, we introduce the concept of convnet-based guidance applied to video object segmentation. Our model proceeds on a per-frame basis, guided by the output of the previous frame towards the object of interest in the next frame. We demonstrate that highly accurate object segmentation in videos can be enabled by using a convolutional neural network (convnet) trained with static images only. The key component of our approach is a combination of offline and online learning strategies, where the former produces a refined mask from the previous frame estimate and the latter allows to capture the appearance of the specific object instance. Our method can handle different types of input annotations such as bounding boxes and segments while leveraging an arbitrary amount of annotated frames. Therefore our system is suitable for diverse applications with different requirements in terms of accuracy and efficiency. In our extensive evaluation, we obtain competitive results on three different datasets, independently from the type of input annotation.",
"",
"This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets."
]
} |
1704.08812 | 2611600397 | We in this paper solve the problem of high-quality automatic real-time background cut for 720p portrait videos. We first handle the background ambiguity issue in semantic segmentation by proposing a global background attenuation model. A spatial-temporal refinement network is developed to further refine the segmentation errors in each frame and ensure temporal coherence in the segmentation map. We form an end-to-end network for training and testing. Each module is designed considering efficiency and accuracy. We build a portrait dataset, which includes 8,000 images with high-quality labeled map for training and testing. To further improve the performance, we build a portrait video dataset with 50 sequences to fine-tune video segmentation. Our framework benefits many video processing applications. | Similar to image matting, video matting computes the alpha matte in each frame. A survey of matting techniques can be found in @cite_2 and an evaluation benchmark is explained in @cite_38 . Most video matting methods extend image matting by adding temporal consistency. Representative schemes are those of @cite_46 @cite_62 @cite_76 @cite_27 @cite_11 @cite_68 @cite_32 . Since the matting approaches need user-specified trimaps, methods of @cite_66 @cite_35 applied segmentation to improve trimap quality. Our method automatically achieves portrait segmentation and generates trimaps for further video matting. | {
"cite_N": [
"@cite_38",
"@cite_35",
"@cite_62",
"@cite_32",
"@cite_68",
"@cite_27",
"@cite_2",
"@cite_46",
"@cite_76",
"@cite_66",
"@cite_11"
],
"mid": [
"2334997961",
"2156406314",
"2152615368",
"2244837655",
"2126369166",
"106417640",
"2157887643",
"2213889652",
"1973919408",
"",
"1967268147"
],
"abstract": [
"",
"The objective of foreground segmentation is to extract the desired foreground object from input videos. Over the years, there have been significant amount of efforts on this topic. Nevertheless, there still lacks a simple yet effective algorithm that can process live videos of objects with fuzzy boundaries (e.g., hair) captured by freely moving cameras. This paper presents an algorithm toward this goal. The key idea is to train and maintain two competing one-class support vector machines at each pixel location, which model local color distributions for both foreground and background, respectively. The usage of two competing local classifiers, as we have advocated, provides higher discriminative power while allowing better handling of ambiguities. By exploiting this proposed machine learning technique, and by addressing both foreground segmentation and boundary matting problems in an integrated manner, our algorithm is shown to be particularly competent at processing a wide range of videos with complex backgrounds from freely moving cameras. This is usually achieved with minimum user interactions. Furthermore, by introducing novel acceleration techniques and by exploiting the parallel structure of the algorithm, near real-time processing speed (14 frames s without matting and 8 frames s with matting on a midrange PC & GPU) is achieved for VGA-sized videos.",
"Image and video matting are still challenging problems in areas with low foreground-background contrast. Video matting also has the challenge of ensuring temporally coherent mattes because the human visual system is highly sensitive to temporal jitter and flickering. On the other hand, video provides the opportunity to use information from other frames to improve the matte accuracy on a given frame. In this paper, we present a new video matting approach that improves the temporal coherence while maintaining high spatial accuracy in the computed mattes. We build sample sets of temporal and local samples that cover all the color distributions of the object and background over all previous frames. This helps guarantee spatial accuracy and temporal coherence by ensuring that proper samples are found even when distantly located in space or time. An explicit energy term encourages temporal consistency in the mattes derived from the selected samples. In addition, we use localized texture features to improve spatial accuracy in low contrast regions where color distributions overlap. The proposed method results in better spatial accuracy and temporal coherence than existing video matting methods.",
"This paper describes a new framework for video matting, the process of pulling a high-quality alpha matte and foreground from a video sequence. The framework builds upon techniques in natural image matting, optical flow computation, and background estimation. User interaction is comprised of garbage matte specification if background estimation is needed, and hand-drawn keyframe segmentations into \"foreground,\" \"background\" and \"unknown\". The segmentations, called trimaps, are interpolated across the video volume using forward and backward optical flow. Competing flow estimates are combined based on information about where flow is likely to be accurate. A Bayesian matting technique uses the flowed trimaps to yield high-quality mattes of moving foreground elements with complex boundaries filmed by a moving camera. A novel technique for smoke matte extraction is also demonstrated.",
"Video matting, or layer extraction, is a classic inverse problem in computer vision that involves the extraction of foreground objects, and the alpha mattes that describe their opacity, from a set of images. Modem approaches that work with natural backgrounds often require user-labelled \"trimaps\" that segment each image into foreground, background and unknown regions. For long sequences, the production of accurate trimaps can be time consuming. In contrast, another class of approach depends on automatic background extraction to automate the process, but existing techniques do not make use of spatiotemporal consistency, and cannot take account of operator hints such as trimaps. This paper presents a method inspired by natural image statistics that cleanly unifies these approaches. A prior is learnt that models the relationship between the spatiotemporal gradients in the image sequence and those in the alpha mattes. This is used in combination with a learnt foreground colour model and a prior on the alpha distribution to help regularize the solution and greatly improve the automatic performance of such systems. The system is applied to several real image sequences that demonstrate the advantage that the unified approach can afford.",
"We present an algorithm for extracting high quality temporally coherent alpha mattes of objects from a video. Our approach extends the conventional image matting approach, i.e. closed-form matting, to video by using multi-frame nonlocal matting Laplacian. Our multi-frame nonlocal matting Laplacian is defined over a nonlocal neighborhood in spatial temporal domain, and it solves the alpha mattes of several video frames all together simultaneously. To speed up computation and to reduce memory requirement for solving the multi-frame nonlocal matting Laplacian, we use the approximate nearest neighbor(ANN) to find the nonlocal neighborhood and the k-d tree implementation to divide the nonlocal matting Laplacian into several smaller linear systems. Finally, we adopt the nonlocal mean regularization to enhance temporal coherence of the estimated alpha mattes and to correct alpha matte errors at low contrast regions. We demonstrate the effectiveness of our approach on various examples with qualitative comparisons to the results from previous matting algorithms.",
"Matting refers to the problem of accurate foreground estimation in images and video. It is one of the key techniques in many image editing and film production applications, thus has been extensively studied in the literature. With the recent advances of digital cameras, using matting techniques to create novel composites or facilitate other editing tasks has gained increasing interest from both professionals as well as consumers. Consequently, various matting techniques and systems have been proposed to try to efficiently extract high quality mattes from both still images and video sequences. This survey provides a comprehensive review of existing image and video matting algorithms and systems, with an emphasis on the advanced techniques that have been recently proposed. The first part of the survey is focused on image matting. The fundamental techniques shared by many image matting algorithms, such as color sampling methods and matting affinities, are first analyzed. Image matting techniques are then classified into three categories based on their underlying methodologies, and an objective evaluation is conducted to reveal the advantages and disadvantages of each category. A unique Accuracy vs. Cost analysis is presented as a practical guidance for readers to properly choose matting tools that best fit their specific requirements and constraints. The second part of the survey is focused on video matting. The difficulties and challenges of video matting are first analyzed, and various ways of combining matting algorithms with other video processing techniques for building efficient video matting systems are reviewed. Key contributions, advantages as well as limitations of important systems are summarized. Finally, special matting systems that rely on capturing additional foreground background information to automate the matting process are discussed. A few interesting directions for future matting research are presented in the conclusion.",
"We introduce a novel method of video matting via sparse and low-rank representation. Previous matting methods [10, 9] introduced a nonlocal prior to estimate the alpha matte and have achieved impressive results on some data. However, on one hand, searching inadequate or excessive samples may miss good samples or introduce noise, on the other hand, it is difficult to construct consistent nonlocal structures for pixels with similar features, yielding spatially and temporally inconsistent video mattes. In this paper, we proposed a novel video matting method to achieve spatially and temporally consistent matting result. Toward this end, a sparse and low-rank representation model is introduced to pursue consistent nonlocal structures for pixels with similar features. The sparse representation is used to adaptively select best samples and accurately construct the nonlocal structures for all pixels, while the low-rank representation is used to globally ensure consistent nonlocal structures for pixels with similar features. The two representations are combined to generate consistent video mattes. Experimental results show that our method has achieved high quality results in a variety of challenging examples featuring illumination changes, feature ambiguity, topology changes, transparency variation, dis-occlusion, fast motion and motion blur.",
"This paper demonstrates how the nonlocal principle benefits video matting via the KNN Laplacian, which comes with a straightforward implementation using motion-aware K nearest neighbors. In hindsight, the fundamental problem to solve in video matting is to produce spatio-temporally coherent clusters of moving foreground pixels. When used as described, the motion-aware KNN Laplacian is effective in addressing this fundamental problem, as demonstrated by sparse user markups typically on only one frame in a variety of challenging examples featuring ambiguous foreground and background colors, changing topologies with disocclusion, significant illumination changes, fast motion, and motion blur. When working with existing Laplacian-based systems, we expect our Laplacian can benefit them immediately with an improved clustering of moving foreground pixels.",
"",
"An interactive framework for soft segmentation and matting of natural images and videos is presented in this paper. The proposed technique is based on the optimal, linear time, computation of weighted geodesic distances to user-provided scribbles, from which the whole data is automatically segmented. The weights are based on spatial and or temporal gradients, considering the statistics of the pixels scribbled by the user, without explicit optical flow or any advanced and often computationally expensive feature detectors. These could be naturally added to the proposed framework as well if desired, in the form of weights in the geodesic distances. An automatic localized refinement step follows this fast segmentation in order to further improve the results and accurately compute the corresponding matte function. Additional constraints into the distance definition permit to efficiently handle occlusions such as people or objects crossing each other in a video sequence. The presentation of the framework is complemented with numerous and diverse examples, including extraction of moving foreground from dynamic background in video, natural and 3D medical images, and comparisons with the recent literature."
]
} |
1704.08834 | 2610180281 | When creating digital art, coloring and shading are often time consuming tasks that follow the same general patterns. A solution to automatically colorize raw line art would have many practical applications. We propose a setup utilizing two networks in tandem: a color prediction network based only on outlines, and a shading network conditioned on both outlines and a color scheme. We present processing methods to limit information passed in the color scheme, improving generalization. Finally, we demonstrate natural-looking results when colorizing outlines from scratch, as well as from a messy, user-defined color scheme. | There have been many advances in the field of image-to-image transformation. Mirza et al. @cite_8 introduced a conditional variant of the generative adversarial network, where the data conditioned on is given to both the generator and the discriminator. The method achieved solid results on the tasks of generating MNIST digits and tagging photos from the MIR Flickr 25,000 dataset. Isola et al. @cite_9 present a framework for mapping images to one another by optimizing pixel loss as well as an adversarial loss function. They treat the discriminator similarly to a convolutional layer, allowing it to see patches as it is slid across the generated image. | {
"cite_N": [
"@cite_9",
"@cite_8"
],
"mid": [
"2552465644",
"2125389028"
],
"abstract": [
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels."
]
} |
1704.08834 | 2610180281 | When creating digital art, coloring and shading are often time consuming tasks that follow the same general patterns. A solution to automatically colorize raw line art would have many practical applications. We propose a setup utilizing two networks in tandem: a color prediction network based only on outlines, and a shading network conditioned on both outlines and a color scheme. We present processing methods to limit information passed in the color scheme, improving generalization. Finally, we demonstrate natural-looking results when colorizing outlines from scratch, as well as from a messy, user-defined color scheme. | Finally, there has been research into utilizing multiple networks in conjunction. StackGAN @cite_1 proposes a setup involving two networks working in coordination, for the task of text-to-image synthesis. One network generates low-resolution images from captions, while another increases resolution and adds fine details. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2564591810"
],
"abstract": [
"Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256x256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions."
]
} |
1704.08960 | 2611787829 | Neural word segmentation research has benefited from large-scale raw texts by leveraging them for pretraining character and word embeddings. On the other hand, statistical segmentation research has exploited richer sources of external information, such as punctuation, automatic segmentation and POS. We investigate the effectiveness of a range of external training sources for neural word segmentation by building a modular segmentation model, pretraining the most important submodule using rich external sources. Results show that such pretraining significantly improves the model, leading to accuracies competitive to the best methods on six benchmarks. | Work on word segmentation dates back to the 1990s @cite_6 . State-of-the-art approaches include sequence labeling models @cite_7 using CRFs @cite_19 @cite_30 and max-margin structured models leveraging features @cite_20 @cite_4 @cite_3 . Semi-supervised methods have been applied to both character-based and word-based models, exploring external training data for better segmentation @cite_39 @cite_29 @cite_24 @cite_33 . Our work belongs to this recent line of word segmentation research. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_29",
"@cite_6",
"@cite_3",
"@cite_39",
"@cite_19",
"@cite_24",
"@cite_20"
],
"mid": [
"2233994034",
"2250675174",
"2250623963",
"2163377725",
"2502289490",
"",
"",
"",
"2131417696",
"2143026224",
""
],
"abstract": [
"This paper is concerned with Chinese word segmentation, which is regarded as a character based tagging problem under conditional random field framework. It is different in our method that we consider both feature template selection and tag set selection, instead of feature template focused only method in existing work. Thus, there comes an empirical comparison study of performance among different tag sets in this paper. We show that there is a significant performance difference as different tag sets are selected. Based on the proposed method, our system gives the state-of-the-art performance.",
"There are two dominant approaches to Chinese word segmentation: word-based and character-based models, each with respective strengths. Prior work has shown that gains in segmentation performance can be achieved from combining these two types of models; however, past efforts have not provided a practical technique to allow mainstream adoption. We propose a method that effectively combines the strength of both segmentation schemes using an efficient dual-decomposition algorithm for joint inference. Our method is simple and easy to implement. Experiments on SIGHAN 2003 and 2005 evaluation datasets show that our method achieves the best reported results to date on 6 out of 7 datasets.",
"Nowadays supervised sequence labeling models can reach competitive performance on the task of Chinese word segmentation. However, the ability of these models is restricted by the availability of annotated data and the design of features. We propose a scalable semi-supervised feature engineering approach. In contrast to previous works using pre-defined task-specific features with fixed values, we dynamically extract representations of label distributions from both an in-domain corpus and an out-of-domain corpus. We update the representation values with a semi-supervised approach. Experiments on the benchmark datasets show that our approach achieves good results and reaches an f-score of 0.961. The feature engineering approach proposed here is a general iterative semi-supervised method and not limited to the word segmentation task.",
"In this paper we report results of a supervised machine-learning approach to Chinese word segmentation. A maximum entropy tagger is trained on manually annotated data to automatically assign to Chinese characters, or hanzi, tags that indicate the position of a hanzi within a word. The tagged output is then converted into segmented text for evaluation. Preliminary results show that this approach is competitive against other supervised machine-learning segmenters reported in previous studies, achieving precision and recall rates of 95.01 and 94.94 respectively, trained on a 237K-word training set.",
"This paper describes our system designed for the NLPCC 2016 shared task on word segmentation on micro-blog texts (i.e., Weibo). We treat word segmentation as a character-wise sequence labeling problem, and explore two directions to enhance our CRF-based baseline. First, we employ a large-scale external lexicon for constructing extra lexicon features in the model, which is proven to be extremely useful. Second, we exploit two heterogeneous datasets, i.e., Penn Chinese Treebank 7 (CTB7) and People Daily (PD) to help word segmentation on Weibo. We adopt two mainstream approaches, i.e., the guide-feature based approach and the recently proposed coupled sequence labeling approach. We combine the above techniques in different ways and obtain four well-performing models. Finally, we merge the outputs of the four models and obtain the final results via Viterbi-based re-decoding. On the test data of Weibo, our proposed approach outperforms the baseline by (95.63-94.24=1.39) in terms of F1 score. Our final system ranks first among five participants in the open track in terms of F1 score, and is also the best among all 28 submissions. All codes, experiment configurations, and the external lexicon are released at http://hlt.suda.edu.cn/zhli.",
"",
"",
"",
"We present a joint model for Chinese word segmentation and new word detection. We present high dimensional new features, including word-based features and enriched edge (label-transition) features, for the joint modeling. As we know, training a word segmentation system on large-scale datasets is already costly. In our case, adding high dimensional new features will further slow down the training speed. To solve this problem, we propose a new training method, adaptive online gradient descent based on feature frequency information, for very fast online training of the parameters, even given large-scale datasets with high dimensional features. Compared with existing training methods, our training method is an order of magnitude faster in terms of training time, and can achieve equal or even higher accuracies. The proposed fast training method is a general purpose optimization method, and it is not limited to the specific task discussed in this paper.",
"We report an empirical investigation on type-supervised domain adaptation for joint Chinese word segmentation and POS-tagging, making use of domain-specific tag dictionaries and only unlabeled target domain data to improve target-domain accuracies, given a set of annotated source domain sentences. Previous work on POS-tagging of other languages showed that type-supervision can be a competitive alternative to token-supervision, while semi-supervised techniques such as label propagation are important to the effectiveness of type-supervision. We report similar findings using a novel approach for joint Chinese segmentation and POS-tagging, under a cross-domain setting. With the help of unlabeled sentences and a lexicon of 3,000 words, we obtain 33% error reduction in target-domain tagging. In addition, combined type- and token-supervision can lead to improved cost-effectiveness.",
""
]
} |
1704.08424 | 2610780044 | Word embeddings provide point representations of words containing useful semantic information. We introduce multimodal word distributions formed from Gaussian mixtures, for multiple word meanings, entailment, and rich uncertainty information. To learn these distributions, we propose an energy-based max-margin objective. We show that the resulting approach captures uniquely expressive semantic information, and outperforms alternatives, such as word2vec skip-grams, and Gaussian embeddings, on benchmark datasets such as word similarity and entailment. | A different approach to learning word embeddings is through factorization of word co-occurrence matrices such as GloVe embeddings @cite_25 . The matrix factorization approach has been shown to have an implicit connection with skip-gram and negative sampling . Bayesian matrix factorization where rows and columns are modeled as Gaussians has been explored in and provides a different probabilistic perspective of word embeddings. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2250539671"
],
"abstract": [
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition."
]
} |
1704.08462 | 2608553328 | We consider the communication complexity of finding an approximate maximum matching in a graph in a multi-party message-passing communication model. The maximum matching problem is one of the most fundamental graph combinatorial problems, with a variety of applications. The input to the problem is a graph @math that has @math vertices and the set of edges partitioned over @math sites, and an approximation ratio parameter @math . The output is required to be a matching in @math that has to be reported by one of the sites, whose size is at least factor @math of the size of a maximum matching in @math . We show that the communication complexity of this problem is @math information bits. This bound is shown to be tight up to a @math factor, by constructing an algorithm, establishing its correctness, and an upper bound on the communication cost. The lower bound also applies to other graph combinatorial problems in the message-passing communication model, including max-flow and graph sparsification. | The problem of finding an approximate maximum matching in a graph has been studied for various computation models, including the streaming computation model @cite_1 , the MapReduce computation model @cite_27 @cite_12 , and a traditional distributed computation model known as the @math computation model. | {
"cite_N": [
"@cite_27",
"@cite_1",
"@cite_12"
],
"mid": [
"2051586153",
"2080745194",
"2951758761"
],
"abstract": [
"In recent years the MapReduce framework has emerged as one of the most widely used parallel computing platforms for processing data on terabyte and petabyte scales. Used daily at companies such as Yahoo!, Google, Amazon, and Facebook, and adopted more recently by several universities, it allows for easy parallelization of data intensive computations over many machines. One key feature of MapReduce that differentiates it from previous models of parallel computation is that it interleaves sequential and parallel computation. We propose a model of efficient computation using the MapReduce paradigm. Since MapReduce is designed for computations over massive data sets, our model limits the number of machines and the memory per machine to be substantially sublinear in the size of the input. On the other hand, we place very loose restrictions on the computational power of any individual machine---our model allows each machine to perform sequential computations in time polynomial in the size of the original input. We compare MapReduce to the PRAM model of computation. We prove a simulation lemma showing that a large class of PRAM algorithms can be efficiently simulated via MapReduce. The strength of MapReduce, however, lies in the fact that it uses both sequential and parallel computation. We demonstrate how algorithms can take advantage of this fact to compute an MST of a dense graph in only two rounds, as opposed to Ω(log(n)) rounds needed in the standard PRAM model. We show how to evaluate a wide class of functions using the MapReduce framework. We conclude by applying this result to show how to compute some basic algorithmic problems such as undirected s-t connectivity in the MapReduce framework.",
"The frequency moments of a sequence containing m_i elements of type i, 1 ≤ i ≤ n, are the numbers F_k = Σ_{i=1}^n m_i^k. We consider the space complexity of randomized algorithms that approximate the numbers F_k, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it turns out that the numbers F_0, F_1, and F_2 can be approximated in logarithmic space, whereas the approximation of F_k for k ≥ 6 requires n^Ω(1) space. Applications to data bases are mentioned as well.",
"In this paper, we study the MapReduce framework from an algorithmic standpoint and demonstrate the usefulness of our approach by designing and analyzing efficient MapReduce algorithms for fundamental sorting, searching, and simulation problems. This study is motivated by a goal of ultimately putting the MapReduce framework on an equal theoretical footing with the well-known PRAM and BSP parallel models, which would benefit both the theory and practice of MapReduce algorithms. We describe efficient MapReduce algorithms for sorting, multi-searching, and simulations of parallel algorithms specified in the BSP and CRCW PRAM models. We also provide some applications of these results to problems in parallel computational geometry for the MapReduce framework, which result in efficient MapReduce algorithms for sorting, 2- and 3-dimensional convex hulls, and fixed-dimensional linear programming. For the case when mappers and reducers have a memory/message-I/O size of @math , for a small constant @math , all of our MapReduce algorithms for these applications run in a constant number of rounds."
]
} |
1704.08462 | 2608553328 | We consider the communication complexity of finding an approximate maximum matching in a graph in a multi-party message-passing communication model. The maximum matching problem is one of the most fundamental graph combinatorial problems, with a variety of applications. The input to the problem is a graph @math that has @math vertices and the set of edges partitioned over @math sites, and an approximation ratio parameter @math . The output is required to be a matching in @math that has to be reported by one of the sites, whose size is at least factor @math of the size of a maximum matching in @math . We show that the communication complexity of this problem is @math information bits. This bound is shown to be tight up to a @math factor, by constructing an algorithm, establishing its correctness, and an upper bound on the communication cost. The lower bound also applies to other graph combinatorial problems in the message-passing communication model, including max-flow and graph sparsification. | In @cite_29 , the maximum matching was presented as one of the open problems in the streaming computation model. Many results have been established since then by various authors @cite_22 , @cite_35 , @cite_36 , @cite_28 , @cite_17 , @cite_23 , @cite_25 , @cite_3 , @cite_7 , @cite_8 , and @cite_19 . Many of the studies were concerned with a streaming computation model that allows for @math space, referred to as the semi-streaming computation model. The algorithms developed for the semi-streaming computation model can be directly applied to obtain a constant-factor approximation of maximum matching in a graph in the message-passing model that has a communication cost of @math bits. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_36",
"@cite_28",
"@cite_29",
"@cite_3",
"@cite_19",
"@cite_23",
"@cite_25",
"@cite_17"
],
"mid": [
"1967311985",
"1629950647",
"2295466155",
"1514707655",
"1592346261",
"",
"",
"2952341094",
"1498807656",
"1871809785",
"2950256099",
"2762990162"
],
"abstract": [
"In this paper we study linear-programming based approaches to the maximum matching problem in the semi-streaming model. In this model edges are presented sequentially, possibly in an adversarial order, and we are only allowed to use a small space. The allowed space is near linear in the number of vertices (and sublinear in the number of edges) of the input graph. The semi-streaming model is relevant in the context of processing of very large graphs. In recent years, there have been several new and exciting results in the semi-streaming model. However broad techniques such as linear programming have not been adapted to this model. In this paper we present several techniques to adapt and optimize linear-programming based approaches in the semi-streaming model. We use the maximum matching problem as a foil to demonstrate the effectiveness of adapting such tools in this model. As a consequence we improve almost all previous results on the semi-streaming maximum matching problem. We also prove new results on interesting variants.",
"In this paper, we study the non-bipartite maximum matching problem in the semi-streaming model. The maximum matching problem in the semi-streaming model has received a significant amount of attention lately. While the problem has been somewhat well solved for bipartite graphs, the known algorithms for non-bipartite graphs use @math passes or @math time to compute a @math approximation. In this paper we provide the first FPTAS (polynomial in @math ) for the problem which is efficient in both the running time and the number of passes. We also show that we can estimate the size of the matching in @math passes using slightly superlinear space. To achieve both results, we use the structural properties of the matching polytope such as the laminarity of the tight sets and total dual integrality. The algorithms are iterative, and are based on the fractional packing and covering framework. However the formulations herein require exponentially many variables or constraints. We use laminarity, metric embeddings and graph sparsification to reduce the space required by the algorithms in between and across the iterations. This is the first use of these ideas in the semi-streaming model to solve a combinatorial optimization problem.",
"We present a streaming algorithm that makes one pass over the edges of an unweighted graph presented in random order, and produces a polylogarithmic approximation to the size of the maximum matching in the graph, while using only polylogarithmic space. Prior to this work the only approximations known were a folklore O(√n) approximation with polylogarithmic space in an n vertex graph and a constant approximation with Ω(n) space. Our work thus gives the first algorithm where both the space and approximation factors are smaller than any polynomial in n. Our algorithm is obtained by effecting a streaming implementation of a simple \"local\" algorithm that we design for this problem. The local algorithm produces an O(k · n^{1/k}) approximation to the size of a maximum matching by exploring the radius-k neighborhoods of vertices, for any parameter k. We show, somewhat surprisingly, that our local algorithm can be implemented in the streaming setting even for k = Ω(log n / log log n). Our analysis exposes some of the problems that arise in such conversions of local algorithms into streaming ones, and gives techniques to overcome such problems.",
"We present algorithms for finding large graph matchings in the streaming model. In this model, applicable when dealing with massive graphs, edges are streamed-in in some arbitrary order rather than residing in randomly accessible memory. For e>0, we achieve a @math approximation for maximum cardinality matching and a @math approximation to maximum weighted matching. Both algorithms use a constant number of passes and @math space.",
"We initiate the study of graph sketching, i.e., algorithms that use a limited number of linear measurements of a graph to determine the properties of the graph. While a graph on n nodes is essentially O(n^2)-dimensional, we show the existence of a distribution over random projections into d-dimensional \"sketch\" space (d << n^2) such that the relevant properties of the original graph can be inferred from the sketch with high probability. Specifically, we show that: 1. d = O(n · polylog n) suffices to evaluate properties including connectivity, k-connectivity, bipartiteness, and to return any constant approximation of the weight of the minimum spanning tree. 2. d = O(n^{1+γ}) suffices to compute graph sparsifiers, the exact MST, and approximate the maximum weighted matchings if we permit O(1/γ)-round adaptive sketches, i.e., a sequence of projections where each projection may be chosen dependent on the outcome of earlier sketches. Our results have two main applications, both of which have the potential to give rise to fruitful lines of further research. First, our results can be thought of as giving the first compressed-sensing style algorithms for graph data. Secondly, our work initiates the study of dynamic graph streams. There is already extensive literature on processing massive graphs in the data-stream model. However, the existing work focuses on graphs defined by a sequence of inserted edges and does not consider edge deletions. We think this is a curious omission given the existing work on both dynamic graphs in the non-streaming setting and dynamic geometric streaming. Our results include the first dynamic graph semi-streaming algorithms for connectivity, spanning trees, sparsification, and matching problems.",
"",
"",
"In this paper we present improved bounds for approximating maximum matchings in bipartite graphs in the streaming model. First, we consider the question of how well maximum matching can be approximated in a single pass over the input using O(n polylog n) space. We show that no single-pass algorithm using O(n polylog n) space can achieve an approximation factor better than 1-1/e. Second, we give an algorithm that achieves an approximation of 1-e^{-k} k^{k-1}/(k-1)! = 1-2/k+o(1/k) in k passes in the vertex arrival model using linear space, improving upon the previously best known convergence.",
"We present an approximation algorithm to find a weighted matching of a graph in the one-pass semi-streaming model. The semi-streaming model forbids random access to the input graph and restricts the memory to @math bits where n denotes the number of the vertices of the input graph. We obtain an approximation ratio of 5.58 while the previously best algorithm achieves a ratio of 5.82.",
"We present three semi-streaming algorithms for Maximum Bipartite Matching with one and two passes. Our one-pass semi-streaming algorithm is deterministic and returns a matching of size at least 1/2 + 0.005 times the optimal matching size in expectation, assuming that edges arrive one by one in (uniform) random order. Our first two-pass algorithm is randomized and returns a matching of size at least 1/2 + 0.019 times the optimal matching size in expectation (over its internal random coin flips) for any arrival order. These two algorithms apply the simple Greedy matching algorithm several times on carefully chosen subgraphs as a subroutine. Furthermore, we present a two-pass deterministic algorithm for any arrival order returning a matching of size at least 1/2 + 0.019 times the optimal matching size. This algorithm is built on ideas from the computation of semi-matchings.",
"We consider the unweighted bipartite maximum matching problem in the one-pass turnstile streaming model where the input stream consists of edge insertions and deletions. In the insertion-only model, a one-pass @math -approximation streaming algorithm can be easily obtained with space @math , where @math denotes the number of vertices of the input graph. We show that no such result is possible if edge deletions are allowed, even if space @math is granted, for every @math . Specifically, for every @math , we show that in the one-pass turnstile streaming model, in order to compute a @math -approximation, space @math is required for constant error randomized algorithms, and, up to logarithmic factors, space @math is sufficient. Our lower bound result is proved in the simultaneous message model of communication and may be of independent interest.",
"We study the maximum weight matching problem in the semi-streaming model, and improve on the currently best one-pass algorithm due to Zelke [Proceedings of the 25th Annual Symposium on Theoretical Aspects of Computer Science, 2008, pp. 669–680] by devising a deterministic approach whose performance guarantee is 4.91+e. In addition, we study preemptive online algorithms, a class of algorithms related to one-pass semi-streaming algorithms, where we are allowed to maintain only a feasible matching in memory at any point in time. We provide a lower bound of 4.967 on the competitive ratio of any such deterministic algorithm, and hence show that future improvements will have to store in memory a set of edges that is not necessarily a feasible matching. We conclude by presenting an empirical study, conducted in order to compare the practical performance of our approach to that of previously suggested algorithms."
]
} |
1704.08462 | 2608553328 | We consider the communication complexity of finding an approximate maximum matching in a graph in a multi-party message-passing communication model. The maximum matching problem is one of the most fundamental graph combinatorial problems, with a variety of applications. The input to the problem is a graph @math that has @math vertices and the set of edges partitioned over @math sites, and an approximation ratio parameter @math . The output is required to be a matching in @math that has to be reported by one of the sites, whose size is at least factor @math of the size of a maximum matching in @math . We show that the communication complexity of this problem is @math information bits. This bound is shown to be tight up to a @math factor, by constructing an algorithm, establishing its correctness, and an upper bound on the communication cost. The lower bound also applies to other graph combinatorial problems in the message-passing communication model, including max-flow and graph sparsification. | For the approximate maximum matching problem in the MapReduce model, @cite_18 gave a @math -approximation algorithm, which requires a constant number of rounds and uses @math bits of communication, for any input graph with @math edges. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2153977620"
],
"abstract": [
"The MapReduce framework is currently the de facto standard used throughout both industry and academia for petabyte scale data analysis. As the input to a typical MapReduce computation is large, one of the key requirements of the framework is that the input cannot be stored on a single machine and must be processed in parallel. In this paper we describe a general algorithmic design technique in the MapReduce framework called filtering. The main idea behind filtering is to reduce the size of the input in a distributed fashion so that the resulting, much smaller, problem instance can be solved on a single machine. Using this approach we give new algorithms in the MapReduce framework for a variety of fundamental graph problems for sufficiently dense graphs. Specifically, we present algorithms for minimum spanning trees, maximal matchings, approximate weighted matchings, approximate vertex and edge covers and minimum cuts. In all of these cases, we parameterize our algorithms by the amount of memory available on the machines allowing us to show tradeoffs between the memory available and the number of MapReduce rounds. For each setting we will show that even if the machines are only given substantially sublinear memory, our algorithms run in a constant number of MapReduce rounds. To demonstrate the practical viability of our algorithms we implement the maximal matching algorithm that lies at the core of our analysis and show that it achieves a significant speedup over the sequential version."
]
} |
1704.08462 | 2608553328 | We consider the communication complexity of finding an approximate maximum matching in a graph in a multi-party message-passing communication model. The maximum matching problem is one of the most fundamental graph combinatorial problems, with a variety of applications. The input to the problem is a graph @math that has @math vertices and the set of edges partitioned over @math sites, and an approximation ratio parameter @math . The output is required to be a matching in @math that has to be reported by one of the sites, whose size is at least factor @math of the size of a maximum matching in @math . We show that the communication complexity of this problem is @math information bits. This bound is shown to be tight up to a @math factor, by constructing an algorithm, establishing its correctness, and an upper bound on the communication cost. The lower bound also applies to other graph combinatorial problems in the message-passing communication model, including max-flow and graph sparsification. | In @cite_2 , it has been shown that the communication complexity of the problem in the message-passing model is @math bits. This work was independent of and concurrent with ours. Incidentally, it uses a similar but different input distribution to ours. Similar input distributions were also used in previous work such as @cite_11 and @cite_13 . This is not surprising because of the nature of the message-passing model. There may exist a reduction between the @math -party set-disjointness and , but showing this is non-trivial and would require a formal proof. The proof of our lower bound is different in that we use a reduction of the @math -party to a @math -party set-disjointness using a symmetrisation argument, while @cite_2 uses a coordinate-wise direct-sum theorem to reduce the @math -party set-disjointness to a @math -party @math -bit problem. | {
"cite_N": [
"@cite_11",
"@cite_13",
"@cite_2"
],
"mid": [
"2568405002",
"2950318219",
"2085382249"
],
"abstract": [
"In this paper we prove lower bounds on randomized multiparty communication complexity, mainly in the message-passing model, where messages are sent player-to-player. Some of our results apply to the blackboard model, where each message is written on a blackboard for all players to see. We introduce a new technique for proving such bounds, called symmetrization, which is natural, intuitive, and often easy to use. For example, for the problem where each of @math players gets a bit-vector of length @math , and the goal is to compute the coordinatewise XOR of these vectors, we prove a tight lower bound of @math in the blackboard model. For the same problem with AND instead of XOR, we prove a lower bound of roughly @math in the message-passing model (assuming @math ) and @math in the blackboard model. We also prove lower bounds for bitwise majority, for a graph-connectivity problem, and for other problems; the technique seems applicable to a wide range of other problems as well. A...",
"We resolve several fundamental questions in the area of distributed functional monitoring, initiated by Cormode, Muthukrishnan, and Yi (SODA, 2008). In this model there are @math sites each tracking their input and communicating with a central coordinator that continuously maintain an approximate output to a function @math computed over the union of the inputs. The goal is to minimize the communication. We show the randomized communication complexity of estimating the number of distinct elements up to a @math factor is @math , improving the previous @math bound and matching known upper bounds up to a logarithmic factor. For the @math -th frequency moment @math , @math , we improve the previous @math communication bound to @math . We obtain similar improvements for heavy hitters, empirical entropy, and other problems. We also show that we can estimate @math , for any @math , using @math communication. This greatly improves upon the previous @math bound of Cormode, Muthukrishnan, and Yi for general @math , and their @math bound for @math . For @math , our bound resolves their main open question. Our lower bounds are based on new direct sum theorems for approximate majority, and yield significant improvements to problems in the data stream model, improving the bound for estimating @math in @math passes from @math to @math , giving the first bound for estimating @math in @math passes of @math bits of space that does not use the gap-hamming problem.",
"In a multiparty message-passing model of communication, there are k players. Each player has a private input, and they communicate by sending messages to one another over private channels. While this model has been used extensively in distributed computing and in secure multiparty computation, lower bounds on communication complexity in this model and related models have been somewhat scarce. In recent work [25], [29], [30], strong lower bounds of the form Ω(n·k) were obtained for several functions in the message-passing model; however, a lower bound on the classical set disjointness problem remained elusive. In this paper, we prove a tight lower bound of Ω(n · k) for the set disjointness problem in the message passing model. Our bound is obtained by developing information complexity tools for the message-passing model and proving an information complexity lower bound for set disjointness."
]
} |
1704.08345 | 2951313160 | Existing zero-shot learning (ZSL) models typically learn a projection function from a feature space to a semantic embedding space (e.g. attribute space). However, such a projection function is only concerned with predicting the training seen class semantic representation (e.g. attribute prediction) or classification. When applied to test data, which in the context of ZSL contains different (unseen) classes without training data, a ZSL model typically suffers from the project domain shift problem. In this work, we present a novel solution to ZSL based on learning a Semantic AutoEncoder (SAE). Taking the encoder-decoder paradigm, an encoder aims to project a visual feature vector into the semantic space as in the existing ZSL models. However, the decoder exerts an additional constraint, that is, the projection code must be able to reconstruct the original visual feature. We show that with this additional reconstruction constraint, the learned projection function from the seen classes is able to generalise better to the new unseen classes. Importantly, the encoder and decoder are linear and symmetric which enable us to develop an extremely efficient learning algorithm. Extensive experiments on six benchmark datasets demonstrate that the proposed SAE outperforms significantly the existing ZSL models with the additional benefit of lower computational cost. Furthermore, when the SAE is applied to supervised clustering problem, it also beats the state-of-the-art. | A variety of zero-shot learning models have been proposed recently @cite_52 @cite_16 @cite_7 @cite_4 @cite_27 @cite_33 @cite_8 @cite_71 @cite_54 @cite_36 @cite_60 @cite_17 @cite_43 . They use various semantic spaces. Attribute space is the most widely used. However, for large-scale problems, annotating attributes for each class becomes difficult. Recently, semantic word vector space has started to gain popularity especially in large-scale zero-shot learning @cite_7 @cite_38 . 
Better scalability is typically the motivation, as no manually defined ontology is required and any class name can be represented as a word vector for free. Beyond semantic attributes or word vectors, learning directly from textual descriptions of categories has also been attempted, e.g. Wikipedia articles @cite_22 @cite_62 and sentence descriptions @cite_44 . | {
"cite_N": [
"@cite_38",
"@cite_62",
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_36",
"@cite_60",
"@cite_54",
"@cite_22",
"@cite_52",
"@cite_44",
"@cite_43",
"@cite_27",
"@cite_71",
"@cite_16",
"@cite_17"
],
"mid": [
"2950276680",
"1487583988",
"652269744",
"",
"2123024445",
"",
"",
"1960364170",
"1492420801",
"",
"1858576077",
"",
"2962830213",
"",
"",
"2077071968",
"2950700180"
],
"abstract": [
"This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.",
"We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.",
"Zero-shot learning consists in learning how to recognise new concepts by just having a description of them. Many sophisticated approaches have been proposed to address the challenges this problem comprises. In this paper we describe a zero-shot learning approach that can be implemented in just one line of code, yet it is able to outperform state of the art approaches on standard datasets. The approach is based on a more general framework which models the relationships between features, attributes, and classes as a two linear layers network, where the weights of the top layer are not learned but are given by the environment. We further provide a learning bound on the generalisation error of this kind of approaches, by casting them as domain adaptation methods. In experiments carried out on three standard real datasets, we found that our approach is able to perform significantly better than the state of art on all of them, obtaining a ratio of improvement up to 17 .",
"",
"Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.",
"",
"",
"Object recognition by zero-shot learning (ZSL) aims to recognise objects without seeing any visual examples by learning knowledge transfer between seen and unseen object classes. This is typically achieved by exploring a semantic embedding space such as attribute space or semantic word vector space. In such a space, both seen and unseen class labels, as well as image features can be embedded (projected), and the similarity between them can thus be measured directly. Existing works differ in what embedding space is used and how to project the visual data into the semantic embedding space. Yet, they all measure the similarity in the space using a conventional distance metric (e.g. cosine) that does not consider the rich intrinsic structure, i.e. semantic manifold, of the semantic categories in the embedding space. In this paper we propose to model the semantic manifold in an embedding space using a semantic class label graph. The semantic manifold structure is used to redefine the distance metric in the semantic embedding space for more effective ZSL. The proposed semantic manifold distance is computed using a novel absorbing Markov chain process (AMP), which has a very efficient closed-form solution. The proposed new model improves upon and seamlessly unifies various existing ZSL algorithms. Extensive experiments on both the large scale ImageNet dataset and the widely used Animal with Attribute (AwA) dataset show that our model outperforms significantly the state-of-the-arts.",
"This paper discusses the effect of hubness in zero-shot learning, when ridge regression is used to find a mapping between the example space to the label space. Contrary to the existing approach, which attempts to find a mapping from the example space to the label space, we show that mapping labels into the example space is desirable to suppress the emergence of hubs in the subsequent nearest neighbor search step. Assuming a simple data model, we prove that the proposed approach indeed reduces hubness. This was verified empirically on the tasks of bilingual lexicon extraction and image labeling: hubness was reduced with both of these tasks and the accuracy was improved accordingly.",
"",
"In this paper we consider a version of the zero-shot learning problem where seen class source and target domain data are provided. The goal during test-time is to accurately predict the class label of an unseen target domain instance based on revealed source domain side information ( attributes) for unseen classes. Our method is based on viewing each source or target data as a mixture of seen class proportions and we postulate that the mixture patterns have to be similar if the two instances belong to the same unseen class. This perspective leads us to learning source target embedding functions that map an arbitrary source target domain data into a same semantic space where similarity can be readily measured. We develop a max-margin framework to learn these similarity functions and jointly optimize parameters by means of cross validation. Our test results are compelling, leading to significant improvement in terms of accuracy on most benchmark datasets for zero-shot recognition.",
"",
"This paper addresses the task of zero-shot image classification. The key contribution of the proposed approach is to control the semantic embedding of images – one of the main ingredients of zero-shot learning – by formulating it as a metric learning problem. The optimized empirical criterion associates two types of sub-task constraints: metric discriminating capacity and accurate attribute prediction. This results in a novel expression of zero-shot learning not requiring the notion of class in the training phase: only pairs of image attributes, augmented with a consistency indicator, are given as ground truth. At test time, the learned model can predict the consistency of a test image with a given set of attributes, allowing flexible ways to produce recognition inferences. Despite its simplicity, the proposed approach gives state-of-the-art results on four challenging datasets used for zero-shot recognition evaluation.",
"",
"",
"While knowledge transfer (KT) between object classes has been accepted as a promising route towards scalable recognition, most experimental KT studies are surprisingly limited in the number of object classes considered. To support claims of KT w.r.t. scalability we thus advocate to evaluate KT in a large-scale setting. To this end, we provide an extensive evaluation of three popular approaches to KT on a recently proposed large-scale data set, the ImageNet Large Scale Visual Recognition Competition 2010 data set. In a first setting they are directly compared to one-vs-all classification often neglected in KT papers and in a second setting we evaluate their ability to enable zero-shot learning. While none of the KT methods can improve over one-vs-all classification they prove valuable for zero-shot learning, especially hierarchical and direct similarity based KT. We also propose and describe several extensions of the evaluated approaches that are necessary for this large-scale study.",
"Several recent publications have proposed methods for mapping images into continuous semantic embedding spaces. In some cases the embedding space is trained jointly with the image transformation. In other cases the semantic embedding space is established by an independent natural language processing task, and then the image transformation into that space is learned in a second stage. Proponents of these image embedding systems have stressed their advantages over the traditional classification framing of image understanding, particularly in terms of the promise for zero-shot learning -- the ability to correctly annotate images of previously unseen object categories. In this paper, we propose a simple method for constructing an image embedding system from any existing image classifier and a semantic word embedding model, which contains the @math class labels in its vocabulary. Our method maps images into the semantic embedding space via convex combination of the class label embedding vectors, and requires no additional training. We show that this simple and direct method confers many of the advantages associated with more complex image embedding schemes, and indeed outperforms state of the art methods on the ImageNet zero-shot learning task."
]
} |
1704.08345 | 2951313160 | Existing zero-shot learning (ZSL) models typically learn a projection function from a feature space to a semantic embedding space (e.g. attribute space). However, such a projection function is only concerned with predicting the training seen class semantic representation (e.g. attribute prediction) or classification. When applied to test data, which in the context of ZSL contains different (unseen) classes without training data, a ZSL model typically suffers from the project domain shift problem. In this work, we present a novel solution to ZSL based on learning a Semantic AutoEncoder (SAE). Taking the encoder-decoder paradigm, an encoder aims to project a visual feature vector into the semantic space as in the existing ZSL models. However, the decoder exerts an additional constraint, that is, the projection code must be able to reconstruct the original visual feature. We show that with this additional reconstruction constraint, the learned projection function from the seen classes is able to generalise better to the new unseen classes. Importantly, the encoder and decoder are linear and symmetric which enable us to develop an extremely efficient learning algorithm. Extensive experiments on six benchmark datasets demonstrate that the proposed SAE outperforms significantly the existing ZSL models with the additional benefit of lower computational cost. Furthermore, when the SAE is applied to supervised clustering problem, it also beats the state-of-the-art. | Visual @math Semantic projection Existing ZSL models differ in how the visual space @math semantic space projection function is established. They can be divided into three groups: (1) Methods in the first group learn a projection function from a visual feature space to a semantic space either using conventional regression or ranking models @cite_19 @cite_27 or via deep neural network regression or ranking @cite_38 @cite_7 @cite_44 @cite_62 . 
(2) The second group chooses the reverse projection direction, i.e. semantic @math visual @cite_54 @cite_51 . The motivation is to alleviate the hubness problem that is commonly suffered by nearest neighbour search in a high-dimensional space @cite_70 . (3) The third group of methods learns an intermediate space to which both the feature space and the semantic space are projected @cite_53 @cite_37 @cite_33 . The encoder in our model is similar to the first group of models, whilst the decoder does the same job as the second group. The proposed semantic autoencoder can thus be considered a combination of the two groups of ZSL models, but with the added visual feature reconstruction constraint. | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_62",
"@cite_33",
"@cite_7",
"@cite_70",
"@cite_54",
"@cite_53",
"@cite_44",
"@cite_19",
"@cite_27",
"@cite_51"
],
"mid": [
"2950276680",
"2952567519",
"1487583988",
"",
"2123024445",
"",
"1492420801",
"782077188",
"",
"",
"",
"2209594346"
],
"abstract": [
"This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.",
"Zero-shot recognition (ZSR) deals with the problem of predicting class labels for target domain instances based on source domain side information (e.g. attributes) of unseen classes. We formulate ZSR as a binary prediction problem. Our resulting classifier is class-independent. It takes an arbitrary pair of source and target domain instances as input and predicts whether or not they come from the same class, i.e. whether there is a match. We model the posterior probability of a match since it is a sufficient statistic and propose a latent probabilistic model in this context. We develop a joint discriminative learning framework based on dictionary learning to jointly learn the parameters of our model for both domains, which ultimately leads to our class-independent classifier. Many of the existing embedding methods can be viewed as special cases of our probabilistic model. On ZSR our method shows 4.90 improvement over the state-of-the-art in accuracy averaged across four benchmark datasets. We also adapt ZSR method for zero-shot retrieval and show 22.45 improvement accordingly in mean average precision (mAP).",
"We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.",
"",
"Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.",
"",
"This paper discusses the effect of hubness in zero-shot learning, when ridge regression is used to find a mapping between the example space to the label space. Contrary to the existing approach, which attempts to find a mapping from the example space to the label space, we show that mapping labels into the example space is desirable to suppress the emergence of hubs in the subsequent nearest neighbor search step. Assuming a simple data model, we prove that the proposed approach indeed reduces hubness. This was verified empirically on the tasks of bilingual lexicon extraction and image labeling: hubness was reduced with both of these tasks and the accuracy was improved accordingly.",
"The outputs of a trained neural network contain much richer information than just an one-hot classifier. For example, a neural network might give an image of a dog the probability of one in a million of being a cat but it is still much larger than the probability of being a car. To reveal the hidden structure in them, we apply two unsupervised learning algorithms, PCA and ICA, to the outputs of a deep Convolutional Neural Network trained on the ImageNet of 1000 classes. The PCA ICA embedding of the object classes reveals their visual similarity and the PCA ICA components can be interpreted as common visual features shared by similar object classes. For an application, we proposed a new zero-shot learning method, in which the visual features learned by PCA ICA are employed. Our zero-shot learning method achieves the state-of-the-art results on the ImageNet of over 20000 classes.",
"",
"",
"",
"Zero-shot learning (ZSL) can be considered as a special case of transfer learning where the source and target domains have different tasks label spaces and the target domain is unlabelled, providing little guidance for the knowledge transfer. A ZSL method typically assumes that the two domains share a common semantic representation space, where a visual feature vector extracted from an image video can be projected embedded using a projection function. Existing approaches learn the projection function from the source domain and apply it without adaptation to the target domain. They are thus based on naive knowledge transfer and the learned projections are prone to the domain shift problem. In this paper a novel ZSL method is proposed based on unsupervised domain adaptation. Specifically, we formulate a novel regularised sparse coding framework which uses the target domain class labels' projections in the semantic space to regularise the learned target domain projection thus effectively overcoming the projection domain shift problem. Extensive experiments on four object and action recognition benchmark datasets show that the proposed ZSL method significantly outperforms the state-of-the-arts."
]
} |
1704.08345 | 2951313160 | Existing zero-shot learning (ZSL) models typically learn a projection function from a feature space to a semantic embedding space (e.g. attribute space). However, such a projection function is only concerned with predicting the training seen class semantic representation (e.g. attribute prediction) or classification. When applied to test data, which in the context of ZSL contains different (unseen) classes without training data, a ZSL model typically suffers from the project domain shift problem. In this work, we present a novel solution to ZSL based on learning a Semantic AutoEncoder (SAE). Taking the encoder-decoder paradigm, an encoder aims to project a visual feature vector into the semantic space as in the existing ZSL models. However, the decoder exerts an additional constraint, that is, the projection code must be able to reconstruct the original visual feature. We show that with this additional reconstruction constraint, the learned projection function from the seen classes is able to generalise better to the new unseen classes. Importantly, the encoder and decoder are linear and symmetric which enable us to develop an extremely efficient learning algorithm. Extensive experiments on six benchmark datasets demonstrate that the proposed SAE outperforms significantly the existing ZSL models with the additional benefit of lower computational cost. Furthermore, when the SAE is applied to supervised clustering problem, it also beats the state-of-the-art. | Projection domain shift The projection domain shift problem in ZSL was first identified by @cite_25 . In order to overcome this problem, a transductive multi-view embedding framework was proposed, together with label propagation on a graph, which requires access to all test data at once. Similar transductive approaches are proposed in @cite_46 @cite_51 .
This assumption is often invalid in the context of ZSL because new classes typically appear dynamically and are unavailable before model learning. Instead of assuming access to all unseen class test data for transductive learning, our model is based on inductive learning and relies only on enforcing the reconstruction constraint on the training data to counter domain shift. | {
"cite_N": [
"@cite_46",
"@cite_51",
"@cite_25"
],
"mid": [
"2151575489",
"2209594346",
"2141350700"
],
"abstract": [
"Category models for objects or activities typically rely on supervised learning requiring sufficiently large training sets. Transferring knowledge from known categories to novel classes with no or only a few labels is far less researched even though it is a common scenario. In this work, we extend transfer learning with semi-supervised learning to exploit unlabeled instances of (novel) categories with no or only a few labeled instances. Our proposed approach Propagated Semantic Transfer combines three techniques. First, we transfer information from known to novel categories by incorporating external knowledge, such as linguistic or expert-specified information, e.g., by a mid-level layer of semantic attributes. Second, we exploit the manifold structure of novel classes. More specifically we adapt a graph-based learning algorithm - so far only used for semi-supervised learning - to zero-shot and few-shot learning. Third, we improve the local neighborhood in such graph structures by replacing the raw feature-based representation with a mid-level object- or attribute-based representation. We evaluate our approach on three challenging datasets in two different applications, namely on Animals with Attributes and ImageNet for image classification and on MPII Composites for activity recognition. Our approach consistently outperforms state-of-the-art transfer and semi-supervised approaches on all datasets.",
"Zero-shot learning (ZSL) can be considered as a special case of transfer learning where the source and target domains have different tasks label spaces and the target domain is unlabelled, providing little guidance for the knowledge transfer. A ZSL method typically assumes that the two domains share a common semantic representation space, where a visual feature vector extracted from an image video can be projected embedded using a projection function. Existing approaches learn the projection function from the source domain and apply it without adaptation to the target domain. They are thus based on naive knowledge transfer and the learned projections are prone to the domain shift problem. In this paper a novel ZSL method is proposed based on unsupervised domain adaptation. Specifically, we formulate a novel regularised sparse coding framework which uses the target domain class labels' projections in the semantic space to regularise the learned target domain projection thus effectively overcoming the projection domain shift problem. Extensive experiments on four object and action recognition benchmark datasets show that the proposed ZSL method significantly outperforms the state-of-the-arts.",
"Most existing zero-shot learning approaches exploit transfer learning via an intermediate semantic representation shared between an annotated auxiliary dataset and a target dataset with different classes and no annotation. A projection from a low-level feature space to the semantic representation space is learned from the auxiliary dataset and applied without adaptation to the target dataset. In this paper we identify two inherent limitations with these approaches. First, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset domain are biased when applied directly to the target dataset domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding , to solve it. The second limitation is the prototype sparsity problem which refers to the fact that for each target class, only a single prototype is available for zero-shot learning given a semantic representation. To overcome this problem, a novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space. It effectively exploits the complementary information offered by different semantic representations and takes advantage of the manifold structures of multiple representation spaces in a coherent manner. We demonstrate through extensive experiments that the proposed approach (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) significantly outperforms existing methods for both zero-shot and N-shot recognition on three image and video benchmark datasets, and (4) enables novel cross-view annotation tasks."
]
} |
1704.08345 | 2951313160 | Existing zero-shot learning (ZSL) models typically learn a projection function from a feature space to a semantic embedding space (e.g. attribute space). However, such a projection function is only concerned with predicting the training seen class semantic representation (e.g. attribute prediction) or classification. When applied to test data, which in the context of ZSL contains different (unseen) classes without training data, a ZSL model typically suffers from the project domain shift problem. In this work, we present a novel solution to ZSL based on learning a Semantic AutoEncoder (SAE). Taking the encoder-decoder paradigm, an encoder aims to project a visual feature vector into the semantic space as in the existing ZSL models. However, the decoder exerts an additional constraint, that is, the projection code must be able to reconstruct the original visual feature. We show that with this additional reconstruction constraint, the learned projection function from the seen classes is able to generalise better to the new unseen classes. Importantly, the encoder and decoder are linear and symmetric which enable us to develop an extremely efficient learning algorithm. Extensive experiments on six benchmark datasets demonstrate that the proposed SAE outperforms significantly the existing ZSL models with the additional benefit of lower computational cost. Furthermore, when the SAE is applied to supervised clustering problem, it also beats the state-of-the-art. | There are many variants of autoencoders in the literature @cite_5 @cite_68 @cite_11 @cite_28 @cite_29 @cite_34 . They can be roughly divided into two groups which are (1) undercomplete autoencoders and (2) overcomplete autoencoders. In general, undercomplete autoencoders are used to learn the underlying structure of data and used for visualisation clustering @cite_21 like PCA. 
In contrast, overcomplete autoencoders are used for classification based on the assumption that higher dimensional features are better for classification @cite_40 @cite_58 @cite_10 . Our model is an undercomplete autoencoder since a semantic space typically has lower dimensionality than that of a visual feature space. All the autoencoders above focus on learning features in an unsupervised manner. On the contrary, our approach is supervised while keeping the main characteristic of the unsupervised autoencoders, i.e. the ability to reconstruct the input signal. | {
"cite_N": [
"@cite_28",
"@cite_29",
"@cite_21",
"@cite_58",
"@cite_34",
"@cite_40",
"@cite_5",
"@cite_68",
"@cite_10",
"@cite_11"
],
"mid": [
"2025768430",
"2218318129",
"2964074409",
"2110798204",
"1498436455",
"2949821452",
"2078626246",
"",
"",
"2107789863"
],
"abstract": [
"Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.",
"We present in this paper a novel approach for training deterministic auto-encoders. We show that by adding a well chosen penalty term to the classical reconstruction cost function, we can achieve results that equal or surpass those attained by other regularized auto-encoders as well as denoising auto-encoders on a range of datasets. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input. We show that this penalty term results in a localized space contraction which in turn yields robust features on the activation layer. Furthermore, we show how this penalty term is related to both regularized auto-encoders and denoising auto-encoders and how it can be seen as a link between deterministic and non-deterministic auto-encoders. We find empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold. Finally, we show that by using the learned features to initialize a MLP, we achieve state of the art classification error on a range of datasets, surpassing other methods of pretraining.",
"Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms. Relatively little work has focused on learning representations for clustering. In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously learns feature representations and cluster assignments using deep neural networks. DEC learns a mapping from the data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.",
"Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.",
"We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure.",
"Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation. Recently, they have attained record accuracy on standard benchmark tasks of sentiment analysis across different text domains. SDAs learn robust data representations by reconstruction, recovering original features from data that are artificially corrupted with noise. In this paper, we propose marginalized SDA (mSDA) that addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features. In contrast to SDAs, our approach of mSDA marginalizes noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters; in fact, they are computed in closed-form. Consequently, mSDA, which can be implemented in only 20 lines of MATLAB(TM), significantly speeds up SDAs by two orders of magnitude. Furthermore, the representations learnt by mSDA are as effective as the traditional SDAs, attaining almost identical accuracies in benchmark tasks.",
"Abstract We consider the problem of learning from examples in layered linear feed-forward neural networks using optimization methods, such as back propagation, with respect to the usual quadratic error function E of the connection weights. Our main result is a complete description of the landscape attached to E in terms of principal component analysis. We show that E has a unique minimum corresponding to the projection onto the subspace generated by the first principal vectors of a covariance matrix associated with the training patterns. All the additional critical points of E are saddle points (corresponding to projections onto subspaces generated by higher order vectors). The auto-associative case is examined in detail. Extensions and implications for the learning algorithms are discussed.",
"",
"",
"In recent years, deep learning approaches have gained significant interest as a way of building hierarchical representations from unlabeled data. However, to our knowledge, these deep learning approaches have not been extensively studied for auditory data. In this paper, we apply convolutional deep belief networks to audio data and empirically evaluate them on various audio classification tasks. In the case of speech data, we show that the learned features correspond to phones/phonemes. In addition, our feature representations learned from unlabeled audio data show very good performance for multiple audio classification tasks. We hope that this paper will inspire more research on deep learning approaches applied to a wide range of audio recognition tasks."
]
} |
1704.08345 | 2951313160 | Existing zero-shot learning (ZSL) models typically learn a projection function from a feature space to a semantic embedding space (e.g. attribute space). However, such a projection function is only concerned with predicting the training seen class semantic representation (e.g. attribute prediction) or classification. When applied to test data, which in the context of ZSL contains different (unseen) classes without training data, a ZSL model typically suffers from the project domain shift problem. In this work, we present a novel solution to ZSL based on learning a Semantic AutoEncoder (SAE). Taking the encoder-decoder paradigm, an encoder aims to project a visual feature vector into the semantic space as in the existing ZSL models. However, the decoder exerts an additional constraint, that is, the projection code must be able to reconstruct the original visual feature. We show that with this additional reconstruction constraint, the learned projection function from the seen classes is able to generalise better to the new unseen classes. Importantly, the encoder and decoder are linear and symmetric which enable us to develop an extremely efficient learning algorithm. Extensive experiments on six benchmark datasets demonstrate that the proposed SAE outperforms significantly the existing ZSL models with the additional benefit of lower computational cost. Furthermore, when the SAE is applied to supervised clustering problem, it also beats the state-of-the-art. | An autoencoder is only one realisation of the encoder-decoder paradigm. Recently deep encoder-decoder has become popular for a variety of vision problems ranging from image segmentation @cite_24 to image synthesis @cite_18 @cite_47 . Among them, a few recent works also exploited the idea of applying semantic regularisation to the latent embedding space shared between the encoder and decoder @cite_18 @cite_47 . 
Our semantic autoencoder can be easily extended for end-to-end deep learning by formulating the encoder as a convolutional neural network and the decoder as a deconvolutional neural network with a reconstruction loss. | {
"cite_N": [
"@cite_24",
"@cite_18",
"@cite_47"
],
"mid": [
"2963881378",
"2963567641",
"2949999304"
],
"abstract": [
"We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/ .",
"This paper investigates a novel problem of generating images from visual attributes. We model the image as a composite of foreground and background and develop a layered generative model with disentangled latent variables that can be learned end-to-end using a variational auto-encoder. We experiment with natural images of faces and birds and demonstrate that the proposed models are capable of generating realistic and diverse samples with disentangled latent representations. We use a general energy minimization algorithm for posterior inference of latent variables given novel images. Therefore, the learned generative models show excellent quantitative and visual results in the tasks of attribute-conditioned image reconstruction and completion.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image model- ing, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions."
]
} |
1704.08345 | 2951313160 | Existing zero-shot learning (ZSL) models typically learn a projection function from a feature space to a semantic embedding space (e.g. attribute space). However, such a projection function is only concerned with predicting the training seen class semantic representation (e.g. attribute prediction) or classification. When applied to test data, which in the context of ZSL contains different (unseen) classes without training data, a ZSL model typically suffers from the project domain shift problem. In this work, we present a novel solution to ZSL based on learning a Semantic AutoEncoder (SAE). Taking the encoder-decoder paradigm, an encoder aims to project a visual feature vector into the semantic space as in the existing ZSL models. However, the decoder exerts an additional constraint, that is, the projection code must be able to reconstruct the original visual feature. We show that with this additional reconstruction constraint, the learned projection function from the seen classes is able to generalise better to the new unseen classes. Importantly, the encoder and decoder are linear and symmetric which enable us to develop an extremely efficient learning algorithm. Extensive experiments on six benchmark datasets demonstrate that the proposed SAE outperforms significantly the existing ZSL models with the additional benefit of lower computational cost. Furthermore, when the SAE is applied to supervised clustering problem, it also beats the state-of-the-art. | Supervised clustering methods exploit labelled clustering training dataset to learn a projection matrix that is shared by a test dataset unlike conventional clustering such as @cite_31 @cite_42 . There are different approaches of learning the projection matrix: 1) metric learning-based methods that use similarity and dissimilarity constraints @cite_66 @cite_45 @cite_39 @cite_63 , and 2) regression-based methods that use labels' @cite_20 @cite_12 . 
Our method is more closely related to the regression-based methods, because the training class labels are used to constrain the latent embedding space in our semantic autoencoder. We demonstrate in Sec that, similar to the ZSL problem, by adding the reconstruction constraint, significant improvements can be achieved by our model on supervised clustering. | {
"cite_N": [
"@cite_42",
"@cite_39",
"@cite_45",
"@cite_63",
"@cite_12",
"@cite_31",
"@cite_66",
"@cite_20"
],
"mid": [
"2613425563",
"2117154949",
"2068042582",
"",
"2475245514",
"",
"49238428",
"2182206537"
],
"abstract": [
"",
"Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many \"plausible\" ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the user to manually tweak the metric until sufficiently good clusters are found. For these and other applications requiring good metrics, it is desirable that we provide a more systematic way for users to indicate what they consider \"similar.\" For instance, we may ask them to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝn, learns a distance metric over ℝn that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows us to give efficient, local-optima-free algorithms. We also demonstrate empirically that the learned metrics can be used to significantly improve clustering performance.",
"In this paper, we raise important issues on scalability and the required degree of supervision of existing Mahalanobis metric learning methods. Often rather tedious optimization procedures are applied that become computationally intractable on a large scale. Further, if one considers the constantly growing amount of data it is often infeasible to specify fully supervised labels for all data points. Instead, it is easier to specify labels in form of equivalence constraints. We introduce a simple though effective strategy to learn a distance metric from equivalence constraints, based on a statistical inference perspective. In contrast to existing methods we do not rely on complex optimization problems requiring computationally expensive iterations. Hence, our method is orders of magnitudes faster than comparable methods. Results on a variety of challenging benchmarks with rather diverse nature demonstrate the power of our method. These include faces in unconstrained environments, matching before unseen object instances and person re-identification across spatially disjoint cameras. In the latter two benchmarks we clearly outperform the state-of-the-art.",
"",
"Clustering is the task of grouping a set of objects so that objects in the same cluster are more similar to each other than to those in other clusters. The crucial step in most clustering algorithms is to find an appropriate similarity metric, which is both challenging and problem-dependent. Supervised clustering approaches, which can exploit labeled clustered training data that share a common metric with the test set, have thus been proposed. Unfortunately, current metric learning approaches for supervised clustering do not scale to large or even medium-sized datasets. In this paper, we propose a new structured Mahalanobis Distance Metric Learning method for supervised clustering. We formulate our problem as an instance of large margin structured prediction and prove that it can be solved very efficiently in closed-form. The complexity of our method is (in most cases) linear in the size of the training dataset. We further reveal a striking similarity between our approach and multivariate linear regression. Experiments on both synthetic and real datasets confirm several orders of magnitude speedup while still achieving state-of-the-art performance.",
"",
"We consider unsupervised partitioning problems based explicitly or implicitly on the minimization of Euclidean distortions, such as clustering, image or video segmentation, and other change-point detection problems. We emphasize on cases with specific structure, which include many practical situations ranging from mean-based change-point detection to image segmentation problems. We aim at learning a Mahalanobis metric for these unsupervised problems, leading to feature weighting and or selection. This is done in a supervised way by assuming the availability of several (partially) labeled datasets that share the same metric. We cast the metric learning problem as a large-margin structured prediction problem, with proper definition of regularizers and losses, leading to a convex optimization problem which can be solved efficiently. Our experiments show how learning the metric can significantly improve performance on bioinformatics, video or image segmentation problems.",
"We are interested in supervised metric learning of Mahalanobis like distances. Existing approaches mainly focus on learning a new distance using similarity and dissimilarity constraints between examples. In this paper, instead of bringing closer examples of the same class and pushing far away examples of different classes we propose to move the examples with respect to virtual points. Hence, each example is brought closer to a a priori defined virtual point reducing the number of constraints to satisfy. We show that our approach admits a closed form solution which can be kernelized. We provide a theoretical analysis showing the consistency of the approach and establishing some links with other classical metric learning methods. Furthermore we propose an efficient solution to the difficult problem of selecting virtual points based in part on recent works in optimal transport. Lastly, we evaluate our approach on several state of the art datasets."
]
} |
1704.08292 | 2611160234 | Cross-modal audio-visual perception has been a long-lasting topic in psychology and neurology, and various studies have discovered strong correlations in human perception of auditory and visual stimuli. Despite works in computational multimodal modeling, the problem of cross-modal audio-visual generation has not been systematically studied in the literature. In this paper, we make the first attempt to solve this cross-modal generation problem leveraging the power of deep generative adversarial training. Specifically, we use conditional generative adversarial networks to achieve cross-modal audio-visual generation of musical performances. We explore different encoding methods for audio and visual signals, and work on two scenarios: instrument-oriented generation and pose-oriented generation. Being the first to explore this new problem, we compose two new datasets with pairs of images and sounds of musical performances of different instruments. Our experiments using both classification and human evaluations demonstrate that our model has the ability to generate one modality, i.e., audio/visual, from the other modality, i.e., visual/audio, to a good extent. Our experiments on various design choices along with the datasets will facilitate future research in this new problem space. | Generative Adversarial Networks (GANs) are introduced in the seminal work of @cite_19 , and consist of a generator network @math and a discriminator network @math . Given a distribution, @math is trained to generate samples that resemble this distribution, while @math is trained to distinguish whether the sample is genuine. They are trained in an adversarial fashion playing a min-max game against each other: where @math is the target data distribution and @math is drawn from a random noise distribution @math . | {
"cite_N": [
"@cite_19"
],
"mid": [
"2099471712"
],
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."
]
} |
1704.08292 | 2611160234 | Cross-modal audio-visual perception has been a long-lasting topic in psychology and neurology, and various studies have discovered strong correlations in human perception of auditory and visual stimuli. Despite works in computational multimodal modeling, the problem of cross-modal audio-visual generation has not been systematically studied in the literature. In this paper, we make the first attempt to solve this cross-modal generation problem leveraging the power of deep generative adversarial training. Specifically, we use conditional generative adversarial networks to achieve cross-modal audio-visual generation of musical performances. We explore different encoding methods for audio and visual signals, and work on two scenarios: instrument-oriented generation and pose-oriented generation. Being the first to explore this new problem, we compose two new datasets with pairs of images and sounds of musical performances of different instruments. Our experiments using both classification and human evaluations demonstrate that our model has the ability to generate one modality, i.e., audio/visual, from the other modality, i.e., visual/audio, to a good extent. Our experiments on various design choices along with the datasets will facilitate future research in this new problem space. | Conditional GANs @cite_2 @cite_32 are variants of GANs, where one is interested in directing the generation conditioned on some variables, e.g., labels in a dataset. It has the following form: where the only difference from GANs is the introduction of @math that represents the condition variable. This condition is passed to both the generator and the discriminator networks. One particular example is @cite_25 , where they use conditional GANs to generate images conditioned on text captions. The text captions are encoded through a recurrent neural network as in @cite_16 . In this paper, we use conditional GANs for cross-modal audio-visual generation. | {
"cite_N": [
"@cite_16",
"@cite_25",
"@cite_32",
"@cite_2"
],
"mid": [
"2951538594",
"2949999304",
"",
"2125389028"
],
"abstract": [
"State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch; i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech UCSD Birds 200-2011 dataset.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.",
"",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels."
]
} |
1704.08384 | 2896391192 | Existing question answering methods infer answers either from a knowledge base or from raw text. While knowledge base (KB) methods are good at answering compositional questions, their performance is often affected by the incompleteness of the KB. Au contraire, web text contains millions of facts that are absent in the KB, however in an unstructured form. Universal schema can support reasoning on the union of both structured KBs and unstructured text by aligning them in a common embedded space. In this paper we extend universal schema to natural language question answering, employing memory networks to attend to the large body of facts in the combination of text and KB. Our models can be trained in an end-to-end fashion on question-answer pairs. Evaluation results on fill-in-the-blank question answering dataset show that exploiting universal schema for question answering is better than using either a KB or text alone. This model also outperforms the current state-of-the-art by 8.5 @math points. Code and data available in this https URL | A majority of the QA literature that focused on exploiting KB and text either improves the inference on the KB using text-based features @cite_14 @cite_50 @cite_42 @cite_48 @cite_7 @cite_29 @cite_51 @cite_36 @cite_8 @cite_4 or improves the inference on text using KB @cite_39 . | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_36",
"@cite_48",
"@cite_29",
"@cite_42",
"@cite_39",
"@cite_50",
"@cite_51"
],
"mid": [
"147290778",
"2341824259",
"2251079237",
"2250630028",
"",
"2148721079",
"2952155763",
"2110207985",
"1646084575",
"",
"1934264538"
],
"abstract": [
"We present a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences. Our key observation is that multiple forms of weak supervision can be combined to train an accurate semantic parser: semantic supervision from a knowledge base, and syntactic supervision from dependency-parsed sentences. We apply our approach to train a semantic parser that uses 77 relations from Freebase in its knowledge representation. This semantic parser extracts instances of binary relations with state-of-the-art accuracy, while simultaneously recovering much richer semantic structures, such as conjunctions of multiple relations with partially shared arguments. We demonstrate recovery of this richer structure by extracting logical forms from natural language queries against Freebase. On this task, the trained semantic parser achieves 80% precision and 56% recall, despite never having seen an annotated logical form.",
"One of the major challenges for automated question answering over Knowledge Bases (KBQA) is translating a natural language question to the Knowledge Base (KB) entities and predicates. Previous systems have used a limited amount of training data to learn a lexicon that is later used for question answering. This approach does not make use of other potentially relevant text data, outside the KB, which could supplement the available information. We introduce a new system, Text2KB, that enriches question answering over a knowledge base by using external text data. Specifically, we revisit different phases in the KBQA process and demonstrate that text resources improve question interpretation, candidate generation and ranking. Building on a state-of-the-art traditional KBQA system, Text2KB utilizes web search results, community question answering and general text document collection data, to detect question topic entities, map question phrases to KB predicates, and to enrich the features of the candidates derived from the KB. Text2KB significantly improves performance over the baseline KBQA method, as measured on a popular WebQuestions dataset. The results and insights developed in this work can guide future efforts on combining textual and structured KB data for question answering.",
"We propose a novel semantic parsing framework for question answering using a knowledge base. We define a query graph that resembles subgraphs of the knowledge base and can be directly mapped to a logical form. Semantic parsing is reduced to query graph generation, formulated as a staged search problem. Unlike traditional approaches, our method leverages the knowledge base in an early stage to prune the search space and thus simplifies the semantic matching problem. By applying an advanced entity linking system and a deep convolutional neural network model that matches questions and predicate sequences, our system outperforms previous methods substantially, and achieves an F1 measure of 52.5 on the WEBQUESTIONS dataset.",
"We consider the problem of building scalable semantic parsers for Freebase, and present a new approach for learning to do partial analyses that ground as much of the input text as possible without requiring that all content words be mapped to Freebase concepts. We study this problem on two newly introduced large-scale noun phrase datasets, and present a new semantic parsing model and semi-supervised learning approach for reasoning with partial ontological support. Experiments demonstrate strong performance on two tasks: referring expression resolution and entity attribute extraction. In both cases, the partial analyses allow us to improve precision over strong baselines, while parsing many phrases that would be ignored by existing techniques.",
"",
"Answering natural language questions using the Freebase knowledge base has recently been explored as a platform for advancing the state of the art in open domain semantic parsing. Those efforts map questions to sophisticated meaning representations that are then attempted to be matched against viable answer candidates in the knowledge base. Here we show that relatively modest information extraction techniques, when paired with a webscale corpus, can outperform these sophisticated approaches by roughly 34% relative gain.",
"Knowledge base (KB) completion adds new facts to a KB by making inferences from existing facts, for example by inferring with high likelihood nationality(X,Y) from bornIn(X,Y). Most previous methods infer simple one-hop relational synonyms like this, or use as evidence a multi-hop relational path treated as an atomic feature, like bornIn(X,Z) -> containedIn(Z,Y). This paper presents an approach that reasons about conjunctions of multi-hop relations non-atomically, composing the implications of a path using a recursive neural network (RNN) that takes as inputs vector embeddings of the binary relation in the path. Not only does this allow us to generalize to paths unseen at training time, but also, with a single high-capacity RNN, to predict new relation types not seen when the compositional model was trained (zero-shot learning). We assemble a new dataset of over 52M relational triples, and show that our method improves over a traditional classifier by 11%, and a method leveraging pre-trained embeddings by 7%.",
"Much recent work focuses on formal interpretation of natural question utterances, with the goal of executing the resulting structured queries on knowledge graphs (KGs) such as Freebase. Here we address two limitations of this approach when applied to open-domain, entity-oriented Web queries. First, Web queries are rarely wellformed questions. They are “telegraphic”, with missing verbs, prepositions, clauses, case and phrase clues. Second, the KG is always incomplete, unable to directly answer many queries. We propose a novel technique to segment a telegraphic query and assign a coarse-grained purpose to each segment: a base entity e1, a relation type r, a target entity type t2, and contextual words s. The query seeks entity e2 2 t2 where r(e1,e2) holds, further evidenced by schema-agnostic words s. Query segmentation is integrated with the KG and an unstructured corpus where mentions of entities have been linked to the KG. We do not trust the best or any specific query segmentation. Instead, evidence in favor of candidate e2s are aggregated across several segmentations. Extensive experiments on the ClueWeb corpus and parts of Freebase as our KG, using over a thousand telegraphic queries adapted from TREC, INEX, and WebQuestions, show the efficacy of our approach. For one benchmark, MAP improves from 0.2‐0.29 (competitive baselines) to 0.42 (our system). NDCG@10 improves from 0.29‐0.36 to 0.54.",
"Most recent question answering (QA) systems query large-scale knowledge bases (KBs) to answer a question, after parsing and transforming natural language questions to KBs-executable forms (e.g., logical forms). As a well-known fact, KBs are far from complete, so that information required to answer questions may not always exist in KBs. In this paper, we develop a new QA system that mines answers directly from the Web, and meanwhile employs KBs as a significant auxiliary to further boost the QA performance. Specifically, to the best of our knowledge, we make the first attempt to link answer candidates to entities in Freebase, during answer candidate generation. Several remarkable advantages follow: (1) Redundancy among answer candidates is automatically reduced. (2) The types of an answer candidate can be effortlessly determined by those of its corresponding entity in Freebase. (3) Capitalizing on the rich information about entities in Freebase, we can develop semantic features for each answer candidate after linking them to Freebase. Particularly, we construct answer-type related features with two novel probabilistic models, which directly evaluate the appropriateness of an answer candidate's types under a given question. Overall, such semantic features turn out to play significant roles in determining the true answers from the large answer candidate pool. The experimental results show that across two testing datasets, our QA system achieves an 18%-54% improvement under F_1 metric, compared with various existing QA systems.",
"",
"Path queries on a knowledge graph can be used to answer compositional questions such as \"What languages are spoken by people living in Lisbon?\". However, knowledge graphs often have missing facts (edges) which disrupts path queries. Recent models for knowledge base completion impute missing facts by embedding knowledge graphs in vector spaces. We show that these models can be recursively applied to answer path queries, but that they suffer from cascading errors. This motivates a new \"compositional\" training objective, which dramatically improves all models' ability to answer path queries, in some cases more than doubling accuracy. On a standard knowledge base completion task, we also demonstrate that compositional training acts as a novel form of structural regularization, reliably improving performance across all base models (reducing errors by up to 43 ) and achieving new state-of-the-art results."
]
} |
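The related_work in the row above describes universal schema as aligning KB relations and textual patterns "in a common embedded space". A minimal sketch of that scoring idea, assuming a matrix-factorization-style model in which a fact is the dot product of an entity-pair embedding and a relation embedding; all embeddings are random and all names are invented for illustration:

```python
import numpy as np

# Universal-schema-style scoring sketch (not the paper's full model):
# KB relations and raw textual patterns share one embedding space, so the
# same scoring function applies to both.
rng = np.random.default_rng(0)
dim = 8
pairs = {("Obama", "USA"): rng.normal(size=dim)}
relations = {
    "bornIn": rng.normal(size=dim),          # structured KB relation
    "X was born in Y": rng.normal(size=dim), # surface text pattern
}

def score(pair, relation):
    """Compatibility of an entity pair with a KB relation or text pattern."""
    return float(pairs[pair] @ relations[relation])

s_kb = score(("Obama", "USA"), "bornIn")
s_text = score(("Obama", "USA"), "X was born in Y")
print(s_kb, s_text)
```

Because both fact sources are scored in the same space, reasoning can fall back on text when the KB is incomplete, which is the property the row's abstract exploits for question answering.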
1704.08384 | 2896391192 | Existing question answering methods infer answers either from a knowledge base or from raw text. While knowledge base (KB) methods are good at answering compositional questions, their performance is often affected by the incompleteness of the KB. Au contraire, web text contains millions of facts that are absent in the KB, however in an unstructured form. Universal schema can support reasoning on the union of both structured KBs and unstructured text by aligning them in a common embedded space. In this paper we extend universal schema to natural language question answering, employing to attend to the large body of facts in the combination of text and KB. Our models can be trained in an end-to-end fashion on question-answer pairs. Evaluation results on fill-in-the-blank question answering dataset show that exploiting universal schema for question answering is better than using either a KB or text alone. This model also outperforms the current state-of-the-art by 8.5 @math points. Code and data available in this https URL | A few QA methods infer on curated databases combined with OpenIE triples @cite_23 @cite_43 @cite_44 . Our work differs from them in two ways: 1) we do not need an explicit database query to retrieve the answers @cite_46 @cite_53 ; and 2) our text-based facts retain complete sentential context unlike the OpenIE triples @cite_9 @cite_5 . | {
"cite_N": [
"@cite_53",
"@cite_9",
"@cite_44",
"@cite_43",
"@cite_23",
"@cite_5",
"@cite_46"
],
"mid": [
"2230472587",
"2127978399",
"",
"2295522710",
"2090243146",
"1512387364",
"2214429195"
],
"abstract": [
"We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning, with only (world, question, answer) triples as supervision. Our approach, which we term a dynamic neural model network, achieves state-of-the-art results on benchmark datasets in both visual and structured domains.",
"To implement open information extraction, a new extraction paradigm has been developed in which a system makes a single data-driven pass over a corpus of text, extracting a large set of relational tuples without requiring any human input. Using training data, a Self-Supervised Learner employs a parser and heuristics to determine criteria that will be used by an extraction classifier (or other ranking model) for evaluating the trustworthiness of candidate tuples that have been extracted from the corpus of text, by applying heuristics to the corpus of text. The classifier retains tuples with a sufficiently high probability of being trustworthy. A redundancy-based assessor assigns a probability to each retained tuple to indicate a likelihood that the retained tuple is an actual instance of a relationship between a plurality of objects comprising the retained tuple. The retained tuples comprise an extraction graph that can be queried for information.",
"",
"Entity search over text corpora is not geared for relationship queries where answers are tuples of related entities and where a query often requires joining cues from multiple documents. With large knowledge graphs, structured querying on their relational facts is an alternative, but often suffers from poor recall because of mismatches between user queries and the knowledge graph or because of weakly populated relations. This paper presents the TriniT search engine for querying and ranking on extended knowledge graphs that combine relational facts with textual web contents. Our query language is designed on the paradigm of SPO triple patterns, but is more expressive, supporting textual phrases for each of the SPO arguments. We present a model for automatic query relaxation to compensate for mismatches between the data and a user's query. Query answers -- tuples of entities -- are ranked by a statistical language model. We present experiments with different benchmarks, including complex relationship queries, over a combination of the Yago knowledge graph and the entity-annotated ClueWeb'09 corpus.",
"We consider the problem of open-domain question answering (Open QA) over massive knowledge bases (KBs). Existing approaches use either manually curated KBs like Freebase or KBs automatically extracted from unstructured text. In this paper, we present OQA, the first approach to leverage both curated and extracted KBs. A key technical challenge is designing systems that are robust to the high variability in both natural language questions and massive KBs. OQA achieves robustness by decomposing the full Open QA problem into smaller sub-problems including question paraphrasing and query reformulation. OQA solves these sub-problems by mining millions of rules from an unlabeled question corpus and across multiple KBs. OQA then learns to integrate these rules by performing discriminative training on question-answer pairs using a latent-variable structured perceptron algorithm. We evaluate OQA on three benchmark question sets and demonstrate that it achieves up to twice the precision and recall of a state-of-the-art Open QA system.",
"We consider here the problem of building a never-ending language learner; that is, an intelligent computer agent that runs forever and that each day must (1) extract, or read, information from the web to populate a growing structured knowledge base, and (2) learn to perform this task better than on the previous day. In particular, we propose an approach and a set of design principles for such an agent, describe a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs with an estimated precision of 74% after running for 67 days, and discuss lessons learned from this preliminary attempt to build a never-ending learning agent.",
"Deep neural networks have achieved impressive supervised classification performance in many tasks including image recognition, speech recognition, and sequence to sequence learning. However, this success has not been translated to applications like question answering that may involve complex arithmetic and logic reasoning. A major limitation of these models is in their inability to learn even simple arithmetic and logic operations. For example, it has been shown that neural networks fail to learn to add two binary numbers reliably. In this work, we propose Neural Programmer, an end-to-end differentiable neural network augmented with a small set of basic arithmetic and logic operations. Neural Programmer can call these augmented operations over several steps, thereby inducing compositional programs that are more complex than the built-in operations. The model learns from a weak supervision signal which is the result of execution of the correct program, hence it does not require expensive annotation of the correct program itself. The decisions of what operations to call, and what data segments to apply to are inferred by Neural Programmer. Such decisions, during training, are done in a differentiable fashion so that the entire network can be trained jointly by gradient descent. We find that training the model is difficult, but it can be greatly improved by adding random noise to the gradient. On a fairly complex synthetic table-comprehension dataset, traditional recurrent networks and attentional models perform poorly while Neural Programmer typically obtains nearly perfect accuracy."
]
} |
1704.08464 | 2759158682 | A ranking is an ordered sequence of items, in which an item with higher ranking score is more preferred than the items with lower ranking scores. In many information systems, rankings are widely used to represent the preferences over a set of items or candidates. The consensus measure of rankings is the problem of how to evaluate the degree to which the rankings agree. The consensus measure can be used to evaluate rankings in many information systems, as quite often there is not ground truth available for evaluation. This paper introduces a novel approach for consensus measure of rankings by using graph representation, in which the vertices or nodes are the items and the edges are the relationship of items in the rankings. Such representation leads to various algorithms for consensus measure in terms of different aspects of rankings, including the number of common patterns, the number of common patterns with fixed length and the length of the longest common patterns. The proposed measure can be adopted for various types of rankings, such as full rankings, partial rankings and rankings with ties. This paper demonstrates how the proposed approaches can be used to evaluate the quality of rank aggregation and the quality of top- @math rankings from Google and Bing search engines. | These functions are pairwise comparisons and they can be transferred into a consensus measure for a set @math of @math rankings by aggregating the pairwise distance values across all rankings. For example, one can use @math if the Kendall index is preferred. However, this aggregated result is not informative enough to tell the extent to which the rankings agree in @math , according to the study by @cite_23 . | {
"cite_N": [
"@cite_23"
],
"mid": [
"2159414447"
],
"abstract": [
"This paper deals with the measurement of concordance and the construction of consensus in preference data, either in the form of preference rankings or in the form of response distributions with Likert-items. We propose a set of axioms of concordance in preference orderings and a new class of concordance measures. The measures outperform classic measures like Kendall's τ and W and Spearman's ρ in sensitivity and apply to large sets of orderings instead of just to pairs of orderings. For sets of N orderings of n items, we present very efficient and flexible algorithms that have a time complexity of only O(Nn^2). Remarkably, the algorithms also allow for fast calculation of all longest common subsequences of the full set of orderings. We experimentally demonstrate the performance of the algorithms. A new and simple measure for assessing concordance on Likert-items is proposed."
]
} |
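The aggregation strategy described in this row's related_work (summing pairwise Kendall distances over all ranking pairs in the set) can be sketched as follows; the example rankings are invented. As the cited study notes, the single aggregate number cannot distinguish structurally different sets of rankings:

```python
from itertools import combinations

def kendall_distance(r1, r2):
    """Number of item pairs ordered differently by the two full rankings."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    return sum(
        1
        for a, b in combinations(pos1, 2)
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0  # discordant pair
    )

def aggregated_kendall(rankings):
    """Sum of pairwise Kendall distances across all rankings in the set."""
    return sum(kendall_distance(r1, r2) for r1, r2 in combinations(rankings, 2))

rankings = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(aggregated_kendall(rankings))
```

This is the pairwise aggregation baseline the paper argues against; the paper's own graph-based measures instead examine common patterns shared by all rankings at once.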
1704.08088 | 2609141386 | Mild Cognitive Impairment (MCI) is a mental disorder difficult to diagnose. Linguistic features, mainly from parsers, have been used to detect MCI, but this is not suitable for large-scale assessments. MCI disfluencies produce non-grammatical speech that requires manual or high precision automatic correction of transcripts. In this paper, we modeled transcripts into complex networks and enriched them with word embedding (CNE) to better represent short texts produced in neuropsychological assessments. The network measurements were applied with well-known classifiers to automatically identify MCI in transcripts, in a binary classification task. A comparison was made with the performance of traditional approaches using Bag of Words (BoW) and linguistic features for three datasets: DementiaBank in English, and Cinderella and Arizona-Battery in Portuguese. Overall, CNE provided higher accuracy than using only complex networks, while Support Vector Machine was superior to other classifiers. CNE provided the highest accuracies for DementiaBank and Cinderella, but BoW was more efficient for the Arizona-Battery dataset probably owing to its short narratives. The approach using linguistic features yielded higher accuracy if the transcriptions of the Cinderella dataset were manually revised. Taken together, the results indicate that complex networks enriched with embedding is promising for detecting MCI in large-scale assessments | Detection of memory impairment has been based on linguistic, acoustic, and demographic features, in addition to scores of neuropsychological tests. Linguistic and acoustic features were used to automatically detect aphasia @cite_11 ; and AD @cite_10 or dementia @cite_2 in the public corpora of DementiaBank (talkbank.org). Other studies distinguished different types of dementia @cite_8 @cite_0 , in which speech samples were elicited using the Picnic picture of the Western Aphasia Battery @cite_13 . 
also used the Picnic scene to detect MCI, where the subjects were asked to write (by hand) a detailed description of the scene. | {
"cite_N": [
"@cite_13",
"@cite_8",
"@cite_0",
"@cite_2",
"@cite_10",
"@cite_11"
],
"mid": [
"",
"2101234009",
"2252171711",
"1529527003",
"1853705225",
"2063042856"
],
"abstract": [
"",
"Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http: scikit-learn.sourceforge.net.",
"This pilot study evaluates the ability of machine learning algorithms to assist with the differential diagnosis of dementia subtypes based on brief (< 10 min) spontaneous speech samples. We analyzed recordings of a brief spontaneous speech sample from 48 participants from 5 different groups: 4 types of dementia plus healthy controls. Recordings were analyzed using a speech recognition system optimized for speaker-independent spontaneous speech. Lexical and acoustic features were automatically extracted. The resulting feature profiles were used as input to a machine learning system that was trained to identify the diagnosis assigned to each research participant. Between-group differences in lexical and acoustic features were detected in accordance with expectations from prior research literature, suggesting that classifications were based on features consistent with human-observed symptomatology. Machine learning algorithms were able to identify participants' diagnostic group with accuracy comparable to existing diagnostic methods in use today. Results suggest this clinical speech analytic approach offers promise as an additional, objective and easily obtained source of diagnostic information for clinicians.",
"BACKGROUND: Alzheimer's disease (AD) patients show early changes in white matter (WM) structural integrity. We studied the use of diffusion tensor imaging (DTI) in assessing WM alterations in the predementia stage of mild cognitive impairment (MCI). METHODS: We applied a Support Vector Machine (SVM) classifier to DTI and volumetric magnetic resonance imaging data from 35 amyloid-β42 negative MCI subjects (MCI-Aβ42-), 35 positive MCI subjects (MCI-Aβ42+), and 25 healthy controls (HC) retrieved from the European DTI Study on Dementia. The SVM was applied to DTI-derived fractional anisotropy, mean diffusivity (MD), and mode of anisotropy (MO) maps. For comparison, we studied classification based on gray matter (GM) and WM volume. RESULTS: We obtained accuracies of up to 68% for MO and 63% for GM volume when it came to distinguishing between MCI-Aβ42- and MCI-Aβ42+. When it came to separating MCI-Aβ42+ from HC we achieved an accuracy of up to 77% for MD and a significantly lower accuracy of 68% for GM volume. The accuracy of multimodal classification was not higher than the accuracy of the best single modality. CONCLUSIONS: Our results suggest that DTI data provide better prediction accuracy than GM volume in predementia AD.",
"Although memory impairment is the main symptom of Alzheimer's disease (AD), language impairment can be an important marker. Relatively few studies of language in AD quantify the impairments in connected speech using computational techniques. We aim to demonstrate state-of-the-art accuracy in automatically identifying Alzheimer's disease from short narrative samples elicited with a picture description task, and to uncover the salient linguistic factors with a statistical factor analysis. Data are derived from the DementiaBank corpus, from which 167 patients diagnosed with \"possible\" or \"probable\" AD provide 240 narrative samples, and 97 controls provide an additional 233. We compute a number of linguistic variables from the transcripts, and acoustic variables from the associated audio files, and use these variables to train a machine learning classifier to distinguish between participants with AD and healthy controls. To examine the degree of heterogeneity of linguistic impairments in AD, we follow an exploratory factor analysis on these measures of speech and language with an oblique promax rotation, and provide interpretation for the resulting factors. We obtain state-of-the-art classification accuracies of over 81% in distinguishing individuals with AD from those without based on short samples of their language on a picture description task. Four clear factors emerge: semantic impairment, acoustic abnormality, syntactic impairment, and information impairment. Modern machine learning and linguistic analysis will be increasingly useful in assessment and clustering of suspected AD.",
"Abstract In the early stages of neurodegenerative disorders, individuals may exhibit a decline in language abilities that is difficult to quantify with standardized tests. Careful analysis of connected speech can provide valuable information about a patient's language capacities. To date, this type of analysis has been limited by its time-consuming nature. In this study, we present a method for evaluating and classifying connected speech in primary progressive aphasia using computational techniques. Syntactic and semantic features were automatically extracted from transcriptions of narrative speech for three groups: semantic dementia (SD), progressive nonfluent aphasia (PNFA), and healthy controls. Features that varied significantly between the groups were used to train machine learning classifiers, which were then tested on held-out data. We achieved accuracies well above baseline on the three binary classification tasks. An analysis of the influential features showed that in contrast with controls, both patient groups tended to use words which were higher in frequency (especially nouns for SD, and verbs for PNFA). The SD patients also tended to use words (especially nouns) that were higher in familiarity, and they produced fewer nouns, but more demonstratives and adverbs, than controls. The speech of the PNFA group tended to be slower and incorporate shorter words than controls. The patient groups were distinguished from each other by the SD patients' relatively increased use of words which are high in frequency and or familiarity."
]
} |
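A minimal sketch of the complex-network modeling step this row's abstract describes: a transcript is turned into a word-adjacency network whose measurements feed a classifier. The embedding enrichment (CNE) is omitted, the transcript sentence is invented, and only one network measurement (average degree) is shown:

```python
from collections import defaultdict

def word_adjacency_network(transcript, window=1):
    """Undirected co-occurrence network: nodes are word types, edges link
    words appearing within `window` tokens of each other."""
    tokens = transcript.lower().split()
    adj = defaultdict(set)
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            if tokens[j] != w:          # no self-loops
                adj[w].add(tokens[j])
                adj[tokens[j]].add(w)
    return adj

def average_degree(adj):
    """One of many network measurements usable as a classifier feature."""
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

net = word_adjacency_network("the prince found the slipper and the prince smiled")
print(len(net), average_degree(net))
```

In the paper's pipeline a vector of such measurements per transcript would be passed to SVM or another classifier for the MCI-vs-control decision.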
1704.07820 | 2607568753 | We study unsupervised learning by developing introspective generative modeling (IGM) that attains a generator using progressively learned deep convolutional neural networks. The generator is itself a discriminator, capable of introspection: being able to self-evaluate the difference between its generated samples and the given training data. When followed by repeated discriminative learning, desirable properties of modern discriminative classifiers are directly inherited by the generator. IGM learns a cascade of CNN classifiers using a synthesis-by-classification algorithm. In the experiments, we observe encouraging results on a number of applications including texture modeling, artistic style transferring, face modeling, and semi-supervised learning. | Our introspective generative modeling (IGM) algorithm has connections to many existing approaches including the MinMax entropy work for texture modeling @cite_48 , the hybrid modeling work @cite_43 , and the self-supervised boosting algorithm @cite_8 . It builds on top of convolutional neural networks @cite_15 and we are particularly inspired by two lines of prior algorithms: the generative modeling via discriminative approach method (GDL) @cite_50 , and the DeepDream code @cite_49 and the neural artistic style work @cite_34 . The general pipeline of IGM is similar to that of GDL @cite_50 , with the boosting algorithm used in @cite_50 replaced by a CNN in IGM. More importantly, the work of @cite_49 @cite_34 motivates us to significantly improve the time-consuming sampling process in @cite_50 by an efficient SGD process via backpropagation (the reason for us to say "all backpropagation"). Next, we review some existing generative image modeling work, followed by detailed discussions about the two most related algorithms: GDL @cite_50 and the recent development of generative adversarial networks (GAN) @cite_22 . | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_48",
"@cite_43",
"@cite_50",
"@cite_49",
"@cite_15",
"@cite_34"
],
"mid": [
"",
"2103504567",
"2126174118",
"",
"2163176424",
"",
"2147800946",
"1924619199"
],
"abstract": [
"",
"Boosting algorithms and successful applications thereof abound for classification and regression learning problems, but not for unsupervised learning. We propose a sequential approach to adding features to a random field model by training them to improve classification performance between the data and an equal-sized sample of \"negative examples\" generated from the model's current estimate of the data density. Training in each boosting round proceeds in three stages: first we sample negative examples from the model's current Boltzmann distribution. Next, a feature is trained to improve classification performance between data and negative examples. Finally, a coefficient is learned which determines the importance of this feature relative to ones already in the pool. Negative examples only need to be generated once to learn each new feature. The validity of the approach is demonstrated on binary digits and continuous synthetic data.",
"This article proposes a general theory and methodology, called the minimax entropy principle, for building statistical models for images (or signals) in a variety of applications. This principle consists of two parts. The first is the maximum entropy principle for feature binding (or fusion): for a given set of observed feature statistics, a distribution can be built to bind these feature statistics together by maximizing the entropy over all distributions that reproduce them. The second part is the minimum entropy principle for feature selection: among all plausible sets of feature statistics, we choose the set whose maximum entropy distribution has the minimum entropy. Computational and inferential issues in both parts are addressed; in particular, a feature pursuit procedure is proposed for approximately selecting the optimal set of features. The minimax entropy principle is then corrected by considering the sample variation in the observed feature statistics, and an information criterion for feature pursuit is derived. The minimax entropy principle is applied to texture modeling, where a novel Markov random field (MRF) model, called FRAME (filter, random field, and minimax entropy), is derived, and encouraging results are obtained in experiments on a variety of texture images. The relationship between our theory and the mechanisms of neural computation is also discussed.",
"",
"Generative model learning is one of the key problems in machine learning and computer vision. Currently the use of generative models is limited due to the difficulty in effectively learning them. A new learning framework is proposed in this paper which progressively learns a target generative distribution through discriminative approaches. This framework provides many interesting aspects to the literature. From the generative model side: (1) A reference distribution is used to assist the learning process, which removes the need for a sampling processes in the early stages. (2) The classification power of discriminative approaches, e.g. boosting, is directly utilized. (3) The ability to select explore features from a large candidate pool allows us to make nearly no assumptions about the training data. From the discriminative model side: (1) This framework improves the modeling capability of discriminative models. (2) It can start with source training data only and gradually \"invent\" negative samples. (3) We show how sampling schemes can be introduced to discriminative models. (4) The learning procedure helps to tighten the decision boundaries for classification, and therefore, improves robustness. In this paper, we show a variety of applications including texture modeling and classification, non-photorealistic rendering, learning image statistics denoising, and face modeling. The framework handles both homogeneous patterns, e.g. textures, and inhomogeneous patterns, e.g. faces, with nearly an identical parameter setting for all the tasks in the learning stage.",
"",
"The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.",
"In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery."
]
} |
1704.07820 | 2607568753 | We study unsupervised learning by developing introspective generative modeling (IGM) that attains a generator using progressively learned deep convolutional neural networks. The generator is itself a discriminator, capable of introspection: being able to self-evaluate the difference between its generated samples and the given training data. When followed by repeated discriminative learning, desirable properties of modern discriminative classifiers are directly inherited by the generator. IGM learns a cascade of CNN classifiers using a synthesis-by-classification algorithm. In the experiments, we observe encouraging results on a number of applications including texture modeling, artistic style transferring, face modeling, and semi-supervised learning. | The history of generative modeling on image or non-image domains is extremely rich, including the general image pattern theory @cite_19 , deformable models @cite_16 , inducing features @cite_18 , wake-sleep @cite_25 , the MiniMax entropy theory @cite_48 , the field of experts @cite_42 , Bayesian models @cite_9 , and deep belief nets @cite_12 . Each of these pioneering works points to some promising direction to unsupervised generative modeling. However the modeling power of these existing frameworks is still somewhat limited in computational and or representational aspects. In addition, not too many of them sufficiently explore the power of discriminative modeling. Recent works that adopt convolutional neural networks for generative modeling @cite_0 either use CNNs as a feature extractor or create separate paths @cite_6 @cite_13 . The neural artistic transferring work @cite_34 has demonstrated impressive results on the image transferring and texture synthesis tasks but it is focused @cite_34 on a careful study of channels attributed to artistic texture patterns, instead of aiming to build a generic image modeling framework. 
The self-supervised boosting work @cite_8 sequentially learns weak classifiers under boosting @cite_4 for density estimation, but its modeling power was not adequately demonstrated. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_48",
"@cite_9",
"@cite_42",
"@cite_6",
"@cite_34",
"@cite_0",
"@cite_19",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"",
"1988790447",
"2103504567",
"2126174118",
"",
"2131686571",
"2524047397",
"1924619199",
"2949457404",
"2050777895",
"141043617",
"2952226636",
"1993845689",
"2136922672"
],
"abstract": [
"",
"In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone?Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in Rn. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.",
"Boosting algorithms and successful applications thereof abound for classification and regression learning problems, but not for unsupervised learning. We propose a sequential approach to adding features to a random field model by training them to improve classification performance between the data and an equal-sized sample of \"negative examples\" generated from the model's current estimate of the data density. Training in each boosting round proceeds in three stages: first we sample negative examples from the model's current Boltzmann distribution. Next, a feature is trained to improve classification performance between data and negative examples. Finally, a coefficient is learned which determines the importance of this feature relative to ones already in the pool. Negative examples only need to be generated once to learn each new feature. The validity of the approach is demonstrated on binary digits and continuous synthetic data.",
"This article proposes a general theory and methodology, called the minimax entropy principle, for building statistical models for images (or signals) in a variety of applications. This principle consists of two parts. The first is the maximum entropy principle for feature binding (or fusion): for a given set of observed feature statistics, a distribution can be built to bind these feature statistics together by maximizing the entropy over all distributions that reproduce them. The second part is the minimum entropy principle for feature selection: among all plausible sets of feature statistics, we choose the set whose maximum entropy distribution has the minimum entropy. Computational and inferential issues in both parts are addressed; in particular, a feature pursuit procedure is proposed for approximately selecting the optimal set of features. The minimax entropy principle is then corrected by considering the sample variation in the observed feature statistics, and an information criterion for feature pursuit is derived. The minimax entropy principle is applied to texture modeling, where a novel Markov random field (MRF) model, called FRAME (filter, random field, and minimax entropy), is derived, and encouraging results are obtained in experiments on a variety of texture images. The relationship between our theory and the mechanisms of neural computation is also discussed.",
"",
"We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach extends traditional Markov random field (MRF) models by learning potential functions over extended pixel neighborhoods. Field potentials are modeled using a Products-of-Experts framework that exploits nonlinear functions of many linear filter responses. In contrast to previous MRF approaches all parameters, including the linear filters themselves, are learned from training data. We demonstrate the capabilities of this Field of Experts model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. While the model is trained on a generic image database and is not tuned toward a specific application, we obtain results that compete with and even outperform specialized techniques.",
"This paper studies the cooperative training of two generative models for image modeling and synthesis. Both models are parametrized by convolutional neural networks (ConvNets). The first model is a deep energy-based model, whose energy function is defined by a bottom-up ConvNet, which maps the observed image to the energy. We call it the descriptor network. The second model is a generator network, which is a non-linear version of factor analysis. It is defined by a top-down ConvNet, which maps the latent factors to the observed image. The maximum likelihood learning algorithms of both models involve MCMC sampling such as Langevin dynamics. We observe that the two learning algorithms can be seamlessly interwoven into a cooperative learning algorithm that can train both models simultaneously. Specifically, within each iteration of the cooperative learning algorithm, the generator model generates initial synthesized examples to initialize a finite-step MCMC that samples and trains the energy-based descriptor model. After that, the generator model learns from how the MCMC changes its synthesized examples. That is, the descriptor model teaches the generator model by MCMC, so that the generator model accumulates the MCMC transitions and reproduces them by direct ancestral sampling. We call this scheme MCMC teaching. We show that the cooperative algorithm can learn highly realistic generative models.",
"In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.",
"We show that a generative random field model, which we call generative ConvNet, can be derived from the commonly used discriminative ConvNet, by assuming a ConvNet for multi-category classification and assuming one of the categories is a base category generated by a reference distribution. If we further assume that the non-linearity in the ConvNet is Rectified Linear Unit (ReLU) and the reference distribution is Gaussian white noise, then we obtain a generative ConvNet model that is unique among energy-based models: The model is piecewise Gaussian, and the means of the Gaussian pieces are defined by an auto-encoder, where the filters in the bottom-up encoding become the basis functions in the top-down decoding, and the binary activation variables detected by the filters in the bottom-up convolution process become the coefficients of the basis functions in the top-down deconvolution process. The Langevin dynamics for sampling the generative ConvNet is driven by the reconstruction error of this auto-encoder. The contrastive divergence learning of the generative ConvNet reconstructs the training images by the auto-encoder. The maximum likelihood learning algorithm can synthesize realistic natural image patterns.",
"Keywords: mathematiques ; structures Reference Record created on 2005-11-18, modified on 2016-08-08",
"",
"recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.",
"An unsupervised learning algorithm for a multilayer network of stochastic neurons is described. Bottom-up \"recognition\" connections convert the input into representations in successive hidden layers, and top-down \"generative\" connections reconstruct the representation in one layer from the representation in the layer above. In the \"wake\" phase, neurons are driven by recognition connections, and generative connections are adapted to increase the probability that they would reconstruct the correct activity vector in the layer below. In the \"sleep\" phase, neurons are driven by generative connections, and recognition connections are adapted to increase the probability that they would produce the correct activity vector in the layer above.",
"We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind."
]
} |
1704.08152 | 2610153639 | Recent regulatory changes proposed by the Federal Communications Commission (FCC) permitting unlicensed use of television white space (TVWS) channels present new opportunities for designing wireless networks that make efficient use of this spectrum. The favorable propagation characteristics of these channels and their widespread availability, especially in rural areas, make them well-suited for providing broadband services in sparsely populated regions where economic factors hinder deployment of such services on licensed spectrum. In this context, this paper explores the deployment of an outdoor Wi-Fi-like network operating in TVWS channels, referred to commonly as a Super Wi-Fi network. Since regulations governing unlicensed use of these channels allow (a) mounting fixed devices up to a height of 30 m and operation at transmit powers of up to 4 W EIRP, and (b) operation at transmit powers of up to 100 mW EIRP for portable devices, such networks can provide extended coverage and higher rates than traditional Wi-Fi networks. However, these gains are subject to the viability of the uplink from the portable devices (clients) to the fixed devices (access points (AP)) because of tighter restrictions on transmit power of clients compared to APs. This paper leverages concepts from stochastic geometry to study the performance of such networks with specific focus on the effect of (a) transmit power asymmetry between APs and clients and its impact on uplink viability and coverage, and (b) the interplay between height and transmit power of APs in determining the network throughput. Such an analysis reveals that (a) maximum coverage of no more than 700 m is obtained even when APs are deployed at 30 m height, and (b) operating APs at transmit power of more than 1 W is beneficial only at sparse deployment densities when rate is prioritized over coverage. 
| Among the first efforts to theoretically analyze traditional Wi-Fi networks were the analyses presented in @cite_4 and @cite_23 to accurately model the 802.11 protocol. While these efforts captured finer aspects of the CSMA CA protocol (e.g., exponential backoff), spatial aspects of the wireless medium are not modeled. Stochastic geometry provides a natural framework to analyze wireless networks while retaining the spatial characteristics of signal propagation. The use of stochastic geometry for modeling and analyzing wireless networks started with the extensive analysis of ALOHA @cite_6 @cite_27 . In particular, @cite_27 studies CSMA-based networks using a Matern hard-core point processes where each AP was assumed to have a disc of fixed radius around itself within which there are no other APs. A modification to this analysis that modeled the backoff procedure in CSMA CA and included fading was presented in @cite_20 @cite_10 @cite_26 . The basic framework of @cite_20 to analyze CSMA CA forms the foundation for the current effort. Subsequent analysis of interference due to concurrent AP transmissions is modeled using the methodology proposed in @cite_9 @cite_28 . | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_28",
"@cite_9",
"@cite_6",
"@cite_27",
"@cite_23",
"@cite_10",
"@cite_20"
],
"mid": [
"2121822142",
"2131773444",
"2164097314",
"2133685514",
"",
"2137891392",
"2162598825",
"2028070124",
""
],
"abstract": [
"Stochastic geometry proves to be a powerful tool for modeling dense wireless networks adopting random MAC protocols such as ALOHA and CSMA. The main strength of this methodology lies in its ability to account for the randomness in the nodes' location jointly with an accurate description at the physical layer, based on the SINR, that allows to consider also random fading on each link. Existing models of CSMA networks adopting the stochastic geometry approach suffer from two important weaknesses: 1) they permit to evaluate only spatial averages of the main performance measures, thus hiding possibly huge discrepancies in the performance achieved by individual nodes; 2) they are analytically tractable only when nodes are distributed over the area according to simple spatial processes (e.g., the Poisson point process). In this paper we show how the stochastic geometry approach can be extended to overcome the above limitations, allowing to obtain node throughput distributions as well as to analyze a significant class of topologies in which nodes are not independently placed.",
"In WLAN the medium access control (MAC) protocol is the main element for determining the efficiency in sharing the limited communication bandwidth of the wireless channel. This paper focuses on the efficiency of the IEEE 802.11 standard for wireless LANs. Specifically, we derive an analytical formula for the protocol capacity. From this analysis we found (i) the theoretical upper bound of the IEEE 802.11 protocol capacity; (ii) that the standard can operate very far from the theoretical limits depending on the network configuration; (iii) that an appropriate tuning of the backoff algorithm can drive the IEEE 802.11 protocol close to its theoretical limits. Hence we propose a distributed algorithm which enables each station to tune its backoff algorithm at run-time. The performances of the IEEE 802.11 protocol, enhanced with our algorithm, are investigated via simulation. The results indicate that the enhanced protocol is very close to the maximum theoretical efficiency.",
"In ad hoc networks, it may be helpful to suppress transmissions by nodes around the desired receiver in order to increase the likelihood of successful communication. This paper introduces the concept of a guard zone, defined as the region around each receiver where interfering transmissions are inhibited. Using stochastic geometry, the guard zone size that maximizes the transmission capacity for spread spectrum ad hoc networks is derived - narrowband transmission (spreading gain of unity) is a special case. A large guard zone naturally decreases the interference, but at the cost of inefficient spatial reuse. The derived results provide insight into the design of contention resolution algorithms by quantifying the optimal tradeoff between interference and spatial reuse in terms of the system parameters. A capacity increase relative to random access (ALOHA) in the range of 2 - 100 fold is demonstrated through an optimal guard zone; the capacity increase depending primarily on the required outage probability, as higher required QoS increasingly rewards scheduling. Compared to the ubiquitous carrier sense multiple access (CSMA) which essentially implements a guard zone around the transmitter rather than the receiver - we observe a capacity increase on the order of 30 - 100",
"Since interference is the main performance-limiting factor in most wireless networks, it is crucial to characterize the interference statistics. The two main determinants of the interference are the network geometry (spatial distribution of concurrently transmitting nodes) and the path loss law (signal attenuation with distance). For certain classes of node distributions, most notably Poisson point processes, and attenuation laws, closed-form results are available, for both the interference itself as well as the signal-to-interference ratios, which determine the network performance. This monograph presents an overview of these results and gives an introduction to the analytical techniques used in their derivation. The node distribution models range from lattices to homogeneous and clustered Poisson models to general motion-invariant ones. The analysis of the more general models requires the use of Palm theory, in particular conditional probability generating functionals, which are briefly introduced in the appendix.",
"",
"Spatial Aloha is probably the simplest medium access protocol to be used in a large mobile ad hoc network: each station tosses a coin independently of everything else and accesses the channel if it gets heads. In a network where stations are randomly and homogeneously located in the Euclidean plane, there is a way to tune the bias of the coin so as to obtain the best possible compromise between spatial reuse and per transmitter throughput. This paper shows how to address this questions using stochastic geometry and more precisely Poisson shot noise field theory. The theory that is developed is fully computational and leads to new closed form expressions for various kinds of spatial averages (like e.g. outage, throughput or transport). It also allows one to derive general scaling laws that hold for general fading assumptions. We exemplify its flexibility by analyzing a natural variant of Spatial Aloha that we call Opportunistic Aloha and that consists in replacing the coin tossing by an evaluation of the quality of the channel of each station to its receiver and a selection of the stations with good channels (e.g. fading) conditions. We show how to adapt the general machinery to this variant and how to optimize and implement it. We show that when properly tuned, Opportunistic Aloha very significantly outperforms Spatial Aloha, with e.g. a mean throughput per unit area twice higher for Rayleigh fading scenarios with typical parameters.",
"The IEEE has standardized the 802.11 protocol for wireless local area networks. The primary medium access control (MAC) technique of 802.11 is called the distributed coordination function (DCF). The DCF is a carrier sense multiple access with collision avoidance (CSMA CA) scheme with binary slotted exponential backoff. This paper provides a simple, but nevertheless extremely accurate, analytical model to compute the 802.11 DCF throughput, in the assumption of finite number of terminals and ideal channel conditions. The proposed analysis applies to both the packet transmission schemes employed by DCF, namely, the basic access and the RTS CTS access mechanisms. In addition, it also applies to a combination of the two schemes, in which packets longer than a given threshold are transmitted according to the RTS CTS mechanism. By means of the proposed model, we provide an extensive throughput performance evaluation of both access mechanisms of the 802.11 protocol.",
"For spectrum sharing and avoidance of mutual interference, carrier-sense multiple access (CSMA) protocols are very popular in distributed wireless networks. CSMA protocols aim to maximize the spatial frequency reuse while limiting the mutual interference and outage. The hard core point process (HCPP) is a very popular tool for modeling and analysis of random CSMA networks. However, the traditional HCPP suffers from the node intensity (and hence the interference) underestimation flaw. Therefore, we propose a modified hard core point process to mitigate this flaw. The proposed modified HCPP is generalized for any fading environment. To this end, we derive a closed-form expression for the intensity of simultaneously active transmitters in a random wireless CSMA network. Then, we derive a closed-form expression for approximating the outage probability experienced by a generic receiver in the network, and subsequently, use it to obtain the transmission capacity of the network. Finally, we show the existence of an optimal carrier-sensing threshold for the CSMA protocol that maximizes the transmission capacity of the network. Simulation results validate the analysis and also provide interesting insights into the design of practical CSMA networks.",
""
]
} |
1704.08152 | 2610153639 | Recent regulatory changes proposed by the Federal Communications Commission (FCC) permitting unlicensed use of television white space (TVWS) channels present new opportunities for designing wireless networks that make efficient use of this spectrum. The favorable propagation characteristics of these channels and their widespread availability, especially in rural areas, make them well-suited for providing broadband services in sparsely populated regions where economic factors hinder deployment of such services on licensed spectrum. In this context, this paper explores the deployment of an outdoor Wi-Fi-like network operating in TVWS channels, referred to commonly as a Super Wi-Fi network. Since regulations governing unlicensed use of these channels allow (a) mounting fixed devices up to a height of 30 m and operation at transmit powers of up to 4 W EIRP, and (b) operation at transmit powers of up to 100 mW EIRP for portable devices, such networks can provide extended coverage and higher rates than traditional Wi-Fi networks. However, these gains are subject to the viability of the uplink from the portable devices (clients) to the fixed devices (access points (AP)) because of tighter restrictions on transmit power of clients compared to APs. This paper leverages concepts from stochastic geometry to study the performance of such networks with specific focus on the effect of (a) transmit power asymmetry between APs and clients and its impact on uplink viability and coverage, and (b) the interplay between height and transmit power of APs in determining the network throughput. Such an analysis reveals that (a) maximum coverage of no more than 700 m is obtained even when APs are deployed at 30 m height, and (b) operating APs at transmit power of more than 1 W is beneficial only at sparse deployment densities when rate is prioritized over coverage. 
| A comprehensive overview of using stochastic geometry to model a wide variety of wireless networks is given in @cite_17 @cite_11 . The mathematical tools and theory of point processes used in the current analysis are presented in @cite_2 and @cite_15 . | {
"cite_N": [
"@cite_15",
"@cite_11",
"@cite_2",
"@cite_17"
],
"mid": [
"2118166339",
"2145873277",
"2006166592",
"2039688938"
],
"abstract": [
"Mathematical Foundation. Point Processes I--The Poisson Point Process. Random Closed Sets I--The Boolean Model. Point Processes II--General Theory. Point Processes III--Construction of Models. Random Closed Sets II--The General Case. Random Measures. Random Processes of Geometrical Objects. Fibre and Surface Processes. Random Tessellations. Stereology. References. Indexes.",
"Wireless networks are fundamentally limited by the intensity of the received signals and by their interference. Since both of these quantities depend on the spatial location of the nodes, mathematical techniques have been developed in the last decade to provide communication-theoretic results accounting for the networks geometrical configuration. Often, the location of the nodes in the network can be modeled as random, following for example a Poisson point process. In this case, different techniques based on stochastic geometry and the theory of random geometric graphs -including point process theory, percolation theory, and probabilistic combinatorics-have led to results on the connectivity, the capacity, the outage probability, and other fundamental limits of wireless networks. This tutorial article surveys some of these techniques, discusses their application to model wireless networks, and presents some of the main results that have appeared in the literature. It also serves as an introduction to the field for the other papers in this special issue.",
"This volume bears on wireless network modeling and performance analysis. The aim is to show how stochastic geometry can be used in a more or less systematic way to analyze the phenomena that arise in this context. It first focuses on medium access control mechanisms used in ad hoc networks and in cellular networks. It then discusses the use of stochastic geometry for the quantitative analysis of routing algorithms in mobile ad hoc networks. The appendix also contains a concise summary of wireless communication principles and of the network architectures considered in the two volumes.",
"For more than three decades, stochastic geometry has been used to model large-scale ad hoc wireless networks, and it has succeeded to develop tractable models to characterize and better understand the performance of these networks. Recently, stochastic geometry models have been shown to provide tractable yet accurate performance bounds for multi-tier and cognitive cellular wireless networks. Given the need for interference characterization in multi-tier cellular networks, stochastic geometry models provide high potential to simplify their modeling and provide insights into their design. Hence, a new research area dealing with the modeling and analysis of multi-tier and cognitive cellular wireless networks is increasingly attracting the attention of the research community. In this article, we present a comprehensive survey on the literature related to stochastic geometry models for single-tier as well as multi-tier and cognitive cellular wireless networks. A taxonomy based on the target network model, the point process used, and the performance evaluation technique is also presented. To conclude, we discuss the open research challenges and future research directions."
]
} |