Dataset fields (from the viewer header):
- aid: string (9 to 15 chars)
- mid: string (7 to 10 chars)
- abstract: string (78 to 2.56k chars)
- related_work: string (92 to 1.77k chars)
- ref_abstract: dict
1704.07816
2751127531
We propose introspective convolutional networks (ICN), which emphasize the importance of empowering convolutional neural networks with generative capabilities. We employ a reclassification-by-synthesis algorithm to perform training, using a formulation stemming from Bayes' theorem. ICN iteratively (1) synthesizes pseudo-negative samples and (2) enhances itself by improving its classification. The single CNN classifier learned is at the same time generative, being able to directly synthesize new samples within its own discriminative model. We conduct experiments on benchmark datasets including MNIST, CIFAR-10, and SVHN using state-of-the-art CNN architectures, and observe improved classification results.
Our ICN method is directly related to the generative-via-discriminative learning framework @cite_52 . It is also connected to the self-supervised learning method @cite_7 , which focuses on density estimation by combining weak classifiers. Previous algorithms connecting generative modeling with discriminative classification @cite_40 @cite_36 @cite_39 @cite_19 fall into the category of hybrid models that directly combine the two. Some existing works on introspective learning @cite_48 @cite_50 @cite_30 have a different scope from the problem tackled here. Other generative modeling schemes, such as minimax entropy @cite_22 , inducing features @cite_16 , auto-encoders @cite_18 , and recent CNN-based generative modeling approaches @cite_0 @cite_5 , are not aimed at discriminative classification, and they do not have a single model that is both generative and discriminative. Below we discuss the two methods most related to ICN, namely generative via discriminative learning (GDL) @cite_52 and generative adversarial networks (GAN) @cite_20 .
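The reclassification-by-synthesis loop that the ICN abstract above describes, alternating between synthesizing pseudo-negatives and retraining, can be sketched schematically. Everything below is an illustrative stand-in, not the paper's actual CNN training code: `synthesize_pseudo_negatives` and `retrain` are hypothetical stubs (the paper samples pseudo-negatives via backpropagation and trains a CNN).

```python
import random

def synthesize_pseudo_negatives(classifier, n):
    # Stand-in for sampling from the classifier's current generative model;
    # here we just draw 2-D Gaussian noise points.
    return [[random.gauss(0, 1) for _ in range(2)] for _ in range(n)]

def retrain(classifier, positives, pseudo_negatives):
    # Stand-in for one round of discriminative training; only tracks the round.
    return {"round": classifier["round"] + 1}

def icn_loop(positives, rounds=3, n_per_round=4):
    classifier = {"round": 0}
    pool = []  # accumulated pseudo-negative samples
    for _ in range(rounds):
        # Step (1): synthesize pseudo-negatives from the current model.
        pool += synthesize_pseudo_negatives(classifier, n_per_round)
        # Step (2): improve the classifier against the growing pool.
        classifier = retrain(classifier, positives, pool)
    return classifier, pool
```

The point of the sketch is the control flow: one model plays both roles, so each round's synthesized samples feed directly into the next round's discriminative update.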
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_22", "@cite_7", "@cite_36", "@cite_48", "@cite_52", "@cite_39", "@cite_0", "@cite_19", "@cite_40", "@cite_50", "@cite_5", "@cite_16", "@cite_20" ], "mid": [ "2952943736", "2617585083", "2126174118", "2103504567", "2156346614", "", "2163176424", "2126043693", "2949457404", "", "", "2964144352", "2524047397", "", "" ], "abstract": [ "Neural Networks are function approximators that have achieved state-of-the-art accuracy in numerous machine learning tasks. In spite of their great success in terms of accuracy, their large training time makes it difficult to use them for various tasks. In this paper, we explore the idea of learning weight evolution pattern from a simple network for accelerating training of novel neural networks. We use a neural network to learn the training pattern from MNIST classification and utilize it to accelerate training of neural networks used for CIFAR-10 and ImageNet classification. Our method has a low memory footprint and is computationally efficient. This method can also be used with other optimizers to give faster convergence. The results indicate a general trend in the weight evolution during training of neural networks.", "Autoencoders play a fundamental role in unsupervised learning and in deep architectures for transfer learning and other tasks. In spite of their fundamental role, only linear autoencoders over the real numbers have been solved analytically. Here we present a general mathematical framework for the study of both linear and non-linear autoencoders. The framework allows one to derive an analytical treatment for the most non-linear autoencoder, the Boolean autoencoder. Learning in the Boolean autoencoder is equivalent to a clustering problem that can be solved in polynomial time when the number of clusters is small and becomes NP complete when the number of clusters is large. 
The framework sheds light on the different kinds of autoencoders, their learning complexity, their horizontal and vertical composability in deep architectures, their critical points, and their fundamental connections to clustering, Hebbian learning, and information theory.", "This article proposes a general theory and methodology, called the minimax entropy principle, for building statistical models for images (or signals) in a variety of applications. This principle consists of two parts. The first is the maximum entropy principle for feature binding (or fusion): for a given set of observed feature statistics, a distribution can be built to bind these feature statistics together by maximizing the entropy over all distributions that reproduce them. The second part is the minimum entropy principle for feature selection: among all plausible sets of feature statistics, we choose the set whose maximum entropy distribution has the minimum entropy. Computational and inferential issues in both parts are addressed; in particular, a feature pursuit procedure is proposed for approximately selecting the optimal set of features. The minimax entropy principle is then corrected by considering the sample variation in the observed feature statistics, and an information criterion for feature pursuit is derived. The minimax entropy principle is applied to texture modeling, where a novel Markov random field (MRF) model, called FRAME (filter, random field, and minimax entropy), is derived, and encouraging results are obtained in experiments on a variety of texture images. The relationship between our theory and the mechanisms of neural computation is also discussed.", "Boosting algorithms and successful applications thereof abound for classification and regression learning problems, but not for unsupervised learning. 
We propose a sequential approach to adding features to a random field model by training them to improve classification performance between the data and an equal-sized sample of \"negative examples\" generated from the model's current estimate of the data density. Training in each boosting round proceeds in three stages: first we sample negative examples from the model's current Boltzmann distribution. Next, a feature is trained to improve classification performance between data and negative examples. Finally, a coefficient is learned which determines the importance of this feature relative to ones already in the pool. Negative examples only need to be generated once to learn each new feature. The validity of the approach is demonstrated on binary digits and continuous synthetic data.", "Statistical and computational concerns have motivated parameter estimators based on various forms of likelihood, e.g., joint, conditional, and pseudolikelihood. In this paper, we present a unified framework for studying these estimators, which allows us to compare their relative (statistical) efficiencies. Our asymptotic analysis suggests that modeling more of the data tends to reduce variance, but at the cost of being more sensitive to model misspecification. We present experiments validating our analysis.", "", "Generative model learning is one of the key problems in machine learning and computer vision. Currently the use of generative models is limited due to the difficulty in effectively learning them. A new learning framework is proposed in this paper which progressively learns a target generative distribution through discriminative approaches. This framework provides many interesting aspects to the literature. From the generative model side: (1) A reference distribution is used to assist the learning process, which removes the need for a sampling processes in the early stages. (2) The classification power of discriminative approaches, e.g. boosting, is directly utilized. 
(3) The ability to select and explore features from a large candidate pool allows us to make nearly no assumptions about the training data. From the discriminative model side: (1) This framework improves the modeling capability of discriminative models. (2) It can start with source training data only and gradually \"invent\" negative samples. (3) We show how sampling schemes can be introduced to discriminative models. (4) The learning procedure helps to tighten the decision boundaries for classification, and therefore, improves robustness. In this paper, we show a variety of applications including texture modeling and classification, non-photorealistic rendering, learning image statistics, denoising, and face modeling. The framework handles both homogeneous patterns, e.g. textures, and inhomogeneous patterns, e.g. faces, with nearly identical parameter settings for all the tasks in the learning stage.", "In this paper, a hybrid discriminative generative model for brain anatomical structure segmentation is proposed. The learning aspect of the approach is emphasized. In the discriminative appearance models, various cues such as intensity and curvatures are combined to locally capture the complex appearances of different anatomical structures. A probabilistic boosting tree (PBT) framework is adopted to learn multiclass discriminative models that combine hundreds of features across different scales. On the generative model side, both global and local shape models are used to capture the shape information about each anatomical structure. The parameters to combine the discriminative appearance and generative shape models are also automatically learned. Thus, low-level and high-level information is learned and integrated in a hybrid model. Segmentations are obtained by minimizing an energy function associated with the proposed hybrid model. Finally, a grid-face structure is designed to explicitly represent the 3-D region topology.
This representation handles an arbitrary number of regions and facilitates fast surface evolution. Our system was trained and tested on a set of 3-D magnetic resonance imaging (MRI) volumes and the results obtained are encouraging.", "We show that a generative random field model, which we call generative ConvNet, can be derived from the commonly used discriminative ConvNet, by assuming a ConvNet for multi-category classification and assuming one of the categories is a base category generated by a reference distribution. If we further assume that the non-linearity in the ConvNet is Rectified Linear Unit (ReLU) and the reference distribution is Gaussian white noise, then we obtain a generative ConvNet model that is unique among energy-based models: The model is piecewise Gaussian, and the means of the Gaussian pieces are defined by an auto-encoder, where the filters in the bottom-up encoding become the basis functions in the top-down decoding, and the binary activation variables detected by the filters in the bottom-up convolution process become the coefficients of the basis functions in the top-down deconvolution process. The Langevin dynamics for sampling the generative ConvNet is driven by the reconstruction error of this auto-encoder. The contrastive divergence learning of the generative ConvNet reconstructs the training images by the auto-encoder. The maximum likelihood learning algorithm can synthesize realistic natural image patterns.", "", "", "The increasingly photorealistic sample quality of generative image models suggests their feasibility in applications beyond image generation. We present the Neural Photo Editor, an interface that leverages the power of generative neural networks to make large, semantically coherent changes to existing images. To tackle the challenge of achieving accurate reconstructions without loss of feature quality, we introduce the Introspective Adversarial Network, a novel hybridization of the VAE and GAN. 
Our model efficiently captures long-range dependencies through use of a computational block based on weight-shared dilated convolutions, and improves generalization performance with Orthogonal Regularization, a novel weight regularization method. We validate our contributions on CelebA, SVHN, and CIFAR-100, and produce samples and reconstructions with high visual fidelity.", "This paper studies the cooperative training of two generative models for image modeling and synthesis. Both models are parametrized by convolutional neural networks (ConvNets). The first model is a deep energy-based model, whose energy function is defined by a bottom-up ConvNet, which maps the observed image to the energy. We call it the descriptor network. The second model is a generator network, which is a non-linear version of factor analysis. It is defined by a top-down ConvNet, which maps the latent factors to the observed image. The maximum likelihood learning algorithms of both models involve MCMC sampling such as Langevin dynamics. We observe that the two learning algorithms can be seamlessly interwoven into a cooperative learning algorithm that can train both models simultaneously. Specifically, within each iteration of the cooperative learning algorithm, the generator model generates initial synthesized examples to initialize a finite-step MCMC that samples and trains the energy-based descriptor model. After that, the generator model learns from how the MCMC changes its synthesized examples. That is, the descriptor model teaches the generator model by MCMC, so that the generator model accumulates the MCMC transitions and reproduces them by direct ancestral sampling. We call this scheme MCMC teaching. We show that the cooperative algorithm can learn highly realistic generative models.", "", "" ] }
1704.07816
2751127531
We propose introspective convolutional networks (ICN), which emphasize the importance of empowering convolutional neural networks with generative capabilities. We employ a reclassification-by-synthesis algorithm to perform training, using a formulation stemming from Bayes' theorem. ICN iteratively (1) synthesizes pseudo-negative samples and (2) enhances itself by improving its classification. The single CNN classifier learned is at the same time generative, being able to directly synthesize new samples within its own discriminative model. We conduct experiments on benchmark datasets including MNIST, CIFAR-10, and SVHN using state-of-the-art CNN architectures, and observe improved classification results.
Later developments alongside GAN @cite_38 @cite_49 @cite_4 @cite_50 share some aspects with GAN and likewise do not achieve the same goal as ICN. Since the discriminator in GAN is not meant to perform the generic two-class or multi-class classification task, special settings for semi-supervised learning @cite_20 @cite_38 @cite_4 @cite_50 @cite_49 were created. ICN instead has a single model that is both generative and discriminative, so an improvement to ICN's generator directly ameliorates its discriminator. Other work such as @cite_35 was motivated by the observation that adding small perturbations to an image leads to classification errors that are absurd to humans; that approach, however, augments positive samples from existing inputs, whereas ICN is able to synthesize new samples from scratch. A recent work proposed in @cite_12 is in the same family as ICN, but @cite_12 focuses on unsupervised image modeling using a cascade of CNNs.
{ "cite_N": [ "@cite_38", "@cite_35", "@cite_4", "@cite_50", "@cite_49", "@cite_12", "@cite_20" ], "mid": [ "2173520492", "1945616565", "2099471712", "2964144352", "2432004435", "2780786077", "" ], "abstract": [ "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. 
Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "The increasingly photorealistic sample quality of generative image models suggests their feasibility in applications beyond image generation. We present the Neural Photo Editor, an interface that leverages the power of generative neural networks to make large, semantically coherent changes to existing images. To tackle the challenge of achieving accurate reconstructions without loss of feature quality, we introduce the Introspective Adversarial Network, a novel hybridization of the VAE and GAN. Our model efficiently captures long-range dependencies through use of a computational block based on weight-shared dilated convolutions, and improves generalization performance with Orthogonal Regularization, a novel weight regularization method. 
We validate our contributions on CelebA, SVHN, and CIFAR-100, and produce samples and reconstructions with high visual fidelity.", "We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.", "We study unsupervised learning by developing a generative model built from progressively learned deep convolutional neural networks. The resulting generator is additionally a discriminator, capable of \"introspection\" in a sense — being able to self-evaluate the difference between its generated samples and the given training data. Through repeated discriminative learning, desirable properties of modern discriminative classifiers are directly inherited by the generator. Specifically, our model learns a sequence of CNN classifiers using a synthesis-by-classification algorithm. In the experiments, we observe encouraging results on a number of applications including texture modeling, artistic style transferring, face modeling, and unsupervised feature learning.", "" ] }
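The GAN abstract quoted above states that at the game's unique solution the discriminator D equals 1/2 everywhere. A tiny numeric check of the minimax value V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))] makes this concrete; the sample values and the two toy discriminators below are made-up illustrations, not any published model.

```python
import math

def value(D, real_samples, fake_samples):
    # Empirical estimate of the GAN value function V(D, G).
    v_real = sum(math.log(D(x)) for x in real_samples) / len(real_samples)
    v_fake = sum(math.log(1 - D(x)) for x in fake_samples) / len(fake_samples)
    return v_real + v_fake

real = [0.9, 1.1, 1.0]   # toy "real" samples near 1
fake = [0.2, -0.1, 0.0]  # toy "generated" samples near 0

# A discriminator that cannot tell real from fake outputs 1/2 everywhere,
# giving V = log(1/2) + log(1/2) ...
v_blind = value(lambda x: 0.5, real, fake)

# ... while one that separates the two sample sets achieves a higher value,
# which is exactly what training D to "catch" G maximizes.
v_sharp = value(lambda x: 0.99 if x > 0.5 else 0.01, real, fake)
assert v_sharp > v_blind
```

This is the quantity the generator minimizes and the discriminator maximizes in the two-player game described in the abstract.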
1704.07427
2610149160
Wikipedia is a useful knowledge source that benefits many applications in language processing and knowledge representation. An important feature of Wikipedia is its categories. Wikipedia pages are assigned categories according to their contents, as human-annotated labels that can be used in information retrieval, ad hoc search improvement, entity ranking, and tag recommendation. However, important pages are usually assigned too many categories, which makes it difficult to recognize the ones that describe them best. In this paper, we propose an approach to recognizing the most descriptive Wikipedia categories. We observe that historical figures in a precise category are presumably mutually similar, and such categorical coherence can be evaluated via the texts or Wikipedia links of the corresponding members of the category. We rank the descriptive level of Wikipedia categories according to their coherence, and our ranking yields an overall agreement of 88.27% with human judgment.
Entity ranking has gained popularity because better rankings boost the performance of search engines, resulting in faster and more precise information retrieval, and Wikipedia is a good playground for it. The problem of ranking web pages can easily be reduced to Wikipedia entity ranking; moreover, Wikipedia has a large collection of entities of different types @cite_12 and contains valuable texts, human-annotated tags, enriched links, and a structure well suited to analyzing ranking effectiveness. Certain rankings can serve as a pivot for extensibility or analysis @cite_5 @cite_2 or be used to answer queries in named entity recognition @cite_8 . Additionally, retrieving real-life knowledge of reputation, fame, and historical significance from entity rankings is also valuable @cite_13 .
{ "cite_N": [ "@cite_8", "@cite_2", "@cite_5", "@cite_13", "@cite_12" ], "mid": [ "", "2120920763", "2128549493", "2295002412", "2147268363" ], "abstract": [ "", "This paper aims to review the fiercely discussed question of whether the ranking of Wikipedia articles in search engines is justified by the quality of the articles. After an overview of current research on information quality in Wikipedia, a summary of the extended discussion on the quality of encyclopedic entries in general is given. On this basis, a heuristic method for evaluating Wikipedia entries is developed and applied to Wikipedia articles that scored highly in a search engine retrieval effectiveness test and compared with the relevance judgment of jurors. In all search engines tested, Wikipedia results are unanimously judged better by the jurors than other results on the corresponding results position. Relevance judgments often roughly correspond with the results from the heuristic evaluation. Cases in which high relevance judgments are not in accordance with the comparatively low score from the heuristic evaluation are interpreted as an indicator of a high degree of trust in Wikipedia. One of the systemic shortcomings of Wikipedia lies in its necessarily incoherent user model. A further tuning of the suggested criteria catalog, for instance, the different weighing of the supplied criteria, could serve as a starting point for a user model differentiated evaluation of Wikipedia articles. Approved methods of quality evaluation of reference works are applied to Wikipedia articles and integrated with the question of search engine evaluation. © 2011 Wiley Periodicals, Inc.", "In this paper we investigate the task of Entity Ranking on the Web. Searchers looking for entities are arguably better served by presenting a ranked list of entities directly, rather than a list of web pages with relevant but also potentially redundant information about these entities. 
Since entities are represented by their web homepages, a naive approach to entity ranking is to use standard text retrieval. Our experimental results clearly demonstrate that text retrieval is effective at finding relevant pages, but performs poorly at finding entities. Our proposal is to use Wikipedia as a pivot for finding entities on the Web, allowing us to reduce the hard web entity ranking problem to the easier problem of Wikipedia entity ranking. Wikipedia allows us to properly identify entities and some of their characteristics, and Wikipedia's elaborate category structure allows us to get a handle on the entity's type. Our main findings are the following. Our first finding is that, in principle, the problem of web entity ranking can be reduced to Wikipedia entity ranking. We found that the majority of entity ranking topics in our test collections can be answered using Wikipedia, and that with high precision relevant web entities corresponding to the Wikipedia entities can be found using Wikipedia's 'external links'. Our second finding is that we can exploit the structure of Wikipedia to improve entity ranking effectiveness. Entity types are valuable retrieval cues in Wikipedia. Automatically assigned entity types are effective, and almost as good as manually assigned types. Our third finding is that web entity retrieval can be significantly improved by using Wikipedia as a pivot. Both Wikipedia's external links and the enriched Wikipedia entities with additional links to homepages are significantly better at finding primary web homepages than anchor text retrieval, which in turn significantly improved over standard text retrieval.", "Is Hitler bigger than Napoleon? Washington bigger than Lincoln? Picasso bigger than Einstein? Quantitative analysts are rapidly finding homes in social and cultural domains, from finance to politics. What about history?
In this fascinating book, Steve Skiena and Charles Ward bring quantitative analysis to bear on ranking and comparing historical reputations. They evaluate each person by aggregating the traces of millions of opinions, just as Google ranks webpages. The book includes a technical discussion for readers interested in the details of the methods, but no mathematical or computational background is necessary to understand the rankings or conclusions. Did you know: - Got a spare billion dollars, and want to be remembered forever? Your best investment is to get a university named after you. - Women remain significantly underrepresented in the historical record compared to men and have long required substantially greater achievement levels to get equally noted for posterity. - The long-term prominence of Elvis Presley rivals that of the most famous classical composers. Roll over Beethoven, and tell Tchaikovsky the news! Along the way, the authors present the rankings of more than one thousand of history's most significant people in science, politics, entertainment, and all areas of human endeavor. Anyone interested in history or biography can see where their favorite figures place in the grand scheme of things. While revisiting old historical friends and making new ones, you will come to understand the forces that shape historical recognition in a whole new light.", "We discuss the problem of ranking very many entities of different types. In particular we deal with a heterogeneous set of types, some being very generic and some very specific. We discuss two approaches for this problem: i) exploiting the entity containment graph and ii) using a Web search engine to compute entity relevance. We evaluate these approaches on the real task of ranking Wikipedia entities typed with a state-of-the-art named-entity tagger. Results show that both approaches can greatly increase the performance of methods based only on passage retrieval." ] }
1704.07427
2610149160
Wikipedia is a useful knowledge source that benefits many applications in language processing and knowledge representation. An important feature of Wikipedia is its categories. Wikipedia pages are assigned categories according to their contents, as human-annotated labels that can be used in information retrieval, ad hoc search improvement, entity ranking, and tag recommendation. However, important pages are usually assigned too many categories, which makes it difficult to recognize the ones that describe them best. In this paper, we propose an approach to recognizing the most descriptive Wikipedia categories. We observe that historical figures in a precise category are presumably mutually similar, and such categorical coherence can be evaluated via the texts or Wikipedia links of the corresponding members of the category. We rank the descriptive level of Wikipedia categories according to their coherence, and our ranking yields an overall agreement of 88.27% with human judgment.
Traditional ranking algorithms on Wikipedia basically consider two parts. One part focuses on information provided by raw text, including page length, word occurrences, and topic distributions. LDA is among the most valuable approaches for such tasks @cite_9 ; topics from LDA agree highly with real tags when finding the most important feature words of a page @cite_11 . The other part of the ranking criteria relies heavily on links. Representatives include PageRank @cite_0 and HITS @cite_4 . PageRank is a link analysis algorithm that assigns high numerical weight to pages referred to by many other pages, and the structure of the weight distribution determines the importance of web pages. HITS defines hubs as pages that link to many authority pages, serving as another important criterion in ranking. The recent DeepWalk @cite_1 uses truncated random walks to learn latent representations that encode social relations in a continuous vector space, which can be easily exploited by statistical models.
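The link-analysis idea behind PageRank described above can be made concrete with a minimal power iteration over a toy link graph: a page's weight is a damped sum of the weights of the pages linking to it. The graph, damping factor, and iteration count below are illustrative choices, not parameters from the cited work.

```python
def pagerank(links, damping=0.85, iters=100):
    # links maps each page to the list of pages it links to;
    # every page here has at least one outgoing link (no dangling nodes).
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {}
        for p in pages:
            # Each in-neighbor q passes on its rank split across its out-links.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * incoming
        rank = new
    return rank

# "a" is linked to by both other pages, so it receives the highest weight.
links = {"a": ["b"], "b": ["a"], "c": ["a"]}
ranks = pagerank(links)
assert ranks["a"] > ranks["b"] > ranks["c"]
```

Pages with many in-links accumulate weight, and weight received from a highly ranked page counts for more, which is the "structure of the weight distribution" the paragraph refers to.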
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_1", "@cite_0", "@cite_11" ], "mid": [ "2138621811", "1880262756", "2154851992", "2066636486", "2031237011" ], "abstract": [ "The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of context on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of “authorative” information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of “hub pages” that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristrics for link-based analysis.", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. 
We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.", "We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http: google.stanford.edu . To engineer a search engine is a challenging task. 
Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.", "Tagging systems have become major infrastructures on the Web. They allow users to create tags that annotate and categorize content and share them with other users, very helpful in particular for searching multimedia content. However, as tagging is not constrained by a controlled vocabulary and annotation guidelines, tags tend to be noisy and sparse. Especially new resources annotated by only a few users have often rather idiosyncratic tags that do not reflect a common perspective useful for search. In this paper we introduce an approach based on Latent Dirichlet Allocation (LDA) for recommending tags of resources in order to improve search. Resources annotated by many users and thus equipped with a fairly stable and complete tag set are used to elicit latent topics to which new resources with only a few tags are mapped. 
Based on this, other tags belonging to a topic can be recommended for the new resource. Our evaluation shows that the approach achieves significantly better precision and recall than the use of association rules, suggested in previous work, and also recommends more specific tags. Moreover, extending resources with these recommended tags significantly improves search for new resources." ] }
1704.07505
2608317659
We present a dynamic model selection approach for resource-constrained prediction. Given an input instance at test-time, a gating function identifies a prediction model for the input among a collection of models. Our objective is to minimize overall average cost without sacrificing accuracy. We learn gating and prediction models on fully labeled training data by means of a bottom-up strategy. Our novel bottom-up method is a recursive scheme whereby a high-accuracy complex model is first trained. Then a low-complexity gating and prediction model are subsequently learnt to adaptively approximate the high-accuracy model in regions where low-cost models are capable of making highly accurate predictions. We pose an empirical loss minimization problem with cost constraints to jointly train gating and prediction models. On a number of benchmark datasets our method outperforms state-of-the-art achieving higher accuracy for the same cost.
The teacher-student framework @cite_15 is also related to our bottom-up approach: a low-cost student model learns to approximate the teacher model so as to meet the test-time budget. However, the goal there is to learn a better stand-alone student model. In contrast, we make use of both the low-cost (student) and high-accuracy (teacher) models during prediction via a gating function, which learns the limitations of the low-cost (student) model and consults the high-accuracy (teacher) model when necessary, thereby avoiding accuracy loss.
{ "cite_N": [ "@cite_15" ], "mid": [ "2253986341" ], "abstract": [ "Distillation (, 2015) and privileged information (Vapnik & Izmailov, 2015) are two techniques that enable machines to learn from other machines. This paper unifies these two techniques into generalized distillation, a framework to learn from multiple machines and data representations. We provide theoretical and causal insight about the inner workings of generalized distillation, extend it to unsupervised, semisupervised and multitask learning scenarios, and illustrate its efficacy on a variety of numerical simulations on both synthetic and real-world data." ] }
1704.07475
2964222971
We study the problem of reducing the amount of communication in decentralized target tracking. We focus on the scenario, where a team of robots is allowed to move on the boundary of the environment. Their goal is to seek a formation so as to best track a target moving in the interior of the environment. The robots are capable of measuring distances to the target. Decentralized control strategies have been proposed in the past, which guarantees that the robots asymptotically converge to the optimal formation. However, existing methods require that the robots exchange information with their neighbors at all time steps. Instead, we focus on decentralized strategies to reduce the amount of communication among robots. We propose a self-triggered communication strategy that decides when a particular robot should seek up-to-date information from its neighbors and when it is safe to operate with possibly outdated information. We prove that this strategy converges asymptotically to the desired formation when the target is stationary. For the case of a mobile target, we use a decentralized Kalman filter with covariance intersection to share the beliefs of neighboring robots. We evaluate all the approaches through simulations and a proof-of-concept experiment. Note to Practitioners —We study the problem of tracking a target using a team of coordinating robots. Target tracking problems are prevalent in a number of applications, such as co-robots, surveillance, and wildlife monitoring. Coordination between robots typically requires communication amongst them. Most multi-robot coordination algorithms implicitly assume that the robots can communicate at all time steps. Communication can be a considerable source of energy consumption, especially for small robots. Furthermore, communicating at all time steps may be redundant in many settings. 
With this as motivation, we propose an algorithm where the robots do not necessarily communicate at all times and instead choose specific triggering time instances to share information with their neighbors. Despite the limitation of limited communication, we show that the algorithm converges to the optimal configuration both in theory as well as in simulations.
Multi-robot target tracking has been widely studied in robotics @cite_4 @cite_1 . Robin and Lacroix @cite_4 surveyed multi-robot target detection and tracking systems and presented a taxonomy of relevant works. @cite_1 classified and discussed control techniques for multi-robot multi-target monitoring and identified the major elements of this problem. @cite_24 proposed a centralized cooperative approach for a team of robots to estimate the position of a moving target. They showed how to use onboard sensing with limited sensing range and how to switch the sensor topology for effective target tracking. @cite_11 proposed a multi-robot triangulation method to deal with initialization and data-association issues in bearing-only sensing. The robots communicate locally to exchange and update their beliefs about the target using a decentralized filter. @cite_22 presented a decentralized strategy that ensures the robots follow the target while moving around it in a circle, assuming the robots are labeled. Similar to our work, the robots attempt to maintain a uniform distribution on a (moving) circle around the target. Unlike our work, however, they require the robots to constantly communicate with their local neighbors.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_1", "@cite_24", "@cite_11" ], "mid": [ "1126974217", "1862013120", "", "615006660", "1530018653" ], "abstract": [ "Target detection and tracking encompasses a variety of decisional problems such as coverage, surveillance, search, patrolling, observing and pursuit-evasion along with others. These problems are studied by several communities, that tackle them using diverse formulations, hypotheses and approaches. This variety and the fact that target related robotics problems are pertinent for a large spectrum of applications has motivated a large amount of contributions, which have mostly been surveyed according to one or another viewpoint. In this article, our objective is to go beyond the frontiers of specific communities and specific problems, and to enlarge the scope of prior surveys. We define classes of missions and problems, and relate the results from various communities according to a unifying taxonomy. We review various work related to each class of problems identified in the taxonomy, highlighting the different approaches, models and results. Finally, we propose a transverse synthesis which analyses the approaches, models and lacks that are recurrent through all the tackled problems, and isolate the current main research directions.", "We present a control framework for achieving encirclement of a target moving in 3D using a multi-robot system. Three variations of a basic control strategy are proposed for different versions of the encirclement problem, and their effectiveness is formally established. An extension ensuring maintenance of a safe inter-robot distance is also discussed. The proposed framework is fully decentralized and only requires local communication among robots; in particular, each robot locally estimates all the relevant global quantities. 
We validate the proposed strategy through simulations on kinematic point robots and quadrotor UAVs, as well as experiments on differential-drive wheeled mobile robots.", "", "We consider the cooperative control of a team of robots to estimate the position of a moving target using onboard sensing. In this setting, robots are required to estimate their positions using relative onboard sensing while concurrently tracking the target. Our probabilistic localization and control method takes into account the motion and sensing capabilities of the individual robots to minimize the expected future uncertainty of the target position. Two measures of uncertainty are extensively evaluated and compared: mutual information and the trace of the extended Kalman filter covariance. Our approach reasons about multiple possible sensing topologies and incorporates an efficient topology switching technique to generate locally optimal controls in polynomial time complexity. Simulations illustrate the performance of our approach and prove its flexibility in finding suitable sensing topologies depending on the limited sensing capabilities of the robots and the movements of the target. Furthermore, we demonstrate the applicability of our method in various experiments with single and multiple quadrotor robots tracking a ground vehicle in an indoor environment.", "Target tracking with bearing-only sensors is a challenging problem when the target moves dynamically in complex scenarios. Besides the partial observability of such sensors, they have limited field of views, occlusions can occur, etc. In those cases, cooperative approaches with multiple tracking robots are interesting, but the different sources of uncertain information need to be considered appropriately in order to achieve better estimates. 
Even though there exist probabilistic filters that can estimate the position of a target dealing with uncertainties, bearing-only measurements bring usually additional problems with initialization and data association. In this paper, we propose a multi-robot triangulation method with a dynamic baseline that can triangulate bearing-only measurements in a probabilistic manner to produce 3D observations. This method is combined with a decentralized stochastic filter and used to tackle those initialization and data association issues. The approach is validated with simulations and field experiments where a team of aerial and ground robots with cameras track a dynamic target." ] }
1704.07475
2964222971
We study the problem of reducing the amount of communication in decentralized target tracking. We focus on the scenario, where a team of robots is allowed to move on the boundary of the environment. Their goal is to seek a formation so as to best track a target moving in the interior of the environment. The robots are capable of measuring distances to the target. Decentralized control strategies have been proposed in the past, which guarantees that the robots asymptotically converge to the optimal formation. However, existing methods require that the robots exchange information with their neighbors at all time steps. Instead, we focus on decentralized strategies to reduce the amount of communication among robots. We propose a self-triggered communication strategy that decides when a particular robot should seek up-to-date information from its neighbors and when it is safe to operate with possibly outdated information. We prove that this strategy converges asymptotically to the desired formation when the target is stationary. For the case of a mobile target, we use a decentralized Kalman filter with covariance intersection to share the beliefs of neighboring robots. We evaluate all the approaches through simulations and a proof-of-concept experiment. Note to Practitioners —We study the problem of tracking a target using a team of coordinating robots. Target tracking problems are prevalent in a number of applications, such as co-robots, surveillance, and wildlife monitoring. Coordination between robots typically requires communication amongst them. Most multi-robot coordination algorithms implicitly assume that the robots can communicate at all time steps. Communication can be a considerable source of energy consumption, especially for small robots. Furthermore, communicating at all time steps may be redundant in many settings. 
With this as motivation, we propose an algorithm where the robots do not necessarily communicate at all times and instead choose specific triggering time instances to share information with their neighbors. Despite the limitation of limited communication, we show that the algorithm converges to the optimal configuration both in theory as well as in simulations.
@cite_3 proposed a distributed approach to the multi-robot assignment problem for multi-target tracking that takes both sensing and communication ranges into account. The goal of their work is also to limit communication between the robots. However, they do so by limiting the number of messages sent at each time step, while still allowing the robots to communicate at every time step. Our work, in contrast, explicitly determines when to trigger communication with other robots.
{ "cite_N": [ "@cite_3" ], "mid": [ "2621698318" ], "abstract": [ "We study a multi-robot assignment problem for multi-target tracking. The proposed problem can be viewed as the mixed packing and covering problem. To deal with a limitation on both sensing and communication ranges, a distributed approach is taken into consideration. A local algorithm gives theoretical bounds on both the running time and approximation ratio to an optimal solution. We employ a local algorithm of max-min linear programs to solve the proposed task. Simulation result shows that a local algorithm is an effective solution to the multi-robot task allocation." ] }
1704.07475
2964222971
We study the problem of reducing the amount of communication in decentralized target tracking. We focus on the scenario, where a team of robots is allowed to move on the boundary of the environment. Their goal is to seek a formation so as to best track a target moving in the interior of the environment. The robots are capable of measuring distances to the target. Decentralized control strategies have been proposed in the past, which guarantees that the robots asymptotically converge to the optimal formation. However, existing methods require that the robots exchange information with their neighbors at all time steps. Instead, we focus on decentralized strategies to reduce the amount of communication among robots. We propose a self-triggered communication strategy that decides when a particular robot should seek up-to-date information from its neighbors and when it is safe to operate with possibly outdated information. We prove that this strategy converges asymptotically to the desired formation when the target is stationary. For the case of a mobile target, we use a decentralized Kalman filter with covariance intersection to share the beliefs of neighboring robots. We evaluate all the approaches through simulations and a proof-of-concept experiment. Note to Practitioners —We study the problem of tracking a target using a team of coordinating robots. Target tracking problems are prevalent in a number of applications, such as co-robots, surveillance, and wildlife monitoring. Coordination between robots typically requires communication amongst them. Most multi-robot coordination algorithms implicitly assume that the robots can communicate at all time steps. Communication can be a considerable source of energy consumption, especially for small robots. Furthermore, communicating at all time steps may be redundant in many settings. 
With this as motivation, we propose an algorithm where the robots do not necessarily communicate at all times and instead choose specific triggering time instances to share information with their neighbors. Despite the limitation of limited communication, we show that the algorithm converges to the optimal configuration both in theory as well as in simulations.
Our work builds on the event-triggered and self-triggered communication schemes studied primarily by the controls community @cite_5 @cite_25 . @cite_14 presented both centralized and decentralized event-triggered strategies for the agreement problem in multi-agent systems. They extended the results to a self-triggered communication setting in which each robot calculates its next communication time based on the previous one, without monitoring the state error. Nowzari and Cortés @cite_2 proposed a decentralized self-triggered coordination algorithm for the optimal deployment of a group of robots based on spatial partitioning techniques. The synchronous version of this algorithm converges comparably to an all-time communication strategy.
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_25", "@cite_2" ], "mid": [ "1978518835", "2167183308", "1985235885", "2154718413" ], "abstract": [ "In this note, we revisit the problem of scheduling stabilizing control tasks on embedded processors. We start from the paradigm that a real-time scheduler could be regarded as a feedback controller that decides which task is executed at any given instant. This controller has for objective guaranteeing that (control unrelated) software tasks meet their deadlines and that stabilizing control tasks asymptotically stabilize the plant. We investigate a simple event-triggered scheduler based on this feedback paradigm and show how it leads to guaranteed performance thus relaxing the more traditional periodic execution requirements.", "Event-driven strategies for multi-agent systems are motivated by the future use of embedded microprocessors with limited resources that will gather information and actuate the individual agent controller updates. The controller updates considered here are event-driven, depending on the ratio of a certain measurement error with respect to the norm of a function of the state, and are applied to a first order agreement problem. A centralized formulation is considered first and then its distributed counterpart, in which agents require knowledge only of their neighbors' states for the controller implementation. The results are then extended to a self-triggered setup, where each agent computes its next update time at the previous one, without having to keep track of the state error that triggers the actuation between two consecutive update instants. The results are illustrated through simulation examples.", "Recent developments in computer and communication technologies have led to a new type of large-scale resource-constrained wireless embedded control systems. It is desirable in these systems to limit the sensor and control computation and or communication to instances when the system needs attention. 
However, classical sampled-data control is based on performing sensing and actuation periodically rather than when the system needs attention. This paper provides an introduction to event- and self-triggered control systems where sensing and actuation is performed when needed. Event-triggered control is reactive and generates sensor sampling and control actuation when, for instance, the plant state deviates more than a certain threshold from a desired value. Self-triggered control, on the other hand, is proactive and computes the next sampling or actuation instance ahead of time. The basics of these control strategies are introduced together with a discussion on the differences between state feedback and output feedback for event-triggered control. It is also shown how event- and self-triggered control can be implemented using existing wireless communication technology. Some applications to wireless control in process industry are discussed as well.", "This paper studies a deployment problem for a group of robots where individual agents operate with outdated information about each other's locations. Our objective is to understand to what extent outdated information is still useful and at which point it becomes essential to obtain new, up-to-date information. We propose a self-triggered coordination algorithm based on spatial partitioning techniques with uncertain information. We analyze its correctness in synchronous and asynchronous scenarios, and establish the same convergence guarantees that a synchronous algorithm with perfect information at all times would achieve. The technical approach combines computational geometry, set-valued stability analysis, and event-based systems." ] }
1704.07475
2964222971
We study the problem of reducing the amount of communication in decentralized target tracking. We focus on the scenario, where a team of robots is allowed to move on the boundary of the environment. Their goal is to seek a formation so as to best track a target moving in the interior of the environment. The robots are capable of measuring distances to the target. Decentralized control strategies have been proposed in the past, which guarantees that the robots asymptotically converge to the optimal formation. However, existing methods require that the robots exchange information with their neighbors at all time steps. Instead, we focus on decentralized strategies to reduce the amount of communication among robots. We propose a self-triggered communication strategy that decides when a particular robot should seek up-to-date information from its neighbors and when it is safe to operate with possibly outdated information. We prove that this strategy converges asymptotically to the desired formation when the target is stationary. For the case of a mobile target, we use a decentralized Kalman filter with covariance intersection to share the beliefs of neighboring robots. We evaluate all the approaches through simulations and a proof-of-concept experiment. Note to Practitioners —We study the problem of tracking a target using a team of coordinating robots. Target tracking problems are prevalent in a number of applications, such as co-robots, surveillance, and wildlife monitoring. Coordination between robots typically requires communication amongst them. Most multi-robot coordination algorithms implicitly assume that the robots can communicate at all time steps. Communication can be a considerable source of energy consumption, especially for small robots. Furthermore, communicating at all time steps may be redundant in many settings. 
With this as motivation, we propose an algorithm where the robots do not necessarily communicate at all times and instead choose specific triggering time instances to share information with their neighbors. Despite the limitation of limited communication, we show that the algorithm converges to the optimal configuration both in theory as well as in simulations.
To the best of our knowledge, our paper is the first to simultaneously handle both robot coordination @cite_22 and target tracking @cite_11 . We focus on applying self-triggered control to reduce the amount of local communication between neighbors.
{ "cite_N": [ "@cite_22", "@cite_11" ], "mid": [ "1862013120", "1530018653" ], "abstract": [ "We present a control framework for achieving encirclement of a target moving in 3D using a multi-robot system. Three variations of a basic control strategy are proposed for different versions of the encirclement problem, and their effectiveness is formally established. An extension ensuring maintenance of a safe inter-robot distance is also discussed. The proposed framework is fully decentralized and only requires local communication among robots; in particular, each robot locally estimates all the relevant global quantities. We validate the proposed strategy through simulations on kinematic point robots and quadrotor UAVs, as well as experiments on differential-drive wheeled mobile robots.", "Target tracking with bearing-only sensors is a challenging problem when the target moves dynamically in complex scenarios. Besides the partial observability of such sensors, they have limited field of views, occlusions can occur, etc. In those cases, cooperative approaches with multiple tracking robots are interesting, but the different sources of uncertain information need to be considered appropriately in order to achieve better estimates. Even though there exist probabilistic filters that can estimate the position of a target dealing with uncertainties, bearing-only measurements bring usually additional problems with initialization and data association. In this paper, we propose a multi-robot triangulation method with a dynamic baseline that can triangulate bearing-only measurements in a probabilistic manner to produce 3D observations. This method is combined with a decentralized stochastic filter and used to tackle those initialization and data association issues. The approach is validated with simulations and field experiments where a team of aerial and ground robots with cameras track a dynamic target." ] }
1704.07398
2609504523
A fundamental question in language learning concerns the role of a speaker's first language in second language acquisition. We present a novel methodology for studying this question: analysis of eye-movement patterns in second language reading of free-form text. Using this methodology, we demonstrate for the first time that the native language of English learners can be predicted from their gaze fixations when reading English. We provide analysis of classifier uncertainty and learned features, which indicates that differences in English reading are likely to be rooted in linguistic divergences across native languages. The presented framework complements production studies and offers new ground for advancing research on multilingualism.
Second language reading has been studied using eye-tracking, with much of the work focusing on the processing of syntactic ambiguities and the analysis of specific target word classes such as cognates @cite_10 @cite_16 . In contrast to our work, such studies typically use controlled, rather than free-form, sentences. Investigation of global metrics in free-form second language reading was introduced only recently by . That study compared ESL and native reading of a novel by native speakers of Dutch, observing longer sentence reading times, more fixations, and shorter saccades in ESL reading. Differently from that study, our work focuses on comparing reading patterns across different native languages. We also analyze a related but different metric, namely speed-normalized fixation durations on word sequences.
{ "cite_N": [ "@cite_16", "@cite_10" ], "mid": [ "1997990525", "2031520010" ], "abstract": [ "Second language (L2) researchers are becoming more interested in both L2 learners’ knowledge of the target language and how that knowledge is put to use during real-time language processing. Researchers are therefore beginning to see the importance of combining traditional L2 research methods with those that capture the moment-by-moment interpretation of the target language, such as eye-tracking. The major benefi t of the eye-tracking method is that it can tap into real-time (or online) comprehension processes during the uninterrupted processing of the input, and thus, the data can be compared to those elicited by other, more met alinguistic tasks to offer a broader picture of language acquisition and processing. In this article, we present an overview of the eye-tracking technique and illustrate the method with L2 studies that show how eye-tracking data can be used to (a) investigate language-related topics and (b) inform key debates in the fi elds of L2 acquisition and L2 processing.", "When hearing or reading words and sentences in a second language (L2), we face many uncertainties about how the people and objects referred to are connected to one another. So what do we do under these conditions of uncertainty? Because relatively proficient L2 speakers have access to the grammar and lexicon of each language when comprehending words and sentences or when planning spoken utterances, and because the recent research suggests that these linguistic systems are not entirely independent, there is a critical question about how the knowledge of two languages affects basic aspects of language processing. In this article, I review how eye-tracking methodology has been used as a tool to address this question. 
I begin by discussing why eye movements are a useful methodology in language processing research, and I provide a description of one experimental paradigm developed to explore eye movements during reading. Second, I present recent developments in the use of eye tracking to study L2 spoken-language comprehension. I also highlight the importance of using multiple measures of online sentence processing by discussing results obtained using a moving window task and eye-tracking records while L2 speakers read syntactically ambiguous relative clauses. Next, I discuss research investigating syntactic processing when L2 speakers process mixed language. I end with suggestions for future research directions." ] }
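The speed-normalized fixation metric mentioned in the related-work passage above can be sketched as follows. This is a hypothetical simplification (the record format and normalization choice are assumptions, not the cited study's pipeline): divide each word's fixation duration by the reader's mean fixation duration, factoring out overall reading speed.

```python
# Hypothetical illustration (names and record format are assumptions): a
# speed-normalized fixation metric divides each word's fixation duration by
# the reader's mean fixation duration, factoring out overall reading speed.

def normalized_fixation_durations(fixations):
    """fixations: list of (word, duration_ms) pairs for one reader."""
    mean_dur = sum(d for _, d in fixations) / len(fixations)
    return {w: d / mean_dur for w, d in fixations}  # last fixation per word wins

reader = [("the", 180), ("subtle", 260), ("argument", 310), ("holds", 210)]
norm = normalized_fixation_durations(reader)
```

A value above 1.0 then marks a word fixated longer than this reader's average, which is comparable across readers of different speeds.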
1704.07398
2609504523
A fundamental question in language learning concerns the role of a speaker's first language in second language acquisition. We present a novel methodology for studying this question: analysis of eye-movement patterns in second language reading of free-form text. Using this methodology, we demonstrate for the first time that the native language of English learners can be predicted from their gaze fixations when reading English. We provide analysis of classifier uncertainty and learned features, which indicates that differences in English reading are likely to be rooted in linguistic divergences across native languages. The presented framework complements production studies and offers new ground for advancing research on multilingualism.
NLI was first introduced in and has been drawing considerable attention in NLP, including a recent shared-task challenge with 29 participating teams @cite_19 . NLI has also been driving much of the work on identification of native language related features in writing @cite_21 @cite_28 @cite_35 @cite_12 @cite_13 @cite_23 @cite_2 @cite_4 . Several studies have also linked usage patterns and grammatical errors in production to linguistic properties of the writer's native language @cite_33 @cite_0 @cite_14 @cite_32 . Our work departs from NLI in writing and introduces NLI and related feature analysis in reading.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_4", "@cite_33", "@cite_28", "@cite_21", "@cite_32", "@cite_0", "@cite_19", "@cite_23", "@cite_2", "@cite_13", "@cite_12" ], "mid": [ "", "2949822859", "2135528482", "", "", "2137264748", "2252012503", "2155939044", "", "2169087896", "2155701129", "2251264895", "2251303571" ], "abstract": [ "", "Linguists and psychologists have long been studying cross-linguistic transfer, the influence of native language properties on linguistic performance in a foreign language. In this work we provide empirical evidence for this process in the form of a strong correlation between language similarities derived from structural features in English as Second Language (ESL) texts and equivalent similarities obtained from the typological features of the native languages. We leverage this finding to recover native language typological similarity structure directly from ESL text, and perform prediction of typological features in an unsupervised fashion with respect to the target languages. Our method achieves 72.2% accuracy on the typology prediction task, a result that is highly competitive with equivalent methods that rely on typological resources.", "In this paper, we show that stylistic text features can be exploited to determine an anonymous author's native language with high accuracy. Specifically, we first use automatic tools to ascertain frequencies of various stylistic idiosyncrasies in a text. These frequencies then serve as features for support vector machines that learn to classify texts according to author native language.", "", "", "We apply machine learning techniques to study language transfer, a major topic in the theory of Second Language Acquisition (SLA). Using an SVM for the problem of native language classification, we show that a careful analysis of the effects of various features can lead to scientific insights. 
In particular, we demonstrate that character bigrams alone allow classification levels of about 66% for a 5-class task, even when content and function word differences are accounted for. This may show that native language has a strong effect on the word choice of people writing in a second language.", "This work examines the impact of crosslinguistic transfer on grammatical errors in English as Second Language (ESL) texts. Using a computational framework that formalizes the theory of Contrastive Analysis (CA), we demonstrate that language specific error distributions in ESL writing can be predicted from the typological properties of the native language and their relation to the typology of English. Our typology driven model enables us to obtain accurate estimates of such distributions without access to any ESL data for the target languages. Furthermore, we present a strategy for adjusting our method to low-resource languages that lack typological documentation using a bootstrapping approach which approximates native language typology from ESL texts. Finally, we show that our framework is instrumental for linguistic inquiry seeking to identify first language factors that contribute to a wide range of difficulties in second language acquisition.", "Mother tongue interference is the phenomenon where linguistic systems of a mother tongue are transferred to another language. Recently, Nagata and Whittaker (2013) have shown that language family relationship among mother tongues is preserved in English written by Indo-European language speakers because of mother tongue interference. At the same time, their findings further introduce the following two research questions: (1) Does the preservation universally hold in non-native English other than in English of Indo-European language speakers? (2) Is the preservation independent of proficiency in English? In this paper, we address these research questions. 
We first explore the two research questions empirically by reconstructing language family trees from English texts written by speakers of Asian languages. We then discuss theoretical reasons for the empirical results. We finally introduce another hypothesis called the existence of a probabilistic module to explain why the preservation does or does not hold in particular situations.", "", "Language transfer, the preferential second language behavior caused by similarities to the speaker’s native language, requires considerable expertise to be detected by humans alone. Our goal in this work is to replace expert intervention by data-driven methods wherever possible. We define a computational methodology that produces a concise list of lexicalized syntactic patterns that are controlled for redundancy and ranked by relevancy to language transfer. We demonstrate the ability of our methodology to detect hundreds of such candidate patterns from currently available data sources, and validate the quality of the proposed patterns through classification experiments.", "Language transfer, the characteristic second language usage patterns caused by native language interference, is investigated by Second Language Acquisition (SLA) researchers seeking to find overused and underused linguistic features. In this paper we develop and present a methodology for deriving ranked lists of such features. Using very large learner data, we show our method’s ability to find relevant candidates using sophisticated linguistic features. To illustrate its applicability to SLA research, we formulate plausible language transfer hypotheses supported by current evidence. 
This is the first work to extend Native Language Identification to a broader linguistic interpretation of learner data and address the automatic extraction of underused features on a per-native language basis.", "We develop a method for effective extraction of linguistic patterns that are differentially expressed based on the native language of the author. This method uses multiple corpora to allow for the removal of data set specific patterns, and addresses both feature relevancy and redundancy. We evaluate different relevancy ranking metrics and show that common measures of relevancy can be inappropriate for data with many rare features. Our feature set is a broad class of syntactic patterns, and to better capture the signal we extend the Bayesian Tree Substitution Grammar induction algorithm to a supervised mixture of latent grammars. We show that this extension can be used to extract a larger set of relevant features.", "In this paper we present work on the task of Native Language Identification (NLI). We present an alternative corpus to the ICLE which has been used in most work up until now. We believe that our corpus, TOEFL11, is more suitable for the task of NLI and will allow researchers to better compare systems and results. We show that many of the features that have been commonly used in this task generalize to new and larger corpora. In addition, we examine possible ways of increasing current system performance (e.g., additional features and feature combination methods), and achieve overall state-of-the-art results (accuracy of 90.1%) on the ICLE corpus using an ensemble classifier that includes previously examined features and a novel feature (n-gram language models). We also show that training on a large corpus and testing on a smaller one works well, but not vice versa. Finally, we show that system performance varies across proficiency scores." ] }
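One abstract above reports that character bigrams alone support roughly 66% accuracy on a 5-class native-language task. A hedged sketch of such a feature extractor (the cited work's exact pipeline is not reproduced; this is a generic relative-frequency version):

```python
# Hedged sketch of character-bigram features for NLI-style classification:
# relative frequencies of all adjacent character pairs in a text.
from collections import Counter

def char_bigram_features(text):
    text = text.lower()
    bigrams = [text[i:i + 2] for i in range(len(text) - 1)]
    counts = Counter(bigrams)
    total = sum(counts.values())
    return {bg: c / total for bg, c in counts.items()}

feats = char_bigram_features("the data")
```

Feature dictionaries like this would typically be vectorized and fed to a linear classifier (e.g., an SVM, as in the studies cited above).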
1704.07497
2609883564
In this paper, we consider a coverage problem for uncertain points in a tree. Let T be a tree containing a set P of n (weighted) demand points, and the location of each demand point P_i ∈ P is uncertain but is known to appear in one of m_i points on T, each associated with a probability. Given a covering range λ, the problem is to find a minimum number of points (called centers) on T to build facilities for serving (or covering) these demand points in the sense that for each uncertain point P_i ∈ P, the expected distance from P_i to at least one center is no more than λ. The problem has not been studied before. We present an O(|T| + M log^2 M) time algorithm for the problem, where |T| is the number of vertices of T and M is the total number of locations of all uncertain points of P, i.e., M = Σ_{P_i ∈ P} m_i. In addition, by using this algorithm, we solve a k-center problem on T for the uncertain points of P.
Two models of uncertain data have been commonly considered: the existential model @cite_8 @cite_23 @cite_17 @cite_27 @cite_34 @cite_28 and the locational model @cite_35 @cite_13 @cite_33 @cite_22 . In the existential model, an uncertain point has a specific location but its existence is uncertain, while in the locational model an uncertain point always exists but its location is uncertain and follows a probability distribution function. Our problems belong to the locational model. In fact, the same problems under the existential model are essentially the weighted case for ``deterministic'' points (i.e., each @math has a single ``certain'' location), and the center-coverage problem is solvable in linear time @cite_19 and the @math -center problem is solvable in @math time @cite_21 @cite_12 .
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_22", "@cite_8", "@cite_28", "@cite_21", "@cite_19", "@cite_27", "@cite_23", "@cite_34", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2169275758", "2142909274", "2042502381", "2951335702", "2118291925", "1671942308", "2010961994", "2077178480", "", "581837698", "2950563475", "2032624334", "2036686829" ], "abstract": [ "Querying uncertain data has emerged as an important problem in data management due to the imprecise nature of many measurement data. In this paper we study answering range queries over uncertain data. Specifically, we are given a collection P of n points in R, each represented by its one-dimensional probability density function (pdf). The goal is to build an index on P such that given a query interval I and a probability threshold τ, we can quickly report all points of P that lie in I with probability at least τ. We present various indexing schemes with linear or near-linear space and logarithmic query time. Our schemes support pdf's that are either histograms or more complex ones such as Gaussian or piecewise algebraic. They also extend to the external memory model in which the goal is to minimize the number of disk accesses when querying the index.", "It is infeasible for a sensor database to contain the exact value of each sensor at all points in time. This uncertainty is inherent in these systems due to measurement and sampling errors, and resource limitations. In order to avoid drawing erroneous conclusions based upon stale data, the use of uncertainty intervals that model each data item as a range and associated probability density function (pdf) rather than a single value has recently been proposed. Querying these uncertain data introduces imprecision into answers, in the form of probability values that specify the likeliness the answer satisfies the query. 
These queries are more expensive to evaluate than their traditional counterparts but are guaranteed to be correct and more informative due to the probabilities accompanying the answers. Although the answer probabilities are useful, for many applications, it is only necessary to know whether the probability exceeds a given threshold - we term these Probabilistic Threshold Queries (PTQ). In this paper we address the efficient computation of these types of queries. In particular, we develop two index structures and associated algorithms to efficiently answer PTQs. The first index scheme is based on the idea of augmenting uncertainty information to an R-tree. We establish the difficulty of this problem by mapping one-dimensional intervals to a two-dimensional space, and show that the problem of interval indexing with probabilities is significantly harder than interval indexing which is considered a well-studied problem. To overcome the limitations of this R-tree based structure, we apply a technique we call variance-based clustering, where data points with similar degrees of uncertainty are clustered together. Our extensive index structure can answer the queries for various kinds of uncertainty pdfs, in an almost optimal sense. We conduct experiments to validate the superior performance of both indexing schemes.", "In an uncertain database, every object o is associated with a probability density function, which describes the likelihood that o appears at each position in a multidimensional workspace. This article studies two types of range retrieval fundamental to many analytical tasks. Specifically, a nonfuzzy query returns all the objects that appear in a search region rq with at least a certain probability tq. On the other hand, given an uncertain object q, fuzzy search retrieves the set of objects that are within distance eq from q with no less than probability tq. 
The core of our methodology is a novel concept of “probabilistically constrained rectangle”, which permits effective pruning/validation of nonqualifying/qualifying data. We develop a new index structure called the U-tree for minimizing the query overhead. Our algorithmic findings are accompanied with a thorough theoretical analysis, which reveals valuable insight into the problem characteristics, and mathematically confirms the efficiency of our solutions.", "We study the convex-hull problem in a probabilistic setting, motivated by the need to handle data uncertainty inherent in many applications, including sensor databases, location-based services and computer vision. In our framework, the uncertainty of each input site is described by a probability distribution over a finite number of possible locations including a location to account for non-existence of the point. Our results include both exact and approximation algorithms for computing the probability of a query point lying inside the convex hull of the input, time-space tradeoffs for the membership queries, a connection between Tukey depth and membership queries, as well as a new notion of @math -hull that may be a useful representation of uncertain hulls.", "We study the problem of answering spatial queries in databases where objects exist with some uncertainty and they are associated with an existential probability. The goal of a thresholding probabilistic spatial query is to retrieve the objects that qualify the spatial predicates with probability that exceeds a threshold. Accordingly, a ranking probabilistic spatial query selects the objects with the highest probabilities to qualify the spatial predicates. 
We propose adaptations of spatial access methods and search algorithms for probabilistic versions of range queries, nearest neighbors, spatial skylines, and reverse nearest neighbors and conduct an extensive experimental study, which evaluates the effectiveness of proposed solutions.", "Megiddo introduced a technique for using a parallel algorithm for one problem to construct an efficient serial algorithm for a second problem. We give a general method that trims a factor of O(log n) time (or more) for many applications of this technique.", "Problems of finding p-centers and dominating sets of radius r in networks are discussed in this paper. Let n be the number of vertices and @math be the number of edges of a network. With the assumption that the distance-matrix of the network is available, we design an @math algorithm for finding an absolute 1-center of a vertex-weighted network and an @math algorithm for finding an absolute 1-center of a vertex-unweighted network (the problem of finding a vertex 1-center of a network is trivial). We show that the problem of finding a (vertex or absolute) p-center (for @math ) of a (vertex-weighted or vertex-unweighted) network, and the problem of finding a dominating set of radius r are @math -hard even in the case where the network has a simple structure (e.g., a planar graph of maximum vertex degree 3). However, we describe an algorithm of complexity @math (respectively, O(|E|^p n^{2p-1} ...", "We consider the problem of nearest-neighbor searching among a set of stochastic sites, where a stochastic site is a tuple (s_i, π_i) consisting of a point s_i in a d-dimensional space and a probability π_i determining its existence. The problem is interesting and non-trivial even in 1-dimension, where the Most Likely Voronoi Diagram (LVD) is shown to have worst-case complexity Θ(n^2). 
We then show that under more natural and less adversarial conditions, the size of the 1-dimensional LVD is significantly smaller: (1) Θ(kn) if the input has only k distinct probability values, (2) O(n log n) on average, and (3) O(n √n) under smoothed analysis. We also present an alternative approach to the most likely nearest neighbor (LNN) search using Pareto sets, which gives a linear-space data structure and sub-linear query time in 1D for average and smoothed analysis models, as well as worst-case with a bounded number of distinct probabilities. Using the Pareto-set approach, we can also reduce the multi-dimensional LNN search to a sequence of nearest neighbor and spherical range queries.", "", "Consider a set of points in d dimensions where the existence or the location of each point is determined by a probability distribution. The convex hull of this set is a random variable distributed over exponentially many choices. We are interested in finding the most likely convex hull, namely, the one with the maximum probability of occurrence. We investigate this problem under two natural models of uncertainty: the point (also called the tuple) model where each point (site) has a fixed position s_i but only exists with some probability π_i, for 0 < π_i ≤ 1, and the multipoint model where each point has multiple possible locations or it may not appear at all. We show that the most likely hull under the point model can be computed in O(n^3) time for n points in d = 2 dimensions, but it is NP-hard for d ≥ 3 dimensions. On the other hand, we show that the problem is NP-hard under the multipoint model even for d = 2 dimensions. We also present hardness results for approximating the probability of the most likely hull. 
While we focus on the most likely hull for concreteness, our results hold for other natural definitions of a probabilistic hull.", "Nearest-neighbor search, which returns the nearest neighbor of a query point in a set of points, is an important and widely studied problem in many fields, and it has a wide range of applications. In many of them, such as sensor databases, location-based services, face recognition, and mobile data, the location of data is imprecise. We therefore study nearest-neighbor queries in a probabilistic framework in which the location of each input point is specified as a probability distribution function. We present efficient algorithms for - computing all points that are nearest neighbors of a query point with nonzero probability; and - estimating the probability of a point being the nearest neighbor of a query point, either exactly or within a specified additive error.", "An @math algorithm for the continuous p-center problem on a tree is presented. Following a sequence of previous algorithms, ours is the first one whose time bound is uniform in p and less than quadratic in n. We also present an @math algorithm for a weighted discrete p-center problem.", "We study the complexity of geometric minimum spanning trees under a stochastic model of input: Suppose we are given a master set of points s_1, s_2, ..., s_n in d-dimensional Euclidean space, where each point s_i is active with some independent and arbitrary but known probability p_i. We want to compute the expected length of the minimum spanning tree (MST) of the active points. This particular form of stochastic problems is motivated by the uncertainty inherent in many sources of geometric data but has not been investigated before in computational geometry to the best of our knowledge. Our main results include the following. We show that the stochastic MST problem is #P-hard for any dimension d ≥ 2. 
We present a simple fully polynomial randomized approximation scheme (FPRAS) for a metric space, and thus also for any Euclidean space. For d=2, we present two deterministic approximation algorithms: an O(n^4)-time constant-factor algorithm, and a PTAS based on a combination of shifted quadtrees and dynamic programming. We show that in a general metric space the tail bounds of the distribution of the MST length cannot be approximated to any multiplicative factor in polynomial time under the assumption that P ≠ NP. In addition to this existential model of stochastic input, we also briefly consider a locational model where each point is present with certainty but its location is probabilistic." ] }
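The coverage condition used throughout this record can be made concrete with a minimal sketch, assuming 1D locations for readability (function names are hypothetical): under the locational model, a center covers an uncertain point P_i when the expected distance to it is at most the covering range λ.

```python
# Minimal sketch of the coverage condition from the abstract above, assuming
# 1D locations: a center covers an uncertain point P_i when the expected
# distance to it is at most the covering range lam.

def expected_distance(locations, probs, center):
    """E[d(P_i, center)] for an uncertain point with the given locations."""
    return sum(p * abs(x - center) for x, p in zip(locations, probs))

def is_covered(locations, probs, center, lam):
    return expected_distance(locations, probs, center) <= lam

# P_i appears at 0 with probability 0.5 and at 4 with probability 0.5
e = expected_distance([0.0, 4.0], [0.5, 0.5], 1.0)  # 0.5*1 + 0.5*3 = 2.0
```

On a tree, the same expectation is taken over tree (path) distances instead of absolute differences; the structure of the condition is unchanged.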
1704.07497
2609883564
In this paper, we consider a coverage problem for uncertain points in a tree. Let T be a tree containing a set P of n (weighted) demand points, and the location of each demand point P_i P is uncertain but is known to appear in one of m_i points on T each associated with a probability. Given a covering range , the problem is to find a minimum number of points (called centers) on T to build facilities for serving (or covering) these demand points in the sense that for each uncertain point P_i P, the expected distance from P_i to at least one center is no more than @math . The problem has not been studied before. We present an O(|T|+M ^2 M) time algorithm for the problem, where |T| is the number of vertices of T and M is the total number of locations of all uncertain points of P, i.e., M= P_i P m_i. In addition, by using this algorithm, we solve a k-center problem on T for the uncertain points of P.
If @math is a path, both the center-coverage problem and the @math -center problem on uncertain points have been studied @cite_20 , but under a somewhat special problem setting where @math is the same for all @math . The two problems were solved in @math and @math time, respectively. If @math is a tree, an @math time algorithm was given in @cite_5 for the one-center problem under the above special problem setting.
{ "cite_N": [ "@cite_5", "@cite_20" ], "mid": [ "2345659331", "1628490320" ], "abstract": [ "Uncertain data has been very common in many applications. In this paper, we consider the one-center problem for uncertain data on tree networks. In this problem, we are given a tree T and n (weighted) uncertain points each of which has m possible locations on T associated with probabilities. The goal is to find a point @math xź on T such that the maximum (weighted) expected distance from @math xź to all uncertain points is minimized. To the best of our knowledge, this problem has not been studied before. We propose a refined prune-and-search technique that solves the problem in linear time.", "Problems on uncertain data have attracted significant attention due to the imprecise nature of many measurement data. In this paper, we consider the k-center problem on one-dimensional uncertain data. The input is a set P of (weighted) uncertain points on a real line, and each uncertain point is specified by its probability density function (pdf) which is a piecewise-uniform function (i.e., a histogram). The goal is to find a set Q of k points on the line to minimize the maximum expected distance from the uncertain points of P to their expected closest points in Q. We present efficient algorithms for this uncertain k-center problem and their running times almost match those for the \"deterministic\" k-center problem." ] }
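For intuition on the path case discussed above, here is a hypothetical sketch (not the cited algorithms): on a line, the expected distance E[d(P_i, x)] is a convex piecewise-linear function of the center position x, so each uncertain point has a single feasible interval of center positions, and center-coverage reduces to greedy interval stabbing. The numeric search parameters are illustration-only assumptions.

```python
# Hypothetical sketch: feasible interval per uncertain point (found
# numerically via the convexity of E[d]) followed by greedy stabbing.

def expected_dist(locs, probs, x):
    return sum(p * abs(x - xi) for xi, p in zip(locs, probs))

def feasible_interval(locs, probs, lam, lo=-1e6, hi=1e6, iters=100):
    """Numerically locate {x : E[d] <= lam}; None if the point cannot be covered."""
    a, b = lo, hi
    for _ in range(iters):                    # ternary search for the minimizer
        m1, m2 = a + (b - a) / 3, b - (b - a) / 3
        if expected_dist(locs, probs, m1) <= expected_dist(locs, probs, m2):
            b = m2
        else:
            a = m1
    xmin = (a + b) / 2
    if expected_dist(locs, probs, xmin) > lam:
        return None

    def cross(inside, outside):               # binary search for the lam-crossing
        for _ in range(iters):
            mid = (inside + outside) / 2
            if expected_dist(locs, probs, mid) <= lam:
                inside = mid
            else:
                outside = mid
        return inside

    return (cross(xmin, lo), cross(xmin, hi))

def min_centers(points, lam):
    """Greedy stabbing: scan feasible intervals by right endpoint and open a
    new center only when the previous one misses the next interval."""
    intervals = [feasible_interval(locs, probs, lam) for locs, probs in points]
    centers = []
    for l, r in sorted((iv for iv in intervals if iv is not None),
                       key=lambda iv: iv[1]):
        if not centers or centers[-1] < l:
            centers.append(r)
    return centers
```

The cited results achieve much better bounds with combinatorial techniques; this sketch only illustrates why the path version has a one-dimensional greedy structure.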
1704.07497
2609883564
In this paper, we consider a coverage problem for uncertain points in a tree. Let T be a tree containing a set P of n (weighted) demand points, and the location of each demand point P_i P is uncertain but is known to appear in one of m_i points on T each associated with a probability. Given a covering range , the problem is to find a minimum number of points (called centers) on T to build facilities for serving (or covering) these demand points in the sense that for each uncertain point P_i P, the expected distance from P_i to at least one center is no more than @math . The problem has not been studied before. We present an O(|T|+M ^2 M) time algorithm for the problem, where |T| is the number of vertices of T and M is the total number of locations of all uncertain points of P, i.e., M= P_i P m_i. In addition, by using this algorithm, we solve a k-center problem on T for the uncertain points of P.
As mentioned above, the ``deterministic'' version of the center-coverage problem, in which all demand points are on the vertices, is solvable in linear time @cite_19 . For the @math -center problem, Megiddo and Tamir @cite_12 presented an @math time algorithm ( @math is the size of the tree), which was improved to @math time by Cole @cite_21 . The unweighted case was solved in linear time by Frederickson @cite_11 .
{ "cite_N": [ "@cite_19", "@cite_21", "@cite_12", "@cite_11" ], "mid": [ "2010961994", "1671942308", "2032624334", "1529668588" ], "abstract": [ "Problems of finding p-centers and dominating sets of radius r in networks are discussed in this paper. Let n be the number of vertices and @math be the number of edges of a network. With the assumption that the distance-matrix of the network is available, we design an @math algorithm for finding an absolute 1-center of a vertex-weighted network and an @math algorithm for finding an absolute 1-center of a vertex-unweighted network (the problem of finding a vertex 1-center of a network is trivial). We show that the problem of finding a (vertex or absolute) p-center (for @math ) of a (vertex-weighted or vertex-unweighted) network, and the problem of finding a dominating set of radius r are @math -hard even in the case where the network has a simple structure (e.g., a planar graph of maximum vertex degree 3). However, we describe an algorithm of complexity @math (respectively, $ O[| E |^p n^ 2p - 1...", "Megiddo introduced a technique for using a parallel algorithm for one problem to construct an efficient serial algorithm for a second problem. We give a general method that trims a factor o f 0(logn) time (or more) for many applications of this technique.", "An @math algorithm for the continuous p-center problem on a tree is presented. Following a sequence of previous algorithms, ours is the first one whose time bound in uniform in p and less than quadratic in n. We also present an @math algorithm for a weighted discrete p-center problem.", "Linear-time and -space algorithms are presented for solving three versions of the p-center problem in a tree. The techniques are an application of parametric search." ] }
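A hedged sketch of the classic greedy idea behind the deterministic tree version discussed above: process vertices leaves-up, postponing a center until the deepest uncovered demand would otherwise be pushed beyond distance r. This is illustrative only, with hypothetical names and a vertex-restricted simplification; the cited linear-time algorithms are more refined.

```python
# Greedy leaves-up center coverage on a tree (deterministic, vertex centers).
import math
from collections import defaultdict

def tree_center_coverage(n, edges, demands, r):
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    demand = set(demands)
    parent, pw, order, stack = {0: None}, {0: 0.0}, [], [0]
    while stack:                             # DFS order: parents before children
        u = stack.pop()
        order.append(u)
        for v, w in adj[u]:
            if v not in parent:
                parent[v], pw[v] = u, w
                stack.append(v)
    unc = {u: -math.inf for u in range(n)}   # deepest uncovered demand below u
    cov = {u: math.inf for u in range(n)}    # nearest center below u
    centers = []
    for u in reversed(order):                # children before parents
        for v, w in adj[u]:
            if parent.get(v) == u:
                unc[u] = max(unc[u], unc[v] + w)
                cov[u] = min(cov[u], cov[v] + w)
        if u in demand:
            unc[u] = max(unc[u], 0.0)
        if unc[u] + cov[u] <= r:             # already served from below
            unc[u] = -math.inf
        if unc[u] >= 0 and (u == 0 or unc[u] + pw[u] > r):
            centers.append(u)                # forced to open a center here
            cov[u] = 0.0
            unc[u] = -math.inf
    return centers
```

Delaying each center as long as possible is what makes the count minimum; the same "open a center only when forced" structure underlies the linear-time algorithms cited above.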
1704.07497
2609883564
In this paper, we consider a coverage problem for uncertain points in a tree. Let T be a tree containing a set P of n (weighted) demand points, and the location of each demand point P_i P is uncertain but is known to appear in one of m_i points on T each associated with a probability. Given a covering range , the problem is to find a minimum number of points (called centers) on T to build facilities for serving (or covering) these demand points in the sense that for each uncertain point P_i P, the expected distance from P_i to at least one center is no more than @math . The problem has not been studied before. We present an O(|T|+M ^2 M) time algorithm for the problem, where |T| is the number of vertices of T and M is the total number of locations of all uncertain points of P, i.e., M= P_i P m_i. In addition, by using this algorithm, we solve a k-center problem on T for the uncertain points of P.
Very recently, Li and Huang @cite_6 considered the same @math -center problem under the same uncertain model as ours but in the Euclidean space, and they gave an approximation algorithm. Facility location problems in other uncertain models have also been considered. For example, Löffler and van Kreveld @cite_39 gave algorithms for computing the smallest enclosing circle for imprecise points, each of which is contained in a planar region (e.g., a circle or a square). Jørgensen @cite_31 studied the problem of computing the distribution of the radius of the smallest enclosing circle for uncertain points, each of which has multiple locations in the plane. de Berg et al. @cite_0 proposed algorithms for dynamically maintaining Euclidean @math -centers for a set of moving points in the plane (the moving points are considered uncertain). See also the problems for minimizing the maximum regret, e.g., @cite_7 @cite_9 @cite_36 .
{ "cite_N": [ "@cite_7", "@cite_36", "@cite_9", "@cite_6", "@cite_39", "@cite_0", "@cite_31" ], "mid": [ "2030028156", "1982820819", "2018274124", "", "2007553965", "2009033070", "1547243772" ], "abstract": [ "We consider single facility location problems (1-median and weighted 1-center) on a plane with uncertain weights and coordinates of customers (demand points). Specifically, for each customer, only interval estimates for its weight and coordinates are known. It is required to find a ''minmax regret'' location, i.e. to minimize the worst-case loss in the objective function value that may occur because the decision is made without knowing the exact values of customers' weights and coordinates that will get realized. We present an O(n^2 log^2 n) algorithm for the interval data minmax regret rectilinear 1-median problem and an O(n log n) algorithm for the interval data minmax regret rectilinear weighted 1-center problem. For the case of Euclidean distances, we consider uncertainty only in customers' weights. We discuss possibilities of solving approximately the minmax regret Euclidean 1-median problem, and present an O(n^2 2^α(n) log^2 n) algorithm for solving the minmax regret Euclidean weighted 1-center problem, where α(n) is the inverse Ackermann function.", "Let P be an undirected path graph of n vertices. Each edge of P has a positive length and a constant capacity. Every vertex has a nonnegative supply, which is an unknown value but is known to be in a given interval. The goal is to find a point on P to build a facility and move all vertex supplies to the facility such that the maximum regret is minimized. The previous best algorithm solves the problem in O(n log^2 n) time and O(n log n) space. In this paper, we present an O(n log n) time and O(n) space algorithm, and our approach is based on new observations and algorithmic techniques.", "We consider the weighted p-center problem on a transportation network with uncertain weights of nodes. 
Specifically, for each node, an interval estimate of its weight is known. The objective is to find the ‘minimax regret’ solution i.e. to minimize the worst-case loss in the objective function that may occur because a decision is made without knowing which state of nature will take place. We discuss properties of the problem and show that the problem can be solved by means of solving (n + 1) regular weighted p-center problems. This leads to polynomial algorithms for the cases where the regular weighted p-center problem can be solved in polynomial time, e.g. for the case of a tree network, and for the case of a general network with p = 1.", "", "Imprecision of input data is one of the main obstacles that prevent geometric algorithms from being used in practice. We model an imprecise point by a region in which the point must lie. Given a set of imprecise points, we study computing the largest and smallest possible values of various basic geometric measures on point sets, such as the diameter, width, closest pair, smallest enclosing circle, and smallest enclosing bounding box. We give efficient algorithms for most of these problems, and identify the hardness of others.", "We study two versions of the 2-center problem for moving points in the plane. Given a set P of n points, the Euclidean 2-center problem asks for two congruent disks of minimum size that together cover P; the rectilinear 2-center problem correspondingly asks for two congruent axis-aligned squares of minimum size that together cover P. Our methods work in the black-box KDS model, where we receive the locations of the points at regular time steps and we know an upper bound d_ max on the maximum displacement of any point within one time step. We show how to maintain the rectilinear 2-center in amortized sub-linear time per time step, under certain assumptions on the distribution of the point set P. 
For the Euclidean 2-center we give a similar result: we can maintain in amortized sub-linear time (again under certain assumptions on the distribution) a (1+ε)-approximation of the optimal 2-center. In many cases---namely when the distance between the centers of the disks is relatively large or relatively small---the solution we maintain is actually optimal. We also present results for the case where the maximum speed of the centers is bounded. We describe a simple scheme to maintain a 2-approximation of the rectilinear 2-center, and we provide a scheme which gives a better approximation factor depending on several parameters of the point set and the maximum allowed displacement of the centers. The latter result can be used to obtain a 2.29-approximation for the Euclidean 2-center; this is an improvement over the previously best known bound of 8/π ≈ 2.55. These algorithms run in amortized sub-linear time per time step, as before under certain assumptions on the distribution.", "We study computing with indecisive point sets. Such points have spatial uncertainty where the true location is one of a finite number of possible locations. This data arises from probing distributions a few times or when the location is one of a few locations from a known database. In particular, we study computing distributions of geometric functions such as the radius of the smallest enclosing ball and the diameter. Surprisingly, we can compute the distribution of the radius of the smallest enclosing ball exactly in polynomial time, but computing the same distribution for the diameter is #P-hard. We generalize our polynomial-time algorithm to all LP-type problems. We also utilize our indecisive framework to deterministically and approximately compute on a more general class of uncertain data where the location of each point is given by a probability distribution." ] }
1704.07497
2609883564
In this paper, we consider a coverage problem for uncertain points in a tree. Let T be a tree containing a set P of n (weighted) demand points, and the location of each demand point P_i ∈ P is uncertain but is known to appear in one of m_i points on T, each associated with a probability. Given a covering range @math , the problem is to find a minimum number of points (called centers) on T to build facilities for serving (or covering) these demand points in the sense that for each uncertain point P_i ∈ P, the expected distance from P_i to at least one center is no more than @math . The problem has not been studied before. We present an O(|T| + M log^2 M) time algorithm for the problem, where |T| is the number of vertices of T and M is the total number of locations of all uncertain points of P, i.e., M = ∑_{P_i ∈ P} m_i. In addition, by using this algorithm, we solve a k-center problem on T for the uncertain points of P.
Some coverage problems in various geometric settings have also been studied. For example, the unit disk coverage problem is to compute a minimum number of unit disks to cover a given set of points in the plane. The problem is NP-hard and a polynomial-time approximation scheme was known @cite_38 . The discrete case where the disks must be selected from a given set was also studied @cite_30 . See @cite_25 @cite_14 @cite_40 @cite_2 and the references therein for various problems of covering points using squares. Refer to a survey @cite_10 for more geometric coverage problems.
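To make the flavor of these coverage problems concrete, here is a small illustrative Python sketch (not from any of the cited works) of the 1D analogue: covering points on a line with the minimum number of fixed-length intervals, for which the classic greedy sweep is optimal.

```python
def unit_interval_cover(points, length=1.0):
    """Greedy minimum cover of points on a line by intervals of fixed length.
    1D analogue of unit-disk covering; the greedy sweep is optimal in 1D."""
    starts = []
    for p in sorted(points):
        # Open a new interval [p, p + length] only when p is uncovered.
        if not starts or p > starts[-1] + length:
            starts.append(p)
    return starts
```

In the plane the unit-disk version is NP-hard, which is why @cite_38 resorts to the shifting strategy; the 1D greedy is only a warm-up.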
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_14", "@cite_40", "@cite_2", "@cite_10", "@cite_25" ], "mid": [ "2054505111", "2100961486", "2144614313", "1981309786", "2024500486", "", "" ], "abstract": [ "We consider the problem of computing minimum geometric hitting sets in which, given a set of geometric objects and a set of points, the goal is to compute the smallest subset of points that hit all geometric objects. The problem is known to be strongly NP-hard even for simple geometric objects like unit disks in the plane. Therefore, unless P=NP, it is not possible to get Fully Polynomial Time Approximation Algorithms (FPTAS) for such problems. We give the first PTAS for this problem when the geometric objects are half-spaces in R^3 and when they are an r-admissible set of regions in the plane (this includes pseudo-disks as they are 2-admissible). Quite surprisingly, our algorithm is a very simple local search algorithm which iterates over local improvements only.", "A unified and powerful approach is presented for devising polynomial approximation schemes for many strongly NP-complete problems. Such schemes consist of families of approximation algorithms for each desired performance bound on the relative error ε > 0, with running time that is polynomial when ε is fixed. Though the polynomiality of these algorithms depends on the degree of approximation ε being fixed, they cannot be improved, owing to a negative result stating that there are no fully polynomial approximation schemes for strongly NP-complete problems unless NP = P. The unified technique that is introduced here, referred to as the shifting strategy, is applicable to numerous geometric covering and packing problems. The method of using the technique and how it varies with problem parameters are illustrated. A similar technique, independently devised by B. S. 
Baker, was shown to be applicable for covering and packing problems on planar graphs.", "We study a geometric version of the Red-Blue Set Cover problem originally proposed by (2000) 1: given a red point set, a blue point set, and a set of objects, we want to choose a subset of objects to cover all the blue points, while minimizing the number of red points covered. We prove that the problem is NP-hard even when the objects are unit squares in 2D, and we give the first polynomial-time approximation scheme (PTAS) for this case. The technique we use simplifies and unifies previous PTASs for the weighted geometric set cover problem and the unique maximum coverage problem for 2D unit squares.", "Let P = p1, p2,…,pn be a set of points in d-space. We study the problem of covering with the minimum number of fixed-size orthogonal hypersquares (CSd for short) all points in P. We present a fast approximation algorithm that generates provably good solutions and an improved polynomial-time approximation scheme for this problem. A variation of the CSd problem is the CRd problem, covering by fixed-size orthogonal hyperrectangles, where the covering of the points is by hyperrectangles with dimensions D1, D2,…,Dd instead of hypersquares of size D. Another variation is the CDd problem, where we cover the set of points with hyperdiscs of diameter D. Our algorithms can be easily adapted to these problems.", "Given a set S of n points in the plane, the disjoint two-rectangle covering problem is to find a pair of disjoint rectangles such that their union contains S and the area of the larger rectangle is minimized. In this paper we consider two variants of this optimization problem: (1) the rectangles are allowed to be reoriented freely while restricting them to be parallel to each other, and (2) one rectangle is restricted to be axis-parallel but the other rectangle is allowed to be reoriented freely. 
For both of the problems, we present O(n^2 log n)-time algorithms using O(n) space.", "", "" ] }
1704.07554
2608001606
The current work characterizes the users of a VoD streaming space through user personas based on a tenure timeline and temporal behavioral features, in the absence of explicit user profiles. Combining the tenure timeline with temporal characteristics caters to the business need of understanding how user behavior evolves and passes through phases as accounts age. The personas constructed in this work successfully represent both dominant and niche characterizations while providing insight into the maturation of user behavior in the system. The two major highlights of our personas are (1) stability along tenure timelines at the population level, even as individuals exhibit interesting migrations between labels, and (2) clear interpretability of user labels. Finally, we show a trade-off between an indispensable trio of guarantees, relevance, scalability, and interpretability, by using summary information from personas in a CTR (click-through rate) predictive model. The proposed method of uncovering latent personas, the consequent insights, and the application of persona information to predictive models are broadly applicable to other streaming-based products.
One of the key features of our personas is that they exhibit stability at the population level even as migrations at the individual level constantly take place along the chosen time granularity. The observation that clusters do not shift dramatically from one time step to the next is also explored in @cite_16 , and the stability of our clusters parallels the equilibrium of average network properties in @cite_21 . The personas possess a natural divisive structure, as opposed to the hierarchy imposed by the clusterings discussed in @cite_19 , @cite_11 , amongst many others.
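The population-stable-but-individually-migrating behavior can be illustrated with a toy simulation (hypothetical, not drawn from our data): users migrate between two persona labels under a symmetric Markov transition, so individual migrations are frequent while the population-level label shares stay near the stationary split.

```python
import random
from collections import Counter

def step(labels, transition, rng):
    """One time step: each user independently migrates between persona
    labels according to transition[label] = [(next_label, prob), ...]."""
    out = []
    for lab in labels:
        r, acc = rng.random(), 0.0
        for nxt, p in transition[lab]:
            acc += p
            if r < acc:
                out.append(nxt)
                break
        else:  # numerical fallback: stay put
            out.append(lab)
    return out

def simulate(n=10000, steps=5, seed=0):
    """Start at the stationary 50/50 split; count individual migrations
    while tracking the population-level label shares."""
    rng = random.Random(seed)
    labels = ["A"] * (n // 2) + ["B"] * (n // 2)
    T = {"A": [("A", 0.9), ("B", 0.1)], "B": [("B", 0.9), ("A", 0.1)]}
    migrations = 0
    for _ in range(steps):
        new = step(labels, T, rng)
        migrations += sum(a != b for a, b in zip(labels, new))
        labels = new
    return Counter(labels), migrations
```

Despite roughly a thousand migrations per step, the A/B shares remain close to the 50/50 stationary distribution, mirroring the population-level stability described above.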
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_21", "@cite_11" ], "mid": [ "1999349314", "", "2049607688", "1998871699" ], "abstract": [ "Clustering is a widely used technique in data mining applications to discover patterns in the underlying data. Most traditional clustering algorithms are limited to handling datasets that contain either continuous or categorical attributes. However, datasets with mixed types of attributes are common in real life data mining problems. In this paper, we propose a distance measure that enables clustering data with both continuous and categorical attributes. This distance measure is derived from a probabilistic model that the distance between two clusters is equivalent to the decrease in log-likelihood function as a result of merging. Calculation of this measure is memory efficient as it depends only on the merging cluster pair and not on all the other clusters. [8] proposed a clustering method named BIRCH that is especially suitable for very large datasets. We develop a clustering algorithm using our distance measure based on the framework of BIRCH. Similar to BIRCH, our algorithm first performs a pre-clustering step by scanning the entire dataset and storing the dense regions of data records in terms of summary statistics. A hierarchical clustering algorithm is then applied to cluster the dense regions. Apart from the ability of handling mixed type of attributes, our algorithm differs from BIRCH in that we add a procedure that enables the algorithm to automatically determine the appropriate number of clusters and a new strategy of assigning cluster membership to noisy data. For data with mixed type of attributes, our experimental results confirm that the algorithm not only generates better quality clusters than the traditional k-means algorithms, but also exhibits good scalability properties and is able to identify the underlying number of clusters in the data correctly. 
The algorithm is implemented in the commercial data mining tool Clementine 6.0 which supports the PMML standard of data mining model deployment.", "", "Social networks evolve over time, driven by the shared activities and affiliations of their members, by similarity of individuals' attributes, and by the closure of short network cycles. We analyzed a dynamic social network comprising 43,553 students, faculty, and staff at a large university, in which interactions between individuals are inferred from time-stamped e-mail headers recorded over one academic year and are matched with affiliations and attributes. We found that network evolution is dominated by a combination of effects arising from network topology itself and the organizational structure in which the network is embedded. In the absence of global perturbations, average network properties appear to approach an equilibrium state, whereas individual properties are unstable.", "Techniques for partitioning objects into optimally homogeneous groups on the basis of empirical measures of similarity among those objects have received increasing attention in several different fields. This paper develops a useful correspondence between any hierarchical system of such clusters, and a particular type of distance measure. The correspondence gives rise to two methods of clustering that are computationally rapid and invariant under monotonic transformations of the data. In an explicitly defined sense, one method forms clusters that are optimally “connected,” while the other forms clusters that are optimally “compact.”" ] }
1704.07754
2950480653
Deep learning models such as convolutional neural networks have been widely used in 3D biomedical segmentation and achieve state-of-the-art performance. However, most of them often adopt a single modality or stack multiple modalities as different input channels. To better leverage the multi-modalities, we propose a deep encoder-decoder structure with cross-modality convolution layers to incorporate different modalities of MRI data. In addition, we exploit convolutional LSTM to model a sequence of 2D slices, and jointly learn the multi-modalities and convolutional LSTM in an end-to-end manner. To avoid converging to certain labels, we adopt a re-weighting scheme and two-phase training to handle the label imbalance. Experimental results on BRATS-2015 show that our method outperforms state-of-the-art biomedical segmentation approaches.
Image Semantic Segmentation. Various deep methods have been developed and have achieved significant progress in image segmentation @cite_17 @cite_5 @cite_1 @cite_2 . These methods use convolutional neural networks (CNNs) to extract deep representations and up-sample the low-resolution feature maps to produce dense prediction results. SegNet @cite_8 adopts an encoder-decoder structure that further improves performance while requiring fewer model parameters. We adopt the encoder-decoder structure for 3D biomedical segmentation and further incorporate cross-modality convolution and convolutional LSTM to better exploit the multi-modal data and the sequential information across consecutive slices.
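As a rough illustration of the encoder-decoder idea (a toy shape-level sketch, not the actual SegNet or our architecture), the following pure-Python code downsamples a feature map with 2x2 max pooling and upsamples it back with nearest-neighbour interpolation, recovering the input resolution needed for dense per-pixel prediction.

```python
def max_pool2(img):
    """2x2 max pooling (encoder downsampling step); dims must be even."""
    h, w = len(img), len(img[0])
    return [[max(img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
             for j in range(0, w, 2)] for i in range(0, h, 2)]

def upsample2(img):
    """Nearest-neighbour 2x upsampling (decoder step)."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def encoder_decoder(img, depth=2):
    """Toy encoder-decoder: downsample `depth` times, then upsample back,
    so the output resolution matches the input for dense prediction.
    Input dims must be divisible by 2**depth."""
    x = img
    for _ in range(depth):
        x = max_pool2(x)
    for _ in range(depth):
        x = upsample2(x)
    return x
```

Real encoder-decoders learn convolutional filters at every stage; the sketch only shows how the resolution contracts and then expands back to full size.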
{ "cite_N": [ "@cite_8", "@cite_1", "@cite_2", "@cite_5", "@cite_17" ], "mid": [ "360623563", "1923697677", "2952637581", "", "2952632681" ], "abstract": [ "We propose a novel deep architecture, SegNet, for semantic pixel wise image labelling. SegNet has several attractive properties; (i) it only requires forward evaluation of a fully learnt function to obtain smooth label predictions, (ii) with increasing depth, a larger context is considered for pixel labelling which improves accuracy, and (iii) it is easy to visualise the effect of feature activation(s) in the pixel label space at any depth. SegNet is composed of a stack of encoders followed by a corresponding decoder stack which feeds into a soft-max classification layer. The decoders help map low resolution feature maps at the output of the encoder stack to full input image size feature maps. This addresses an important drawback of recent deep learning approaches which have adopted networks designed for object categorization for pixel wise labelling. These methods lack a mechanism to map deep layer feature maps to input dimensions. They resort to ad hoc methods to upsample features, e.g. by replication. This results in noisy predictions and also restricts the number of pooling layers in order to avoid too much upsampling and thus reduces spatial context. SegNet overcomes these problems by learning to map encoder outputs to image pixel labels. We test the performance of SegNet on outdoor RGB scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results show that SegNet achieves state-of-the-art performance even without use of additional cues such as depth, video frames or post-processing with CRF models.", "Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. 
This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.", "We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. 
Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5%) among the methods trained with no external data through ensemble with the fully convolutional network.", "", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image." ] }
1704.07754
2950480653
Deep learning models such as convolutional neural networks have been widely used in 3D biomedical segmentation and achieve state-of-the-art performance. However, most of them often adopt a single modality or stack multiple modalities as different input channels. To better leverage the multi-modalities, we propose a deep encoder-decoder structure with cross-modality convolution layers to incorporate different modalities of MRI data. In addition, we exploit convolutional LSTM to model a sequence of 2D slices, and jointly learn the multi-modalities and convolutional LSTM in an end-to-end manner. To avoid converging to certain labels, we adopt a re-weighting scheme and two-phase training to handle the label imbalance. Experimental results on BRATS-2015 show that our method outperforms state-of-the-art biomedical segmentation approaches.
3D Biomedical Image Segmentation. There has been much research adopting deep methods for biomedical segmentation. @cite_10 split 3D MRI data into 2D slices and crop small patches in the 2D planes. They combine the results from different-sized patches and stack multiple modalities as different channels for label prediction. Some methods utilize a fully convolutional network (FCN) structure @cite_17 for 3D biomedical image segmentation. U-Net @cite_18 consists of a contracting path with multiple convolutions for downsampling, and an expansive path with several deconvolution layers that up-sample the features and concatenate them with the cropped feature maps from the contracting path. However, depth information is ignored by these 2D-based approaches.
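The crop-and-concatenate skip connection of U-Net can be sketched in a few lines (an illustrative shape-level sketch, not the real implementation): upsampled decoder feature maps are concatenated channel-wise with center-cropped encoder feature maps.

```python
def center_crop(fmap, th, tw):
    """Crop a 2D feature map to (th, tw) around its center, as U-Net does
    before concatenating encoder features onto the decoder path."""
    h, w = len(fmap), len(fmap[0])
    top, left = (h - th) // 2, (w - tw) // 2
    return [row[left:left + tw] for row in fmap[top:top + th]]

def concat_skip(decoder_maps, encoder_maps):
    """Channel-wise concatenation of decoder features with the cropped
    encoder features (the U-Net skip connection)."""
    th, tw = len(decoder_maps[0]), len(decoder_maps[0][0])
    cropped = [center_crop(m, th, tw) for m in encoder_maps]
    return decoder_maps + cropped  # channels: decoder then encoder
```

The skip connection is what lets the expansive path recover the fine spatial detail lost during downsampling.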
{ "cite_N": [ "@cite_18", "@cite_10", "@cite_17" ], "mid": [ "2952232639", "1884191083", "2952632681" ], "abstract": [ "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL .", "Abstract In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we’ve found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. 
We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. 
Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image." ] }
1704.07754
2950480653
Deep learning models such as convolutional neural networks have been widely used in 3D biomedical segmentation and achieve state-of-the-art performance. However, most of them often adopt a single modality or stack multiple modalities as different input channels. To better leverage the multi-modalities, we propose a deep encoder-decoder structure with cross-modality convolution layers to incorporate different modalities of MRI data. In addition, we exploit convolutional LSTM to model a sequence of 2D slices, and jointly learn the multi-modalities and convolutional LSTM in an end-to-end manner. To avoid converging to certain labels, we adopt a re-weighting scheme and two-phase training to handle the label imbalance. Experimental results on BRATS-2015 show that our method outperforms state-of-the-art biomedical segmentation approaches.
To better use the depth information, @cite_9 utilize 3D convolution to model the correlations between slices. However, a 3D convolutional network often requires a larger number of parameters and is prone to overfitting on small datasets. kU-Net @cite_20 is the work most closely related to ours. They adopt U-Net as their encoder and decoder and use a recurrent neural network (RNN) to capture the temporal information. Different from kU-Net, we further propose a cross-modality convolution to better combine the information from multi-modal MRI data, and jointly optimize the slice sequence learning and cross-modality convolution in an end-to-end manner.
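To illustrate why a recurrence over slices captures depth context, here is a drastically simplified stand-in for a ConvLSTM (the convolution and gating are replaced by a plain elementwise blend, so this is only a toy): each slice's feature map is blended with the running state carried over from previous slices.

```python
def recurrent_slice_pass(slices, alpha=0.5):
    """Toy stand-in for a ConvLSTM over a stack of 2D slices: the state h
    carries information from earlier slices, so each slice's output sees
    inter-slice (depth) context rather than being processed in isolation."""
    h = [[0.0] * len(slices[0][0]) for _ in slices[0]]
    out = []
    for s in slices:
        # h_t = alpha * h_{t-1} + (1 - alpha) * x_t, elementwise
        h = [[alpha * hv + (1 - alpha) * xv for hv, xv in zip(hr, xr)]
             for hr, xr in zip(h, s)]
        out.append(h)
    return out
```

A real ConvLSTM replaces the blend with learned convolutional input, forget, and output gates, but the flow of state across consecutive slices is the same.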
{ "cite_N": [ "@cite_9", "@cite_20" ], "mid": [ "274818618", "2963122731" ], "abstract": [ "This report provides an overview of the current state of the art deep learning architectures and optimisation techniques, and uses the ADNI hippocampus MRI dataset as an example to compare the effectiveness and efficiency of different convolutional architectures on the task of patch-based 3-dimensional hippocampal segmentation, which is important in the diagnosis of Alzheimer's Disease. We found that a slightly unconventional \"stacked 2D\" approach provides much better classification performance than simple 2D patches without requiring significantly more computational power. We also examined the popular \"tri-planar\" approach used in some recently published studies, and found that it provides much better results than the 2D approaches, but also with a moderate increase in computational power requirement. Finally, we evaluated a full 3D convolutional architecture, and found that it provides marginally better results than the tri-planar approach, but at the cost of a very significant increase in computational power requirement.", "Segmentation of 3D images is a fundamental problem in biomedical image analysis. Deep learning (DL) approaches have achieved the state-of-the-art segmentation performance. To exploit the 3D contexts using neural networks, known DL segmentation methods, including 3D convolution, 2D convolution on the planes orthogonal to 2D slices, and LSTM in multiple directions, all suffer incompatibility with the highly anisotropic dimensions in common 3D biomedical images. In this paper, we propose a new DL framework for 3D image segmentation, based on a combination of a fully convolutional network (FCN) and a recurrent neural network (RNN), which are responsible for exploiting the intra-slice and inter-slice contexts, respectively. To our best knowledge, this is the first DL framework for 3D image segmentation that explicitly leverages 3D image anisotropism. 
Evaluating using a dataset from the ISBI Neuronal Structure Segmentation Challenge and in-house image stacks for 3D fungus segmentation, our approach achieves promising results, comparing to the known DL-based 3D segmentation approaches." ] }
1704.07754
2950480653
Deep learning models such as convolutional neural net- work have been widely used in 3D biomedical segmentation and achieve state-of-the-art performance. However, most of them often adapt a single modality or stack multiple modalities as different input channels. To better leverage the multi- modalities, we propose a deep encoder-decoder structure with cross-modality convolution layers to incorporate different modalities of MRI data. In addition, we exploit convolutional LSTM to model a sequence of 2D slices, and jointly learn the multi-modalities and convolutional LSTM in an end-to-end manner. To avoid converging to the certain labels, we adopt a re-weighting scheme and two-phase training to handle the label imbalance. Experimental results on BRATS-2015 show that our method outperforms state-of-the-art biomedical segmentation approaches.
Multi-Modal Images. In brain tumor segmentation, multi-modal images are used to identify the boundaries between the tumor, edema and normal brain tissue. @cite_6 combine MRI images with diffusion tensor imaging data to create an integrated multi-modality profile for brain tumors. Their brain tissue classification framework incorporates intensities from each modality into an appearance signature of each voxel to train the classifiers. @cite_21 propose a generative probabilistic model for reflecting the differences in tumor appearance across different modalities. In the process of manual segmentation of a brain tumor, different modalities are often cross-checked to better distinguish the different types of brain tissue. For example, according to @cite_12 , the edema is segmented primarily from T2 images and FLAIR is used to cross-check the extension of the edema. Also, enhancing and non-enhancing structures are segmented by evaluating the hyper-intensities in T1C.
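The difference between naive channel stacking and a learned cross-modality combination can be sketched as follows (an illustrative toy; the hand-picked weights stand in for the learned 1x1 cross-modality filters of our actual model).

```python
def stack_channels(modalities):
    """Baseline: simply stack modalities as input channels.
    Shape: (num_modalities, H, W) as nested lists."""
    return modalities

def cross_modality_combine(modalities, weights):
    """Toy cross-modality convolution (a 1x1 filter across the modality
    axis): each output map is a weighted sum of the input modalities,
    letting the model trade off, e.g., T2 vs FLAIR evidence per voxel."""
    h, w = len(modalities[0]), len(modalities[0][0])
    return [[sum(wm * m[i][j] for wm, m in zip(weights, modalities))
             for j in range(w)] for i in range(h)]
```

Channel stacking leaves the mixing of modalities entirely to the first convolution layer, whereas an explicit cross-modality combination makes that mixing a dedicated, learnable step.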
{ "cite_N": [ "@cite_21", "@cite_12", "@cite_6" ], "mid": [ "1597605992", "1641498739", "2145183917" ], "abstract": [ "We introduce a generative probabilistic model for segmentation of tumors in multi-dimensional images. The model allows for different tumor boundaries in each channel, reflecting difference in tumor appearance across modalities.We augment a probabilistic atlas of healthy tissue priors with a latent atlas of the lesion and derive the estimation algorithm to extract tumor boundaries and the latent atlas from the image data. We present experiments on 25 glioma patient data sets, demonstrating significant improvement over the traditional multivariate tumor segmentation.", "In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74 –85 ), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. 
The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.", "In this paper, multi-modal magnetic resonance (MR) images are integrated into a tissue profile that aims at differentiating tumor components, edema and normal tissue. This is achieved by a tissue classification technique that learns the appearance models of different tissue types based on training samples identified by an expert and assigns tissue labels to each voxel. These tissue classifiers produce probabilistic tissue maps reflecting imaging characteristics of tumors and surrounding tissues that may be employed to aid in diagnosis, tumor boundary delineation, surgery and treatment planning. The main contributions of this work are: 1) conventional structural MR modalities are combined with diffusion tensor imaging data to create an integrated multimodality profile for brain tumors, and 2) in addition to the tumor components of enhancing and non-enhancing tumor types, edema is also characterized as a separate class in our framework. Classification performance is tested on 22 diverse tumor cases using cross-validation." ] }
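The re-weighting scheme mentioned in the abstract above is not spelled out in this excerpt; a common way to realize it is inverse-frequency class weighting, sketched below (the function name and the normalization to mean 1 are my own choices, not the paper's):

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Per-class loss weights inversely proportional to label frequency,
    normalized so the mean weight is 1. Rare classes (e.g. tumor voxels)
    get larger weights, counteracting the dominant background label."""
    counts = np.bincount(labels.ravel(), minlength=n_classes).astype(float)
    freq = counts / counts.sum()
    weights = 1.0 / np.maximum(freq, 1e-12)   # guard against empty classes
    return weights / weights.sum() * n_classes

# Toy label map: 90% background (0), 10% foreground (1).
labels = np.zeros(100, dtype=int)
labels[:10] = 1
w = inverse_frequency_weights(labels, 2)   # -> weights [0.2, 1.8]
```

Such weights would multiply the per-voxel cross-entropy terms, so a loss that would otherwise be dominated by background voxels pays proportionally more attention to the rare tumor classes.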
1704.07754
2950480653
Deep learning models such as convolutional neural networks have been widely used in 3D biomedical segmentation and achieve state-of-the-art performance. However, most of them often use a single modality or stack multiple modalities as different input channels. To better leverage the multiple modalities, we propose a deep encoder-decoder structure with cross-modality convolution layers to incorporate different modalities of MRI data. In addition, we exploit convolutional LSTM to model a sequence of 2D slices, and jointly learn the multi-modalities and convolutional LSTM in an end-to-end manner. To avoid converging to certain labels, we adopt a re-weighting scheme and two-phase training to handle the label imbalance. Experimental results on BRATS-2015 show that our method outperforms state-of-the-art biomedical segmentation approaches.
Existing CNN-based methods (e.g., @cite_3 @cite_10 ) often treat modalities as different channels in the input data. However, the correlations between them are not well utilized. To the best of our knowledge, we are the first to jointly exploit the correlations between different modalities, and the spatial and sequential dependencies for consecutive slices.
{ "cite_N": [ "@cite_10", "@cite_3" ], "mid": [ "1884191083", "2310992461" ], "abstract": [ "Abstract In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we’ve found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster.", "Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. Thus, treatment planning is a key stage to improve the quality of life of oncological patients. 
Magnetic resonance imaging (MRI) is a widely used imaging technique to assess these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in the clinical practice. So, automatic and reliable segmentation methods are required; however, the large spatial and structural variability among brain tumors make automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3 @math 3 kernels. The use of small kernels allows designing a deeper architecture, besides having a positive effect against overfitting, given the fewer number of weights in the network. We also investigated the use of intensity normalization as a pre-processing step, which though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated in the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), obtaining simultaneously the first position for the complete, core, and enhancing regions in Dice Similarity Coefficient metric (0.88, 0.83, 0.77) for the Challenge data set. Also, it obtained the overall first position by the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge using the same model, obtaining the second place, with Dice Similarity Coefficient metric of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively." ] }
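The channel-stacking baseline that the related-work paragraph above contrasts against can be sketched in a few lines (toy arrays; the modality names are illustrative):

```python
import numpy as np

# Four MRI modalities of the same slice (toy 8x8 arrays standing in for
# T1, T1c, T2 and FLAIR).
t1, t1c, t2, flair = (np.random.rand(8, 8) for _ in range(4))

# The baseline criticized above: stack modalities along a channel axis,
# so a 2D CNN treats them like the R/G/B planes of a color image.
x = np.stack([t1, t1c, t2, flair], axis=0)   # shape (C, H, W) = (4, 8, 8)
```

A cross-modality convolution, by contrast, would learn explicit interactions across that first axis rather than only mixing the channels linearly in the first layer.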
1704.07657
2608208138
Various modifications of decision trees have been extensively used during the past years due to their high efficiency and interpretability. Tree node splitting based on relevant feature selection is a key step of decision tree learning, at the same time being their major shortcoming: the recursive nodes partitioning leads to geometric reduction of data quantity in the leaf nodes, which causes an excessive model complexity and data overfitting. In this paper, we present a novel architecture - a Decision Stream - aimed to overcome this problem. Instead of building a tree structure during the learning process, we propose merging nodes from different branches based on their similarity that is estimated with two-sample test statistics, which leads to generation of a deep directed acyclic graph of decision rules that can consist of hundreds of levels. To evaluate the proposed solution, we test it on several common machine learning problems - credit scoring, twitter sentiment analysis, aircraft flight control, MNIST and CIFAR image classification, synthetic data classification and regression. Our experimental results reveal that the proposed approach significantly outperforms the standard decision tree learning methods on both regression and classification tasks, yielding a prediction error decrease of up to 35%.
A fundamentally different approach, based on the Occam's razor principle, was proposed for decision tree size reduction in @cite_14 , where a decision graph is constructed by a hill-climbing heuristic that merges nodes from adjacent levels according to the minimum message length principle, with the goal of producing a model of minimum size while preserving or increasing its accuracy. This technique has demonstrated an advantage over standard decision trees in experiments @cite_1 -- @cite_2 .
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_2" ], "mid": [ "", "2114154044", "2098458263" ], "abstract": [ "", "We report improvements to HOODG, a supervised learning algorithm that induces concepts from labelled instances using oblivious, read-once decision graphs as the underlying hypothesis representation structure. While it is shown that the greedy approach to variable ordering is locally optimal, we also show an inherent limitation of all bottom-up induction algorithms, including HOODG, that construct such decision graphs bottom-up by minimizing the width of levels in the resulting graph. We report our empirical experiments that demonstrate the algorithm's generalization power.", "Randomized decision trees and forests have a rich history in machine learning and have seen considerable success in application, perhaps particularly so for computer vision. However, they face a fundamental limitation: given enough data, the number of nodes in decision trees will grow exponentially with depth. For certain applications, for example on mobile or embedded processors, memory is a limited resource, and so the exponential growth of trees limits their depth, and thus their potential accuracy. This paper proposes decision jungles, revisiting the idea of ensembles of rooted decision directed acyclic graphs (DAGs), and shows these to be compact and powerful discriminative models for classification. Unlike conventional decision trees that only allow one path to every node, a DAG in a decision jungle allows multiple paths from the root to each leaf. We present and compare two new node merging algorithms that jointly optimize both the features and the structure of the DAGs efficiently. During training, node splitting and node merging are driven by the minimization of exactly the same objective function, here the weighted sum of entropies at the leaves. 
Results on varied datasets show that, compared to decision forests and several other baselines, decision jungles require dramatically less memory while considerably improving generalization." ] }
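The Decision Stream abstract above merges nodes whose sample distributions are indistinguishable under a two-sample test. A minimal sketch using the Kolmogorov-Smirnov test follows; the paper does not commit to this particular test in the excerpt, and `should_merge` and the significance level are assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

def should_merge(samples_a, samples_b, alpha=0.05):
    """Merge two nodes when a two-sample Kolmogorov-Smirnov test cannot
    distinguish their target distributions (p-value above alpha)."""
    _, p = ks_2samp(samples_a, samples_b)
    return p > alpha

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 500)
b = rng.normal(3.0, 1.0, 500)
merge_same = should_merge(a, a)   # identical samples -> merge
merge_diff = should_merge(a, b)   # clearly shifted   -> keep separate
```

Applying such a check pairwise across nodes of adjacent levels, and fusing the pairs that pass, is what turns the tree into a directed acyclic graph instead of letting leaf data shrink geometrically with depth.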
1704.07422
2608886898
High spatial resolution is a central aspect of tomographic imaging. In photoacoustic tomography, this requires accurate models for acoustic wave propagation and corresponding efficient image reconstruction methods. In this article we consider such models accounting for frequency dependent attenuation according to a wide class of attenuation laws that may include memory. We formulate the inverse problem of photoacoustic tomography in attenuating medium as ill-posed operator equation in a Hilbert space framework that is tackled by iterative regularization methods. For that purpose we derive explicit expressions for the adjoint problem that can efficiently be implemented. Our approach comes with a clear convergence analysis that is not shared by existing approaches for the considered model class. In contrast to time reversal the employed adjoint wave equation is again damping and thus has a stable solution. This stability property can be clearly seen in our numerical results. Indeed, if the attenuation is not too strong then the reconstruction results are even better than in the absence of attenuation. Moreover, the presented numerical results clearly demonstrate the efficiency and accuracy of the derived iterative reconstruction algorithms in various situations including the limited view case, where alternative methods cannot be directly applied.
The case of non-vanishing attenuation is much less investigated and existing methods are very different from our approach. One class of reconstruction methods uses the following two-stage procedure: In a first step, by solving an ill-posed integral equation, the (idealized) un-attenuated pressure data @math are estimated from the attenuated data @math . In the second step, the standard PAT problem is solved. Such a two-step method has been proposed and implemented for the power law in @cite_63 @cite_13 , and was later used in @cite_55 @cite_64 for various attenuation laws. Compared to two-stage approaches, in the single-step approach it is easier to include prior information available in the image domain, such as positivity of the PAT source (compare Section ). Furthermore, in the limited data case, where the measurement surface does not fully enclose the PA source, the second step in the two-stage approach is again a non-standard problem for PAT, for which iterative methods can be applied. In such a situation it seems reasonable to directly apply iterative methods to the attenuated data, as considered in the present paper.
{ "cite_N": [ "@cite_55", "@cite_64", "@cite_13", "@cite_63" ], "mid": [ "2157048271", "", "2122060892", "2143055960" ], "abstract": [ "Photoacoustic (PA) imaging, also called optoacoustic imaging, is a new biomedical imaging modality based on the use of laser-generated ultrasound that has emerged over the last decade. It is a hybrid modality, combining the high-contrast and spectroscopic-based specificity of optical imaging with the high spatial resolution of ultrasound imaging. In essence, a PA image can be regarded as an ultrasound image in which the contrast depends not on the mechanical and elastic properties of the tissue, but its optical properties, specifically optical absorption. As a consequence, it offers greater specificity than conventional ultrasound imaging with the ability to detect haemoglobin, lipids, water and other light-absorbing chomophores, but with greater penetration depth than purely optical imaging modalities that rely on ballistic photons. As well as visualizing anatomical structures such as the microvasculature, it can also provide functional information in the form of blood oxygenation, blood flow and temperature. All of this can be achieved over a wide range of length scales from micrometres to centimetres with scalable spatial resolution. These attributes lend PA imaging to a wide variety of applications in clinical medicine, preclinical research and basic biology for studying cancer, cardiovascular disease, abnormalities of the microcirculation and other conditions. With the emergence of a variety of truly compelling in vivo images obtained by a number of groups around the world in the last 2–3 years, the technique has come of age and the promise of PA imaging is now beginning to be realized. 
Recent highlights include the demonstration of whole-body small-animal imaging, the first demonstrations of molecular imaging, the introduction of new microscopy modes and the first steps towards clinical breast imaging being taken as well as a myriad of in vivo preclinical imaging studies. In this article, the underlying physical principles of the technique, its practical implementation, and a range of clinical and preclinical applications are reviewed.", "", "Conventional image reconstruction methods for optoacoustic tomography (OAT) assume an idealized, nondispersive acoustic medium. However, the linear attenuation coefficient and the phase velocity of acoustic waves propagating in soft tissue depend on temporal frequency and satisfy a known dispersion law. These frequency-dependent effects are incorporated into an optoacoustic wave equation, and a corresponding reconstruction method for OAT is developed. The improvement in image fidelity that can be achieved over conventional reconstruction methods is demonstrated by use of computer-simulation studies.", "In this work, we show how to incorporate attenuation into the optoacoustic tomography (OAT) imaging equation and develop a strategy for compensating for this attenuation during image reconstruction. In OAT, one exposes a sample to pulses of electromagnetic radiation that cause small amounts of heating in the specimen. The heating engenders thermal expansion which, in turn, gives rise to acoustic waves. The resulting acoustic pressure signal is generally measured by transducers arrayed around the object, and these data may be used to reconstruct images of the original electromagnetic absorption. Frequency-dependent absorption of the acoustic waves can lead to blurring and distortion in reconstructed images. We show that in the temporal frequency domain, the optoacoustic wave equation incorporating attenuation is equivalent to the inhomogeneous Helmholtz equation with a complex wave number. 
While some work has been done in other fields on directly solving Helmholtz equations with complex wave numbers, these are generally computationally intensive numerical approaches. We pursue a different approach, deriving an integral equation that relates the temporal optoacoustic signal at a given transducer location in the presence of attenuation to the ideal signal that would have been obtained in the absence of attenuation. This equation is readily discretized and the resulting linear system of equations involves a matrix that need only be inverted once, at which point the inverse can be used to correct all of the measured time signals prior to reconstruction by conventional methods." ] }
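The frequency power-law attenuation that the two-stage methods above must undo can be illustrated with a minimal forward model: multiply the signal spectrum by exp(-α₀|f|ʸ·x) for propagation distance x. This sketch deliberately ignores the dispersion (frequency-dependent phase velocity) that the cited models also account for, and the parameter values are illustrative only:

```python
import numpy as np

def attenuate(signal, dt, alpha0, y, distance):
    """Apply frequency power-law attenuation alpha(f) = alpha0 * |f|**y
    over a propagation distance, in the frequency domain. Dispersion is
    deliberately ignored, so this is a lossy-amplitude-only sketch."""
    n = signal.size
    f = np.fft.rfftfreq(n, dt)                        # frequencies in Hz
    loss = np.exp(-alpha0 * np.abs(f) ** y * distance)
    return np.fft.irfft(np.fft.rfft(signal) * loss, n)

# A broadband Gaussian pulse loses high-frequency content with distance,
# which is exactly the resolution loss the reconstruction must compensate.
t = np.arange(1024) * 1e-8                            # 10 ns sampling
pulse = np.exp(-((t - 5e-6) / 2e-7) ** 2)
out = attenuate(pulse, 1e-8, alpha0=1e-5, y=1.1, distance=0.05)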
1704.07422
2608886898
High spatial resolution is a central aspect of tomographic imaging. In photoacoustic tomography, this requires accurate models for acoustic wave propagation and corresponding efficient image reconstruction methods. In this article we consider such models accounting for frequency dependent attenuation according to a wide class of attenuation laws that may include memory. We formulate the inverse problem of photoacoustic tomography in attenuating medium as ill-posed operator equation in a Hilbert space framework that is tackled by iterative regularization methods. For that purpose we derive explicit expressions for the adjoint problem that can efficiently be implemented. Our approach comes with a clear convergence analysis that is not shared by existing approaches for the considered model class. In contrast to time reversal the employed adjoint wave equation is again damping and thus has a stable solution. This stability property can be clearly seen in our numerical results. Indeed, if the attenuation is not too strong then the reconstruction results are even better than in the absence of attenuation. Moreover, the presented numerical results clearly demonstrate the efficiency and accuracy of the derived iterative reconstruction algorithms in various situations including the limited view case, where alternative methods cannot be directly applied.
A different class of algorithms extends the time reversal technique to the attenuated case (see @cite_48 @cite_24 @cite_38 @cite_3 @cite_37 @cite_69 @cite_4 @cite_0 ). Note that the time reversal of the attenuated wave equation yields a noise-amplifying equation. Therefore regularization methods have to be incorporated in its numerical solution. As opposed to time reversal, the adjoint wave equation used in our approach is again damping and no regularization is required for its stable solution. This yields a clear convergence analysis for our method by using standard regularization theory @cite_58 @cite_14 @cite_57 . We are not aware of similar existing results for PAT in attenuating acoustic media. The approaches which are closest to our work seem to be @cite_74 @cite_67 . In @cite_74 discrete iterative methods are considered, where the problem is first discretized and the adjoint is computed from the discretized problem. Further, both works @cite_74 @cite_67 use attenuation models based on the fractional Laplacian (see @cite_29 @cite_61 ) which yields an equation that is non-local in space. It is not obvious how to extend these approaches to models which can also include memory.
{ "cite_N": [ "@cite_61", "@cite_38", "@cite_37", "@cite_69", "@cite_4", "@cite_14", "@cite_67", "@cite_48", "@cite_29", "@cite_3", "@cite_24", "@cite_0", "@cite_57", "@cite_74", "@cite_58" ], "mid": [ "1999318406", "2075099673", "2032018000", "", "2543667640", "", "2962966677", "2952907103", "2052131944", "2963782715", "2797010778", "2133150693", "", "2015408813", "" ], "abstract": [ "The efficient simulation of wave propagation through lossy media in which the absorption follows a frequency power law has many important applications in biomedical ultrasonics. Previous wave equations which use time-domain fractional operators require the storage of the complete pressure field at previous time steps (such operators are convolution based). This makes them unsuitable for many three-dimensional problems of interest. Here, a wave equation that utilizes two lossy derivative operators based on the fractional Laplacian is derived. These operators account separately for the required power law absorption and dispersion and can be efficiently incorporated into Fourier based pseudospectral and k-space methods without the increase in memory required by their time-domain fractional counterparts. A framework for encoding the developed wave equation using three coupled first-order constitutive equations is discussed, and the model is demonstrated through several one-, two-, and three-dimensional simula...", "Photoacoustic imaging is based on the generation of acoustic waves in a semitransparent sample after illumination with short pulses of light or radio waves. The goal is to recover the spatial distribution of absorbed energy density inside the sample from acoustic pressure signals measured outside the sample (photoacoustic inverse problem). We have proposed a numerical method to calculate directly the time reversed field by retransmitting the measured pressure on the detection surface in reversed temporal order. 
This model-based time reversal method can solve the photoacoustic inverse problem exactly for an arbitrary closed detection surface. Recently we presented a set up which requires a single rotation axis and line detectors perpendicular to the rotation axis. Using a two-dimensional reconstruction method, such as time reversal in two dimensions, and applying the inverse two-dimensional radon transform afterwards gives an exact reconstruction of a three-dimensional sample with this set up. The resolution in photoacoustic imaging is limited by the acoustic bandwidth and therefore by acoustic attenuation, which can be substantial for high frequencies. This effect is usually ignored in reconstruction algorithms but has a strong impact on the resolution of small structures. It is demonstrated that the model based time reversal method allows to partly compensate this effect.", "In this paper, we derive time reversal imaging functionals for two strongly causal acoustic attenuation models, which have been proposed recently. The time reversal techniques are based on recently proposed ideas of for the thermo-viscous wave equation. Here and there, an asymptotic analysis provides reconstruction functionals from first order corrections for the attenuating effect. In addition, we present a novel approach for higher order corrections. Copyright © 2013 John Wiley & Sons, Ltd.", "", "In this article we study the reconstruction problem in TAT PAT on an attenuating media. Namely, we prove a reconstruction procedure of the initial condition for the damped wave equation via Neumann series that works for arbitrary large smooth attenuation coefficients extending the result of Homan in (2013 Inverse Problems Imaging 7 1235–50). 
We also illustrate the theoretical result by including some numerical experiments at the end of the paper.", "", "Inspired by the recent advances on minimizing nonsmooth or bound-constrained convex functions on models using varying degrees of fidelity, we propose a line search multi-grid (MG) method for full-wave iterative image reconstruction in photoacoustic tomography (PAT) in heterogeneous media. To compute the search direction at each iteration, we decide between the gradient at the target level, or alternatively an approximate error correction at a coarser level, relying on some predefined criteria. To incorporate absorption and dispersion, we derive the analytical adjoint directly from the first-order acoustic wave system. The effectiveness of the proposed method is tested on a total-variation penalized Iterative Shrinkage Thresholding algorithm (ISTA) and its accelerated variant (FISTA), which have been used in many studies of image reconstruction in PAT. The results show the great potential of the proposed method in improving speed of iterative image reconstruction.", "In this article we study the inverse problem of thermoacoustic tomography (TAT) on a medium with attenuation represented by a time- convolution (or memory) term, and whose consideration is motivated by the modeling of ultrasound waves in heterogeneous tissue via fractional derivatives with spatially dependent parameters. Under the assumption of being able to measure data on the whole boundary, we prove uniqueness and stability, and propose a convergent reconstruction method for a class of smooth variable sound speeds. By a suitable modification of the time reversal technique, we obtain a Neumann series reconstruction formula.", "Frequency-dependent attenuation typically obeys an empirical power law with an exponent ranging from 0 to 2. 
The standard time-domain partial differential equation models can describe merely two extreme cases of frequency-independent and frequency-squared dependent attenuations. The otherwise nonzero and nonsquare frequency dependency occurring in many cases of practical interest is thus often called the anomalous attenuation. In this study, a linear integro-differential equation wave model was developed for the anomalous attenuation by using the space-fractional Laplacian operation, and the strategy is then extended to the nonlinear Burgers equation. A new definition of the fractional Laplacian is also introduced which naturally includes the boundary conditions and has inherent regularization to ease the hypersingularity in the conventional fractional Laplacian. Under the Szabo’s smallness approximation, where attenuation is assumed to be much smaller than the wave number, the linear model is found consi...", "We consider a mathematical model of thermoacoustic tomography and other multi-wave imaging techniques with variable sound speed and attenuation. We find that a Neumann series reconstruction algorithm, previously studied under the assumption of zero attenuation, still converges if attenuation is sufficiently small. With complete boundary data, we show the inverse problem has a unique solution, and modified time reversal provides a stable reconstruction. We also consider partial boundary data, and in this case study those singularities that can be stably recovered.", "", "The reconstruction of photoacoustic images typically neglects the effect of acoustic absorption on the measured time domain signals. Here, a method to compensate for acoustic absorption in photoacoustic tomography is described. The approach is based on time-reversal image reconstruction and an absorbing equation of state which separately accounts for acoustic absorption and dispersion following a frequency power law. 
Absorption compensation in the inverse problem is achieved by reversing the absorption proportionality coefficient in sign but leaving the equivalent dispersion parameter unchanged. The reconstruction is regularized by filtering the absorption and dispersion terms in the spatial frequency domain using a Tukey window. This maintains the correct frequency dependence of these parameters within the filter pass band. The method is valid in one, two and three dimensions, and for arbitrary power law absorption parameters. The approach is verified through several numerical experiments. The reconstruction of a carbon fibre phantom and the vasculature in the abdomen of a mouse are also presented. When absorption compensation is included, a general improvement in the image magnitude and resolution is seen, particularly for deeper features.", "", "Existing approaches to image reconstruction in photoacoustic computed tomography (PACT) with acoustically heterogeneous media are limited to weakly varying media, are computationally burdensome, and or cannot effectively mitigate the effects of measurement data incompleteness and noise. In this work, we develop and investigate a discrete imaging model for PACT that is based on the exact photoacoustic (PA) wave equation and facilitates the circumvention of these limitations. A key contribution of the work is the establishment of a procedure to implement a matched forward and backprojection operator pair associated with the discrete imaging model, which permits application of a wide-range of modern image reconstruction algorithms that can mitigate the effects of data incompleteness and noise. The forward and backprojection operators are based on the k-space pseudospectral method for computing numerical solutions to the PA wave equation in the time domain. The developed reconstruction methodology is investigated by use of both computer-simulated and experimental PACT measurement data.", "" ] }
1704.07528
2609715904
To watch 360 videos on normal 2D displays, we need to project the selected part of the 360 image onto the 2D display plane. In this paper, we propose a fully-automated framework for generating content-aware 2D normal-view perspective videos from 360 videos. Especially, we focus on the projection step preserving important image contents and reducing image distortion. Basically, our projection method is based on the Pannini projection model. At first, the salient contents such as linear structures and salient regions in the image are preserved by optimizing the single Pannini projection model. Then, the multiple Pannini projection models at salient regions are interpolated to suppress image distortion globally. Finally, the temporal consistency for image projection is enforced for producing temporally stable normal-view videos. Our proposed projection method does not require any user-interaction and is much faster than previous content-preserving methods. It can be applied to not only images but also videos taking the temporal consistency of projection into account. Experiments on various 360 videos show the superiority of the proposed projection method quantitatively and qualitatively.
Some methods use a geometric model to project spherical panorama images onto an image plane. Rectilinear, Stereographic, and Pannini projection @cite_5 models belong to this category. Rectilinear projection is the perspective projection from the center point of the spherical image onto the image plane. This model preserves all lines, but the contents on the margin of a projected image can be extremely stretched and distorted when the FOV is large. On the other hand, stereographic projection is the perspective projection from the opposite point to the point of tangency as the center of projection. This model preserves the conformality of the contents, but not lines. Pannini projection, based on the cylindrical projection, keeps the vertical lines straight. Furthermore, the horizontal lines or radial lines are selectively preserved by the vertical compression. The methods mentioned above are very simple and fast, but have a common drawback --- they cannot preserve all lines and salient objects simultaneously.
{ "cite_N": [ "@cite_5" ], "mid": [ "1595123527" ], "abstract": [ "The widely used rectilinear perspective projection cannot render realistic looking flat views with fields of view much wider than 70°. Yet 18th century artists known as 'view painters' depicted wider architectural scenes without visible perspective distortion.We have found no written records of how they did that, however, quantitative analysis of several works suggests that the key is a system for compressing horizontal angles while preserving certain straight lines important for the perspective illusion. We show that a simple double projection of the sphere to the plane, that we call the Pannini projection, can render images 150° or more wide with a natural appearance, reminiscent of vedutismo perspective. We give the mathematical formulas for realizing it numerically, in a general form that can be adjusted to suit a wide range of subject matter and field widths, and briefly compare it to other proposed alternatives to the rectilinear projection." ] }
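The single-model projections above have simple closed forms. A minimal sketch of the Pannini model of @cite_5, assuming the usual parameterization with a compression parameter d (d = 0 recovers the rectilinear projection; d = 1 is the classic Pannini):

```python
import math

def pannini(lon, lat, d=1.0):
    """Pannini projection of a sphere direction (lon, lat, in radians)
    onto the image plane; d >= 0 compresses horizontal angles.
    d = 0 reduces to the rectilinear (gnomonic) projection, d = 1 is
    the classic Pannini projection."""
    s = (d + 1.0) / (d + math.cos(lon))  # radial scale of the double projection
    return s * math.sin(lon), s * math.tan(lat)

def rectilinear(lon, lat):
    """Perspective projection from the sphere centre (special case d = 0)."""
    return pannini(lon, lat, d=0.0)
```

For a wide horizontal angle, the Pannini image coordinate stays much smaller than the rectilinear one, which is exactly the margin-stretching problem the text describes for rectilinear projection at large FOV.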
1704.07528
2609715904
To watch 360 videos on normal 2D displays, we need to project the selected part of the 360 image onto the 2D display plane. In this paper, we propose a fully-automated framework for generating content-aware 2D normal-view perspective videos from 360 videos. In particular, we focus on a projection step that preserves important image contents and reduces image distortion. Our projection method is based on the Pannini projection model. First, the salient contents such as linear structures and salient regions in the image are preserved by optimizing a single Pannini projection model. Then, multiple Pannini projection models at salient regions are interpolated to suppress image distortion globally. Finally, temporal consistency of the projection is enforced to produce temporally stable normal-view videos. Our proposed projection method does not require any user interaction and is much faster than previous content-preserving methods. It can be applied not only to images but also to videos, taking the temporal consistency of projection into account. Experiments on various 360 videos show the superiority of the proposed projection method quantitatively and qualitatively.
project images with multiple, partially different models depending on the contents, such as lines and objects. These methods are comparatively simple and produce less distortion than single-model projections. However, they can yield strong distortion at the borders of the regions where different models are applied. Zelnik-Manor et al. @cite_2 proposed a multi-model method that applies locally different projections depending on the scene structure of the panoramic image, with user interaction. Rectangling stereographic projection @cite_10 projects the spherical image onto a swung surface, a combination of two orthogonal cylindrical projections with rounded edges. When the image is divided into four triangular regions by its two diagonals, it preserves vertical lines in the left and right triangular regions and horizontal lines in the upper and lower triangular regions. However, it strongly distorts linear structures and objects straddling the diagonals.
{ "cite_N": [ "@cite_10", "@cite_2" ], "mid": [ "2121169511", "2103189262" ], "abstract": [ "This paper proposes a new projection model for mapping a hemisphere to a plane. Such a model can be useful for viewing wide-angle images. Our model consists of two steps. In the first step, the hemisphere is projected onto a swung surface constructed by a circular profile and a rounded rectangular trajectory. The second step maps the projected image on the swung surface onto the image plane through the perspective projection. We also propose a method for automatically determining proper parameters for the projection model based on image content. The proposed model has several advantages. It is simple, efficient and easy to control. Most importantly, it makes a better compromise between distortion minimization and line preserving than popular projection models, such as stereographic and Pannini projections. Experiments and analysis demonstrate the effectiveness of our model.", "Pictures taken by a rotating camera cover the viewing sphere surrounding the center of rotation. Having a set of images registered and blended on the sphere what is left to be done, in order to obtain a flat panorama, is projecting the spherical image onto a picture plane. This step is unfortunately not obvious - the surface of the sphere may not be flattened onto a page without some form of distortion. The objective of this paper is discussing the difficulties and opportunities that are connected to the projection from viewing sphere to image plane. We first explore a number of alternatives to the commonly used linear perspective projection. These are 'global' projections and do not depend on image content. We then show that multiple projections may coexist successfully in the same mosaic: these projections are chosen locally and depend on what is present in the pictures. We show that such multi-view projections can produce more compelling results than the global projections" ] }
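The border-distortion problem of multi-model methods can be illustrated with a toy blend: the sketch below mixes a rectilinear and a cylindrical model with a smoothstep weight over longitude, one simple way to make the transition between regions seamless. The models, the transition band, and the weighting are illustrative assumptions, not the exact schemes of @cite_2 or @cite_10.

```python
import math

def rectilinear(lon, lat):
    # gnomonic projection, tangent plane at (0, 0)
    return math.tan(lon), math.tan(lat) / math.cos(lon)

def cylindrical(lon, lat):
    # vertical-line-preserving cylindrical model
    return lon, math.tan(lat)

def blended(lon, lat, lon0=0.6, width=0.4):
    """Illustrative multi-model projection: rectilinear near the view
    centre, cylindrical toward the periphery, mixed with a smoothstep
    weight so there is no hard seam at the region border."""
    t = min(max((abs(lon) - lon0) / width, 0.0), 1.0)
    w = t * t * (3.0 - 2.0 * t)          # smoothstep weight in [0, 1]
    xr, yr = rectilinear(lon, lat)
    xc, yc = cylindrical(lon, lat)
    return (1 - w) * xr + w * xc, (1 - w) * yr + w * yc
```

A hard switch between the two models at lon0 would create exactly the border artifacts the text mentions; the smoothstep makes the mapping C1-continuous across the band.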
1704.07528
2609715904
To watch 360 videos on normal 2D displays, we need to project the selected part of the 360 image onto the 2D display plane. In this paper, we propose a fully-automated framework for generating content-aware 2D normal-view perspective videos from 360 videos. In particular, we focus on a projection step that preserves important image contents and reduces image distortion. Our projection method is based on the Pannini projection model. First, the salient contents such as linear structures and salient regions in the image are preserved by optimizing a single Pannini projection model. Then, multiple Pannini projection models at salient regions are interpolated to suppress image distortion globally. Finally, temporal consistency of the projection is enforced to produce temporally stable normal-view videos. Our proposed projection method does not require any user interaction and is much faster than previous content-preserving methods. It can be applied not only to images but also to videos, taking the temporal consistency of projection into account. Experiments on various 360 videos show the superiority of the proposed projection method quantitatively and qualitatively.
try to minimize projection distortion using optimization techniques @cite_4 . Carroll et al. @cite_4 proposed a content-preserving, optimization-based projection method. It produces less distorted images than other approaches and preserves important contents in the image well. However, it is computationally much more expensive because of the iterative optimization process run for every single point. Moreover, it requires non-trivial user interaction to specify the straight lines to be preserved.
{ "cite_N": [ "@cite_4" ], "mid": [ "2057766517" ], "abstract": [ "Any projection of a 3D scene into a wide-angle image unavoidably results in distortion. Current projection methods either bend straight lines in the scene, or locally distort the shapes of scene objects. We present a method that minimizes this distortion by adapting the projection to content in the scene, such as salient scene regions and lines, in order to preserve their shape. Our optimization technique computes a spatially-varying projection that respects user-specified constraints while minimizing a set of energy terms that measure wide-angle image distortion. We demonstrate the effectiveness of our approach by showing results on a variety of wide-angle photographs, as well as comparisons to standard projections." ] }
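The optimization-based idea can be sketched in one dimension: choose image positions for a row of viewing angles by minimizing a quadratic energy that trades a locally conformal spacing term against a global straightness term. The energy, its weights, and the gradient-descent solver below are illustrative assumptions, not Carroll et al.'s actual mesh formulation.

```python
import math

def optimize_mapping(angles, lam=0.5, iters=2000, lr=0.05):
    """Toy 1-D content-preserving projection: pick image positions u[i]
    for sorted viewing angles by minimising
        E(u) = sum_i (u[i+1] - u[i] - c[i])^2 + lam * sum_i (u[i] - angles[i])^2,
    where c[i] is a locally conformal target spacing and the second term
    pulls toward a straight, linear layout.  Plain gradient descent on a
    convex quadratic; an illustrative stand-in for the real solver."""
    n = len(angles)
    # conformal target spacing grows like 1/cos toward the periphery
    c = [(angles[i + 1] - angles[i]) / math.cos(0.5 * (angles[i + 1] + angles[i]))
         for i in range(n - 1)]
    u = list(angles)                      # start from the linear layout
    for _ in range(iters):
        g = [2.0 * lam * (u[i] - angles[i]) for i in range(n)]
        for i in range(n - 1):
            d = 2.0 * (u[i + 1] - u[i] - c[i])
            g[i] -= d
            g[i + 1] += d
        u = [ui - lr * gi for ui, gi in zip(u, g)]
    return u
```

The resulting mapping stays monotone and widens its spacing toward the periphery, the 1-D analogue of a spatially-varying projection that relaxes distortion away from the view centre.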
1704.07351
2609754840
Betweenness centrality is an important index widely used in different domains such as social networks, traffic networks, and the world wide web. However, even for mid-size networks with only a few hundred thousand vertices, it is computationally expensive to compute exact betweenness scores. Therefore, in recent years, several approximate algorithms have been developed. In this paper, first, given a network @math and a vertex @math , we propose a Metropolis-Hastings MCMC algorithm that samples from the space @math and estimates the betweenness score of @math . The stationary distribution of our MCMC sampler is the optimal sampling distribution proposed for betweenness centrality estimation. We show that our MCMC sampler provides an @math -approximation, where the number of required samples depends on the position of @math in @math and in many cases is a constant. Then, given a network @math and a set @math , we present a Metropolis-Hastings MCMC sampler that samples from the joint space of @math and @math and estimates the relative betweenness scores of the vertices in @math . We show that for any pair @math , the ratio of the expected values of the estimated relative betweenness scores of @math and @math with respect to each other is equal to the ratio of their betweenness scores. We also show that our joint-space MCMC sampler provides an @math -approximation of the relative betweenness score of @math with respect to @math , where the number of required samples depends on the position of @math in @math and in many cases is a constant.
Everett and Borgatti @cite_14 defined group betweenness centrality as a natural extension of betweenness centrality to sets of vertices. The group betweenness centrality of a set is defined as the number of shortest paths passing through at least one of the vertices in the set @cite_14 . The other natural extension of betweenness centrality is co-betweenness centrality, defined as the number of shortest paths passing through all vertices in the set. Kolaczyk et al. @cite_4 presented an @math time algorithm for co-betweenness centrality computation of sets of size 2. Chehreghani @cite_3 presented efficient algorithms for co-betweenness centrality computation of any set or sequence of vertices in weighted and unweighted networks. Puzis et al. @cite_33 proposed an @math time algorithm for computing successive group betweenness centrality, where @math is the size of the set. The same authors in @cite_15 presented two algorithms for finding the most prominent group of a network: a set of vertices of minimum size such that every shortest path in the network passes through at least one of its vertices. The first algorithm is based on a heuristic search and the second on an iterative greedy choice of vertices.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_33", "@cite_3", "@cite_15" ], "mid": [ "2089370556", "1992229444", "2003408562", "", "1790404493" ], "abstract": [ "This paper extends the standard network centrality measures of degree, closeness and betweenness to apply to groups and classes as well as individuals. The group centrality measures will enable researchers to answer such questions as ‘how central is the engineering department in the informal influence network of this company?’ or ‘among middle managers in a given organization, which are more central, the men or the women?’ With these measures we can also solve the inverse problem: given the network of ties among organization members, how can we form a team that is maximally central? The measures are illustrated using two classic network data sets. We also formalize a measure of group centrality efficiency, which indicates the extent to which a group's centrality is principally due to a small subset of its members.", "Vertex betweenness centrality is a metric that seeks to quantify a sense of the importance of a vertex in a network in terms of its ‘control’ on the flow of information along geodesic paths throughout the network. Two natural ways to extend vertex betweenness centrality to sets of vertices are (i) in terms of geodesic paths that pass through at least one of the vertices in the set, and (ii) in terms of geodesic paths that pass through all vertices in the set. The former was introduced by Everett and Borgatti [Everett, M., Borgatti, S., 1999. The centrality of groups and classes. Journal of Mathematical Sociology 23 (3), 181–201], and called group betweenness centrality. The latter, which we call co-betweenness centrality here, has not been considered formally in the literature until now, to the best of our knowledge. In this paper, we show that these two notions of centrality are in fact intimately related and, furthermore, that this relationship may be exploited to obtain deeper insight into both. In particular, we provide an expansion for group betweenness in terms of increasingly higher orders of co-betweenness, in a manner analogous to the Taylor series expansion of a mathematical function in calculus. We then demonstrate the utility of this expansion by using it to construct analytic lower and upper bounds for group betweenness that involve only simple combinations of (i) the betweenness of individual vertices in the group, and (ii) the co-betweenness of pairs of these vertices. Accordingly, we argue that the latter quantity, i.e., pairwise co-betweenness, is itself a fundamental quantity of some independent interest, and we present a computationally efficient algorithm for its calculation, which extends the algorithm of Brandes [Brandes, U., 2001. A faster algorithm for betweenness centrality. Journal of Mathematical Sociology 25, 163] in a natural manner. Applications are provided throughout, using a handful of different communication networks, which serve to illustrate the way in which our mathematical contributions allow for insight to be gained into the interaction of network structure, coalitions, and information flow in social networks.", "In this paper, we propose a method for rapid computation of group betweenness centrality whose running time (after preprocessing) does not depend on network size. The calculation of group betweenness centrality is computationally demanding and, therefore, it is not suitable for applications that compute the centrality of many groups in order to identify new properties. Our method is based on the concept of path betweenness centrality defined in this paper. We demonstrate how the method can be used to find the most prominent group. Then, we apply the method for epidemic control in communication networks. We also show how the method can be used to evaluate distributions of group betweenness centrality and its correlation with group degree. The method may assist in finding further properties of complex networks and may open a wide range of research opportunities.", "", "In many applications we are required to locate the most prominent group of vertices in a complex network. Group Betweenness Centrality can be used to evaluate the prominence of a group of vertices. Evaluating the Betweenness of every possible group in order to find the most prominent is not computationally feasible for large networks. In this paper we present two algorithms for finding the most prominent group. The first algorithm is based on heuristic search and the second is based on iterative greedy choice of vertices. The algorithms were evaluated on random and scale-free networks. Empirical evaluation suggests that the greedy algorithm results were negligibly below the optimal result. In addition, both algorithms performed better on scale-free networks: heuristic search was faster and the greedy algorithm produced more accurate results. The greedy algorithm was applied for optimizing deployment of intrusion detection devices on network service provider infrastructure." ] }
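The two set-level definitions can be checked directly on small graphs: group betweenness counts shortest paths through at least one group vertex, co-betweenness those through all of them. A brute-force sketch (counting interior vertices only, over pairs outside the group; the algorithms of @cite_33 and @cite_3 are of course far more efficient):

```python
from collections import deque

def bfs_dist(adj, s):
    """BFS distances from s in an unweighted graph given as an adjacency dict."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def shortest_paths(adj, s, t):
    """All shortest s-t paths, built backwards along BFS distance levels."""
    dist = bfs_dist(adj, s)
    if t not in dist:
        return []
    paths = []
    def back(v, tail):
        if v == s:
            paths.append([s] + tail)
            return
        for u in adj[v]:
            if dist.get(u, -1) == dist[v] - 1:
                back(u, [v] + tail)
    back(t, [])
    return paths

def group_and_co_betweenness(adj, group):
    """Count shortest paths (over unordered pairs outside the group) whose
    interior vertices hit >= 1 vertex of the group (group betweenness) or
    all of them (co-betweenness)."""
    g = set(group)
    nodes = sorted(adj)
    grp = co = 0
    for i, s in enumerate(nodes):
        for t in nodes[i + 1:]:
            if s in g or t in g:
                continue
            for p in shortest_paths(adj, s, t):
                interior = set(p[1:-1])
                if interior & g:
                    grp += 1
                if g <= interior:
                    co += 1
    return grp, co
```

On a path graph 1-2-3-4-5, the set {3} lies on every path crossing the middle, while {2, 4} is jointly traversed only by the single end-to-end path.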
1704.07351
2609754840
Betweenness centrality is an important index widely used in different domains such as social networks, traffic networks, and the world wide web. However, even for mid-size networks with only a few hundred thousand vertices, it is computationally expensive to compute exact betweenness scores. Therefore, in recent years, several approximate algorithms have been developed. In this paper, first, given a network @math and a vertex @math , we propose a Metropolis-Hastings MCMC algorithm that samples from the space @math and estimates the betweenness score of @math . The stationary distribution of our MCMC sampler is the optimal sampling distribution proposed for betweenness centrality estimation. We show that our MCMC sampler provides an @math -approximation, where the number of required samples depends on the position of @math in @math and in many cases is a constant. Then, given a network @math and a set @math , we present a Metropolis-Hastings MCMC sampler that samples from the joint space of @math and @math and estimates the relative betweenness scores of the vertices in @math . We show that for any pair @math , the ratio of the expected values of the estimated relative betweenness scores of @math and @math with respect to each other is equal to the ratio of their betweenness scores. We also show that our joint-space MCMC sampler provides an @math -approximation of the relative betweenness score of @math with respect to @math , where the number of required samples depends on the position of @math in @math and in many cases is a constant.
Riondato and Kornaropoulos @cite_26 presented shortest path samplers for estimating the betweenness centrality of all vertices, or of the @math vertices with the highest betweenness scores, in a graph. They determined the number of samples needed to approximate the betweenness with the desired accuracy and confidence by means of VC-dimension theory @cite_23 . Recently, Riondato and Upfal @cite_26 introduced algorithms for estimating the betweenness scores of all vertices in a graph. They also discussed a variant of the algorithm that finds the top- @math vertices. They used Rademacher averages @cite_16 to determine the number of required samples. Finally, Borassi and Natale @cite_2 presented the KADABRA algorithm, which uses balanced bidirectional BFS (bb-BFS) to sample shortest paths. In bb-BFS, a BFS is performed from each of the two endpoints @math and @math , in such a way that the two searches are likely to explore about the same number of edges.
{ "cite_N": [ "@cite_26", "@cite_2", "@cite_23", "@cite_16" ], "mid": [ "1029409534", "2344526012", "2029538739", "607505555" ], "abstract": [ "Betweenness centrality is a fundamental measure in social network analysis, expressing the importance or influence of individual vertices (or edges) in a network in terms of the fraction of shortest paths that pass through them. Since exact computation in large networks is prohibitively expensive, we present two efficient randomized algorithms for betweenness estimation. The algorithms are based on random sampling of shortest paths and offer probabilistic guarantees on the quality of the approximation. The first algorithm estimates the betweenness of all vertices (or edges): all approximate values are within an additive factor @math from the real values, with probability at least @math . The second algorithm focuses on the top-K vertices (or edges) with highest betweenness and estimates their betweenness value to within a multiplicative factor @math , with probability at least @math . This is the first algorithm that can compute such approximation for the top-K vertices (or edges). By proving upper and lower bounds to the VC-dimension of a range set associated with the problem at hand, we can bound the sample size needed to achieve the desired approximations. We obtain sample sizes that are independent from the number of vertices in the network and only depend on a characteristic quantity that we call the vertex-diameter, that is the maximum number of vertices in a shortest path. In some cases, the sample size is completely independent from any quantitative property of the graph. An extensive experimental evaluation on real and artificial networks shows that our algorithms are significantly faster and much more scalable as the number of vertices grows than other algorithms with similar approximation guarantees.", "We present KADABRA, a new algorithm to approximate betweenness centrality in directed and undirected graphs, which significantly outperforms all previous approaches on real-world complex networks. The efficiency of the new algorithm relies on two new theoretical contributions, of independent interest. The first contribution focuses on sampling shortest paths, a subroutine used by most algorithms that approximate betweenness centrality. We show that, on realistic random graph models, we can perform this task in time @math with high probability, obtaining a significant speedup with respect to the @math worst-case performance. We experimentally show that this new technique achieves similar speedups on real-world complex networks, as well. The second contribution is a new rigorous application of the adaptive sampling technique. This approach decreases the total number of shortest paths that need to be sampled to compute all betweenness centralities with a given absolute error, and it also handles more general problems, such as computing the @math most central nodes. Furthermore, our analysis is general, and it might be extended to other settings.", "This chapter reproduces the English translation by B. Seckler of the paper by Vapnik and Chervonenkis in which they gave proofs for the innovative results they had obtained in a draft form in July 1966 and announced in 1968 in their note in Soviet Mathematics Doklady. The paper was first published in Russian as Vapnik V. N. and Chervonenkis A. Ya., On the uniform convergence of the frequencies of occurrence of events to their probabilities, Theory of Probability and its Applications 16(2), 264–279 (1971).", "Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics, and engineering." ] }
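The path-sampling idea behind these estimators can be sketched directly: sample vertex pairs uniformly, pick one shortest path between them uniformly at random, and credit its interior vertices. The helper below is a minimal illustration of that scheme (unbiased for the normalized betweenness), not the exact algorithm of any of the cited papers:

```python
import random
from collections import deque

def sample_betweenness(adj, r, seed=0):
    """Sampling-based betweenness estimation (a sketch): draw r ordered
    pairs (s, t) uniformly, pick one shortest s-t path uniformly at
    random, and credit each interior vertex 1/r.  The estimate converges
    to b(v) = sum_{s != t} sigma_st(v) / sigma_st, divided by n(n-1)."""
    rng = random.Random(seed)
    nodes = list(adj)
    est = {v: 0.0 for v in nodes}
    for _ in range(r):
        s, t = rng.sample(nodes, 2)
        # BFS from s, recording predecessors and path counts sigma
        dist = {s: 0}; sigma = {s: 1}; preds = {s: []}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1; sigma[v] = 0; preds[v] = []
                    q.append(v)
                if dist[v] == dist[u] + 1:
                    sigma[v] += sigma[u]; preds[v].append(u)
        if t not in dist:
            continue
        # walk back from t, choosing predecessors proportionally to sigma,
        # so every shortest s-t path is equally likely
        v = t
        while v != s:
            u = rng.choices(preds[v], weights=[sigma[p] for p in preds[v]])[0]
            if u != s:
                est[u] += 1.0 / r
            v = u
    return est
```

On a path graph 1-2-3 the only paths with a non-empty interior are those between 1 and 3 (2 of the 6 ordered pairs), so the estimate for vertex 2 concentrates around 1/3.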
1704.07351
2609754840
Betweenness centrality is an important index widely used in different domains such as social networks, traffic networks, and the world wide web. However, even for mid-size networks with only a few hundred thousand vertices, it is computationally expensive to compute exact betweenness scores. Therefore, in recent years, several approximate algorithms have been developed. In this paper, first, given a network @math and a vertex @math , we propose a Metropolis-Hastings MCMC algorithm that samples from the space @math and estimates the betweenness score of @math . The stationary distribution of our MCMC sampler is the optimal sampling distribution proposed for betweenness centrality estimation. We show that our MCMC sampler provides an @math -approximation, where the number of required samples depends on the position of @math in @math and in many cases is a constant. Then, given a network @math and a set @math , we present a Metropolis-Hastings MCMC sampler that samples from the joint space of @math and @math and estimates the relative betweenness scores of the vertices in @math . We show that for any pair @math , the ratio of the expected values of the estimated relative betweenness scores of @math and @math with respect to each other is equal to the ratio of their betweenness scores. We also show that our joint-space MCMC sampler provides an @math -approximation of the relative betweenness score of @math with respect to @math , where the number of required samples depends on the position of @math in @math and in many cases is a constant.
Lee et al. @cite_19 proposed an algorithm to efficiently update the betweenness centralities of vertices when a new edge is inserted into the graph. They reduced the search space by finding a candidate set of vertices whose betweenness centralities can change. Bergamini et al. @cite_24 presented approximate algorithms that update the betweenness scores of all vertices when an edge is inserted or the weight of an edge decreases. They used the algorithm of @cite_26 as the building block. Hayashi et al. @cite_30 proposed a fully dynamic algorithm for estimating the betweenness centrality of all vertices in a large dynamic network. Their algorithm is based on two data structures: a hypergraph sketch that keeps track of SPDs, and a two-ball index that helps to identify the parts of the hypergraph sketches that require updates.
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_26", "@cite_30" ], "mid": [ "2963957156", "2036977244", "1029409534", "" ], "abstract": [ "Betweenness centrality ranks the importance of nodes by their participation in all shortest paths of the network. Therefore computing exact betweenness values is impractical in large networks. For static networks, approximation based on randomly sampled paths has been shown to be significantly faster in practice. However, for dynamic networks, no approximation algorithm for betweenness centrality is known that improves on static recomputation. We address this deficit by proposing two incremental approximation algorithms (for weighted and unweighted connected graphs) which provide a provable guarantee on the absolute approximation error. Processing batches of edge insertions, our algorithms yield significant speedups up to a factor of 10^4 compared to restarting the approximation. This is enabled by investing memory to store and efficiently update shortest paths. As a building block, we also propose an asymptotically faster algorithm for updating the SSSP problem in unweighted graphs. Our experimental study shows that our algorithms are the first to make in-memory computation of a betweenness ranking practical for million-edge semi-dynamic networks. Moreover, our results show that the accuracy is even better than the theoretical guarantees in terms of absolute errors and the rank of nodes is well preserved, in particular for those with high betweenness.", "The betweenness centrality of a vertex in a graph is a measure for the participation of the vertex in the shortest paths in the graph. The betweenness centrality is widely used in network analyses. Especially in a social network, the recursive computation of the betweenness centralities of vertices is performed for the community detection and finding the influential user in the network. Since a social network graph is frequently updated, it is necessary to update the betweenness centrality efficiently. When a graph is changed, the betweenness centralities of all the vertices should be recomputed from scratch using all the vertices in the graph. To the best of our knowledge, this is the first work that proposes an efficient algorithm which handles the update of the betweenness centralities of vertices in a graph. In this paper, we propose a method that efficiently reduces the search space by finding a candidate set of vertices whose betweenness centralities can be updated and computes their betweenness centralities using candidate vertices only. As the cost of calculating the betweenness centrality mainly depends on the number of vertices to be considered, the proposed algorithm significantly reduces the cost of calculation. The proposed algorithm allows the transformation of an existing algorithm which does not consider the graph update. Experimental results on large real datasets show that the proposed algorithm speeds up the existing algorithm 2 to 2418 times depending on the dataset.", "Betweenness centrality is a fundamental measure in social network analysis, expressing the importance or influence of individual vertices (or edges) in a network in terms of the fraction of shortest paths that pass through them. Since exact computation in large networks is prohibitively expensive, we present two efficient randomized algorithms for betweenness estimation. The algorithms are based on random sampling of shortest paths and offer probabilistic guarantees on the quality of the approximation. The first algorithm estimates the betweenness of all vertices (or edges): all approximate values are within an additive factor @math from the real values, with probability at least @math . The second algorithm focuses on the top-K vertices (or edges) with highest betweenness and estimates their betweenness value to within a multiplicative factor @math , with probability at least @math . This is the first algorithm that can compute such approximation for the top-K vertices (or edges). By proving upper and lower bounds to the VC-dimension of a range set associated with the problem at hand, we can bound the sample size needed to achieve the desired approximations. We obtain sample sizes that are independent from the number of vertices in the network and only depend on a characteristic quantity that we call the vertex-diameter, that is the maximum number of vertices in a shortest path. In some cases, the sample size is completely independent from any quantitative property of the graph. An extensive experimental evaluation on real and artificial networks shows that our algorithms are significantly faster and much more scalable as the number of vertices grows than other algorithms with similar approximation guarantees.", "" ] }
1704.07709
2610157343
Deep convolutional neural networks (DCNNs) are an influential tool for solving various problems in the machine learning and computer vision fields. In this paper, we introduce a new deep learning model called an Inception-Recurrent Convolutional Neural Network (IRCNN), which utilizes the power of an inception network combined with recurrent layers in a DCNN architecture. We have empirically evaluated the recognition performance of the proposed IRCNN model using different benchmark datasets such as MNIST, CIFAR-10, CIFAR-100, and SVHN. Experimental results show similar or higher recognition accuracy when compared to most of the popular DCNNs, including the RCNN. Furthermore, we have investigated IRCNN performance against equivalent Inception Networks and Inception-Residual Networks using the CIFAR-100 dataset. We report about 3.5%, 3.47%, and 2.54% improvement in classification accuracy when compared to the RCNN, equivalent Inception Networks, and Inception-Residual Networks on the augmented CIFAR-100 dataset, respectively.
Currently, most research has focused on improving the recognition accuracy of DCNN models, and comparatively little work has explored recurrent architectures within convolutional neural networks. The recurrent strategy is important for modeling context in the input samples. In 2015, @cite_0 proposed an RCNN structure, the first to address object recognition tasks this way. The architecture consists of several blocks of recurrent convolutional layers, each followed by a max-pooling layer. The second-to-last layer applies global max-pooling, followed by a softmax layer at the end. This architecture showed state-of-the-art accuracy for object classification at the time @cite_0 @cite_12 . The Long-term Recurrent Convolutional Network (LRCN) was proposed for visual recognition and description @cite_43 . This architecture combines two popular techniques, CNN and LSTM: features are extracted by the CNN, and an LSTM models how the features vary over time. This model shows outstanding performance for visual description @cite_43 . From the above discussion, it can be concluded that DCNNs with improved architectures achieve strong results on visual recognition tasks.
{ "cite_N": [ "@cite_0", "@cite_43", "@cite_12" ], "mid": [ "", "2951183276", "1546771929" ], "abstract": [ "", "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.", "The goal of the scene labeling task is to assign a class label to each pixel in an image. To ensure a good visual coherence and a high class accuracy, it is essential for a model to capture long range (pixel) label dependencies in images. 
In a feed-forward architecture, this can be achieved simply by considering a sufficiently large input context patch, around each pixel to be labeled. We propose an approach that consists of a recurrent convolutional neural network which allows us to consider a large input context while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation technique nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time." ] }
1704.07002
2608222948
Desynchronization is one of the primitive services for complex networks because it arranges nodes to take turns accessing a shared resource. TDMA is a practical application of desynchronization because it allows nodes to share a common medium. In this paper, we propose a novel desynchronization algorithm using an artificial force field, called Multi-hop Desynchronization With an ARtificial Force field (M-DWARF), and use it to solve TDMA problems in wireless sensor networks (WSNs). M-DWARF solves hidden terminal problems in multi-hop networks by using relative time relaying and improves resource utilization by employing force absorption. M-DWARF is suitable for use in WSNs because of its low complexity and autonomous operation. We implement M-DWARF in TinyOS and evaluate it on both TelosB motes and TOSSIM. From our investigation, M-DWARF is the only desynchronization algorithm that achieves fast convergence with high stability while maintaining channel utilization fairness. Moreover, we are the first to provide a stability analysis using dynamical systems and to prove that M-DWARF is stable at equilibrium. (This paper is the first part of the series Stable Desynchronization for Wireless Sensor Networks: (I) Concepts and Algorithms, (II) Performance Evaluation, (III) Stability Analysis.)
Other works related to desynchronization protocols are distributed Time Division Multiple Access (TDMA) protocols. Distributed TDMA protocols are similar to M-DESYNC @cite_0 ; their protocols work at the granularity of time slots. Similar to M-DESYNC, many of the distributed TDMA protocols, such as TRAMA @cite_3 , Parthasarathy @cite_20 , ED-TDMA @cite_22 , and Herman @cite_19 , assume time is already slotted or that all nodes are synchronized to the same global clock. In our work, we do not require time synchronization and do not assume already slotted time. S. C. @cite_23 propose node-based and level-based TDMA scheduling algorithms for WSNs. Their techniques mainly derive from graph coloring algorithms, in which the slots must have been predefined. In contrast, our desynchronization algorithms never predefine slots but rather allow nodes to adjust their slots with those in the neighborhood on the fly. K. S. @cite_17 survey distributed scheduling techniques for wireless mesh networks, and A. @cite_14 provide an extensive survey of recent advances in TDMA algorithms in wireless multi-hop networks. Both surveys give a comprehensive overview of the TDMA scheduling algorithms and techniques that are still being investigated and further developed by wireless network researchers worldwide.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_3", "@cite_0", "@cite_19", "@cite_23", "@cite_20", "@cite_17" ], "mid": [ "1966146195", "2133591145", "1986917282", "2162626583", "2950588791", "", "", "2116567741" ], "abstract": [ "One of the major problems in wireless multihop networks is the scheduling of transmissions in a fair and efficient manner. Time Division Multiple Access (TDMA) seems to be one of the dominant solutions to achieve this goal since it is a simple scheme and can prolong the devices’ lifetime by allowing them to transmit only a portion of the time during conversation. For that reason, several TDMA scheduling algorithms may be found in the literature. The scope of this article is to classify the existing TDMA scheduling algorithms based on several factors, such as the entity that is scheduled, the network topology information that is needed to produce or maintain the schedule, and the entity or entities that perform the computation that produces and maintains the schedules, and to discuss the advantages and disadvantages of each category.", "MAC protocol controls the activity of wireless radio of sensor nodes directly so that it is the major consumer of sensor energy and the energy efficiency of MAC protocol makes a strong impact on the network performance. TDMA-based MAC protocol is inherently collision-free and can rule out idle listening since nodes know when to transmit. However, conventional TDMA protocol is not suitable for event-driven applications. In this paper, we present ED-TDMA, an event-driven TDMA protocol for wireless sensor networks. Then we conduct extensive simulations to compare it with other MAC protocols such as BMA, S-MAC, and LMAC. 
Simulation results show that ED-TDMA performs better for event-driven application in wireless sensor networks with high-density deployment and under low traffic.", "The traffic-adaptive medium access protocol (TRAMA) is introduced for energy-efficient collision-free channel access in wireless sensor networks. TRAMA reduces energy consumption by ensuring that unicast, multicast, and broadcast transmissions have no collisions, and by allowing nodes to switch to a low-power, idle state whenever they are not transmitting or receiving. TRAMA assumes that time is slotted and uses a distributed election scheme based on information about the traffic at each node to determine which node can transmit at a particular time slot. TRAMA avoids the assignment of time slots to nodes with no traffic to send, and also allows nodes to determine when they can become idle and not listen to the channel using traffic information. TRAMA is shown to be fair and correct, in that no idle node is an intended receiver and no receiver suffers collisions. The performance of TRAMA is evaluated through extensive simulations using both synthetic- as well as sensor-network scenarios. The results indicate that TRAMA outperforms contention-based protocols (e.g., CSMA, 802.11 and S-MAC) as well as scheduling-based protocols (e.g., NAMA) with significant energy savings.", "This paper presents a new desynchronization algorithm aimed at providing collision-free transmission scheduling for single-hop and acyclic multi-hop wireless sensor networks. The desynchronization approach is resilient to the hidden terminal problem and topology changes. Each node distributively converges upon a single collision-free transmission slot, utilizing only minimal neighbor information. In addition, we propose two strategies which facilitate increased convergence time. We evaluate the proposed algorithm via simulations over a range of network densities on both single-hop and acyclic multi- hop networks. 
Convergence and throughput comparisons are performed against two previously proposed desynchronization algorithms. Finally, using an experimental testbed of TelosB motes, we verify the performance, practicality, and correctness of the desynchronization algorithm on varying network topologies.", "Wireless sensor networks benefit from communication protocols that reduce power requirements by avoiding frame collision. Time Division Media Access methods schedule transmission in slots to avoid collision, however these methods often lack scalability when implemented in networks subject to node failures and dynamic topology. This paper reports a distributed algorithm for TDMA slot assignment that is self-stabilizing to transient faults and dynamic topology change. The expected local convergence time is O(1) for any size network satisfying a constant bound on the size of a node neighborhood.", "", "", "An efficient scheduling scheme is a crucial part of Wireless Mesh Networks (WMNs)—an emerging communication infrastructure solution for autonomy, scalability, higher throughput, lower delay metrics, energy efficiency, and other service-level guarantees. Distributed schedulers are preferred due to better scalability, smaller setup delays, smaller management overheads, no single point of failure, and for avoiding bottlenecks. Based on the sequence in which nodes access the shared medium, repetitiveness, and determinism, distributed schedulers that are supported by wireless mesh standards can be classified as either random, pseudo-random, or cyclic schemes. We performed qualitative and quantitative studies that show the strengths and weaknesses of each category, and how the schemes complement each other. We discuss how wireless standards with mesh definitions have evolved by incorporating and enhancing one or more of these schemes. Emerging trends and research problems remaining for future research also have been identified." ] }
1704.07077
2608614416
Multi-attributed graph matching is a problem of finding correspondences between two sets of data while considering their complex properties described in multiple attributes. However, the information of multiple attributes is likely to be oversimplified during a process that makes an integrated attribute, and this degrades the matching accuracy. For that reason, a multi-layer graph structure-based algorithm has been proposed recently. It can effectively avoid the problem by separating attributes into multiple layers. Nonetheless, there are several remaining issues such as a scalability problem caused by the huge matrix to describe the multi-layer structure and a back-projection problem caused by the continuous relaxation of the quadratic assignment problem. In this work, we propose a novel multi-attributed graph matching algorithm based on the multi-layer graph factorization. We reformulate the problem to be solved with several small matrices that are obtained by factorizing the multi-layer structure. Then, we solve the problem using a convex-concave relaxation procedure for the multi-layer structure. The proposed algorithm exhibits better performance than state-of-the-art algorithms based on the single-layer structure.
The path following algorithm was first proposed by Zaslavskiy et al. @cite_31 . They formulated a graph matching problem on an adjacency matrix and solved it using a convex-concave relaxation approach. Zhou and De la Torre @cite_25 @cite_15 proposed methods that factorize the affinity matrix and applied the algorithm to graph matching. In their work, the convex-concave relaxation of the objective function was derived using the factorized matrices. Liu et al. @cite_23 extended the original algorithm to solve directed graph matching problems defined on an adjacency matrix by modifying the concave relaxation method. They then generalized the algorithm to solve the problems defined on the affinity matrix and the partial permutation matrix, respectively, in @cite_32 @cite_29 .
{ "cite_N": [ "@cite_15", "@cite_29", "@cite_32", "@cite_23", "@cite_31", "@cite_25" ], "mid": [ "2094539604", "", "2336641195", "2068627296", "2949291209", "2076080556" ], "abstract": [ "Graph matching (GM) is a fundamental problem in computer science, and it has been successfully applied to many problems in computer vision. Although widely used, existing GM algorithms cannot incorporate global consistence among nodes, which is a natural constraint in computer vision problems. This paper proposes deformable graph matching (DGM), an extension of GM for matching graphs subject to global rigid and non-rigid geometric constraints. The key idea of this work is a new factorization of the pair-wise affinity matrix. This factorization decouples the affinity matrix into the local structure of each graph and the pair-wise affinity edges. Besides the ability to incorporate global geometric transformations, this factorization offers three more benefits. First, there is no need to compute the costly (in space and time) pair-wise affinity matrix. Second, it provides a unified view of many GM methods and extends the standard iterative closest point algorithm. Third, it allows to use the path-following optimization algorithm that leads to improved optimization strategies and matching performance. Experimental results on synthetic and real databases illustrate how DGM outperforms state-of-the-art algorithms for GM. The code is available at http: humansensing.cs.cmu.edu fgm.", "", "In this paper we propose the graduated nonconvexity and concavity procedure (GNCCP) as a general optimization framework to approximately solve the combinatorial optimization problems defined on the set of partial permutation matrices. GNCCP comprises two sub-procedures, graduated nonconvexity which realizes a convex relaxation and graduated concavity which realizes a concave relaxation. 
It is proved that GNCCP realizes exactly a type of convex-concave relaxation procedure (CCRP), but with a much simpler formulation without needing convex or concave relaxation in an explicit way. Actually, GNCCP involves only the gradient of the objective function and is therefore very easy to use in practical applications. Two typical related NP-hard problems, partial graph matching and quadratic assignment problem (QAP), are employed to demonstrate its simplicity and state-of-the-art performance.", "The path following algorithm was proposed recently to approximately solve the matching problems on undirected graph models and exhibited a state-of-the-art performance on matching accuracy. In this paper, we extend the path following algorithm to the matching problems on directed graph models by proposing a concave relaxation for the problem. Based on the concave and convex relaxations, a series of objective functions are constructed, and the Frank-Wolfe algorithm is then utilized to minimize them. Several experiments on synthetic and real data witness the validity of the extended path following algorithm.", "We propose a convex-concave programming approach for the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-square problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We therefore construct an approximation of the concave problem solution by following a solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. 
This method allows to easily integrate the information on graph label similarities into the optimization problem, and therefore to perform labeled weighted graph matching. The algorithm is compared with some of the best performing graph matching methods on four datasets: simulated graphs, QAPLib, retina vessel images and handwritten chinese characters. In all cases, the results are competitive with the state-of-the-art.", "Graph matching plays a central role in solving correspondence problems in computer vision. Graph matching problems that incorporate pair-wise constraints can be cast as a quadratic assignment problem (QAP). Unfortunately, QAP is NP-hard and many algorithms have been proposed to solve different relaxations. This paper presents factorized graph matching (FGM), a novel framework for interpreting and optimizing graph matching problems. In this work we show that the affinity matrix can be factorized as a Kronecker product of smaller matrices. There are three main benefits of using this factorization in graph matching: (1) There is no need to compute the costly (in space and time) pair-wise affinity matrix; (2) The factorization provides a taxonomy for graph matching and reveals the connection among several methods; (3) Using the factorization we derive a new approximation of the original problem that improves state-of-the-art algorithms in graph matching. Experimental results in synthetic and real databases illustrate the benefits of FGM. The code is available at http: humansensing.cs.cmu.edu fgm." ] }
1704.07077
2608614416
Multi-attributed graph matching is a problem of finding correspondences between two sets of data while considering their complex properties described in multiple attributes. However, the information of multiple attributes is likely to be oversimplified during a process that makes an integrated attribute, and this degrades the matching accuracy. For that reason, a multi-layer graph structure-based algorithm has been proposed recently. It can effectively avoid the problem by separating attributes into multiple layers. Nonetheless, there are several remaining issues such as a scalability problem caused by the huge matrix to describe the multi-layer structure and a back-projection problem caused by the continuous relaxation of the quadratic assignment problem. In this work, we propose a novel multi-attributed graph matching algorithm based on the multi-layer graph factorization. We reformulate the problem to be solved with several small matrices that are obtained by factorizing the multi-layer structure. Then, we solve the problem using a convex-concave relaxation procedure for the multi-layer structure. The proposed algorithm exhibits better performance than state-of-the-art algorithms based on the single-layer structure.
The proposed algorithm is inspired by the factorization-based convex-concave relaxation approach @cite_25 @cite_15 . Since multi-attributed graph matching problems that are formulated using a multi-layer structure require a large matrix to describe its structure, the factorization-based relaxation scheme is very useful in resolving the scalability issue.
{ "cite_N": [ "@cite_15", "@cite_25" ], "mid": [ "2094539604", "2076080556" ], "abstract": [ "Graph matching (GM) is a fundamental problem in computer science, and it has been successfully applied to many problems in computer vision. Although widely used, existing GM algorithms cannot incorporate global consistence among nodes, which is a natural constraint in computer vision problems. This paper proposes deformable graph matching (DGM), an extension of GM for matching graphs subject to global rigid and non-rigid geometric constraints. The key idea of this work is a new factorization of the pair-wise affinity matrix. This factorization decouples the affinity matrix into the local structure of each graph and the pair-wise affinity edges. Besides the ability to incorporate global geometric transformations, this factorization offers three more benefits. First, there is no need to compute the costly (in space and time) pair-wise affinity matrix. Second, it provides a unified view of many GM methods and extends the standard iterative closest point algorithm. Third, it allows to use the path-following optimization algorithm that leads to improved optimization strategies and matching performance. Experimental results on synthetic and real databases illustrate how DGM outperforms state-of-the-art algorithms for GM. The code is available at http: humansensing.cs.cmu.edu fgm.", "Graph matching plays a central role in solving correspondence problems in computer vision. Graph matching problems that incorporate pair-wise constraints can be cast as a quadratic assignment problem (QAP). Unfortunately, QAP is NP-hard and many algorithms have been proposed to solve different relaxations. This paper presents factorized graph matching (FGM), a novel framework for interpreting and optimizing graph matching problems. In this work we show that the affinity matrix can be factorized as a Kronecker product of smaller matrices. 
There are three main benefits of using this factorization in graph matching: (1) There is no need to compute the costly (in space and time) pair-wise affinity matrix; (2) The factorization provides a taxonomy for graph matching and reveals the connection among several methods; (3) Using the factorization we derive a new approximation of the original problem that improves state-of-the-art algorithms in graph matching. Experimental results in synthetic and real databases illustrate the benefits of FGM. The code is available at http: humansensing.cs.cmu.edu fgm." ] }
1704.07079
2609624461
Communications using frequency bands in the millimeter-wave range can play a key role in future generations of mobile networks. By allowing large bandwidth allocations, high carrier frequencies will provide high data rates to support the ever-growing capacity demand. The prevailing challenge at high frequencies is the mitigation of large path loss and link blockage effects. Highly directional beams are expected to overcome this challenge. In this paper, we propose a stochastic model for characterizing beam coverage probability. The model takes into account both line-of-sight and first-order non-line-of-sight reflections. We model the scattering environment as a stochastic process and we derive an analytical expression of the coverage probability for any given beam. The results derived are validated numerically and compared with simulations to assess the accuracy of the model.
Communications using mm-waves were initially investigated for indoor and short-range applications, where propagation is facilitated by line-of-sight (LOS) conditions and low mobility. In @cite_1 , the authors propose two algorithms for beam searching, selection, and tracking in wireless local area networks. They discretize the set of beams and find, by iterative search, the best beam pair for the transmitter and the receiver. Similarly, the authors in @cite_5 develop a method that compensates for link blockage by switching between the LOS link and a non-line-of-sight (NLOS) link whenever the former is blocked. However, they do not provide any analytical model of the beam coverage and blockage probability.
{ "cite_N": [ "@cite_5", "@cite_1" ], "mid": [ "2165073164", "2166486216" ], "abstract": [ "In this paper, we propose a solution to resolve link blockage problem in 60 GHz WPANs. Line-of-Sight (LOS) link is easily blocked by a moving person, which is concerned as one of the severe problems in 60 GHz systems. Beamforming is a feasible technique to resolve link blockage by switching the beam path from LOS link to a Non-LOS (NLOS) link. We propose and evaluate two kinds of Beam Switching (BS) mechanisms: instant decision based BS and environment learning based BS. We examine these mechanisms in a typical indoor WPAN scenario. Extensive simulations have been carried out, and our results reveal that combining angle-of-arrival with the received signal to noise ratio could make better decision for beam switching. Our work provides valuable observations for beam switching during point-to-point communication using 60 GHz radio.", "In order to realize high speed, long range, reliable transmission in millimeter-wave 60 GHz wireless personal area networks (60 GHz WPANs), we propose a beamforming (BF) protocol realized in media access control (MAC) layer on top of multiple physical layer (PHY) designs. The proposed BF protocol targets to minimize the BF set-up time and to mitigate the high path loss of 60 GHz WPAN systems. It consists of 3 stages, namely the device (DEV) to DEV linking, sector-level searching and beam-level searching. The division of the stages facilitates significant reduction in setup time as compared to BF protocols with exhaustive searching mechanisms. The proposed BF protocol employs discrete phase-shifters, which significantly simplifies the structure of DEVs as compared to the conventional BF with phase-and-amplitude adjustment, at the expense of a gain degradation of less than 1 dB. The proposed BF protocol is a complete design and PHY-independent, it is applicable to different antenna configurations. 
Simulation results show that the setup time of the proposed BF protocol is as small as 2% when compared to the exhaustive searching protocol. Furthermore, based on the codebooks with four phases per element, around 15.1 dB gain is achieved by using eight antenna elements at both transmitter and receiver, thereby enabling 1.6 Gbps-data-streaming over a range of three meters. Due to the flexibility in supporting multiple PHY layer designs, the proposed protocol has been adopted by the IEEE 802.15.3c as an optional functionality to realize Gbps communication systems." ] }
1704.07079
2609624461
Communications using frequency bands in the millimeter-wave range can play a key role in future generations of mobile networks. By allowing large bandwidth allocations, high carrier frequencies will provide high data rates to support the ever-growing capacity demand. The prevailing challenge at high frequencies is the mitigation of large path loss and link blockage effects. Highly directional beams are expected to overcome this challenge. In this paper, we propose a stochastic model for characterizing beam coverage probability. The model takes into account both line-of-sight and first-order non-line-of-sight reflections. We model the scattering environment as a stochastic process and we derive an analytical expression of the coverage probability for any given beam. The results derived are validated numerically and compared with simulations to assess the accuracy of the model.
Lately, the focus has shifted towards the application of mm-waves in outdoor scenarios and cellular systems. In @cite_6 @cite_8 , the propagation characteristics of mm-waves are investigated. The study in @cite_6 collects measurements taken in New York at 28 and 38 GHz. Results show that, when a highly directional antenna array is used, path loss does not create a significant impediment to propagation, and it is still possible to reach the typical cell coverage of a high-density urban environment. Based on the measurements reported in @cite_6 , @cite_9 derives a statistical channel model for the path loss, the number of spatial clusters, the angular dispersion, and the outage probability.
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_8" ], "mid": [ "1991056455", "2116334496", "" ], "abstract": [ "With the severe spectrum shortage in conventional cellular bands, millimeter wave (mmW) frequencies between 30 and 300 GHz have been attracting growing attention as a possible candidate for next-generation micro- and picocellular wireless networks. The mmW bands offer orders of magnitude greater spectrum than current cellular allocations and enable very high-dimensional antenna arrays for further gains via beamforming and spatial multiplexing. This paper uses recent real-world measurements at 28 and 73 GHz in New York, NY, USA, to derive detailed spatial statistical models of the channels and uses these models to provide a realistic assessment of mmW micro- and picocellular networks in a dense urban deployment. Statistical models are derived for key channel parameters, including the path loss, number of spatial clusters, angular dispersion, and outage. It is found that, even in highly non-line-of-sight environments, strong signals can be detected 100-200 m from potential cell sites, potentially with multiple clusters to support spatial multiplexing. Moreover, a system simulation based on the models predicts that mmW systems can offer an order of magnitude increase in capacity over current state-of-the-art 4G cellular networks with no increase in cell density from current urban deployments.", "The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. 
In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices.", "" ] }
1704.07138
2608747952
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence @math , by maximizing @math . Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
Most related work to date has presented modifications of SMT systems for specific use cases which constrain MT output via auxiliary inputs. The largest body of work considers interactive machine translation (IMT): an MT system searches for the optimal target-language suffix given a complete source sentence and a desired prefix for the target output @cite_4 @cite_0 @cite_5 . IMT can be viewed as a sub-case of constrained decoding, where there is only one constraint, which is guaranteed to be placed at the beginning of the output sequence. One line of work modifies the SMT beam search to first ensure that the prefix is covered, and only then continues to build hypotheses for the suffix, using beams organized by coverage of the remaining phrases in the source segment. Recent work also presents a simple modification of NMT models for IMT, enabling models to predict suffixes for user-supplied prefixes.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_4" ], "mid": [ "1646038686", "", "192863543" ], "abstract": [ "This book presents, characterizes and analyzes problem solving strategies that are guided by heuristic information", "", "Demand for the services of translators is on the increase, and consequently so is the demand for tools to help them improve their productivity. This thesis proposes a novel tool intended to give a translator interactive access to the most powerful translation technology available: a machine translation system. The main new idea is to use the target text being produced as the medium of interaction with the computer. In contrast to previous approaches, this is natural and flexible, placing the translator in full control of the translation process, but giving the tool scope to contribute when it can usefully do so. A simple version of this idea is a system that tries to predict target text in real time as a translator types. This can aid by speeding typing and suggesting ideas, but it can also hinder by distracting the translator, as previous studies have demonstrated. I present a new method for text prediction that aims explicitly at maximizing a translator's productivity according to a model of user characteristics. Simulations show that this approach has the potential to improve the productivity of an average translator by over 10 . The core of the text prediction method presented here is the statistical model used to estimate the probability of upcoming text. This must be as accurate as possible, but also efficient enough to support real-time searching. I describe new models based on the technique of maximum entropy that are specifically designed to balance accuracy and efficiency for the prediction application. These outperform equivalent baseline models used in prior work by about 50 according to an empirical measure of predictive accuracy, with no sacrifice in efficiency." ] }
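The coverage-indexed beam organization that GBS uses for lexically constrained decoding can be illustrated in miniature. The sketch below is a hypothetical simplification: single-token constraints and a unigram scoring "model" stand in for the multi-token phrases and real conditional sequence model of GBS proper.

```python
# Toy sketch of lexically constrained decoding in the spirit of Grid Beam
# Search (hypothetical simplification: single-token constraints, unigram
# scores). Hypotheses are kept in beams indexed by how many constraints
# they have covered; only fully-covering hypotheses may finish.

def constrained_beam_search(logprob, vocab, constraints, length, beam=2):
    """Return a `length`-token sequence containing all `constraints` tokens,
    maximizing the sum of per-token log-probabilities."""
    beams = {0: [(0.0, [])]}  # coverage count -> [(score, sequence), ...]
    for _ in range(length):
        new = {}
        for covered, hyps in beams.items():
            for score, seq in hyps:
                # (a) free generation from the open vocabulary
                for tok in vocab:
                    new.setdefault(covered, []).append(
                        (score + logprob[tok], seq + [tok]))
                # (b) place the next uncovered constraint token
                if covered < len(constraints):
                    tok = constraints[covered]
                    new.setdefault(covered + 1, []).append(
                        (score + logprob[tok], seq + [tok]))
        # prune each coverage level independently, as in GBS
        beams = {c: sorted(h, reverse=True)[:beam] for c, h in new.items()}
    finished = beams.get(len(constraints), [])
    return max(finished)[1] if finished else None

vocab = ["the", "cat", "sat"]
logprob = {"the": -0.5, "cat": -1.0, "sat": -1.5, "mat": -2.0}
out = constrained_beam_search(logprob, vocab, ["mat"], length=3)
print(out)  # a 3-token sequence guaranteed to contain "mat"
```

Grouping beams by coverage prevents hypotheses that have already satisfied their constraints from crowding out those that still must place them, which is the key difference from a vanilla beam search.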
1704.06972
2952743010
Recently, there has been a lot of interest in automatically generating descriptions for an image. Most existing language-model based approaches for this task learn to generate an image description word by word in its original word order. However, for humans, it is more natural to locate the objects and their relationships first, and then elaborate on each object, describing notable attributes. We present a coarse-to-fine method that decomposes the original image description into a skeleton sentence and its attributes, and generates the skeleton sentence and attribute phrases separately. By this decomposition, our method can generate more accurate and novel descriptions than the previous state-of-the-art. Experimental results on the MS-COCO and a larger scale Stock3M datasets show that our algorithm yields consistent improvements across different evaluation metrics, especially on the SPICE metric, which has much higher correlation with human ratings than the conventional metrics. Furthermore, our algorithm can generate descriptions with varied length, benefiting from the separate control of the skeleton and attributes. This enables image description generation that better accommodates user preferences.
Retrieval-based methods search for images visually similar to the input image, and find the best caption among the captions of the retrieved images. For example, Devlin et al. @cite_25 propose a K-nearest-neighbor approach that finds the caption that best represents the set of candidate captions gathered from neighbor images. This method suffers from an obvious problem: the generated captions are always drawn from an existing caption set, and thus it is unable to generate novel captions.
{ "cite_N": [ "@cite_25" ], "mid": [ "1706899115" ], "abstract": [ "We explore a variety of nearest neighbor baseline approaches for image captioning. These approaches find a set of nearest neighbor images in the training set from which a caption may be borrowed for the query image. We select a caption for the query image by finding the caption that best represents the \"consensus\" of the set of candidate captions gathered from the nearest neighbor images. When measured by automatic evaluation metrics on the MS COCO caption evaluation server, these approaches perform as well as many recent approaches that generate novel captions. However, human studies show that a method that generates novel captions is still preferred over the nearest neighbor approach." ] }
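The "consensus" selection described in the abstract above can be sketched as follows. This is a hypothetical simplification: unigram-F1 overlap stands in for the stronger sentence similarities (e.g. BLEU or CIDEr) a real system would use.

```python
# Toy sketch of nearest-neighbor consensus caption selection: among
# candidate captions borrowed from neighbor images, pick the one with the
# highest average similarity to the rest of the candidate set.

def unigram_f1(a, b):
    """F1 overlap between the word multisets of two captions."""
    wa, wb = a.lower().split(), b.lower().split()
    common = sum(min(wa.count(w), wb.count(w)) for w in set(wa))
    if common == 0:
        return 0.0
    p, r = common / len(wa), common / len(wb)
    return 2 * p * r / (p + r)

def consensus_caption(candidates):
    """Return the candidate best representing the whole candidate set."""
    def avg_sim(c):
        others = [o for o in candidates if o is not c]
        return sum(unigram_f1(c, o) for o in others) / len(others)
    return max(candidates, key=avg_sim)

candidates = [
    "a dog runs on the grass",
    "a dog is running on grass",
    "a cat sleeps on a sofa",
]
print(consensus_caption(candidates))  # the similar dog captions reinforce each other
```

Note how the limitation discussed above is visible here: the output can only ever be one of the existing candidate captions.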
1704.06972
2952743010
Recently, there has been a lot of interest in automatically generating descriptions for an image. Most existing language-model based approaches for this task learn to generate an image description word by word in its original word order. However, for humans, it is more natural to locate the objects and their relationships first, and then elaborate on each object, describing notable attributes. We present a coarse-to-fine method that decomposes the original image description into a skeleton sentence and its attributes, and generates the skeleton sentence and attribute phrases separately. By this decomposition, our method can generate more accurate and novel descriptions than the previous state-of-the-art. Experimental results on the MS-COCO and a larger scale Stock3M datasets show that our algorithm yields consistent improvements across different evaluation metrics, especially on the SPICE metric, which has much higher correlation with human ratings than the conventional metrics. Furthermore, our algorithm can generate descriptions with varied length, benefiting from the separate control of the skeleton and attributes. This enables image description generation that better accommodates user preferences.
Template-based methods generate image captions from pre-defined templates, filling the template with detected objects, scenes and attributes. Farhadi et al. @cite_6 use a single @math object, action, scene @math triple to represent a caption, and learn the mappings from images and from sentences to this triplet meaning space separately. Kulkarni et al. @cite_43 detect objects and attributes in an image as well as their prepositional relationships, and use a CRF to predict the best structure containing those objects, modifiers and relationships. In @cite_48 , Lebret et al. predict phrases from an image, and combine them with a simple language model to generate the description. These approaches rely heavily on templates or simple grammars, and therefore generate rigid captions.
{ "cite_N": [ "@cite_43", "@cite_6", "@cite_48" ], "mid": [ "1969616664", "1897761818", "2949447259" ], "abstract": [ "We present a system to automatically generate natural language descriptions from images. This system consists of two parts. The first part, content planning, smooths the output of computer vision-based detection and recognition algorithms with statistics mined from large pools of visually descriptive text to determine the best content words to use to describe an image. The second step, surface realization, chooses words to construct natural language sentences based on the predicted content and general statistics from natural language. We present multiple approaches for the surface realization step and evaluate each using automatic measures of similarity to human generated reference descriptions. We also collect forced choice human evaluations between descriptions from the proposed generation system and descriptions from competing approaches. The proposed system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work.", "Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned using data. We evaluate on a novel dataset consisting of human-annotated images. 
While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche.", "Generating a novel textual description of an image is an interesting problem that connects computer vision and natural language processing. In this paper, we present a simple model that is able to generate descriptive sentences given a sample image. This model has a strong focus on the syntax of the descriptions. We train a purely bilinear model that learns a metric between an image representation (generated from a previously trained Convolutional Neural Network) and phrases that are used to described them. The system is then able to infer phrases from a given image sample. Based on caption syntax statistics, we propose a simple language model that can produce relevant descriptions for a given test image using the phrases inferred. Our approach, which is considerably simpler than state-of-the-art models, achieves comparable results in two popular datasets for the task: Flickr30k and the recently proposed Microsoft COCO." ] }
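A minimal illustration of the template-filling step these methods share. The sentence frame and the article-choosing rule below are hypothetical, not taken from any of the cited systems; real systems such as Kulkarni et al.'s additionally score many candidate structures with a CRF instead of taking detections at face value.

```python
# Toy sketch of template-based caption generation: detected objects,
# attributes, and a prepositional relationship are slotted into a fixed
# sentence frame (a hypothetical simplification of the cited methods).

def template_caption(obj1, attr1, prep, obj2, attr2):
    article = lambda w: "an" if w[0] in "aeiou" else "a"
    np1 = f"{article(attr1)} {attr1} {obj1}"
    np2 = f"{article(attr2)} {attr2} {obj2}"
    return f"There is {np1} {prep} {np2}."

print(template_caption("dog", "brown", "next to", "car", "old"))
# There is a brown dog next to an old car.
```

The rigidity criticized above is evident: every caption produced this way has exactly the same syntactic shape.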
1704.06972
2952743010
Recently, there has been a lot of interest in automatically generating descriptions for an image. Most existing language-model based approaches for this task learn to generate an image description word by word in its original word order. However, for humans, it is more natural to locate the objects and their relationships first, and then elaborate on each object, describing notable attributes. We present a coarse-to-fine method that decomposes the original image description into a skeleton sentence and its attributes, and generates the skeleton sentence and attribute phrases separately. By this decomposition, our method can generate more accurate and novel descriptions than the previous state-of-the-art. Experimental results on the MS-COCO and a larger scale Stock3M datasets show that our algorithm yields consistent improvements across different evaluation metrics, especially on the SPICE metric, which has much higher correlation with human ratings than the conventional metrics. Furthermore, our algorithm can generate descriptions with varied length, benefiting from the separate control of the skeleton and attributes. This enables image description generation that better accommodates user preferences.
Language-model-based methods typically learn a common embedding space of images and captions, and generate novel captions without many rigid syntactic constraints. Kiros and Zemel @cite_30 propose multimodal log-bilinear models conditioned on image features. Mao et al. @cite_0 propose a Multimodal Recurrent Neural Network (m-RNN) that uses an RNN to learn the text embedding and a CNN to learn the image representation. Vinyals et al. @cite_40 use an LSTM as the decoder to generate sentences, providing the image features directly as input to the LSTM. Xu et al. @cite_51 further introduce an attention-based model that learns where to look while generating the corresponding words. You et al. @cite_4 use pre-generated semantic concept proposals to guide caption generation, learning to selectively attend to those concepts at different time-steps. Similarly, Wu et al. @cite_50 show that high-level semantic features can improve caption generation performance.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_0", "@cite_40", "@cite_50", "@cite_51" ], "mid": [ "2171361956", "2953022248", "1811254738", "2951912364", "2404394533", "2950178297" ], "abstract": [ "We introduce two multimodal neural language models: models of natural language that can be conditioned on other modalities. An image-text multimodal neural language model can be used to retrieve images given complex sentence queries, retrieve phrase descriptions given image queries, as well as generate text conditioned on images. We show that in the case of image-text modelling we can jointly learn word representations and image features by training our models together with a convolutional network. Unlike many of the existing methods, our approach can generate sentence descriptions for images without the use of templates, structured prediction, and or syntactic trees. While we focus on imagetext modelling, our algorithms can be easily applied to other modalities such as audio.", "Automatically generating a natural language description of an image has attracted interests recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. 
Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.", "In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu junhua.mao m-RNN.html .", "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. 
For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.", "Much of the recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. We propose here a method of incorporating high-level concepts into the very successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art performance in both image captioning and visual question answering. We also show that the same mechanism can be used to introduce external semantic information and that doing so further improves performance. In doing so we provide an analysis of the value of high level semantic information in V2L problems.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO." ] }
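The attention step referenced above ("learn where to look") can be sketched as a single soft-attention computation. The additive scoring form and all parameter shapes below are assumptions for illustration, not the exact parameterization of any cited model.

```python
import numpy as np

# Toy sketch of one soft-attention step in an attention-based captioner:
# the decoder hidden state scores every spatial feature vector, the scores
# are softmax-normalized, and the context vector is the attention-weighted
# sum of the features.

def soft_attention(features, hidden, W_f, W_h, v):
    """features: (L, D) spatial feature vectors; hidden: (H,) decoder state."""
    scores = np.tanh(features @ W_f + hidden @ W_h) @ v  # (L,) additive scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                 # attention weights
    context = alpha @ features                           # (D,) weighted sum
    return context, alpha

rng = np.random.default_rng(0)
L, D, H, A = 4, 5, 3, 6                                  # grid cells, dims
features = rng.normal(size=(L, D))
hidden = rng.normal(size=H)
context, alpha = soft_attention(features, hidden,
                                rng.normal(size=(D, A)),
                                rng.normal(size=(H, A)),
                                rng.normal(size=A))
```

Because the weights `alpha` form a distribution over grid locations, they can be visualized directly, which is how such models show "where the model is looking" for each generated word.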
1704.06972
2952743010
Recently, there has been a lot of interest in automatically generating descriptions for an image. Most existing language-model based approaches for this task learn to generate an image description word by word in its original word order. However, for humans, it is more natural to locate the objects and their relationships first, and then elaborate on each object, describing notable attributes. We present a coarse-to-fine method that decomposes the original image description into a skeleton sentence and its attributes, and generates the skeleton sentence and attribute phrases separately. By this decomposition, our method can generate more accurate and novel descriptions than the previous state-of-the-art. Experimental results on the MS-COCO and a larger scale Stock3M datasets show that our algorithm yields consistent improvements across different evaluation metrics, especially on the SPICE metric, which has much higher correlation with human ratings than the conventional metrics. Furthermore, our algorithm can generate descriptions with varied length, benefiting from the separate control of the skeleton and attributes. This enables image description generation that better accommodates user preferences.
Parsing a sentence is the process of analyzing it according to a set of grammar rules and producing a rooted parse tree that represents its syntactic structure @cite_1 . Some language-model-based work parses the captions for better sentence encoding. For example, Socher et al. @cite_17 propose the Dependency Tree-RNN (DT-RNN), which uses dependency trees to embed sentences into a vector space, and then performs caption retrieval with the embedded vectors. Unfortunately, the model is unable to generate novel sentences.
{ "cite_N": [ "@cite_1", "@cite_17" ], "mid": [ "2097606805", "2149557440" ], "abstract": [ "We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36 (LP LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-the-art. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is much more compact, easier to replicate, and easier to interpret than more complex lexical models, and the parsing algorithms are simpler, more widely understood, of lower asymptotic complexity, and easier to optimize.", "Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image." ] }
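A toy version of the tree-structured composition behind the DT-RNN. This is a hypothetical simplification: plain averaging replaces the learned composition weights, but it shows how a sentence vector is built bottom-up over a dependency tree, with each head combining its own word vector with its children's representations.

```python
import numpy as np

# Toy sketch of dependency-tree sentence embedding in the spirit of the
# DT-RNN (hypothetical simplification: averaging instead of a learned
# composition function).

def embed_node(tree, word_vecs):
    """tree: (word, [child_trees]); returns the subtree representation."""
    word, children = tree
    vecs = [word_vecs[word]] + [embed_node(c, word_vecs) for c in children]
    return np.mean(vecs, axis=0)

word_vecs = {"dog": np.array([1.0, 0.0]),
             "runs": np.array([0.0, 1.0]),
             "fast": np.array([0.5, 0.5])}
# "runs" heads the sentence "dog runs fast" in a dependency analysis
tree = ("runs", [("dog", []), ("fast", [])])
print(embed_node(tree, word_vecs))  # [0.5 0.5]
```

As the related work notes, such an encoder yields a fixed vector useful for retrieval but provides no mechanism for generating a novel sentence from it.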
1704.06972
2952743010
Recently, there has been a lot of interest in automatically generating descriptions for an image. Most existing language-model based approaches for this task learn to generate an image description word by word in its original word order. However, for humans, it is more natural to locate the objects and their relationships first, and then elaborate on each object, describing notable attributes. We present a coarse-to-fine method that decomposes the original image description into a skeleton sentence and its attributes, and generates the skeleton sentence and attribute phrases separately. By this decomposition, our method can generate more accurate and novel descriptions than the previous state-of-the-art. Experimental results on the MS-COCO and a larger scale Stock3M datasets show that our algorithm yields consistent improvements across different evaluation metrics, especially on the SPICE metric, which has much higher correlation with human ratings than the conventional metrics. Furthermore, our algorithm can generate descriptions with varied length, benefiting from the separate control of the skeleton and attributes. This enables image description generation that better accommodates user preferences.
The work closest to our own is the hierarchical LSTM model proposed by Tan and Chan @cite_31 . They view captions as a combination of noun phrases and other words, and predict the noun phrases (together with the other words) directly with an LSTM. The noun phrases are encoded into a vector representation with a separate LSTM. At inference time, K image-relevant phrases are first generated with the lower-level LSTM. The upper-level LSTM then generates a sentence that contains both "noun phrase" tokens and other words. When a noun-phrase token is generated, a suitable phrase is selected from the phrase pool and used as the input to the next time-step. This work is relevant to ours in that it also breaks the original word order of the caption. However, it replaces every phrase with a single "noun phrase" token in the upper-level LSTM, without distinguishing those tokens even though the phrases can be very different. Moreover, the phrases for an image are generated before the sentence itself, without knowledge of the sentence structure or of where to attend.
{ "cite_N": [ "@cite_31" ], "mid": [ "2511990674" ], "abstract": [ "A picture is worth a thousand words. Not until recently, however, we noticed some success stories in understanding of visual scenes: a model that is able to detect name objects, describe their attributes, and recognize their relationships interactions. In this paper, we propose a phrase-based hierarchical Long Short-Term Memory (phi-LSTM) model to generate image description. The proposed model encodes sentence as a sequence of combination of phrases and words, instead of a sequence of words alone as in those conventional solutions. The two levels of this model are dedicated to i) learn to generate image relevant noun phrases, and ii) produce appropriate image description from the phrases and other words in the corpus. Adopting a convolutional neural network to learn image features and the LSTM to learn the word sequence in a sentence, the proposed model has shown better or competitive results in comparison to the state-of-the-art models on Flickr8k and Flickr30k datasets." ] }
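In contrast to the single "phrase token" scheme above, the skeleton-attribute decomposition described in the abstract can be illustrated with a toy part-of-speech rule. Real systems derive the split from a parse of the caption, so the rule and tag names here are hypothetical stand-ins.

```python
# Toy sketch of coarse-to-fine caption decomposition: adjectives preceding
# a noun are peeled off into a per-noun attribute map, and the remaining
# words form the skeleton sentence (hypothetical POS-based simplification).

def decompose(tagged):
    """tagged: list of (word, pos). Returns (skeleton, attribute map)."""
    skeleton, attrs, pending = [], {}, []
    for word, pos in tagged:
        if pos == "ADJ":
            pending.append(word)        # hold attributes for the next noun
        else:
            skeleton.append(word)
            if pos == "NOUN" and pending:
                attrs[word] = pending   # attach held attributes to the noun
                pending = []
    return " ".join(skeleton), attrs

caption = [("a", "DET"), ("young", "ADJ"), ("girl", "NOUN"),
           ("holds", "VERB"), ("a", "DET"), ("red", "ADJ"),
           ("umbrella", "NOUN")]
print(decompose(caption))
# ('a girl holds a umbrella', {'girl': ['young'], 'umbrella': ['red']})
```

The skeleton and the attribute phrases can then be generated by separate decoders, which is what enables the length control discussed in the abstract.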
1704.06972
2952743010
Recently, there has been a lot of interest in automatically generating descriptions for an image. Most existing language-model based approaches for this task learn to generate an image description word by word in its original word order. However, for humans, it is more natural to locate the objects and their relationships first, and then elaborate on each object, describing notable attributes. We present a coarse-to-fine method that decomposes the original image description into a skeleton sentence and its attributes, and generates the skeleton sentence and attribute phrases separately. By this decomposition, our method can generate more accurate and novel descriptions than the previous state-of-the-art. Experimental results on the MS-COCO and a larger scale Stock3M datasets show that our algorithm yields consistent improvements across different evaluation metrics, especially on the SPICE metric, which has much higher correlation with human ratings than the conventional metrics. Furthermore, our algorithm can generate descriptions with varied length, benefiting from the separate control of the skeleton and attributes. This enables image description generation that better accommodates user preferences.
Evaluating image caption generation is as challenging as the task itself. BLEU @cite_7 , CIDEr @cite_36 , METEOR @cite_8 , and ROUGE @cite_41 are the common metrics used on most image captioning benchmarks such as MS-COCO and Flickr30K. However, these metrics are very sensitive to n-gram overlap, which is not necessarily a good measure of the quality of an image description. Recently, Anderson et al. @cite_47 introduced a new evaluation metric, SPICE, that overcomes this problem. SPICE uses a graph-based semantic representation to encode the objects, attributes and relationships in the image, and is shown to have a much higher correlation with human judgement than the conventional evaluation metrics.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_36", "@cite_41", "@cite_47" ], "mid": [ "2101105183", "2133459682", "2952574180", "2154652894", "2950201573" ], "abstract": [ "Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.", "This paper describes Meteor Universal, released for the 2014 ACL Workshop on Statistical Machine Translation. Meteor Universal brings language specific evaluation to previously unsupported target languages by (1) automatically extracting linguistic resources (paraphrase tables and function word lists) from the bitext used to train MT systems and (2) using a universal parameter set learned from pooling human judgments of translation quality from several language directions. Meteor Universal is shown to significantly outperform baseline BLEU on two new languages, Russian (WMT13) and Hindi (WMT14).", "Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. 
This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric (CIDEr) that captures consensus, and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as a part of MS COCO evaluation server to enable systematic evaluation and benchmarking.", "ROUGE stands for Recall-Oriented Understudy for Gisting Evaluation. It includes measures to automatically determine the quality of a summary by comparing it to other (ideal) summaries created by humans. The measures count the number of overlapping units such as n-gram, word sequences, and word pairs between the computer-generated summary to be evaluated and the ideal summaries created by humans. This paper introduces four different ROUGE measures: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S included in the ROUGE summarization evaluation package and their evaluations. Three of them have been used in the Document Understanding Conference (DUC) 2004, a large-scale summarization evaluation sponsored by NIST.", "There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs coined SPICE. 
Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as \"which caption-generator best understands colors?\" and \"can caption-generators count?\"" ] }
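The n-gram-overlap sensitivity discussed above is easiest to see in the clipped unigram precision at the core of BLEU. The sketch below is a minimal illustration, not the full metric, which adds higher-order n-grams, a geometric mean, and a brevity penalty; SPICE instead compares scene-graph tuples.

```python
from collections import Counter

# Minimal sketch of BLEU-style modified (clipped) unigram precision of a
# candidate caption against a set of reference captions.

def clipped_unigram_precision(candidate, references):
    cand = Counter(candidate.lower().split())
    # each candidate word counts at most as often as in the best reference
    max_ref = Counter()
    for ref in references:
        for w, n in Counter(ref.lower().split()).items():
            max_ref[w] = max(max_ref[w], n)
    clipped = sum(min(n, max_ref[w]) for w, n in cand.items())
    return clipped / sum(cand.values())

p = clipped_unigram_precision("the the the cat", ["the cat sat on the mat"])
print(p)  # 0.75: "the" is clipped to its reference count of 2
```

The example shows both the strength and the weakness of such metrics: clipping punishes degenerate repetition, yet a fluent, accurate caption phrased with different words would still score poorly.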
1704.06986
2609370997
Fixed-vocabulary language models fail to account for one of the most characteristic statistical facts of natural language: the frequent creation and reuse of new word types. Although character-level language models offer a partial solution in that they can create word types not attested in the training corpus, they do not capture the "bursty" distribution of such words. In this paper, we augment a hierarchical LSTM language model that generates sequences of word tokens character by character with a caching mechanism that learns to reuse previously generated words. To validate our model we construct a new open-vocabulary language modeling corpus (the Multilingual Wikipedia Corpus, MWC) from comparable Wikipedia articles in 7 typologically diverse languages and demonstrate the effectiveness of our model across this range of languages.
Open vocabulary neural language models have been widely explored [sutskever2011generating, mikolov2012subword, graves2013generating]. Attempts to make them more aware of word-level dynamics, using models similar to our hierarchical formulation, have also been proposed @cite_32 .
{ "cite_N": [ "@cite_32" ], "mid": [ "2510842514" ], "abstract": [ "Learning both hierarchical and temporal representation has been among the long-standing challenges of recurrent neural networks. Multiscale recurrent neural networks have been considered as a promising approach to resolve this issue, yet there has been a lack of empirical evidence showing that this type of models can actually capture the temporal dependencies by discovering the latent hierarchical structure of the sequence. In this paper, we propose a novel multiscale approach, called the hierarchical multiscale recurrent neural networks, which can capture the latent hierarchical structure in the sequence by encoding the temporal dependencies with different timescales using a novel update mechanism. We show some evidence that our proposed multiscale architecture can discover underlying hierarchical structure in the sequences without using explicit boundary information. We evaluate our proposed model on character-level language modelling and handwriting sequence modelling." ] }
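The caching mechanism described in the abstract can be sketched as a probability interpolation between a base open-vocabulary model and a cache of previously generated words. The interpolation weight and the constant base probability below are hypothetical stand-ins for components the real model learns.

```python
from collections import Counter

# Toy sketch of a cached language model: the next-word probability mixes a
# base open-vocabulary model (which can spell out any new word character by
# character) with a cache of previously emitted words, so rare "bursty"
# words become cheap to reuse after their first occurrence.

class CachedLM:
    def __init__(self, base_word_prob, lam=0.3):
        self.base = base_word_prob   # callable: word -> base probability
        self.cache = Counter()       # counts of previously emitted words
        self.lam = lam               # weight on the cache component

    def prob(self, word):
        p_base = self.base(word)
        total = sum(self.cache.values())
        p_cache = self.cache[word] / total if total else 0.0
        return (1 - self.lam) * p_base + self.lam * p_cache

    def emit(self, word):
        self.cache[word] += 1

# base model: every word costs 0.01 (stand-in for a char-level spell-out)
lm = CachedLM(lambda w: 0.01)
before = lm.prob("Grossberger")   # 0.007: cache empty, base model only
lm.emit("Grossberger")
after = lm.prob("Grossberger")    # 0.307: the cached word is now far cheaper
```

This mirrors the burstiness argument above: the first occurrence of a new word is expensive, while every reuse within the same document is cheap.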
1704.07333
2608915011
To understand the visual world, a machine must not only recognize individual object instances but also how they interact. Humans are often at the center of such interactions and detecting human-object interactions is an important practical and scientific problem. In this paper, we address the task of detecting triplets in challenging everyday photos. We propose a novel model that is driven by a human-centric approach. Our hypothesis is that the appearance of a person -- their pose, clothing, action -- is a powerful cue for localizing the objects they are interacting with. To exploit this cue, our model learns to predict an action-specific density over target object locations based on the appearance of a detected person. Our model also jointly learns to detect people and objects, and by fusing these predictions it efficiently infers interaction triplets in a clean, jointly trained end-to-end system we call InteractNet. We validate our approach on the recently introduced Verbs in COCO (V-COCO) and HICO-DET datasets, where we show quantitatively compelling results.
Bounding-box based object detectors have improved steadily in the past few years. R-CNN, a particularly successful family of methods @cite_17 @cite_1 @cite_9 , is a two-stage approach in which the first stage proposes candidate RoIs and the second stage performs object classification. Region-wise features can be rapidly extracted @cite_27 @cite_1 from shared feature maps by an RoI pooling operation. Feature sharing speeds up instance-level detection and enables recognizing higher-order interactions, which would be computationally infeasible otherwise. Our method is based on the Fast/Faster R-CNN frameworks @cite_1 @cite_9 .
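The RoI pooling operation mentioned above can be sketched in a few lines: partition each region of interest into a fixed grid and max-pool within each cell, producing a fixed-size feature regardless of the region's size. This single-channel NumPy version is a sketch under simplifying assumptions (integer boxes, region at least as large as the output grid), not the actual Fast R-CNN operator.

```python
import numpy as np

def roi_pool(feature_map, roi, output_size=(2, 2)):
    """Max-pool one region of interest from a shared feature map
    into a fixed-size grid (the RoI pooling idea).

    feature_map: (H, W) array of features
    roi: (y0, x0, y1, x1) integer box, end-exclusive
    output_size: (out_h, out_w) of the pooled grid
    """
    y0, x0, y1, x1 = roi
    region = feature_map[y0:y1, x0:x1]
    out_h, out_w = output_size
    # Split the region into out_h x out_w roughly equal cells.
    h_edges = np.linspace(0, region.shape[0], out_h + 1).astype(int)
    w_edges = np.linspace(0, region.shape[1], out_w + 1).astype(int)
    pooled = np.empty(output_size, dtype=feature_map.dtype)
    for i in range(out_h):
        for j in range(out_w):
            cell = region[h_edges[i]:h_edges[i + 1],
                          w_edges[j]:w_edges[j + 1]]
            pooled[i, j] = cell.max()
    return pooled
```

Because every RoI is pooled to the same grid size, the per-region features can be fed into one shared classification head, which is what makes the feature sharing described above cheap.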
{ "cite_N": [ "@cite_27", "@cite_9", "@cite_1", "@cite_17" ], "mid": [ "2179352600", "2953106684", "", "2102605133" ], "abstract": [ "Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. 
For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn." ] }
1704.07333
2608915011
To understand the visual world, a machine must not only recognize individual object instances but also how they interact. Humans are often at the center of such interactions and detecting human-object interactions is an important practical and scientific problem. In this paper, we address the task of detecting triplets in challenging everyday photos. We propose a novel model that is driven by a human-centric approach. Our hypothesis is that the appearance of a person -- their pose, clothing, action -- is a powerful cue for localizing the objects they are interacting with. To exploit this cue, our model learns to predict an action-specific density over target object locations based on the appearance of a detected person. Our model also jointly learns to detect people and objects, and by fusing these predictions it efficiently infers interaction triplets in a clean, jointly trained end-to-end system we call InteractNet. We validate our approach on the recently introduced Verbs in COCO (V-COCO) and HICO-DET datasets, where we show quantitatively compelling results.
Research on visual relationship modeling @cite_10 @cite_2 @cite_19 @cite_18 has attracted increasing attention. Recently, Lu et al. @cite_19 proposed to recognize visual relationships derived from an open-world vocabulary. The set of relationships includes verbs (e.g., wear), spatial relations (e.g., next to), actions (e.g., ride), and preposition phrases (e.g., drive on). Our focus is related, but different. First, we aim to understand interactions, which take place in particularly diverse and interesting ways. These relationships involve direct interaction with objects (e.g., person cutting cake), unlike spatial or prepositional phrases (e.g., dog next to dog). Second, we aim to build detectors that recognize interactions in images with high precision, which is a requirement for practical applications. In contrast, in an open-world recognition setting, evaluating precision is not feasible, resulting in recall-based evaluation, as in @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_10", "@cite_2" ], "mid": [ "2479423890", "2423576022", "2049705550", "1551928752" ], "abstract": [ "Visual relationships capture a wide variety of interactions between pairs of objects in images (e.g. “man riding bicycle” and “man pushing bicycle”). Consequently, the set of possible relationships is extremely large and it is difficult to obtain sufficient training examples for all possible relationships. Because of this limitation, previous work on visual relationship detection has concentrated on predicting only a handful of relationships. Though most relationships are infrequent, their objects (e.g. “man” and “bicycle”) and predicates (e.g. “riding” and “pushing”) independently occur more frequently. We propose a model that uses this insight to train visual models for objects and predicates individually and later combines them together to predict multiple relationships per image. We improve on prior work by leveraging language priors from semantic word embeddings to finetune the likelihood of a predicted relationship. Our model can scale to predict thousands of types of relationships from a few examples. Additionally, we localize the objects in the predicted relationships as bounding boxes in the image. We further demonstrate that understanding relationships can improve content based image retrieval.", "This paper introduces situation recognition, the problem of producing a concise summary of the situation an image depicts including: (1) the main activity (e.g., clipping), (2) the participating actors, objects, substances, and locations (e.g., man, shears, sheep, wool, and field) and most importantly (3) the roles these participants play in the activity (e.g., the man is clipping, the shears are his tool, the wool is being clipped from the sheep, and the clipping is in a field). 
We use FrameNet, a verb and role lexicon developed by linguists, to define a large space of possible situations and collect a large-scale dataset containing over 500 activities, 1,700 roles, 11,000 objects, 125,000 images, and 200,000 unique situations. We also introduce structured prediction baselines and show that, in activity-centric images, situation-driven prediction of objects and activities outperforms independent object and activity recognition.", "In this paper we introduce visual phrases, complex visual composites like “a person riding a horse”. Visual phrases often display significantly reduced visual complexity compared to their component objects, because the appearance of those objects can change profoundly when they participate in relations. We introduce a dataset suitable for phrasal recognition that uses familiar PASCAL object categories, and demonstrate significant experimental gains resulting from exploiting visual phrases. We show that a visual phrase detector significantly outperforms a baseline which detects component objects and reasons about relations, even though visual phrase training sets tend to be smaller than those for objects. We argue that any multi-class detection system must decode detector outputs to produce final results; this is usually done with non-maximum suppression. We describe a novel decoding procedure that can account accurately for local context without solving difficult inference problems. We show this decoding procedure outperforms the state of the art. Finally, we show that decoding a combination of phrasal and object detectors produces real improvements in detector results.", "In this paper we introduce the problem of Visual Semantic Role Labeling: given an image we want to detect people doing actions and localize the objects of interaction. 
Classical approaches to action recognition either study the task of action classification at the image or video clip level or at best produce a bounding box around the person doing the action. We believe such an output is inadequate and a complete understanding can only come when we are able to associate objects in the scene to the different semantic roles of the action. To enable progress towards this goal, we annotate a dataset of 16K people instances in 10K images with actions they are doing and associate objects in the scene with different semantic roles for each action. Finally, we provide a set of baseline algorithms for this task and analyze error modes providing directions for future work." ] }
1704.07333
2608915011
To understand the visual world, a machine must not only recognize individual object instances but also how they interact. Humans are often at the center of such interactions and detecting human-object interactions is an important practical and scientific problem. In this paper, we address the task of detecting triplets in challenging everyday photos. We propose a novel model that is driven by a human-centric approach. Our hypothesis is that the appearance of a person -- their pose, clothing, action -- is a powerful cue for localizing the objects they are interacting with. To exploit this cue, our model learns to predict an action-specific density over target object locations based on the appearance of a detected person. Our model also jointly learns to detect people and objects, and by fusing these predictions it efficiently infers interaction triplets in a clean, jointly trained end-to-end system we call InteractNet. We validate our approach on the recently introduced Verbs in COCO (V-COCO) and HICO-DET datasets, where we show quantitatively compelling results.
Human-object interactions @cite_36 @cite_26 @cite_32 are related to visual relationships, but present different challenges. Human actions are more fine-grained (e.g., walking, running, surfing, snowboarding) than the actions of general subjects, and an individual person can simultaneously take multiple actions (e.g., drinking tea and reading a newspaper while sitting in a chair). These issues require a deeper understanding of human actions and the objects around them, in much richer ways than just the presence of objects in the vicinity of a person in an image. Accurate recognition of human-object interactions can benefit numerous tasks in computer vision, such as action-specific image retrieval @cite_21 , caption generation @cite_6 , and question answering @cite_6 @cite_11 .
{ "cite_N": [ "@cite_26", "@cite_36", "@cite_21", "@cite_32", "@cite_6", "@cite_11" ], "mid": [ "2046589395", "2169393274", "1892016050", "2014788385", "2339712187", "" ], "abstract": [ "Detecting objects in cluttered scenes and estimating articulated human body parts are two challenging problems in computer vision. The difficulty is particularly pronounced in activities involving human-object interactions (e.g. playing tennis), where the relevant object tends to be small or only partially visible, and the human body parts are often self-occluded. We observe, however, that objects and human poses can serve as mutual context to each other – recognizing one facilitates the recognition of the other. In this paper we propose a new random field model to encode the mutual context of objects and human poses in human-object interaction activities. We then cast the model learning task as a structure learning problem, of which the structural connectivity between the object, the overall human pose, and different body parts are estimated through a structure search approach, and the parameters of the model are estimated by a new max-margin algorithm. On a sports data set of six classes of human-object interactions [12], we show that our mutual context model significantly outperforms state-of-the-art in detecting very difficult objects and human poses.", "Interpretation of images and videos containing humans interacting with different objects is a daunting task. It involves understanding scene or event, analyzing human movements, recognizing manipulable objects, and observing the effect of the human movement on those objects. While each of these perceptual tasks can be conducted independently, recognition rate improves when interactions between them are considered. Motivated by psychological studies of human perception, we present a Bayesian approach which integrates various perceptual tasks involved in understanding human-object interactions. 
Previous approaches to object and action recognition rely on static shape or appearance feature matching and motion analysis, respectively. Our approach goes beyond these traditional approaches and applies spatial and functional constraints on each of the perceptual elements for coherent semantic interpretation. Such constraints allow us to recognize objects and actions when the appearances are not discriminative enough. We also demonstrate the use of such constraints in recognition of actions from static images without using any motion information.", "Human actions capture a wide variety of interactions between people and objects. As a result, the set of possible actions is extremely large and it is difficult to obtain sufficient training examples for all actions. However, we could compensate for this sparsity in supervision by leveraging the rich semantic relationship between different actions. A single action is often composed of other smaller actions and is exclusive of certain others. We need a method which can reason about such relationships and extrapolate unobserved actions from known actions. Hence, we propose a novel neural network framework which jointly extracts the relationship between actions and uses them for training better action retrieval models. Our model incorporates linguistic, visual and logical consistency based cues to effectively identify these relationships. We train and test our model on a largescale image dataset of human actions. We show a significant improvement in mean AP compared to different baseline methods including the HEX-graph approach from [8].", "Recognition of human actions is usually addressed in the scope of video interpretation. Meanwhile, common human actions such as ''reading a book'', ''playing a guitar'' or ''writing notes'' also provide a natural description for many still images. 
In addition, some actions in video such as ''taking a photograph'' are static by their nature and may require recognition methods based on static cues only. Motivated by the potential impact of recognizing actions in still images and the little attention this problem has received in computer vision so far, we address recognition of human actions in consumer photographs. We construct a new dataset available at http://www.di.ens.fr/willow/research/stillactions/ with seven classes of actions in 911 Flickr images representing natural variations of human actions in terms of camera view-point, human pose, clothing, occlusions and scene background. We study action recognition in still images using the state-of-the-art bag-of-features methods as well as their combination with the part-based Latent SVM approach. In particular, we investigate the role of background scene context and demonstrate that improved action recognition performance can be achieved by (i) combining the statistical and part-based representations, and (ii) integrating person-centric description with the background scene context. We show results on our newly collected dataset of seven common actions as well as demonstrate improved performance over existing methods on the datasets of Yao and Fei-Fei and others.", "This paper proposes deep convolutional network models that utilize local and global context to make human activity label predictions in still images, achieving state-of-the-art performance on two recent datasets with hundreds of labels each. We use multiple instance learning to handle the lack of supervision on the level of individual person instances, and weighted loss to handle unbalanced training data. Further, we show how specialized features trained on these datasets can be used to improve accuracy on the Visual Question Answering (VQA) task, in the form of multiple choice fill-in-the-blank questions (Visual Madlibs). 
Specifically, we tackle two types of questions on person activity and person-object relationship and show improvements over generic features trained on the ImageNet classification task", "" ] }
1704.06960
2906586541
Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents' messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from. Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language.
The literature on learning decentralized multi-agent policies in general is considerably larger @cite_17 @cite_10 . This includes work focused on communication in multiagent settings @cite_12 and even communication using natural language messages @cite_22 . All of these approaches employ structured communication schemes with manually engineered messaging protocols; these are, in some sense, automatically interpretable, but at the cost of introducing considerable complexity into both training and inference.
{ "cite_N": [ "@cite_10", "@cite_22", "@cite_12", "@cite_17" ], "mid": [ "", "2250281182", "2571175805", "2252033417" ], "abstract": [ "", "Conversational implicatures involve reasoning about multiply nested belief structures. This complexity poses significant challenges for computational models of conversation and cognition. We show that agents in the multi-agent DecentralizedPOMDP reach implicature-rich interpretations simply as a by-product of the way they reason about each other to maximize joint utility. Our simulations involve a reference game of the sort studied in psychology and linguistics as well as a dynamic, interactional scenario involving implemented artificial agents.", "Referring expressions are natural language constructions used to identify particular objects within a scene. In this paper, we propose a unified framework for the tasks of referring expression comprehension and generation. Our model is composed of three modules: speaker, listener, and reinforcer. The speaker generates referring expressions, the listener comprehends referring expressions, and the reinforcer introduces a reward function to guide sampling of more discriminative expressions. The listener-speaker modules are trained jointly in an end-to-end learning framework, allowing the modules to be aware of one another during learning while also benefiting from the discriminative reinforcer&#x2019;s feedback. We demonstrate that this unified framework and training achieves state-of-the-art results for both comprehension and generation on three referring expression datasets.", "Grice characterized communication in terms of the cooperative principle, which enjoins speakers to make only contributions that serve the evolving conversational goals. We show that the cooperative principle and the associated maxims of relevance, quality, and quantity emerge from multi-agent decision theory. 
We utilize the Decentralized Partially Observable Markov Decision Process (Dec-POMDP) model of multi-agent decision making which relies only on basic definitions of rationality and the ability of agents to reason about each other’s beliefs in maximizing joint utility. Our model uses cognitively-inspired heuristics to simplify the otherwise intractable task of reasoning jointly about actions, the environment, and the nested beliefs of other actors. Our experiments on a cooperative language task show that reasoning about others’ belief states, and the resulting emergent Gricean communicative behavior, leads to significantly improved task performance." ] }
1704.06960
2906586541
Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents' messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from. Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language.
Our evaluation in this paper investigates communication strategies that arise in a number of different games, including reference games and an extended-horizon driving game. Communication strategies for reference games have been explored in prior work, as have reference games specifically featuring end-to-end communication protocols. On the control side, a long line of work considers nonverbal communication strategies in multiagent policies @cite_2 .
{ "cite_N": [ "@cite_2" ], "mid": [ "1497716120" ], "abstract": [ "Legible motion — motion that communicates its intent to a human observer — is crucial for enabling seamless human-robot collaboration. In this paper, we propose a functional gradient optimization technique for autonomously generating legible motion. Our algorithm optimizes a legibility metric inspired by the psychology of action interpretation in humans, resulting in motion trajectories that purposefully deviate from what an observer would expect in order to better convey intent. A trust region constraint on the optimization ensures that the motion does not become too surprising or unpredictable to the observer. Our studies with novice users that evaluate the resulting trajectories support the applicability of our method and of such a trust region. They show that within the region, legibility as measured in practice does significantly increase. Outside of it, however, the trajectory becomes confusing and the users’ confidence in knowing the robot’s intent significantly decreases." ] }
1704.06960
2906586541
Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents' messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from. Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language.
Another group of related approaches focuses on the development of more general machinery for interpreting deep models in which messages have no explicit semantics. This includes both visualization techniques @cite_14 @cite_19 , and approaches focused on generating explanations in the form of natural language @cite_23 @cite_4 .
{ "cite_N": [ "@cite_19", "@cite_14", "@cite_4", "@cite_23" ], "mid": [ "2752194699", "1849277567", "2950401034", "2949467366" ], "abstract": [ "Recurrent neural networks, and in particular long short-term memory (LSTM) networks, are a remarkably effective tool for sequence modeling that learn a dense black-box hidden representation of their sequential input. Researchers interested in better understanding these models have studied the changes in hidden state representations over time and noticed some interpretable patterns but also significant noise. In this work, we present LSTMVis, a visual analysis tool for recurrent neural networks with a focus on understanding these hidden state dynamics. The tool allows users to select a hypothesis input range to focus on local state changes, to match these states changes to similar patterns in a large data set, and to align these results with structural annotations from their domain. We show several use cases of the tool for analyzing specific hidden state properties on dataset containing nesting, phrase structure, and chord progressions, and demonstrate how the tool can be used to isolate patterns for further statistical analysis. We characterize the domain, the different stakeholders, and their goals and tasks. Long-term usage data after putting the tool online revealed great interest in the machine learning community.", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. 
We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "We introduce an inference technique to produce discriminative context-aware image captions (captions that describe differences between images or visual concepts) using only generic context-agnostic training data (captions that describe a concept or an image in isolation). For example, given images and captions of \"siamese cat\" and \"tiger cat\", we generate language that describes the \"siamese cat\" in a way that distinguishes it from \"tiger cat\". Our key novelty is that we show how to do joint inference over a language model that is context-agnostic and a listener which distinguishes closely-related concepts. We first apply our technique to a justification task, namely to describe why an image contains a particular fine-grained category as opposed to another closely-related category of the CUB-200-2011 dataset. We then study discriminative image captioning to generate language that uniquely refers to one of two semantically-similar images in the COCO dataset. Evaluations with discriminative ground truth for justification and human studies for discriminative image captioning reveal that our approach outperforms baseline generative and speaker-listener approaches for discrimination.", "Clearly explaining a rationale for a classification decision to an end-user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account class-discriminative image aspects which justify visual predictions. 
We propose a new model that focuses on the discriminating properties of the visible object, jointly predicts a class label, and explains why the predicted label is appropriate for the image. We propose a novel loss function based on sampling and reinforcement learning that learns to generate sentences that realize a global sentence property, such as class specificity. Our results on a fine-grained bird species classification dataset show that our model is able to generate explanations which are not only consistent with an image but also more discriminative than descriptions produced by existing captioning methods." ] }
1704.06904
2609476118
In this work, we propose "Residual Attention Network", a convolutional neural network using attention mechanism which can incorporate with state-of-art feed forward network architecture in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. Inside each Attention Module, bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can be easily scaled up to hundreds of layers. Extensive analyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and ImageNet (4.8% single model and single crop, top-5 error). Note that, our method achieves 0.6% top-1 accuracy improvement with 46% trunk depth and 69% forward FLOPs comparing to ResNet-200. The experiment also demonstrates that our network is robust against noisy labels.
In image classification, top-down attention has been applied through several mechanisms: sequential processing, region proposals, and control gates. Sequential approaches @cite_9 @cite_45 @cite_41 @cite_23 model image classification as a sequential decision process, so attention can be applied at each decision step. This formulation allows end-to-end optimization using RNNs and LSTMs, and can capture different kinds of attention in a goal-driven way.
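A minimal, purely illustrative sketch of such a sequential (glimpse-based) attention loop follows. A toy 2-D image of nested lists stands in for real input, and a greedy "move toward the brightest pixel" rule stands in for the learned RNN/LSTM controller these papers actually train:

```python
def glimpse(image, cy, cx, size=1):
    """Extract a (2*size+1)^2 patch centred at (cy, cx), zero-padded at borders."""
    h, w = len(image), len(image[0])
    return [[image[y][x] if 0 <= y < h and 0 <= x < w else 0.0
             for x in range(cx - size, cx + size + 1)]
            for y in range(cy - size, cy + size + 1)]

def sequential_attention(image, start, steps=3):
    """Toy recurrent attention loop: at each step take a glimpse, then
    greedily move to the brightest pixel in the current patch
    (a placeholder policy; the real models learn where to look)."""
    cy, cx = start
    trajectory = [(cy, cx)]
    for _ in range(steps):
        patch = glimpse(image, cy, cx)  # uses the default size=1
        # offset of the brightest pixel, relative to the patch centre
        best = max((v, dy, dx)
                   for dy, row in enumerate(patch)
                   for dx, v in enumerate(row))
        cy += best[1] - 1
        cx += best[2] - 1
        trajectory.append((cy, cx))
    return trajectory

img = [[0, 0, 0, 0],
       [0, 1, 2, 0],
       [0, 0, 3, 9],
       [0, 0, 0, 0]]
path = sequential_attention(img, start=(1, 1))
```

Even with this crude policy the glimpse trajectory homes in on the salient region and then stays there, which is the behaviour the goal-driven formulation optimizes for.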
{ "cite_N": [ "@cite_41", "@cite_9", "@cite_45", "@cite_23" ], "mid": [ "2950178297", "2951527505", "2952155606", "1850742715" ], "abstract": [ "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. 
We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.", "While recent deep neural network models have achieved promising results on the image captioning task, they rely largely on the availability of corpora with paired image and sentence captions to describe objects in context. In this work, we propose the Deep Compositional Captioner (DCC) to address the task of generating descriptions of novel objects which are not present in paired image-sentence datasets. Our method achieves this by leveraging large object recognition datasets and external text corpora and by transferring knowledge between semantically similar concepts. Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet. In contrast, our model can compose sentences that describe novel objects and their interactions with other objects. We demonstrate our model's ability to describe novel concepts by empirically evaluating its performance on MSCOCO and show qualitative results on ImageNet images of objects for which no paired image-caption data exist. Further, we extend our approach to generate descriptions of objects in video clips. Our results show that DCC has distinct advantages over existing image and video captioning approaches for generating descriptions of new objects in context.", "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. 
The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye." ] }
Region proposals @cite_15 @cite_13 @cite_38 @cite_26 have been successfully adopted in image detection. In image classification, an additional region-proposal stage is added before feedforward classification; the proposed regions capture the informative content and are used for feature learning in the second stage. Unlike image detection, where region proposals rely on a large amount of supervision, e.g. ground-truth bounding boxes or detailed segmentation masks @cite_25, image classification usually relies on unsupervised learning @cite_4 to generate region proposals.
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_4", "@cite_15", "@cite_13", "@cite_25" ], "mid": [ "", "2950557924", "1928906481", "2520951797", "1923115158", "2949150497" ], "abstract": [ "", "In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99 on the challenging FDDB benchmark, outperforming the state-of-the-art method by a large margin of 2.91 . Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variation, which are the main difficulty and bottleneck of most existing face detection approaches. We show that despite the use of DCN, our network can achieve practical runtime speed.", "Fine-grained classification is challenging because categories can only be discriminated by subtle and local differences. Variances in the pose, scale or rotation usually make the problem more difficult. Most fine-grained classification systems follow the pipeline of finding foreground object or object parts (where) to extract discriminative features (what).", "The field of object detection has seen dramatic performance improvements in the last few years. Most of these gains are attributed to bottom-up, feedforward ConvNet frameworks. However, in case of humans, top-down information, context and feedback play an important role in doing object detection. This paper investigates how we can incorporate top-down information and feedback in the state-of-the-art Faster R-CNN framework. 
Specifically, we propose to: (a) augment Faster R-CNN with a semantic segmentation network; (b) use segmentation for top-down contextual priming; (c) use segmentation to provide top-down iterative feedback using two stage training. Our results indicate that all three contributions improve the performance on object detection, semantic segmentation and region proposal generation.", "The topic of semantic segmentation has witnessed considerable progress due to the powerful features learned by convolutional neural networks (CNNs) [13]. The current leading approaches for semantic segmentation exploit shape information by extracting CNN features from masked image regions. This strategy introduces artificial boundaries on the images and may impact the quality of the extracted features. Besides, the operations on the raw image domain require to compute thousands of networks on a single image, which is time-consuming. In this paper, we propose to exploit shape information via masking convolutional features. The proposal segments (e.g., super-pixels) are treated as masks on the convolutional feature maps. The CNN features of segments are directly masked out from these maps and used to train classifiers for recognition. We further propose a joint method to handle objects and “stuff” (e.g., grass, sky, water) in the same framework. State-of-the-art results are demonstrated on benchmarks of PASCAL VOC and new PASCAL-CONTEXT, with a compelling computational speed.", "Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012). The winning model on the localization sub-task was a network that predicts a single bounding box and a confidence score for each object category in the image. 
Such a model captures the whole-image context around the objects but cannot handle multiple instances of the same object in the image without naively replicating the number of outputs for each instance. In this work, we propose a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest. The model naturally handles a variable number of instances for each class and allows for cross-class generalization at the highest levels of the network. We are able to obtain competitive recognition performance on VOC2007 and ILSVRC2012, while using only the top few predicted locations in each image and a small number of neural network evaluations." ] }
Control gates have been used extensively in LSTMs. In image classification with attention, control gates for neurons are updated with top-down information and influence the feedforward process during training @cite_42 @cite_40. However, an extra process, reinforcement learning @cite_40 or additional optimization @cite_42, is involved in the training step. Highway Networks @cite_34 extend control gates to address the gradient-degradation problem in deep convolutional neural networks.
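The highway gating rule can be sketched directly: a transform gate T blends the transformed features with an untouched copy of the input, y = H(x)·T(x) + x·(1 − T(x)). Below is a pure-Python sketch where the `double` transform is a stand-in for a learned H(x), not the actual network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def highway_layer(x, transform, gate_logits):
    """Highway unit: y = H(x) * T(x) + x * (1 - T(x)).

    When the gate T saturates at 0 the layer copies its input unchanged,
    which is what lets gradients flow through very deep stacks.
    """
    return [h * sigmoid(g) + xi * (1.0 - sigmoid(g))
            for xi, h, g in zip(x, transform(x), gate_logits)]

double = lambda v: [2.0 * vi for vi in v]  # placeholder for a learned H(x)
x = [1.0, 1.0, 1.0]
y = highway_layer(x, double, gate_logits=[-20.0, 0.0, 20.0])
```

With the gate shut (logit −20) the unit passes its input through (≈ 1.0); half-open it averages input and transform (1.5); fully open it applies the transform (≈ 2.0).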
{ "cite_N": [ "@cite_40", "@cite_42", "@cite_34" ], "mid": [ "2172010943", "2221625691", "2950621961" ], "abstract": [ "Traditional convolutional neural networks (CNN) are stationary and feedforward. They neither change their parameters during evaluation nor use feedback from higher to lower layers. Real brains, however, do. So does our Deep Attention Selective Network (dasNet) architecture. DasNets feedback structure can dynamically alter its convolutional filter sensitivities during classification. It harnesses the power of sequential processing to improve classification performance, by allowing the network to iteratively focus its internal attention on some of its convolutional filters. Feedback is trained through direct policy search in a huge million-dimensional parameter space, through scalable natural evolution strategies (SNES). On the CIFAR-10 and CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model.", "While feedforward deep convolutional neural networks (CNNs) have been a great success in computer vision, it is important to note that the human visual cortex generally contains more feedback than feedforward connections. In this paper, we will briefly introduce the background of feedbacks in the human visual cortex, which motivates us to develop a computational feedback mechanism in deep neural networks. In addition to the feedforward inference in traditional neural networks, a feedback loop is introduced to infer the activation status of hidden layer neurons according to the \"goal\" of the network, e.g., high-level semantic labels. We analogize this mechanism as \"Look and Think Twice.\" The feedback networks help better visualize and understand how deep neural networks work, and capture visual attention on expected objects, even in images with cluttered background and multiple objects. 
Experiments on ImageNet dataset demonstrate its effectiveness in solving tasks such as image classification and object localization.", "Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures." ] }
However, recent advances in image classification focus on training feedforward convolutional neural networks with a "very deep" structure @cite_12 @cite_1 @cite_6. The feedforward convolutional network mimics the bottom-up pathways of the human cortex. Various approaches have been proposed to further improve the discriminative ability of deep convolutional neural networks: VGG @cite_12, Inception @cite_1, and residual learning @cite_6 make it possible to train very deep networks, while stochastic depth @cite_17, Batch Normalization @cite_32, and Dropout @cite_27 exploit regularization to aid convergence and to avoid overfitting and degradation.
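The core of residual learning is the identity shortcut y = x + F(x): the block only has to learn the residual F, and a block whose residual branch outputs zeros is an exact identity map. A minimal sketch, where the toy branches are placeholders for learned convolutional stacks:

```python
def residual_block(x, f):
    """Residual learning: output x + F(x) over a 1-D feature list.
    If F(x) == 0 everywhere, the block is the identity map."""
    return [xi + fi for xi, fi in zip(x, f(x))]

zero_branch = lambda v: [0.0] * len(v)                     # untrained residual branch
relu_branch = lambda v: [max(0.0, vi) - 0.25 for vi in v]  # toy learned F(x)

x = [1.0, -2.0, 0.5]
assert residual_block(x, zero_branch) == x  # identity when F == 0
y = residual_block(x, relu_branch)
```

The identity-when-zero property is why adding more residual blocks cannot, in principle, make the representation worse, easing the optimization of very deep stacks.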
{ "cite_N": [ "@cite_1", "@cite_32", "@cite_6", "@cite_27", "@cite_12", "@cite_17" ], "mid": [ "2950179405", "2949117887", "2949650786", "2095705004", "1686810756", "2949892913" ], "abstract": [ "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. 
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. 
Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. 
We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91 on CIFAR-10)." ] }
Soft attention developed in recent work @cite_19 @cite_22 can be trained end-to-end within a convolutional network; our Residual Attention Network incorporates soft attention into fast-developing feedforward network structures in an innovative way. The recently proposed spatial transformer module @cite_39 achieves state-of-the-art results on a house-number recognition task: a deep network module capturing top-down information generates an affine transformation, which is applied to the input image to obtain the attended region, and the result is then fed to another deep network module. The whole process can be trained end-to-end using a differentiable network layer that performs the spatial transformation. Attention to scale @cite_19 uses soft attention as a scale-selection mechanism and achieves state-of-the-art results on an image segmentation task.
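The attention-to-scale idea, softly weighting per-scale features at every position, reduces to a per-position softmax blend. A pure-Python sketch with toy 1-D features; in the actual model the attention scores come from a learned branch rather than being hand-set:

```python
import math

def softmax(scores):
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_to_scale(features_per_scale, scores_per_scale):
    """Soft scale selection: at every position, blend the per-scale
    features using softmax weights over the per-scale attention scores."""
    out = []
    for pos in range(len(features_per_scale[0])):
        w = softmax([s[pos] for s in scores_per_scale])
        out.append(sum(wi * f[pos]
                       for wi, f in zip(w, features_per_scale)))
    return out

fine   = [1.0, 4.0]   # features from the high-resolution scale
coarse = [3.0, 0.0]   # features from the low-resolution scale
# attention prefers the fine scale at position 0, the coarse scale at position 1
scores = [[5.0, -5.0], [-5.0, 5.0]]
blended = attention_to_scale([fine, coarse], scores)
```

Because the weights are a softmax, the blend is differentiable, so the scale-selection behaviour can be learned end-to-end alongside the features, unlike hard max- or average-pooling over scales.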
{ "cite_N": [ "@cite_19", "@cite_22", "@cite_39" ], "mid": [ "2158865742", "", "2951005624" ], "abstract": [ "Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms average- and max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014.", "", "Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. 
We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations." ] }
The design of the soft attention structure in our Residual Attention Network is inspired by recent progress on localization-oriented tasks: segmentation @cite_31 @cite_3 @cite_10 and human pose estimation @cite_37. These tasks motivate researchers to explore structures with fine-grained feature maps. Such frameworks typically cascade a bottom-up and a top-down structure: the bottom-up feedforward pass produces low-resolution feature maps with strong semantic information, after which a top-down network produces dense features to make per-pixel inferences. Skip connections @cite_31 are employed between bottom and top feature maps and achieve state-of-the-art results on image segmentation. The recent stacked hourglass network @cite_37 fuses information from multiple scales to predict human pose, benefiting from encoding both global and local information.
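The bottom-up/top-down cascade with a skip connection can be sketched on a toy even-length 1-D feature map, with 2× max-pooling standing in for the real encoder and nearest-neighbour upsampling plus element-wise addition standing in for the decoder and skip fusion (as in FCN-style models):

```python
def downsample(feat):
    """Bottom-up pass: 2x max-pool over an even-length 1-D feature map."""
    return [max(feat[i], feat[i + 1]) for i in range(0, len(feat), 2)]

def upsample(feat):
    """Top-down pass: 2x nearest-neighbour upsampling."""
    return [v for v in feat for _ in range(2)]

def encoder_decoder_with_skip(x):
    """Cascade bottom-up and top-down passes, fusing the skip connection
    by element-wise addition of high- and low-resolution features."""
    skip = x                # high-resolution features, weak semantics
    coarse = downsample(x)  # low-resolution features, strong semantics
    return [s + u for s, u in zip(skip, upsample(coarse))]

x = [1.0, 3.0, 2.0, 0.0]
y = encoder_decoder_with_skip(x)
```

The fused output keeps the full resolution of the skip path while every position also receives the pooled (coarser, more semantic) signal, which is the property both FCN skip connections and stacked hourglass fusion exploit.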
{ "cite_N": [ "@cite_31", "@cite_10", "@cite_37", "@cite_3" ], "mid": [ "2952632681", "360623563", "2950762923", "2952637581" ], "abstract": [ "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "We propose a novel deep architecture, SegNet, for semantic pixel wise image labelling. SegNet has several attractive properties; (i) it only requires forward evaluation of a fully learnt function to obtain smooth label predictions, (ii) with increasing depth, a larger context is considered for pixel labelling which improves accuracy, and (iii) it is easy to visualise the effect of feature activation(s) in the pixel label space at any depth. SegNet is composed of a stack of encoders followed by a corresponding decoder stack which feeds into a soft-max classification layer. 
The decoders help map low resolution feature maps at the output of the encoder stack to full input image size feature maps. This addresses an important drawback of recent deep learning approaches which have adopted networks designed for object categorization for pixel wise labelling. These methods lack a mechanism to map deep layer feature maps to input dimensions. They resort to ad hoc methods to upsample features, e.g. by replication. This results in noisy predictions and also restricts the number of pooling layers in order to avoid too much upsampling and thus reduces spatial context. SegNet overcomes these problems by learning to map encoder outputs to image pixel labels. We test the performance of SegNet on outdoor RGB scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results show that SegNet achieves state-of-the-art performance even without use of additional cues such as depth, video frames or post-processing with CRF models.", "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a \"stacked hourglass\" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. 
We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5%) among the methods trained with no external data through ensemble with the fully convolutional network." ] }
1704.06493
2963540250
We study the problem of approximating the partition function of the ferromagnetic Ising model in graphs and hypergraphs. Our first result is a deterministic approximation scheme (an FPTAS) for the partition function in bounded degree graphs that is valid over the entire range of parameters β (the interaction) and λ (the external field), except for the case |λ|=1 (the zero-field case). A randomized algorithm (FPRAS) for all graphs, and all β,λ has long been known. Unlike most other deterministic approximation algorithms for problems in statistical physics and counting, our algorithm does not rely on the decay of correlations property. Rather, we exploit and extend machinery developed recently by Barvinok, and Patel and Regts, based on the location of the complex zeros of the partition function, which can be seen as an algorithmic realization of the classical Lee-Yang approach to phase transitions. Our approach extends to the more general setting of the Ising model on hypergraphs of bounded degree and edge size, where no previous algorithms (even randomized) were known for a wide range of parameters. In order to achieve this extension, we establish a tight version of the Lee-Yang theorem for the Ising model on hypergraphs, improving a classical result of Suzuki and Fisher.
The problem of computing partition functions has been widely studied, not only in statistical physics but also in combinatorics, because the partition function is often a generating function for combinatorial objects (cuts, in the case of the Ising model). There has been much progress on dichotomy theorems, which attempt to completely classify these problems as being either #P-hard or computable (exactly) in polynomial time (see, e.g., @cite_7 @cite_4 ).
{ "cite_N": [ "@cite_4", "@cite_7" ], "mid": [ "2092436554", "2005716876" ], "abstract": [ "Partition functions, also known as homomorphism functions, form a rich family of graph invariants that contain combinatorial invariants such as the number of @math -colorings or the number of independent sets of a graph and also the partition functions of certain “spin glass” models of statistical physics such as the Ising model. Building on earlier work by Dyer and Greenhill [Random Structures Algorithms, 17 (2000), pp. 260-289] and Bulatov and Grohe [Theoret. Comput. Sci., 348 (2005), pp. 148-186], we completely classify the computational complexity of partition functions. Our main result is a dichotomy theorem stating that every partition function is either computable in polynomial time or #P-complete. Partition functions are described by symmetric matrices with real entries, and we prove that it is decidable in polynomial time in terms of the matrix whether a given partition function is in polynomial time or #P-complete. While in general it is very complicated to give an explicit algebraic or combinatorial description of the tractable cases, for partition functions described by Hadamard matrices (these turn out to be central in our proofs) we obtain a simple algebraic tractability criterion, which says that the tractable cases are those “representable” by a quadratic polynomial over the field @math .", "Graph homomorphism problem has been studied intensively. Given an m × m symmetric matrix A, the graph homomorphism function Z_A(G) is defined as Z_A(G) = Σ_{ξ:V→[m]} Π_{(u,v)∈E} A_{ξ(u),ξ(v)}, where G = (V,E) is any undirected graph. The function Z_A(G) can encode many interesting graph properties, including counting vertex covers and k-colorings. We study the computational complexity of Z_A(G), for arbitrary complex valued symmetric matrices A. 
Building on the work by Dyer and Greenhill [1], Bulatov and Grohe [2], and especially the recent beautiful work by Goldberg, Grohe, Jerrum and Thurley [3], we prove a complete dichotomy theorem for this problem." ] }
1704.06493
2963540250
We study the problem of approximating the partition function of the ferromagnetic Ising model in graphs and hypergraphs. Our first result is a deterministic approximation scheme (an FPTAS) for the partition function in bounded degree graphs that is valid over the entire range of parameters β (the interaction) and λ (the external field), except for the case |λ|=1 (the zero-field case). A randomized algorithm (FPRAS) for all graphs, and all β,λ has long been known. Unlike most other deterministic approximation algorithms for problems in statistical physics and counting, our algorithm does not rely on the decay of correlations property. Rather, we exploit and extend machinery developed recently by Barvinok, and Patel and Regts, based on the location of the complex zeros of the partition function, which can be seen as an algorithmic realization of the classical Lee-Yang approach to phase transitions. Our approach extends to the more general setting of the Ising model on hypergraphs of bounded degree and edge size, where no previous algorithms (even randomized) were known for a wide range of parameters. In order to achieve this extension, we establish a tight version of the Lee-Yang theorem for the Ising model on hypergraphs, improving a classical result of Suzuki and Fisher.
Since the problems are in fact #P-hard in most cases, algorithmic interest has focused largely on approximation, motivated also by the general observation that approximating the partition function is polynomial-time equivalent to sampling (approximately) from the underlying Gibbs distribution @cite_49 . In fact, most early approximation algorithms exploited this connection, and gave fully-polynomial randomized approximation schemes (FPRAS) for the partition function using Markov chain Monte Carlo (MCMC) samplers for the Gibbs distribution. In particular, for the ferromagnetic Ising model, the MCMC-based algorithm of Jerrum and Sinclair @cite_47 is valid for all positive real values of @math and for all graphs, irrespective of their vertex degrees. (For the connection with random sampling in this case, see @cite_9 .) This was later extended to ferromagnetic two-spin systems by Goldberg, Jerrum and Paterson @cite_33 . Similar techniques have been applied recently to the related random-cluster model by Guo and Jerrum @cite_20 .
{ "cite_N": [ "@cite_33", "@cite_9", "@cite_49", "@cite_47", "@cite_20" ], "mid": [ "2122772637", "2038383704", "2051283034", "2024710200", "2963060913" ], "abstract": [ "The subject of this article is spin-systems as studied in statistical physics. We focus on the case of two spins. This case encompasses models of physical interest, such as the classical Ising model (ferromagnetic or antiferromagnetic, with or without an applied magnetic field) and the hard-core gas model. There are three degrees of freedom, corresponding to our parameters beta, gamma and mu. We wish to study the complexity of (approximately) computing the partition function in terms of these parameters. We pay special attention to the symmetric case mu=1 for which our results are depicted in Figure 1. Exact computation of the partition function Z is NP-hard except in the trivial case beta gamma=1, so we concentrate on the issue of whether Z can be computed within small relative error in polynomial time. We show that there is a fully polynomial randomised approximation scheme (FPRAS) for the partition function in the \"ferromagnetic\" region beta gamma >= 1, but (unless RP=NP) there is no FPRAS in the \"antiferromagnetic\" region corresponding to the square defined by 0<beta<1 and 0<gamma<1. Neither of these \"natural\" regions --- neither the hyperbola nor the square --- marks the boundary between tractable and intractable. In one direction, we provide an FPRAS for the partition function within a region which extends well away from the hyperbola. In the other direction, we exhibit two tiny, symmetric, intractable regions extending beyond the antiferromagnetic region. We also extend our results to the asymmetric case mu not equal to 1.", "", "Abstract The class of problems involving the random generation of combinatorial structures from a uniform distribution is considered. Uniform generation problems are, in computational difficulty, intermediate between classical existence and counting problems. 
It is shown that exactly uniform generation of ‘efficiently verifiable’ combinatorial structures is reducible to approximate counting (and hence, is within the third level of the polynomial hierarchy). Natural combinatorial problems are presented which exhibit complexity gaps between their existence and generation, and between their generation and counting versions. It is further shown that for self-reducible problems, almost uniform generation and randomized approximate counting are inter-reducible, and hence, of similar complexity.", "The paper presents a randomised algorithm which evaluates the partition function of an arbitrary ferromagnetic Ising system to any specified degree of accuracy. The running time of the algorithm in...", "We show for the first time that the mixing time of Glauber (single edge update) dynamics for the random cluster model at q = 2 is bounded by a polynomial in the size of the underlying graph. As a consequence, the Swendsen-Wang algorithm for the ferromagnetic Ising model at any temperature has the same polynomial mixing time bound." ] }
1704.06493
2963540250
We study the problem of approximating the partition function of the ferromagnetic Ising model in graphs and hypergraphs. Our first result is a deterministic approximation scheme (an FPTAS) for the partition function in bounded degree graphs that is valid over the entire range of parameters β (the interaction) and λ (the external field), except for the case |λ|=1 (the zero-field case). A randomized algorithm (FPRAS) for all graphs, and all β,λ has long been known. Unlike most other deterministic approximation algorithms for problems in statistical physics and counting, our algorithm does not rely on the decay of correlations property. Rather, we exploit and extend machinery developed recently by Barvinok, and Patel and Regts, based on the location of the complex zeros of the partition function, which can be seen as an algorithmic realization of the classical Lee-Yang approach to phase transitions. Our approach extends to the more general setting of the Ising model on hypergraphs of bounded degree and edge size, where no previous algorithms (even randomized) were known for a wide range of parameters. In order to achieve this extension, we establish a tight version of the Lee-Yang theorem for the Ising model on hypergraphs, improving a classical result of Suzuki and Fisher.
Much detailed work has been done on MCMC for Ising spin configurations for several important classes of graphs, including two-dimensional lattices (e.g., @cite_44 @cite_28 @cite_11 ), random graphs and graphs of bounded degree (e.g., @cite_26 ), the complete graph (e.g., @cite_1 ) and trees (e.g., @cite_31 @cite_17 ); we do not attempt to give a comprehensive summary of this line of work here.
{ "cite_N": [ "@cite_26", "@cite_28", "@cite_1", "@cite_17", "@cite_44", "@cite_31", "@cite_11" ], "mid": [ "2090216341", "", "2963496965", "1965879843", "2011373957", "2135481845", "2079416076" ], "abstract": [ "We establish tight results for rapid mixing of Gibbs samplers for the Ferromagnetic Ising model on general graphs. We show that if (d−1) tanh β < 1, then there exists a constant C such that the discrete time mixing time of Gibbs samplers for the ferromagnetic Ising model on any graph of n vertices and maximal degree d, where all interactions are bounded by β, and arbitrary external fields are bounded by Cnlogn. Moreover, the spectral gap is uniformly bounded away from 0 for all such graphs, as well as for infinite graphs of maximal degree d. We further show that when dtanh β < 1, with high probability over the Erdős–Renyi random graph G(n,d n), it holds that the mixing time of Gibbs samplers is n1+Θ(1 loglogn). Both results are tight, as it is known that the mixing time for random regular and Erdős–Renyi random graphs is, with high probability, exponential in n when (d−1) tanh β> 1, and dtanh β>1, respectively. To our knowledge our results give the first tight sufficient conditions for rapid mixing of spin systems on general graphs. Moreover, our results are the first rigorous results establishing exact thresholds for dynamics on random graphs in terms of spatial thresholds on trees.", "", "Introduction Statement of the results Mixing time preliminaries Outline of the proof of Theorem 2.1 Random graph estimates Supercritical case Subcritical case Critical case Fast mixing of the Swendsen-Wang process on trees Acknowledgements Bibliography", "We give the first comprehensive analysis of the effect of boundary conditions on the mixing time of the Glauber dynamics in the so-called Bethe approximation. 
Specifically, we show that the spectral gap and the log-Sobolev constant of the Glauber dynamics for the Ising model on an n-vertex regular tree with (+)-boundary are bounded below by a constant independent of n at all temperatures and all external fields. This implies that the mixing time is O(log n) (in contrast to the free boundary case, where it is not bounded by any fixed polynomial at low temperatures). In addition, our methods yield simpler proofs and stronger results for the spectral gap and log-Sobolev constant in the regime where the mixing time is insensitive to the boundary condition. Our techniques also apply to a much wider class of models, including those with hard-core constraints like the antiferromagnetic Potts model at zero temperature (proper colorings) and the hard-core lattice gas (independent sets).", "Various finite volume mixing conditions in classical statistical mechanics are reviewed and critically analyzed. In particular some finite size conditions are discussed, together with their implications for the Gibbs measures and for the approach to equilibrium of Glauber dynamics in arbitrarily large volumes. It is shown that Dobrushin-Shlosman's theory of complete analyticity and its dynamical counterpart due to Stroock and Zegarlinski, cannot be applied, in general, to the whole one phase region since it requires mixing properties for regions of arbitrary shape. An alternative approach, based on previous ideas of Oliveri, and Picco, is developed, which allows to establish results on rapid approach to equilibrium deeply inside the one phase region. In particular, in the ferromagnetic case, we considerably improve some previous results by Holley and Aizenman and Holley. Our results are optimal in the sense that, for example, they show for the first time fast convergence of the dynamics for any temperature above the critical one for the d-dimensional Ising model with or without an external field. 
In part II we extensively consider the general case (not necessarily attractive) and we develop a new method, based on renormalization group ideas and on an assumption of strong mixing in a finite cube, to prove hypercontractivity of the Markov semigroup of the Glauber dynamics.", "We study discrete time Glauber dynamics for random configurations with local constraints (e.g. proper coloring, Ising and Potts models) on finite graphs with n vertices and of bounded degree. We show that the relaxation time (defined as the reciprocal of the spectral gap 1 − λ_2) for the dynamics on trees and on certain hyperbolic graphs, is polynomial in n. For these hyperbolic graphs, this yields a general polynomial sampling algorithm for random configurations. We then show that if the relaxation time τ_2 satisfies τ_2 = O(n), then the correlation coefficient, and the mutual information, between any local function (which depends only on the configuration in a fixed window) and the boundary conditions, decays exponentially in the distance between the window and the boundary. For the Ising model on a regular tree, this condition is sharp.", "The Ising model is widely regarded as the most studied model of spin-systems in statistical physics. The focus of this paper is its dynamic (stochastic) version, the Glauber dynamics, introduced in 1963 and by now the most popular means of sampling the Ising measure. Intensive study throughout the last three decades has yielded a rigorous understanding of the spectral-gap of the dynamics on Z^2 everywhere except at criticality. While the critical behavior of the Ising model has long been the focus for physicists, mathematicians have only recently developed an understanding of its critical geometry with the advent of SLE, CLE and new tools to study conformally invariant systems." ] }
1704.06493
2963540250
We study the problem of approximating the partition function of the ferromagnetic Ising model in graphs and hypergraphs. Our first result is a deterministic approximation scheme (an FPTAS) for the partition function in bounded degree graphs that is valid over the entire range of parameters β (the interaction) and λ (the external field), except for the case |λ|=1 (the zero-field case). A randomized algorithm (FPRAS) for all graphs, and all β,λ has long been known. Unlike most other deterministic approximation algorithms for problems in statistical physics and counting, our algorithm does not rely on the decay of correlations property. Rather, we exploit and extend machinery developed recently by Barvinok, and Patel and Regts, based on the location of the complex zeros of the partition function, which can be seen as an algorithmic realization of the classical Lee-Yang approach to phase transitions. Our approach extends to the more general setting of the Ising model on hypergraphs of bounded degree and edge size, where no previous algorithms (even randomized) were known for a wide range of parameters. In order to achieve this extension, we establish a tight version of the Lee-Yang theorem for the Ising model on hypergraphs, improving a classical result of Suzuki and Fisher.
In a parallel line of work, Barvinok initiated the study of Taylor approximation of the logarithm of the partition function, which led to quasipolynomial time approximation algorithms for a variety of counting problems @cite_16 @cite_42 @cite_35 @cite_27 . More recently, Patel and Regts @cite_5 showed that for several models that can be written as induced subgraph sums, one can actually obtain an FPTAS from this approach. In particular, for problems such as counting independent sets with negative (or, more generally, complex valued) activities on bounded degree graphs, they were able to match the range of applicability of existing algorithms based on correlation decay, and were also able to extend the approach to Tutte polynomials and edge-coloring models (also known as Holant problems) where little is known about correlation decay.
{ "cite_N": [ "@cite_35", "@cite_42", "@cite_27", "@cite_5", "@cite_16" ], "mid": [ "", "2963364787", "2962992138", "2474151873", "1997999115" ], "abstract": [ "", "We present a deterministic algorithm which, given a graph G with n vertices and an integer 1 0 is an absolute constant. This allows us to tell apart the graphs that do not have m-subsets of high density from the graphs that have sufficiently many m-subsets of high density, even when the probability to hit such a subset at random is exponentially small in m. ACM Classification: F.2.1, G.1.2, G.2.2, I.1.2 AMS Classification: 15A15, 68C25, 68W25, 60C05", "Abstract We consider a refinement of the partition function of graph homomorphisms and present a quasi-polynomial algorithm to compute it in a certain domain. As a corollary, we obtain quasi-polynomial algorithms for computing partition functions for independent sets, perfect matchings, Hamiltonian cycles and dense subgraphs in graphs as well as for graph colorings. This allows us to tell apart in quasi-polynomial time graphs that are sufficiently far from having a structure of a given type (i.e., independent set of a given size, Hamiltonian cycle, etc.) from graphs that have sufficiently many structures of that type, even when the probability to hit such a structure at random is exponentially small.", "In this paper we show a new way of constructing deterministic polynomial-time approximation algorithms for computing complex-valued evaluations of a large class of graph polynomials on bounded degree graphs. In particular, our approach works for the Tutte polynomial and independence polynomial, as well as partition functions of complex-valued spin and edge-coloring models. 
More specifically, we define a large class of graph polynomials @math and show that if @math and there is a disk @math centered at zero in the complex plane such that @math does not vanish on @math for all bounded degree graphs @math , then for each @math in the interior of @math there exists a deterministic polynomial-time approximation algorithm for evaluating @math at @math . This gives an explicit connection between absence of zeros of graph polynomials and the existence of efficient approximation algorithms, allowing us to show new relationships between well-known conjectures. Our work builds on a recent line of work initiated b...", "We present a deterministic algorithm, which, for any given 0 < ε < 1 and an n × n real or complex matrix A = (a_ij) such that |a_ij − 1| ≤ 0.19 for all i, j, computes the permanent of A within relative error ε in n^{O(ln n − ln ε)} time. The method can be extended to computing hafnians and multidimensional permanents." ] }
1704.06738
2609354095
Many cluster management systems (CMSs) have been proposed to share a single cluster with multiple distributed computing systems. However, none of the existing approaches can handle distributed machine learning (ML) workloads given the following criteria: high resource utilization, fair resource allocation and low sharing overhead. To solve this problem, we propose a new CMS named Dorm, incorporating a dynamically-partitioned cluster management mechanism and a utilization-fairness optimizer. Specifically, Dorm uses the container-based virtualization technique to partition a cluster, runs one application per partition, and can dynamically resize each partition at application runtime for resource efficiency and fairness. Each application directly launches its tasks on the assigned partition without petitioning for resources frequently, so Dorm imposes flat sharing overhead. Extensive performance evaluations showed that Dorm could simultaneously increase the resource utilization by a factor of up to 2.32, reduce the fairness loss by a factor of up to 1.52, and speed up popular distributed ML applications by a factor of up to 2.72, compared to existing approaches. Dorm's sharing overhead is less than 5 in most cases.
The goal of ML is to learn models from training datasets and use them to make predictions on new data. To handle big training datasets and big models, many distributed ML systems have been proposed based on the PS framework. As shown in Figure , the PS framework can scale to large cluster deployments by having worker nodes perform data-parallel computation and server nodes maintain globally shared parameters of ML models. Each worker node contains a TaskScheduler to place tasks on the local node based on a specific policy, such as Bulk Synchronous Parallel (BSP) or Stale Synchronous Parallel (SSP) @cite_5 .
{ "cite_N": [ "@cite_5" ], "mid": [ "2579247884" ], "abstract": [ "What is a systematic way to efficiently apply a wide spectrum of advanced ML programs to industrial scale problems, using Big Models (up to 100 s of billions of parameters) on Big Data (up to terabytes or petabytes)? Modern parallelization strategies employ fine-grained operations and scheduling beyond the classic bulk-synchronous processing paradigm popularized by MapReduce, or even specialized graph-based execution that relies on graph representations of ML programs. The variety of approaches tends to pull systems and algorithms design in different directions, and it remains difficult to find a universal platform applicable to a wide range of ML programs at scale. We propose a general-purpose framework, Petuum, that systematically addresses data- and model-parallel challenges in large-scale ML, by observing that many ML programs are fundamentally optimization-centric and admit error-tolerant, iterative-convergent algorithmic solutions. This presents unique opportunities for an integrative system design, such as bounded-error network synchronization and dynamic scheduling based on ML program structure. We demonstrate the efficacy of these system designs versus well-known implementations of modern ML algorithms, showing that Petuum allows ML programs to run in much less time and at considerably larger model sizes, even on modestly-sized compute clusters." ] }
1704.06738
2609354095
Many cluster management systems (CMSs) have been proposed to share a single cluster with multiple distributed computing systems. However, none of the existing approaches can handle distributed machine learning (ML) workloads given the following criteria: high resource utilization, fair resource allocation and low sharing overhead. To solve this problem, we propose a new CMS named Dorm, incorporating a dynamically-partitioned cluster management mechanism and a utilization-fairness optimizer. Specifically, Dorm uses the container-based virtualization technique to partition a cluster, runs one application per partition, and can dynamically resize each partition at application runtime for resource efficiency and fairness. Each application directly launches its tasks on the assigned partition without petitioning for resources frequently, so Dorm imposes flat sharing overhead. Extensive performance evaluations showed that Dorm could simultaneously increase the resource utilization by a factor of up to 2.32, reduce the fairness loss by a factor of up to 1.52, and speed up popular distributed ML applications by a factor of up to 2.72, compared to existing approaches. Dorm's sharing overhead is less than 5 in most cases.
CMSs are designed to run multiple DCSs in a single cluster. As shown in Figure , existing CMSs can be classified into six categories based on their cluster management strategies. These approaches can perform resource allocation at three levels: DCS, application and task. Resource allocation refers to determining the amount of resources offered to applications, and selecting specific resources from servers to satisfy user-supplied placement preferences @cite_8 .
{ "cite_N": [ "@cite_8" ], "mid": [ "2160121678" ], "abstract": [ "Cloud computing promises flexibility and high performance for users and high cost-efficiency for operators. Nevertheless, most cloud facilities operate at very low utilization, hurting both cost effectiveness and future scalability. We present Quasar, a cluster management system that increases resource utilization while providing consistently high application performance. Quasar employs three techniques. First, it does not rely on resource reservations, which lead to underutilization as users do not necessarily understand workload dynamics and physical resource requirements of complex codebases. Instead, users express performance constraints for each workload, letting Quasar determine the right amount of resources to meet these constraints at any point. Second, Quasar uses classification techniques to quickly and accurately determine the impact of the amount of resources (scale-out and scale-up), type of resources, and interference on performance for each workload and dataset. Third, it uses the classification results to jointly perform resource allocation and assignment, quickly exploring the large space of options for an efficient way to pack workloads on available resources. Quasar monitors workload performance and adjusts resource allocation and assignment when needed. We evaluate Quasar over a wide range of workload scenarios, including combinations of distributed analytics frameworks and low-latency, stateful services, both on a local cluster and a cluster of dedicated EC2 servers. At steady state, Quasar improves resource utilization by 47 in the 200-server EC2 cluster, while meeting performance constraints for workloads of all types." ] }
1704.06738
2609354095
Many cluster management systems (CMSs) have been proposed to share a single cluster with multiple distributed computing systems. However, none of the existing approaches can handle distributed machine learning (ML) workloads given the following criteria: high resource utilization, fair resource allocation and low sharing overhead. To solve this problem, we propose a new CMS named Dorm, incorporating a dynamically-partitioned cluster management mechanism and a utilization-fairness optimizer. Specifically, Dorm uses the container-based virtualization technique to partition a cluster, runs one application per partition, and can dynamically resize each partition at application runtime for resource efficiency and fairness. Each application directly launches its tasks on the assigned partition without petitioning for resources frequently, so Dorm imposes flat sharing overhead. Extensive performance evaluations showed that Dorm could simultaneously increase the resource utilization by a factor of up to 2.32, reduce the fairness loss by a factor of up to 1.52, and speed up popular distributed ML applications by a factor of up to 2.72, compared to existing approaches. Dorm's sharing overhead is less than 5 in most cases.
Centralized approaches, such as Yarn @cite_0 , Quasar @cite_8 and Borg @cite_14 , use a centralized resource manager to perform resource allocation for all applications with cluster-wide visibility.
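The centralized design described above, where one manager with cluster-wide visibility places every request itself, can be sketched as a toy allocator. The CPU-only resource model, node names, and greedy policy are all illustrative and not taken from Yarn, Quasar or Borg.

```python
# Toy sketch of a centralized cluster manager: a single allocator with
# cluster-wide visibility places every request itself. All names and
# the CPU-only model are illustrative.

def allocate(nodes, requests):
    """Place each request on the node with the most free CPUs.
    nodes: {node: free_cpus}, requests: [(app, cpus)].
    Returns {app: node} for the requests that fit."""
    placement = {}
    for app, cpus in requests:
        for node, free in sorted(nodes.items(), key=lambda kv: -kv[1]):
            if free >= cpus:
                nodes[node] = free - cpus
                placement[app] = node
                break
    return placement

nodes = {"n1": 8, "n2": 4}
placement = allocate(nodes, [("a", 6), ("b", 4), ("c", 4)])
```

Real systems add multi-dimensional resources, constraints and preemption, but the sketch shows the key property: one component sees all free capacity, so no request is turned away while some node could still host it.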
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_8" ], "mid": [ "2105947650", "2141992894", "2160121678" ], "abstract": [ "The initial design of Apache Hadoop [1] was tightly focused on running massive, MapReduce jobs to process a web crawl. For increasingly diverse companies, Hadoop has become the data and computational agora---the de facto place where data and computational resources are shared and accessed. This broad adoption and ubiquitous usage has stretched the initial design well beyond its intended target, exposing two key shortcomings: 1) tight coupling of a specific programming model with the resource management infrastructure, forcing developers to abuse the MapReduce programming model, and 2) centralized handling of jobs' control flow, which resulted in endless scalability concerns for the scheduler. In this paper, we summarize the design, development, and current state of deployment of the next generation of Hadoop's compute platform: YARN. The new architecture we introduced decouples the programming model from the resource management infrastructure, and delegates many scheduling functions (e.g., task fault-tolerance) to per-application components. We provide experimental evidence demonstrating the improvements we made, confirm improved efficiency by reporting the experience of running YARN on production environments (including 100% of Yahoo! grids), and confirm the flexibility claims by discussing the porting of several programming frameworks onto YARN viz. Dryad, Giraph, Hoya, Hadoop MapReduce, REEF, Spark, Storm, Tez.", "Google's Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines. It achieves high utilization by combining admission control, efficient task-packing, over-commitment, and machine sharing with process-level performance isolation. 
It supports high-availability applications with runtime features that minimize fault-recovery time, and scheduling policies that reduce the probability of correlated failures. Borg simplifies life for its users by offering a declarative job specification language, name service integration, real-time job monitoring, and tools to analyze and simulate system behavior. We present a summary of the Borg system architecture and features, important design decisions, a quantitative analysis of some of its policy decisions, and a qualitative examination of lessons learned from a decade of operational experience with it.", "Cloud computing promises flexibility and high performance for users and high cost-efficiency for operators. Nevertheless, most cloud facilities operate at very low utilization, hurting both cost effectiveness and future scalability. We present Quasar, a cluster management system that increases resource utilization while providing consistently high application performance. Quasar employs three techniques. First, it does not rely on resource reservations, which lead to underutilization as users do not necessarily understand workload dynamics and physical resource requirements of complex codebases. Instead, users express performance constraints for each workload, letting Quasar determine the right amount of resources to meet these constraints at any point. Second, Quasar uses classification techniques to quickly and accurately determine the impact of the amount of resources (scale-out and scale-up), type of resources, and interference on performance for each workload and dataset. Third, it uses the classification results to jointly perform resource allocation and assignment, quickly exploring the large space of options for an efficient way to pack workloads on available resources. Quasar monitors workload performance and adjusts resource allocation and assignment when needed. 
We evaluate Quasar over a wide range of workload scenarios, including combinations of distributed analytics frameworks and low-latency, stateful services, both on a local cluster and a cluster of dedicated EC2 servers. At steady state, Quasar improves resource utilization by 47% in the 200-server EC2 cluster, while meeting performance constraints for workloads of all types." ] }
1704.06738
2609354095
Many cluster management systems (CMSs) have been proposed to share a single cluster with multiple distributed computing systems. However, none of the existing approaches can handle distributed machine learning (ML) workloads given the following criteria: high resource utilization, fair resource allocation and low sharing overhead. To solve this problem, we propose a new CMS named Dorm, incorporating a dynamically-partitioned cluster management mechanism and a utilization-fairness optimizer. Specifically, Dorm uses the container-based virtualization technique to partition a cluster, runs one application per partition, and can dynamically resize each partition at application runtime for resource efficiency and fairness. Each application directly launches its tasks on the assigned partition without petitioning for resources frequently, so Dorm imposes flat sharing overhead. Extensive performance evaluations showed that Dorm could simultaneously increase the resource utilization by a factor of up to 2.32, reduce the fairness loss by a factor of up to 1.52, and speed up popular distributed ML applications by a factor of up to 2.72, compared to existing approaches. Dorm's sharing overhead is less than 5% in most cases.
Two-level approaches, such as Mesos @cite_13 , use a central cluster resource manager and application-specific schedulers to jointly perform resource allocation. The central manager gives each application a set of resource offers, and lets the application-specific scheduler decide whether to accept them.
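The offer-based protocol described above can be sketched with a toy offer loop: the central manager hands out offers, and each framework's own scheduler accepts or declines. The accept-if-it-fits framework logic and round-robin offer order below are illustrative simplifications, not Mesos's actual allocation module or API.

```python
# Toy sketch of two-level, offer-based scheduling in the spirit of
# Mesos: the manager offers resources, frameworks decide. The framework
# policy here (accept if the offer covers the next pending task) is a
# hypothetical simplification.

class Framework:
    def __init__(self, pending):
        self.pending = list(pending)   # CPU demands of queued tasks
        self.launched = []

    def on_offer(self, node, cpus):
        """Accept the offer if it covers the next task, else decline."""
        if self.pending and cpus >= self.pending[0]:
            self.launched.append((node, self.pending.pop(0)))
            return True
        return False

def offer_round(offers, frameworks):
    """Central manager hands each offer to frameworks in turn until
    one accepts (a simplification of the real allocation policy)."""
    for node, cpus in offers:
        for fw in frameworks:
            if fw.on_offer(node, cpus):
                break

f1 = Framework([4, 2])
f2 = Framework([8])
offer_round([("n1", 8), ("n2", 4)], [f1, f2])
```

Note the division of labor: the manager decides who is offered what, while all placement knowledge (which task fits, data locality, and so on) stays inside each framework's scheduler.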
{ "cite_N": [ "@cite_13" ], "mid": [ "2163961697" ], "abstract": [ "We present Mesos, a platform for sharing commodity clusters between multiple diverse cluster computing frameworks, such as Hadoop and MPI. Sharing improves cluster utilization and avoids per-framework data replication. Mesos shares resources in a fine-grained manner, allowing frameworks to achieve data locality by taking turns reading data stored on each machine. To support the sophisticated schedulers of today's frameworks, Mesos introduces a distributed two-level scheduling mechanism called resource offers. Mesos decides how many resources to offer each framework, while frameworks decide which resources to accept and which computations to run on them. Our results show that Mesos can achieve near-optimal data locality when sharing the cluster among diverse frameworks, can scale to 50,000 (emulated) nodes, and is resilient to failures." ] }
1704.06738
2609354095
Many cluster management systems (CMSs) have been proposed to share a single cluster with multiple distributed computing systems. However, none of the existing approaches can handle distributed machine learning (ML) workloads given the following criteria: high resource utilization, fair resource allocation and low sharing overhead. To solve this problem, we propose a new CMS named Dorm, incorporating a dynamically-partitioned cluster management mechanism and a utilization-fairness optimizer. Specifically, Dorm uses the container-based virtualization technique to partition a cluster, runs one application per partition, and can dynamically resize each partition at application runtime for resource efficiency and fairness. Each application directly launches its tasks on the assigned partition without petitioning for resources frequently, so Dorm imposes flat sharing overhead. Extensive performance evaluations showed that Dorm could simultaneously increase the resource utilization by a factor of up to 2.32, reduce the fairness loss by a factor of up to 1.52, and speed up popular distributed ML applications by a factor of up to 2.72, compared to existing approaches. Dorm's sharing overhead is less than 5% in most cases.
Shared-state approaches let each application maintain a copy of the cluster state, and compete for resources using lock-free optimistic concurrency control, as in Omega @cite_15 and Apollo @cite_11 . These approaches can offer high resource allocation quality, but without strict fairness guarantees due to the lack of centralized resource management. Fully-distributed approaches, such as Sparrow @cite_9 , use many independent resource managers to serve applications' resource requests with local, partial and stale cluster state. This approach can achieve millisecond scheduling latency per request. Hybrid approaches combine distributed resource managers with a centralized cluster scheduler, as in Hawk @cite_3 and Mercury @cite_10 . Applications can obtain strong execution guarantees from the centralized scheduler, or trade strict guarantees for millisecond scheduling latency from distributed managers.
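The fully-distributed style exemplified by Sparrow rests on randomized sampling: instead of consulting global state, a scheduler probes a few random workers and places the task on the least-loaded one. The sketch below illustrates that idea with a single probe step; the worker names and load model are illustrative, and the real system batches probes per job and uses late binding.

```python
# Toy sketch of sampling-based distributed scheduling in the spirit of
# Sparrow: probe d random workers, pick the least-loaded one. No global
# cluster state is consulted. Names and the load model are illustrative.
import random

def schedule(task, worker_loads, d=2, rng=random):
    """Probe d random workers and enqueue the task at the one with the
    shortest queue. worker_loads: {worker: queue_length}."""
    probed = rng.sample(list(worker_loads), d)
    best = min(probed, key=lambda w: worker_loads[w])
    worker_loads[best] += 1          # task joins that worker's queue
    return best

rng = random.Random(0)               # seeded for repeatability
loads = {"w1": 0, "w2": 5, "w3": 1}
chosen = schedule("t1", loads, d=2, rng=rng)
```

Probing d ≥ 2 workers rather than one is the classic "power of two choices" effect: it sharply reduces the chance of landing on a long queue while keeping per-task scheduling work constant.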
{ "cite_N": [ "@cite_9", "@cite_3", "@cite_15", "@cite_10", "@cite_11" ], "mid": [ "2102849319", "2248732043", "2120422789", "1471655812", "1464363888" ], "abstract": [ "Large-scale data analytics frameworks are shifting towards shorter task durations and larger degrees of parallelism to provide low latency. Scheduling highly parallel jobs that complete in hundreds of milliseconds poses a major challenge for task schedulers, which will need to schedule millions of tasks per second on appropriate machines while offering millisecond-level latency and high availability. We demonstrate that a decentralized, randomized sampling approach provides near-optimal performance while avoiding the throughput and availability limitations of a centralized design. We implement and deploy our scheduler, Sparrow, on a 110-machine cluster and demonstrate that Sparrow performs within 12% of an ideal scheduler.", "This paper addresses the problem of efficient scheduling of large clusters under high load and heterogeneous workloads. A heterogeneous workload typically consists of many short jobs and a small number of large jobs that consume the bulk of the cluster's resources. Recent work advocates distributed scheduling to overcome the limitations of centralized schedulers for large clusters with many competing jobs. Such distributed schedulers are inherently scalable, but may make poor scheduling decisions because of limited visibility into the overall resource usage in the cluster. In particular, we demonstrate that under high load, short jobs can fare poorly with such a distributed scheduler. We propose instead a new hybrid centralized distributed scheduler, called Hawk. In Hawk, long jobs are scheduled using a centralized scheduler, while short ones are scheduled in a fully distributed way. Moreover, a small portion of the cluster is reserved for the use of short jobs. 
In order to compensate for the occasional poor decisions made by the distributed scheduler, we propose a novel and efficient randomized work-stealing algorithm. We evaluate Hawk using a trace-driven simulation and a prototype implementation in Spark. In particular, using a Google trace, we show that under high load, compared to the purely distributed Sparrow scheduler, Hawk improves the 50th and 90th percentile runtimes by 80% and 90% for short jobs and by 35% and 10% for long jobs, respectively. Measurements of a prototype implementation using Spark on a 100-node cluster confirm the results of the simulation.", "Increasing scale and the need for rapid response to changing requirements are hard to meet with current monolithic cluster scheduler architectures. This restricts the rate at which new features can be deployed, decreases efficiency and utilization, and will eventually limit cluster growth. We present a novel approach to address these needs using parallelism, shared state, and lock-free optimistic concurrency control. We compare this approach to existing cluster scheduler designs, evaluate how much interference between schedulers occurs and how much it matters in practice, present some techniques to alleviate it, and finally discuss a use case highlighting the advantages of our approach -- all driven by real-life Google production workloads.", "Datacenter-scale computing for analytics workloads is increasingly common. High operational costs force heterogeneous applications to share cluster resources for achieving economy of scale. Scheduling such large and diverse workloads is inherently hard, and existing approaches tackle this in two alternative ways: 1) centralized solutions offer strict, secure enforcement of scheduling invariants (e.g., fairness, capacity) for heterogeneous applications, 2) distributed solutions offer scalable, efficient scheduling for homogeneous applications. 
We argue that these solutions are complementary, and advocate a blended approach. Concretely, we propose Mercury, a hybrid resource management framework that supports the full spectrum of scheduling, from centralized to distributed. Mercury exposes a programmatic interface that allows applications to trade-off between scheduling overhead and execution guarantees. Our framework harnesses this flexibility by opportunistically utilizing resources to improve task throughput. Experimental results on production-derived workloads show gains of over 35% in task throughput. These benefits can be translated by appropriate application and framework policies into job throughput or job latency improvements. We have implemented and contributed Mercury as an extension of Apache Hadoop YARN.", "Efficiently scheduling data-parallel computation jobs over cloud-scale computing clusters is critical for job performance, system throughput, and resource utilization. It is becoming even more challenging with growing cluster sizes and more complex workloads with diverse characteristics. This paper presents Apollo, a highly scalable and coordinated scheduling framework, which has been deployed on production clusters at Microsoft to schedule thousands of computations with millions of tasks efficiently and effectively on tens of thousands of machines daily. The framework performs scheduling decisions in a distributed manner, utilizing global cluster information via a loosely coordinated mechanism. Each scheduling decision considers future resource availability and optimizes various performance and system factors together in a single unified model. Apollo is robust, with means to cope with unexpected system dynamics, and can take advantage of idle system resources gracefully while supplying guaranteed resources when needed." ] }
1704.06360
2609602893
We present SwellShark, a framework for building biomedical named entity recognition (NER) systems quickly and without hand-labeled data. Our approach views biomedical resources like lexicons as function primitives for autogenerating weak supervision. We then use a generative model to unify and denoise this supervision and construct large-scale, probabilistically labeled datasets for training high-accuracy NER taggers. In three biomedical NER tasks, SwellShark achieves competitive scores with state-of-the-art supervised benchmarks using no hand-labeled training data. In a drug name extraction task using patient medical records, one domain expert using SwellShark achieved within 5.1% of a crowdsourced annotation approach -- which originally utilized 20 teams over the course of several weeks -- in 24 hours.
Leveraging existing resources to heuristically label data has received considerable research interest. Distant supervision @cite_11 @cite_7 uses knowledge bases to supervise relation extraction tasks. Recent methods incorporate more generalized knowledge into extraction systems. Prior work used Markov Logic Networks to encode commonsense domain knowledge like "home teams are more likely to win a game" and generate weak training examples. SwellShark is informed by these methods, but uses a generative model to unify and model noise across different supervision sources.
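The core idea of unifying several noisy supervision sources can be sketched with a tiny accuracy-weighted vote over labeling heuristics. SwellShark fits a full generative model to learn these weights and their correlations; the fixed-weight sigmoid vote below is only a hypothetical stand-in, and the labeling functions and numbers are invented for illustration.

```python
# Toy sketch of denoising weak supervision: several heuristic labeling
# functions vote on each example, and an accuracy-weighted vote turns
# their votes into probabilistic labels. A stand-in for the generative
# model used in the actual framework; weights here are assumed, not
# learned.
import math

def weighted_vote(votes, weights):
    """votes: per-example lists of labels in {+1, -1, 0 = abstain}.
    Returns a probabilistic label in [0, 1] per example."""
    probs = []
    for vs in votes:
        score = sum(w * v for w, v in zip(weights, vs))
        probs.append(1.0 / (1.0 + math.exp(-score)))  # sigmoid
    return probs

# Three hypothetical labeling functions: two lexicon lookups, one pattern.
votes = [[+1, +1, 0],    # two sources agree positive, one abstains
         [-1, +1, -1],   # sources conflict
         [0, 0, 0]]      # all abstain -> uninformative 0.5
probs = weighted_vote(votes, weights=[1.0, 0.5, 1.0])
```

These probabilistic labels can then train a discriminative tagger, which is the general pattern the cited weak-supervision methods share.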
{ "cite_N": [ "@cite_7", "@cite_11" ], "mid": [ "2107598941", "1954715867" ], "abstract": [ "Modern models of relation extraction for tasks like ACE are based on supervised learning of relations from small hand-labeled corpora. We investigate an alternative paradigm that does not require labeled corpora, avoiding the domain dependence of ACE-style algorithms, and allowing the use of corpora of any size. Our experiments use Freebase, a large semantic database of several thousand relations, to provide distant supervision. For each pair of entities that appears in some Freebase relation, we find all sentences containing those entities in a large unlabeled corpus and extract textual features to train a relation classifier. Our algorithm combines the advantages of supervised IE (combining 400,000 noisy pattern features in a probabilistic classifier) and unsupervised IE (extracting large numbers of relations from large corpora of any domain). Our model is able to extract 10,000 instances of 102 relations at a precision of 67.6%. We also analyze feature performance, showing that syntactic parse features are particularly helpful for relations that are ambiguous or lexically distant in their expression.", "Recently, there has been much effort in making databases for molecular biology more accessible and interoperable. However, information in text form, such as MEDLINE records, remains a greatly underutilized source of biological information. We have begun a research effort aimed at automatically mapping information from text sources into structured representations, such as knowledge bases. Our approach to this task is to use machine-learning methods to induce routines for extracting facts from text. We describe two learning methods that we have applied to this task, a statistical text classification method and a relational learning method, and our initial experiments in learning such information-extraction routines. 
We also present an approach to decreasing the cost of learning information-extraction routines by learning from \"weakly\" labeled training data." ] }
1704.06393
2608870981
Neural machine translation (NMT) has become a new approach to machine translation and generates much more fluent results compared to statistical machine translation (SMT). However, SMT is usually better than NMT in translation adequacy. It is therefore a promising direction to combine the advantages of both NMT and SMT. In this paper, we propose a neural system combination framework leveraging multi-source NMT, which takes as input the outputs of NMT and SMT systems and produces the final translation. Extensive experiments on the Chinese-to-English translation task show that our model achieves a significant improvement of 5.3 BLEU points over the best single system output and 3.4 BLEU points over the state-of-the-art traditional system combination methods.
The recently proposed neural machine translation has drawn more and more attention. Most of the existing approaches and models mainly focus on designing better attention models @cite_28 @cite_30 @cite_37 @cite_31 @cite_22 , better strategies for handling rare and unknown words @cite_36 @cite_24 @cite_25 @cite_20 , exploiting large-scale monolingual data @cite_26 @cite_2 @cite_9 , and integrating SMT techniques @cite_14 @cite_6 @cite_33 .
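The attention models cited above share one core computation: score each encoder state against the current decoder state, normalize the scores with a softmax, and take the weighted sum as a context vector. A minimal dot-product version of that shared step, with toy two-dimensional vectors (the shapes and values are illustrative, and real models add learned projections):

```python
# Minimal dot-product attention: score, softmax, weighted sum. This is
# the computation the cited attention variants build on; shapes and
# inputs are illustrative.
import math

def attention(decoder_state, encoder_states):
    scores = [sum(d * e for d, e in zip(decoder_state, h))
              for h in encoder_states]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]          # stable softmax
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(encoder_states[0])
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(dim)]
    return weights, context

weights, context = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

The decoder state attends most to the encoder state it aligns with (the first one here), which is exactly the soft-alignment behavior the coverage and local-attention variants cited above then refine.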
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_26", "@cite_14", "@cite_22", "@cite_33", "@cite_28", "@cite_36", "@cite_9", "@cite_6", "@cite_24", "@cite_2", "@cite_31", "@cite_25", "@cite_20" ], "mid": [ "", "", "2422843715", "2195405088", "2534200568", "2566564022", "2949335953", "", "", "2951973027", "2577335011", "2963216553", "2410539690", "2542860122", "1816313093" ], "abstract": [ "", "", "While end-to-end neural machine translation (NMT) has made remarkable progress recently, NMT systems only rely on parallel corpora for parameter estimation. Since parallel corpora are usually limited in quantity, quality, and coverage, especially for low-resource languages, it is appealing to exploit monolingual corpora to improve NMT. We propose a semi-supervised approach for training NMT models on the concatenation of labeled (parallel corpora) and unlabeled (monolingual corpora) data. The central idea is to reconstruct the monolingual corpora using an autoencoder, in which the source-to-target and target-to-source translation models serve as the encoder and decoder, respectively. Our approach can not only exploit the monolingual corpora of the target language, but also of the source language. Experiments on the Chinese-English dataset show that our approach achieves significant improvements over state-of-the-art SMT and NMT systems.", "We propose minimum risk training for end-to-end neural machine translation. Unlike conventional maximum likelihood estimation, minimum risk training is capable of optimizing model parameters directly with respect to arbitrary evaluation metrics, which are not necessarily differentiable. Experiments show that our approach achieves significant improvements over maximum likelihood estimation on a state-of-the-art neural machine translation system across various languages pairs. 
Transparent to architectures, our approach can be applied to more neural networks and potentially benefit more NLP tasks.", "", "Neural machine translation (NMT) conducts end-to-end translation with a source language encoder and a target language decoder, making promising translation performance. However, as a newly emerged approach, the method has some limitations. An NMT system usually has to apply a vocabulary of certain size to avoid the time-consuming training and decoding, thus it causes a serious out-of-vocabulary problem. Furthermore, the decoder lacks a mechanism to guarantee all the source words to be translated and usually favors short translations, resulting in fluent but inadequate translations. In order to solve the above problems, we incorporate statistical machine translation (SMT) features, such as a translation model and an n-gram language model, with the NMT model under the log-linear framework. Our experiments show that the proposed method significantly improves the translation quality of the state-of-the-art NMT system on Chinese-to-English translation tasks. Our method produces a gain of up to 2.33 BLEU score on NIST open test sets.", "An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. 
Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.", "", "", "This paper describes the AMU-UEDIN submissions to the WMT 2016 shared task on news translation. We explore methods of decode-time integration of attention-based neural translation models with phrase-based statistical machine translation. Efficient batch-algorithms for GPU-querying are proposed and implemented. For English-Russian, our system stays behind the state-of-the-art pure neural models in terms of BLEU. Among restricted systems, manual evaluation places it in the first cluster tied with the pure neural model. For the Russian-English task, our submission achieves the top BLEU result, outperforming the best pure neural system by 1.1 BLEU points and our own phrase-based baseline by 1.6 BLEU. After manual evaluation, this system is the best restricted system in its own cluster. In follow-up experiments we improve results by additional 0.8 BLEU.", "Neural Machine translation has shown promising results in recent years. In order to control the computational complexity, NMT has to employ a small vocabulary, and massive rare words outside the vocabulary are all replaced with a single unk symbol. Besides the inability to translate rare words, this kind of simple approach leads to much increased ambiguity of the sentences since meaningless unks break the structure of sentences, and thus hurts the translation and reordering of the in-vocabulary words. To tackle this problem, we propose a novel substitution-translation-restoration method. In substitution step, the rare words in a testing sentence are replaced with similar in-vocabulary words based on a similarity model learnt from monolingual data. 
In translation and restoration steps, the sentence will be translated with a model trained on new bilingual data with rare words replaced, and finally the translations of the replaced words will be substituted by that of original ones. Experiments on Chinese-to-English translation demonstrate that our proposed method can achieve more than 4 BLEU points over the attention-based NMT. When compared to the recently proposed method handling rare words in NMT, our method can also obtain an improvement by nearly 3 BLEU points.", "", "Attention mechanism has enhanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. It tends to ignore past alignment information, however, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT in this paper. We maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which lets NMT system to consider more about untranslated source words. Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT.", "Neural Machine Translation (NMT) has become the new state-of-the-art in several language pairs. However, it remains a challenging problem how to integrate NMT with a bilingual dictionary which mainly contains words rarely or never seen in the bilingual training data. In this paper, we propose two methods to bridge NMT and the bilingual dictionaries. The core idea behind is to design novel models that transform the bilingual dictionaries into adequate sentence pairs, so that NMT can distil latent bilingual mappings from the ample and repetitive phenomena. One method leverages a mixed word character model and the other attempts at synthesizing parallel sentences guaranteeing massive occurrence of the translation lexicon. 
Extensive experiments demonstrate that the proposed methods can remarkably improve the translation quality, and most of the rare words in the test sentences can obtain correct translations if they are covered by the dictionary.", "Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem. Previous work addresses the translation of out-of-vocabulary words by backing off to a dictionary. In this paper, we introduce a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units. This is based on the intuition that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations). We discuss the suitability of different word segmentation techniques, including simple character n-gram models and a segmentation based on the byte pair encoding compression algorithm, and empirically show that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English-German and English-Russian by 1.1 and 1.3 BLEU, respectively." ] }
1704.06393
2608870981
Neural machine translation (NMT) has become a new approach to machine translation and generates much more fluent results compared to statistical machine translation (SMT). However, SMT is usually better than NMT in translation adequacy. It is therefore a promising direction to combine the advantages of both NMT and SMT. In this paper, we propose a neural system combination framework leveraging multi-source NMT, which takes as input the outputs of NMT and SMT systems and produces the final translation. Extensive experiments on the Chinese-to-English translation task show that our model achieves a significant improvement of 5.3 BLEU points over the best single system output and 3.4 BLEU points over the state-of-the-art traditional system combination methods.
Our focus in this work is to take advantage of both NMT and SMT through system combination, which attempts to find consensus translations among different machine translation systems. In the past several years, word-level, phrase-level and sentence-level system combination methods were well studied @cite_4 @cite_34 @cite_29 @cite_27 @cite_3 @cite_1 @cite_35 @cite_0 , and reported state-of-the-art performance on SMT benchmarks. Here, we propose a neural system combination model which combines the advantages of NMT and SMT efficiently.
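The consensus idea behind sentence-level combination can be sketched by selecting the hypothesis with the highest average word overlap with all the other systems' outputs, a crude minimum-Bayes-risk-style criterion. This is only an illustration of "consensus translation"; the cited methods use confusion networks or alignment-based fusion, and this paper instead trains a multi-source NMT model over the system outputs.

```python
# Toy sketch of sentence-level consensus combination: among the systems'
# hypotheses, pick the one most supported (by word overlap) by the rest.
# The overlap metric and example sentences are illustrative.

def consensus(hypotheses):
    def overlap(a, b):
        aw, bw = a.split(), b.split()
        inter = len(set(aw) & set(bw))
        return 2.0 * inter / (len(aw) + len(bw))   # Dice-style overlap

    def avg_support(h):
        others = [o for o in hypotheses if o is not h]
        return sum(overlap(h, o) for o in others) / len(others)

    return max(hypotheses, key=avg_support)

# Hypothetical outputs from three MT systems for one source sentence.
hyps = ["the cat sat on the mat",
        "a cat sat on a mat",
        "the cat is on the mat"]
best = consensus(hyps)
```

Word-level methods go further by building a confusion network over aligned words and decoding a new sentence, rather than selecting a whole existing hypothesis as done here.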
{ "cite_N": [ "@cite_35", "@cite_4", "@cite_29", "@cite_1", "@cite_3", "@cite_0", "@cite_27", "@cite_34" ], "mid": [ "2250849168", "1483849869", "", "", "", "2475913055", "2387720973", "2117139364" ], "abstract": [ "In this paper, we propose a paraphrasing model to address the task of system combination for machine translation. We dynamically learn hierarchical paraphrases from target hypotheses and form a synchronous context-free grammar to guide a series of transformations of target hypotheses into fused translations. The model is able to exploit phrasal and structural system-weighted consensus and also to utilize existing information about word ordering present in the target hypotheses. In addition, to consider a diverse set of plausible fused translations, we develop a hybrid combination architecture, where we paraphrase every target hypothesis using different fusing techniques to obtain fused translations for each target, and then make the final selection among all fused translations. Our experimental results show that our approach can achieve a significant improvement over combination baselines.", "We address the problem of computing a consensus translation given the outputs from a set of machine translation (MT) systems. The translations from the MT systems are aligned with a multiple string alignment algorithm and the consensus translation is then computed. We describe the multiple string alignment algorithm and the consensus MT hypothesis computation. We report on the subjective and objective performance of the multilingual acquisition approach on a limited domain spoken language application. 
We evaluate five domain-independent off-the-shelf MT systems and show that the consensus-based translation performance is equal to or better than any of the given MT systems, in terms of both objective and subjective measures.", "", "", "", "In this paper, we propose to enhance machine translation system combination (MTSC) with a sentence-level paraphrasing model trained by a neural network. This work extends the number of candidates in MTSC by paraphrasing the whole original MT translation sentences. First we train a neural paraphrasing model of Encoder-Decoder, and leverage the model to paraphrase the MT system outputs to generate synonymous candidates in the semantic space. Then we merge all of them into a single improved translation by a state-of-the-art system combination approach (MEMT) adding some new paraphrasing features. Our experimental results show a significant improvement of 0.28 BLEU points on the WMT2011 test data and 0.41 BLEU points without considering the out-of-vocabulary (OOV) words for the sentence-level paraphrasing model.", "This paper reports on the participation of CASIA (Institute of Automation Chinese Academy of Sciences) at the evaluation campaign of the International Workshop on Spoken Language Translation 2009. We participated in the challenge tasks for Chinese-toEnglish and English-to-Chinese translation respectively and the BTEC task for Chinese-to-English translation only. 
For all of the tasks, system performance is improved with some special methods as follows: 1) combining different results of Chinese word segmentation, 2) combining different results of word alignments, 3) adding reliable bilingual words with high probabilities to the training data, 4) handling named entities including person names, location names, organization names, temporal and numerical expressions additionally, 5) combining and selecting translations from the outputs of multiple translation engines, 6) replacing Chinese character with Chinese Pinyin to train the translation model for Chinese-toEnglish ASR challenge task. This is a new approach that has never been introduced before.", "Confusion network decoding has been the most successful approach in combining outputs from multiple machine translation (MT) systems in the recent DARPA GALE and NIST Open MT evaluations. Due to the varying word order between outputs from different MT systems, the hypothesis alignment presents the biggest challenge in confusion network decoding. This paper describes an incremental alignment method to build confusion networks based on the translation edit rate (TER) algorithm. This new algorithm yields significant BLEU score improvements over other recent alignment methods on the GALE test sets and was used in BBN's submission to the WMT08 shared translation task." ] }
1704.06456
2951728675
Social relations are the foundation of human daily life. Developing techniques to analyze such relations from visual data bears great potential to build machines that better understand us and are capable of interacting with us at a social level. Previous investigations have remained partial due to the overwhelming diversity and complexity of the topic and consequently have only focused on a handful of social relations. In this paper, we argue that the domain-based theory from social psychology is a great starting point to systematically approach this problem. The theory provides coverage of all aspects of social relations and equally is concrete and predictive about the visual attributes and behaviors defining the relations included in each domain. We provide the first dataset built on this holistic conceptualization of social life that is composed of a hierarchical label space of social domains and social relations. We also contribute the first models to recognize such domains and relations and find superior performance for attribute based features. Beyond the encouraging performance of the attribute based approach, we also find interpretable features that are in accordance with the predictions from social psychology literature. Beyond our findings, we believe that our contributions more tightly interleave visual recognition and social psychology theory that has the potential to complement the theoretical work in the area with empirical and data-driven models of social life.
Social relations are a significant part of social network studies @cite_15 @cite_10 @cite_42 . This section focuses on related work in computer vision, while the next section outlines different theories in the psychology literature.
{ "cite_N": [ "@cite_15", "@cite_42", "@cite_10" ], "mid": [ "1528113760", "2261926893", "2053415806" ], "abstract": [ "Part 1: Social Analysis, Discourse Analysis, Text Analysis 1. Introduction 2. Texts, Social Events, and Social Practices 3. Intertextuality and Assumptions Part 2: Genres and Action 4. Genres 5. Meaning Relations between Sentences and Clauses 6. Types of Exchange, Speech Functions, and Grammatical Mood Part 3: Discourses and Representations 7. Discourses 8. Representations of Social Events Part 4: Styles and Identities 9. Styles 10. Modality and Evaluation 11. Conclusion", "Photos are an important information carrier for implicit relationships. In this article, we introduce an image based social network, called CelebrityNet, built from implicit relationships encoded in a collection of celebrity images. We analyze the social properties reflected in this image-based social network and automatically infer communities among the celebrities. We demonstrate the interesting discoveries of the CelebrityNet. We particularly compare the inferred communities with human manually labeled ones and show quantitatively that the automatically detected communities are highly aligned with that of human interpretation. Inspired by the uniqueness of visual content and tag concepts within each community of the CelebrityNet, we further demonstrate that the constructed social network can serve as a knowledge base for high-level visual recognition tasks. In particular, this social network is capable of significantly improving the performance of automatic image annotation and classification of unknown images.", "We introduce a method for extracting the social network structure for the persons appearing in a set of video clips. Individuals are unknown, and are not matched against known enrollments. An identity cluster representing an individual is formed by grouping similar-appearing faces from different videos. 
Each identity cluster is represented by a node in the social network. Two nodes are linked if the faces from their clusters appeared together in one or more video frames. Our approach incorporates a novel active clustering technique to create more accurate identity clusters based on feedback from the user about ambiguously matched faces. The final output consists of one or more network structures that represent the social group(s), and a list of persons who potentially connect multiple social groups. Our results demonstrate the efficacy of the proposed clustering algorithm and network analysis techniques." ] }
1704.06456
2951728675
Social relations are the foundation of human daily life. Developing techniques to analyze such relations from visual data bears great potential to build machines that better understand us and are capable of interacting with us at a social level. Previous investigations have remained partial due to the overwhelming diversity and complexity of the topic and consequently have only focused on a handful of social relations. In this paper, we argue that the domain-based theory from social psychology is a great starting point to systematically approach this problem. The theory provides coverage of all aspects of social relations and equally is concrete and predictive about the visual attributes and behaviors defining the relations included in each domain. We provide the first dataset built on this holistic conceptualization of social life that is composed of a hierarchical label space of social domains and social relations. We also contribute the first models to recognize such domains and relations and find superior performance for attribute based features. Beyond the encouraging performance of the attribute based approach, we also find interpretable features that are in accordance with the predictions from social psychology literature. Beyond our findings, we believe that our contributions more tightly interleave visual recognition and social psychology theory that has the potential to complement the theoretical work in the area with empirical and data-driven models of social life.
Relationships among family members are the most basic social relations for humans. There exists a large body of work on family member recognition and kinship verification @cite_36 @cite_20 @cite_44 @cite_13 @cite_12 @cite_5 @cite_11 @cite_31 @cite_17 . Most of these works focus on familial relations: husband-wife, parents-children, siblings, and grandparents-grandchildren. Researchers leverage certain visual patterns exhibited in these relations. For instance, for two people in a wife-husband relation, the husband's face usually appears higher in the image than the wife's @cite_36 @cite_44 . Not only location information but also facial appearance, attributes, and landmarks are essential features for verifying family members. Dehghan et al. @cite_31 learn the optimal face features to answer the question "Do offspring resemble their parents?". Singla et al. @cite_20 propose attribute-related assumptions, e.g., that two people of similar age and opposite gender appearing together are spouses.
{ "cite_N": [ "@cite_36", "@cite_17", "@cite_44", "@cite_5", "@cite_31", "@cite_20", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "", "2186708012", "", "", "1972014290", "2118778308", "70933537", "1996337501", "" ], "abstract": [ "", "The group membership prediction (GMP) problem involves predicting whether or not a collection of instances share a certain semantic property. For instance, in kinship verification given a collection of images, the goal is to predict whether or not they share a familial relationship. In this context we propose a novel probability model and introduce latent view-specific and view-shared random variables to jointly account for the view-specific appearance and cross-view similarities among data instances. Our model posits that data from each view is independent conditioned on the shared variables. This postulate leads to a parametric probability model that decomposes group membership likelihood into a tensor product of data-independent parameters and data-dependent factors. We propose learning the data-independent parameters in a discriminative way with bilinear classifiers, and test our prediction algorithm on challenging visual recognition tasks such as multi-camera person re-identification and kinship verification. On most benchmark datasets, our method can significantly outperform the current state-of-the-art.", "", "", "Recent years have seen a major push for face recognition technology due to the large expansion of image sharing on social networks. In this paper, we consider the difficult task of determining parent-offspring resemblance using deep learning to answer the question \"Who do I look like?\" Although humans can perform this job at a rate higher than chance, it is not clear how they do it [2]. However, recent studies in anthropology [24] have determined which features tend to be the most discriminative. 
In this study, we aim to not only create an accurate system for resemblance detection, but bridge the gap between studies in anthropology with computer vision techniques. Further, we aim to answer two key questions: 1) Do offspring resemble their parents? and 2) Do offspring resemble one parent more than the other? We propose an algorithm that fuses the features and metrics discovered via gated autoencoders with a discriminative neural network layer that learns the optimal, or what we call genetic, features to delineate parent-offspring relationships. We further analyze the correlation between our automatically detected features and those found in anthropological studies. Meanwhile, our method outperforms the state-of-the-art in kinship verification by 3-10 depending on the relationship using specific (father-son, mother-daughter, etc.) and generic models.", "We identify the social relationships between individuals in consumer photos. Consumer photos generally do not contain a random gathering of strangers but rather groups of friends and families. Detecting and identifying these relationships are important steps towards understanding consumer image collections. Similar to the approach that a human might use, we use a rule-based system to quantify the domain knowledge (e.g. children tend to be photographed more often than adults; parents tend to appear with their kids). The weight of each rule reflects its importance in the overall prediction model. Learning and inference are based on a sound mathematical formulation using the theory developed in the area of statistical relational models. In particular, we use the language called Markov Logic [14]. We evaluate our model using cross validation on a set of about 4500 photos collected from 13 different users. 
Our experiments show the potential of our approach by improving the accuracy (as well as other statistical measures) over a set of two different relationship prediction tasks when compared with different baselines. We conclude with directions for future work.", "This chapter studies the problem of identifying people in group pictures. That is, determining from a gallery of people who appear in a given picture. This is a well-studied problem that is becoming increasingly important given the recent explosion in usage of social networks. In this chapter we make two distinct contributions to this problem. First, we use novel kinship similarity to make better estimation of identity. Specifically, we use unary costs based on state-of-the-art face recognition algorithms and as pairwise cost we use the kinship similarity of the people in the image. Second, with these values we formulate a collection-specific MRF MAP estimation (labelling) problem and use existing MRF MAP estimation methods to solve it. To evaluate the proposed method, a family photo database is collected from the Internet. Experiments show that for group pictures of family members (family pictures) our method obtains the state-of-the-art performance, while performing competitively in nonfamily group pictures.", "There is an urgent need to organize and manage images of people automatically due to the recent explosion of such data on the Web in general and in social media in particular. Beyond face detection and face recognition, which have been extensively studied over the past decade, perhaps the most interesting aspect related to human-centered images is the relationship of people in the image. In this work, we focus on a novel solution to the latter problem, in particular the kin relationships. To this end, we constructed two databases: the first one named UB KinFace Ver2.0, which consists of images of children, their young parents and old parents, and the second one named FamilyFace. 
Next, we develop a transfer subspace learning based algorithm in order to reduce the significant differences in the appearance distributions between children and old parents facial images. Moreover, by exploring the semantic relevance of the associated metadata, we propose an algorithm to predict the most likely kin relationships embedded in an image. In addition, human subjects are used in a baseline study on both databases. Experimental results have shown that the proposed algorithms can effectively annotate the kin relationships among people in an image and semantic context can further improve the accuracy.", "" ] }
1704.06456
2951728675
Social relations are the foundation of human daily life. Developing techniques to analyze such relations from visual data bears great potential to build machines that better understand us and are capable of interacting with us at a social level. Previous investigations have remained partial due to the overwhelming diversity and complexity of the topic and consequently have only focused on a handful of social relations. In this paper, we argue that the domain-based theory from social psychology is a great starting point to systematically approach this problem. The theory provides coverage of all aspects of social relations and equally is concrete and predictive about the visual attributes and behaviors defining the relations included in each domain. We provide the first dataset built on this holistic conceptualization of social life that is composed of a hierarchical label space of social domains and social relations. We also contribute the first models to recognize such domains and relations and find superior performance for attribute based features. Beyond the encouraging performance of the attribute based approach, we also find interpretable features that are in accordance with the predictions from social psychology literature. Beyond our findings, we believe that our contributions more tightly interleave visual recognition and social psychology theory that has the potential to complement the theoretical work in the area with empirical and data-driven models of social life.
Based on the social domain definition in @cite_28 , familial relations between adults and offspring fall into the Attachment domain, for which attribute categories such as age, gender, and emotion are essential cues. The sibling relation is categorized into the Reciprocity domain, which exhibits more functional and appearance equality than the Attachment domain. This is also consistent with the visual patterns of siblings.
{ "cite_N": [ "@cite_28" ], "mid": [ "2040658249" ], "abstract": [ "Proposing that the algorithms of social life are acquired as a domain-based process, the author offers distinctions between social domains preparing the individual for proximity-maintenance within a protective relationship (attachment domain), use and recognition of social dominance (hierarchical power domain), identification and maintenance of the lines dividing \"us\" and \"them\" (coalitional group domain), negotiation of matched benefits with functional equals (reciprocity domain), and selection and protection of access to sexual partners (mating domain). Flexibility in the implementation of domains occurs at 3 different levels: versatility at a bioecological level, variations in the cognitive representation of individual experience, and cultural and individual variations in the explicit management of social life. Empirical evidence for domain specificity was strongest for the attachment domain; supportive evidence was also found for the distinctiveness of the 4 other domains. Implications are considered at theoretical and applied levels." ] }
1704.06456
2951728675
Social relations are the foundation of human daily life. Developing techniques to analyze such relations from visual data bears great potential to build machines that better understand us and are capable of interacting with us at a social level. Previous investigations have remained partial due to the overwhelming diversity and complexity of the topic and consequently have only focused on a handful of social relations. In this paper, we argue that the domain-based theory from social psychology is a great starting point to systematically approach this problem. The theory provides coverage of all aspects of social relations and equally is concrete and predictive about the visual attributes and behaviors defining the relations included in each domain. We provide the first dataset built on this holistic conceptualization of social life that is composed of a hierarchical label space of social domains and social relations. We also contribute the first models to recognize such domains and relations and find superior performance for attribute based features. Beyond the encouraging performance of the attribute based approach, we also find interpretable features that are in accordance with the predictions from social psychology literature. Beyond our findings, we believe that our contributions more tightly interleave visual recognition and social psychology theory that has the potential to complement the theoretical work in the area with empirical and data-driven models of social life.
In social events, there are immediate social roles and inherent relations among participants. The notion of "social roles" here models the expected behaviors of certain people @cite_45 @cite_21 @cite_22 @cite_40 . For example, in a child's birthday party, the social roles are the birthday child, parents, friends, and guests @cite_45 . Instead of immediate roles, we focus on identity-specified interpersonal relations, which naturally derive permanent social roles. For example, once a "leader and subordinate" relation is confirmed, it is easy to define the leader's social role as a manager or boss, which is much more permanent than "a guest at a party". More importantly, our social relation definition is based on psychology studies that suggest comprehensive social scopes across people's lives.
{ "cite_N": [ "@cite_40", "@cite_22", "@cite_45", "@cite_21" ], "mid": [ "2075285529", "2952390756", "1989004008", "2057067088" ], "abstract": [ "In this paper, we present a method for inferring social roles of agents (persons) from their daily activities in long surveillance video sequences. We define activities as interactions between an agent's position and semantic hotspots within the scene. Given a surveillance video, our method first tracks the locations of agents then automatically discovers semantic hotspots in the scene. By enumerating spatial temporal locations between an agent's feet and hotspots in a scene, we define a set of atomic actions, which in turn compose sub-events and events. The numbers and types of events performed by an agent are assumed to be driven by his her social role. With the grammar model induced by composition rules, an adapted Earley parser algorithm is used to parse the trajectories into events, sub-events and atomic actions. With probabilistic output of events, the roles of agents can be predicted under the Bayesian inference framework. Experiments are carried out on a challenging 8.5 hours video from a surveillance camera in the lobby of a research lab. The video contains 7 different social roles including “manager”, “researcher”, “developer”, “engineer”, “staff”, “visitor” and “mailman”. Results show that our proposed method can predict the role of each agent with high precision.", "With the advent of drones, aerial video analysis becomes increasingly important; yet, it has received scant attention in the literature. This paper addresses a new problem of parsing low-resolution aerial videos of large spatial areas, in terms of 1) grouping, 2) recognizing events and 3) assigning roles to people engaged in events. We propose a novel framework aimed at conducting joint inference of the above tasks, as reasoning about each in isolation typically fails in our setting. 
Given noisy tracklets of people and detections of large objects and scene surfaces (e.g., building, grass), we use a spatiotemporal AND-OR graph to drive our joint inference, using Markov Chain Monte Carlo and dynamic programming. We also introduce a new formalism of spatiotemporal templates characterizing latent sub-events. For evaluation, we have collected and released a new aerial videos dataset using a hex-rotor flying over picnic areas rich with group events. Our results demonstrate that we successfully address above inference tasks under challenging conditions.", "We deal with the problem of recognizing social roles played by people in an event. Social roles are governed by human interactions, and form a fundamental component of human event description. We focus on a weakly supervised setting, where we are provided different videos belonging to an event class, without training role labels. Since social roles are described by the interaction between people in an event, we propose a Conditional Random Field to model the inter-role interactions, along with person specific social descriptors. We develop tractable variational inference to simultaneously infer model weights, as well as role assignment to all people in the videos. We also present a novel YouTube social roles dataset with ground truth role annotations, and introduce annotations on a subset of videos from the TRECVID-MED11 [1] event kits for evaluation purposes. The performance of the model is compared against different baseline methods on these datasets.", "We present a hierarchical model for human activity recognition in entire multi-person scenes. Our model describes human behaviour at multiple levels of detail, ranging from low-level actions through to high-level events. We also include a model of social roles, the expected behaviours of certain people, or groups of people, in a scene. 
The hierarchical model includes these varied representations, and various forms of interactions between people present in a scene. The model is trained in a discriminative max-margin framework. Experimental results demonstrate that this model can improve performance at all considered levels of detail, on two challenging datasets." ] }
1704.06456
2951728675
Social relations are the foundation of human daily life. Developing techniques to analyze such relations from visual data bears great potential to build machines that better understand us and are capable of interacting with us at a social level. Previous investigations have remained partial due to the overwhelming diversity and complexity of the topic and consequently have only focused on a handful of social relations. In this paper, we argue that the domain-based theory from social psychology is a great starting point to systematically approach this problem. The theory provides coverage of all aspects of social relations and equally is concrete and predictive about the visual attributes and behaviors defining the relations included in each domain. We provide the first dataset built on this holistic conceptualization of social life that is composed of a hierarchical label space of social domains and social relations. We also contribute the first models to recognize such domains and relations and find superior performance for attribute based features. Beyond the encouraging performance of the attribute based approach, we also find interpretable features that are in accordance with the predictions from social psychology literature. Beyond our findings, we believe that our contributions more tightly interleave visual recognition and social psychology theory that has the potential to complement the theoretical work in the area with empirical and data-driven models of social life.
Social life endows people with various social appearances. Some research focuses on urban tribes in daily life @cite_6 , social categories defined by Wikipedia @cite_0 @cite_1 , and popular groups such as "Loli", "Syota", and "Goddess", which are mostly derived from social networks @cite_32 . This fine-grained categorization uses body and face positions together with attributes such as age, facial appearance, hair style, and clothing style. Occupation recognition studies @cite_29 @cite_14 not only use personal attributes but also leverage contextual information at a semantic level, e.g., a waiter is more likely to stand beside seated customers in a restaurant.
{ "cite_N": [ "@cite_14", "@cite_29", "@cite_1", "@cite_32", "@cite_6", "@cite_0" ], "mid": [ "2134007045", "2155171143", "2043006422", "1925450849", "", "1993636005" ], "abstract": [ "In this paper, we investigate the problem of recognizing occupations of multiple people with arbitrary poses in a photo. Previous work utilizing single person's nearly frontal clothing information and fore background context preliminarily proves that occupation recognition is computationally feasible in computer vision. However, in practice, multiple people with arbitrary poses are common in a photo, and recognizing their occupations is even more challenging. We argue that with appropriately built visual attributes, co-occurrence, and spatial configuration model that is learned through structure SVM, we can recognize multiple people's occupations in a photo simultaneously. To evaluate our method's performance, we conduct extensive experiments on a new well-labeled occupation database with 14 representative occupations and over 7K images. Results on this database validate our method's effectiveness and show that occupation recognition is solvable in a more general case.", "Predicting human occupations in photos has great application potentials in intelligent services and systems. However, using traditional classification methods cannot reliably distinguish different occupations due to the complex relations between occupations and the low-level image features. In this paper, we investigate the human occupation prediction problem by modeling the appearances of human clothing as well as surrounding context. The human clothing, regarding its complex details and variant appearances, is described via part-based modeling on the automatically aligned patches of human body parts. The image patches are represented with semantic-level patterns such as clothes and haircut styles using methods based on sparse coding towards informative and noise-tolerant capacities. 
This description of human clothing is proved to be more effective than traditional methods. Different kinds of surrounding context are also investigated as a complementarity of human clothing features in the cases that the background information is available. Experiments are conducted on a well labeled image database that contains more than 5; 000 images from 20 representative occupation categories. The preliminary study shows the human occupation is reasonably predictable using the proposed clothing features and possible context.", "When people gather for a group photo, they are together for a social reason. Past work has shown that these social relationships affect how people position themselves in a group photograph. We propose classifying the type of group photo based on the spatial arrangement and the predicted attributes of the faces in the image. We propose a matching algorithm for finding images from a training set that have both similar arrangement of faces and attribute correspondence. We formulate the problem as a bipartite matching problem where the faces from each of the pair of images are nodes in the graph. Our work demonstrates that face arrangement, when combined with attribute (age and gender) correspondence, is a useful cue in capturing an approximate social essence of the group of people, and lets us understand why the group of people gathered for the photo.", "Human group, which indicates the people who share similar characteristics, is used to categorize humans into distinct populations or groups. In recent years, with the explosive growth of image, new concepts of human group are blooming in social networks . People in the same human group can be categorized by their facial and clothes appearance characteristics. In this work, we propose an approach to understanding the new concepts of human group with few positive samples. To this end, we construct visual models crossing two modalities related to human images and surrounding texts. 
Two convolutional neural networks based on face and upper body are constructed separately. Two different convolutional neural networks (CNNs) architectures are explored for visual pre-traing. To assist the human group recognition, we also merge global convolutional feature of the image. The surrounding texts are represented by semantical vectors and utilized as image labels. We transform words in the text into fixed length vectors by the skip-gram model. Then the texts corresponding to each image are converted into one feature vector by sparse coding and max pooling. Given a few positive samples of new concepts of human group, the visual model can be improved to understand the semantical meaning of the new label. The experimental results demonstrate the effectiveness of the proposed visual model and show the excellent learning capacity with few samples.", "", "Iljung S. Kwak1 iskwak@cs.ucsd.edu Ana C. Murillo2 acm@unizar.es Peter N. Belhumeur3 belhumeur@cs.columbia.edu David Kriegman1 kriegman@cs.ucsd.edu Serge Belongie1 sjb@cs.ucsd.edu 1 Dept. of Computer Science and Engineering University of California, San Diego, USA. 2 Dpt. Informatica e Ing. Sistemas Inst. Investigacion en Ingenieria de Aragon. University of Zaragoza, Spain. 3 Department of Computer Science Columbia University, USA." ] }
1704.06361
2952627641
We study the shared processor scheduling problem with a single shared processor where a unit time saving (weight) obtained by processing a job on the shared processor depends on the job. A polynomial-time optimization algorithm has been given for the problem with equal weights in the literature. This paper extends that result by showing an @math optimization algorithm for a class of instances in which non-decreasing order of jobs with respect to processing times provides a non-increasing order with respect to weights --- this instance generalizes the unweighted case of the problem. This algorithm also leads to a @math -approximation algorithm for the general weighted problem. The complexity of the weighted problem remains open.
We also remark on multi-agent scheduling models in which each agent has its own optimality criterion and performs actions aimed at optimizing it. In these models, which are examples of decentralized systems, agents usually have a number of non-divisible jobs to execute (depending on the optimization criterion, this may be seen as having one divisible job, restricted by allowing preemptions only at certain specified points). For minimization of weighted total completion time in such models see Lee et al. @cite_0 , and for the weighted number of tardy jobs see Cheng, Ng and Yuan @cite_2 . Bukchin and Hanany @cite_6 give an example of a game-theoretic analysis of a problem of this type. For overviews and further references on multi-agent scheduling we refer to the book by Agnetis et al. @cite_8 .
{ "cite_N": [ "@cite_0", "@cite_8", "@cite_6", "@cite_2" ], "mid": [ "2091376812", "2463480808", "2106447860", "1997032351" ], "abstract": [ "We consider a multi-agent scheduling problem on a single machine in which each agent is responsible for his own set of jobs and wishes to minimize the total weighted completion time of his own set of jobs. It is known that the unweighted problem with two agents is NP-hard in the ordinary sense. For this case, we can reduce our problem to a Multi-Objective Shortest-Path (MOSP) problem and this reduction leads to several results including Fully Polynomial Time Approximation Schemes (FPTAS). We also provide an efficient approximation algorithm with a reasonably good worst-case ratio.", "Scheduling theory has received a growing interest since its origins in the second half of the 20th century. Developed initially for the study of scheduling problems with a single objective, the theory has been recently extended to problems involving multiple criteria. However, this extension has still left a gap between the classical multi-criteria approaches and some real-life problems in which not all jobs contribute to the evaluation of each criterion. In this book, we close this gap by presenting and developing multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. Several scenarios are introduced, depending on the definition and the intersection structure of the job subsets. Complexity results, approximation schemes, heuristics and exact algorithms are discussed for single-machine and parallel-machine scheduling environments. Definitions and algorithms are illustrated with the help of examples and figures.", "Decentralized organizations may incur inefficiencies because of scheduling issues associated with competition among decision makers (DMs) for limited resources. 
We analyze the decentralization cost (DC), i.e., the ratio between the Nash equilibrium cost and the cost attained at the centralized optimum. Solution properties of a dispatching-sequencing model are derived and subsequently used to develop bounds on the DC for an arbitrary number of jobs and DMs. A scheduling-based coordinating mechanism is then provided, ensuring that the centralized solution is obtained at equilibrium.", "We consider the feasibility model of multi-agent scheduling on a single machine, where each agent's objective function is to minimize the total weighted number of tardy jobs. We show that the problem is strongly NP-complete in general. When the number of agents is fixed, we first show that the problem can be solved in pseudo-polynomial time for integral weights, and can be solved in polynomial time for unit weights; then we present a fully polynomial-time approximation scheme for the problem." ] }
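The instance class in the abstract above (a non-decreasing processing-time order that is also non-increasing in weight, generalizing the unweighted case) admits a one-pass membership check. A minimal sketch; the function name `is_agreeable` and the `(processing_time, weight)` pair encoding are our own, not from the paper:

```python
def is_agreeable(jobs):
    # jobs: list of (processing_time, weight) pairs. The instance belongs
    # to the class above iff some non-decreasing processing-time order is
    # also non-increasing in weight; breaking processing-time ties by
    # descending weight makes the check a single pass.
    ordered = sorted(jobs, key=lambda j: (j[0], -j[1]))
    weights = [w for _, w in ordered]
    return all(weights[i] >= weights[i + 1] for i in range(len(weights) - 1))
```

Any unweighted instance (all weights equal) trivially passes the check, which mirrors the claim that the class generalizes the unweighted case.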
1704.05948
2953209229
A growing number of threats to Android phones creates challenges for malware detection. Manually labeling the samples into benign or different malicious families requires tremendous human efforts, while it is comparatively easy and cheap to obtain a large amount of unlabeled APKs from various sources. Moreover, the fast-paced evolution of Android malware continuously generates derivative malware families. These families often contain new signatures, which can escape detection when using static analysis. These practical challenges can also cause traditional supervised machine learning algorithms to degrade in performance. In this paper, we propose a framework that uses a model-based semi-supervised (MBSS) classification scheme on the dynamic Android API call logs. The semi-supervised approach efficiently uses the labeled and unlabeled APKs to estimate a finite mixture model of Gaussian distributions via conditional expectation-maximization and efficiently detects malware during out-of-sample testing. We compare MBSS with popular malware detection classifiers such as support vector machine (SVM), @math -nearest neighbor (kNN) and linear discriminant analysis (LDA). Under the ideal classification setting, MBSS has competitive performance with 98% accuracy and a very low false positive rate for in-sample classification. For out-of-sample testing, the out-of-sample test data exhibit similar behavior of retrieving phone information and sending it to the network, compared with the in-sample training set. When this similarity is strong, MBSS and SVM with a linear kernel maintain a 90% detection rate while @math NN and LDA suffer great performance degradation. When this similarity is slightly weaker, all classifiers degrade in performance, but MBSS still performs significantly better than the other classifiers.
The intersection of artificial intelligence and statistics provides machine learning with the foundation of probabilistic models and data-driven parameter estimation. Machine learning tasks can be categorized into supervised learning, unsupervised learning and semi-supervised learning. In supervised learning, the task is to predict the labels of incoming data samples given the observed training data and their labels. Traditional methods include nearest neighbor @cite_7 , support vector machine @cite_3 , decision tree @cite_33 , sparse representation classifier @cite_21 @cite_32 , etc. In unsupervised learning, the problem is to group the observations into categories based on a chosen similarity measure. Typical methods include K-means, expectation-maximization @cite_38 , @cite_28 , model-based clustering @cite_15 , @cite_24 , hierarchical clustering @cite_14 , spectral clustering @cite_16 @cite_42 , and so on. Semi-supervised learning lies between supervised and unsupervised learning: the learner makes use of unlabeled data for training and updates the model using both the labeled and unlabeled data. Typical methods include Gaussian mixture models @cite_2 , hidden Markov models, low-density separation @cite_23 , and so on.
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_33", "@cite_7", "@cite_28", "@cite_21", "@cite_42", "@cite_32", "@cite_3", "@cite_24", "@cite_23", "@cite_2", "@cite_15", "@cite_16" ], "mid": [ "2049633694", "1998871699", "", "", "", "", "2962744241", "", "2095577883", "2951748024", "", "1579271636", "2011832962", "2165874743" ], "abstract": [ "", "Techniques for partitioning objects into optimally homogeneous groups on the basis of empirical measures of similarity among those objects have received increasing attention in several different fields. This paper develops a useful correspondence between any hierarchical system of such clusters, and a particular type of distance measure. The correspondence gives rise to two methods of clustering that are computationally rapid and invariant under monotonic transformations of the data. In an explicitly defined sense, one method forms clusters that are optimally “connected,” while the other forms clusters that are optimally “compact.”", "", "", "", "", "We present a novel divide-and-conquer bijective graph matching algorithm.The algorithm is fully parallelizable, and scales to match \"big data\" graphs.We demonstrate the effectiveness of the algorithm by matching DTMRI human connectomes. We present a parallelized bijective graph matching algorithm that leverages seeds and is designed to match very large graphs. Our algorithm combines spectral graph embedding with existing state-of-the-art seeded graph matching procedures. We justify our approach by proving that modestly correlated, large stochastic block model random graphs are correctly matched utilizing very few seeds through our divide-and-conquer procedure. 
We also demonstrate the effectiveness of our approach in matching very large graphs in simulated and real data examples, showing up to a factor of 8 improvement in runtime with minimal sacrifice in accuracy.", "", "In this paper (expanded from an invited talk at AISEC 2010), we discuss an emerging field of study: adversarial machine learning---the study of effective machine learning techniques against an adversarial opponent. In this paper, we: give a taxonomy for classifying attacks against online machine learning algorithms; discuss application-specific factors that limit an adversary's capabilities; introduce two models for modeling an adversary's capabilities; explore the limits of an adversary's knowledge about the algorithm, feature space, training, and input data; explore vulnerabilities in machine learning algorithms; discuss countermeasures against attacks; introduce the evasion challenge; and discuss privacy-preserving learning techniques.", "Online advertising is an important and huge industry. Having knowledge of the website attributes can contribute greatly to business strategies for ad-targeting, content display, inventory purchase or revenue prediction. Classical inferences on users and sites impose challenge, because the data is voluminous, sparse, high-dimensional and noisy. In this paper, we introduce a stochastic blockmodeling for the website relations induced by the event of online user visitation. We propose two clustering algorithms to discover the instrinsic structures of websites, and compare the performance with a goodness-of-fit method and a deterministic graph partitioning method. 
We demonstrate the effectiveness of our algorithms on both simulation and AOL website dataset.", "", "The important role of finite mixture models in the statistical analysis of data is underscored by the ever-increasing rate at which articles on mixture applications appear in the statistical and ge...", "Cluster analysis is the automated search for groups of related observations in a dataset. Most clustering done in practice is based largely on heuristic but intuitively reasonable procedures, and most clustering methods available in commercial software are also of this type. However, there is little systematic guidance associated with these methods for solving important practical questions that arise in cluster analysis, such as how many clusters are there, which clustering method should be used, and how should outliers be handled. We review a general methodology for model-based clustering that provides a principled statistical approach to these issues. We also show that this can be useful for other problems in multivariate analysis, such as discriminant analysis and multivariate density estimation. We give examples from medical diagnosis, minefield detection, cluster recovery from noisy data, and spatial density estimation. Finally, we mention limitations of the methodology and discuss recent development...", "Despite many empirical successes of spectral clustering methods— algorithms that cluster points using eigenvectors of matrices derived from the data—there are several unresolved issues. First. there are a wide variety of algorithms that use the eigenvectors in slightly different ways. Second, many of these algorithms have no proof that they will actually compute a reasonable clustering. In this paper, we present a simple spectral clustering algorithm that can be implemented using a few lines of Matlab. Using tools from matrix perturbation theory, we analyze the algorithm, and give conditions under which it can be expected to do well. 
We also show surprisingly good experimental results on a number of challenging clustering problems." ] }
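The semi-supervised setting described in the related work above, where labeled and unlabeled data jointly update a Gaussian mixture via expectation-maximization, can be sketched in one dimension. This is a toy illustration with two classes and unit variances, not the MBSS estimator itself; all names are ours:

```python
import math

def semi_supervised_em(labeled, unlabeled, n_iter=50):
    # labeled: list of (x, class) with class in {0, 1}; unlabeled: list of x.
    # Labeled points keep hard responsibilities; unlabeled points get
    # soft E-step responsibilities under unit-variance Gaussians.
    mu = [
        sum(x for x, y in labeled if y == k) / max(1, sum(1 for _, y in labeled if y == k))
        for k in (0, 1)
    ]
    for _ in range(n_iter):
        # E-step: responsibilities of each unlabeled point for the two components
        resp = []
        for x in unlabeled:
            p = [math.exp(-0.5 * (x - mu[k]) ** 2) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: update means from hard (labeled) plus soft (unlabeled) counts
        for k in (0, 1):
            num = sum(x for x, y in labeled if y == k) + sum(r[k] * x for r, x in zip(resp, unlabeled))
            den = sum(1 for _, y in labeled if y == k) + sum(r[k] for r in resp)
            mu[k] = num / den
    return mu
```

With two well-separated clusters, the unlabeled points pull each mean toward its cluster's center, which is the mechanism the record above relies on.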
1704.05948
2953209229
A growing number of threats to Android phones creates challenges for malware detection. Manually labeling the samples into benign or different malicious families requires tremendous human efforts, while it is comparatively easy and cheap to obtain a large amount of unlabeled APKs from various sources. Moreover, the fast-paced evolution of Android malware continuously generates derivative malware families. These families often contain new signatures, which can escape detection when using static analysis. These practical challenges can also cause traditional supervised machine learning algorithms to degrade in performance. In this paper, we propose a framework that uses a model-based semi-supervised (MBSS) classification scheme on the dynamic Android API call logs. The semi-supervised approach efficiently uses the labeled and unlabeled APKs to estimate a finite mixture model of Gaussian distributions via conditional expectation-maximization and efficiently detects malware during out-of-sample testing. We compare MBSS with popular malware detection classifiers such as support vector machine (SVM), @math -nearest neighbor (kNN) and linear discriminant analysis (LDA). Under the ideal classification setting, MBSS has competitive performance with 98% accuracy and a very low false positive rate for in-sample classification. For out-of-sample testing, the out-of-sample test data exhibit similar behavior of retrieving phone information and sending it to the network, compared with the in-sample training set. When this similarity is strong, MBSS and SVM with a linear kernel maintain a 90% detection rate while @math NN and LDA suffer great performance degradation. When this similarity is slightly weaker, all classifiers degrade in performance, but MBSS still performs significantly better than the other classifiers.
One downside of dynamic analysis is its limited code coverage, as some behaviors are only exposed under specific conditions such as user interactions or sensor data. IntelliDroid @cite_39 tries to solve this problem using symbolic execution. It leverages a solver to generate targeted inputs that trigger malicious code logic and increase coverage. However, its implementation is bound to a specific Android version and may suffer from porting issues.
{ "cite_N": [ "@cite_39" ], "mid": [ "2571682498" ], "abstract": [ "We would like to thank Zhen Huang, Mariana D’Angelo, Dhaval Miyani, Wei Huang, Beom Heyn Kim, Sukwon Oh, and Afshar Ganjali for their suggestions and feedback. We also thank the anonymous reviewers for their constructive comments. The research in this paper was supported by an NSERC CGS-M scholarship, a Bell Graduate scholarship, an NSERC Discovery grant, an ORF-RE grant, and a Tier 2 Canada Research Chair." ] }
1704.05838
2952660452
In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging as it often requires generating semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates content for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses and a semantic parsing loss, which ensures pixel faithfulness and local-global content consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.
An early inpainting method @cite_5 exploits a diffusion equation to iteratively propagate low-level features from known regions to unknown areas along the mask boundaries. While it performs well on inpainting, it is limited to small and homogeneous regions. Another method further improves inpainting results by introducing texture synthesis @cite_2 . In @cite_21 , a patch prior is learned to restore images with missing pixels. Recently, @cite_12 learn a convolutional network for inpainting. The performance of image completion is significantly improved by an efficient patch matching algorithm @cite_1 for nonparametric texture synthesis. While it performs well when similar patches can be found, it is likely to fail when the source image does not contain a sufficient amount of data to fill in the unknown regions. We note that this typically occurs in object completion, as each part is likely to be unique and no plausible patches for the missing region can be found. Although this problem can be alleviated by using an external database @cite_0 , the ensuing issue is the need to learn a high-level representation of one specific object class for patch matching.
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_0", "@cite_2", "@cite_5", "@cite_12" ], "mid": [ "2172275395", "1993120651", "", "2093212899", "", "2184016288" ], "abstract": [ "Learning good image priors is of utmost importance for the study of vision, computer vision and image processing applications. Learning priors and optimizing over whole images can lead to tremendous computational challenges. In contrast, when we work with small image patches, it is possible to learn priors and perform patch restoration very efficiently. This raises three questions - do priors that give high likelihood to the data also lead to good performance in restoration? Can we use such patch based priors to restore a full image? Can we learn better patch priors? In this work we answer these questions. We compare the likelihood of several patch models and show that priors that give high likelihood to data perform better in patch restoration. Motivated by this result, we propose a generic framework which allows for whole image restoration using any patch based prior for which a MAP (or approximate MAP) estimate can be calculated. We show how to derive an appropriate cost function, how to optimize it and how to use it to restore whole images. Finally, we present a generic, surprisingly simple Gaussian Mixture prior, learned from a set of natural images. When used with the proposed framework, this Gaussian Mixture Model outperforms all other generic prior methods for image denoising, deblurring and inpainting.", "This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. 
Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. This one simple algorithm forms the basis for a variety of tools -- image retargeting, completion and reshuffling -- that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods.", "", "An algorithm for the simultaneous filling-in of texture and structure in regions of missing image information is presented in this paper. The basic idea is to first decompose the image into the sum of two functions with different basic characteristics, and then reconstruct each one of these functions separately with structure and texture filling-in algorithms. The first function used in the decomposition is of bounded variation, representing the underlying image structure, while the second function captures the texture and possible noise. The region of missing information in the bounded variation image is reconstructed using image inpainting algorithms, while the same region in the texture image is filled-in with texture synthesis techniques. The original image is then reconstructed adding back these two sub-images. The novel contribution of this paper is then in the combination of these three previously developed components, image decomposition with inpainting and texture synthesis, which permits the simultaneous use of filling-in algorithms that are suited for different image characteristics. 
Examples on real images show the advantages of this proposed approach.", "", "Deep learning has recently been introduced to the field of low-level computer vision and image processing. Promising results have been obtained in a number of tasks including super-resolution, inpainting, deconvolution, filtering, etc. However, previously adopted neural network approaches such as convolutional neural networks and sparse auto-encoders are inherently with translation invariant operators. We found this property prevents the deep learning approaches from outperforming the state-of-the-art if the task itself requires translation variant interpolation (TVI). In this paper, we draw on Shepard interpolation and design Shepard Convolutional Neural Networks (ShCNN) which efficiently realizes end-to-end trainable TVI operators in the network. We show that by adding only a few feature maps in the new Shepard layers, the network is able to achieve stronger results than a much deeper architecture. Superior performance on both image in-painting and super-resolution is obtained where our system outperforms previous ones while keeping the running time competitive." ] }
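The diffusion-based inpainting idea attributed to @cite_5 above, propagating known values from the mask boundary into the missing region, can be sketched as iterative neighbour averaging. A rough illustration only; the cited method uses a specific PDE that this simple Jacobi-style loop does not reproduce:

```python
import numpy as np

def diffusion_inpaint(img, mask, n_iter=200):
    # img: 2-D float grayscale image; mask: boolean array, True = missing.
    # Missing pixels are repeatedly replaced by the mean of their four
    # neighbours, so known boundary values diffuse inward.
    out = img.copy()
    for _ in range(n_iter):
        padded = np.pad(out, 1, mode="edge")  # replicate edges
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]                 # update only the missing region
    return out
```

As the record notes, this works for small homogeneous holes: the fill converges to a smooth interpolation of the surrounding known pixels, with no texture.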
1704.05838
2952660452
In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging as it often requires generating semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates content for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses and a semantic parsing loss, which ensures pixel faithfulness and local-global content consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.
@cite_11 cast image completion as the task of recovering sparse signals from corrupted inputs. By solving a sparse linear system, an image can be recovered from a corrupted input. However, this algorithm requires the images to be highly structured (i.e., data points are assumed to lie in a low-dimensional subspace), e.g., well-aligned face images. In contrast, our algorithm is able to perform object completion without such strict constraints.
{ "cite_N": [ "@cite_11" ], "mid": [ "2129812935" ], "abstract": [ "We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by C1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims." ] }
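The "recovering sparse signals" step described above can be illustrated with a greedy sparse solver. Orthogonal matching pursuit is used here as a simple stand-in for the l1-minimization actually employed by @cite_11 ; the dictionary sizes and names are ours:

```python
import numpy as np

def omp(A, y, k):
    # Greedy orthogonal matching pursuit: find a k-sparse x with y ~= A x.
    residual = y.astype(float)
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

With enough measurements relative to the sparsity level, the greedy solver recovers the exact support, mirroring the "sparse linear system" recovery the record describes.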
1704.05838
2952660452
In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging as it often requires generating semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates content for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses and a semantic parsing loss, which ensures pixel faithfulness and local-global content consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.
@cite_3 introduce denoising autoencoders that learn to reconstruct clean signals from corrupted inputs. @cite_29 demonstrate that an object image can be reconstructed by inverting deep convolutional network features (e.g., VGG @cite_14 ) through a decoder network. @cite_9 propose variational autoencoders (VAEs), which regularize encoders by imposing a prior over the latent units such that images can be generated by sampling from or interpolating latent units. However, the images generated by a VAE are usually blurry due to its training objective based on pixel-wise Gaussian likelihood. @cite_4 improve the VAE by adding a discriminator for adversarial training, an idea that stems from generative adversarial networks (GANs) @cite_24 , and demonstrate that more realistic images can be generated.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_9", "@cite_29", "@cite_3", "@cite_24" ], "mid": [ "1686810756", "2202109488", "", "2273348943", "2025768430", "2099471712" ], "abstract": [ "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.", "", "Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. 
We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.", "Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. 
This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
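The denoising-autoencoder principle from @cite_3 above, reconstructing clean signals from corrupted inputs, has a closed-form linear analogue that is easy to sketch. This is an illustrative least-squares map, not the nonlinear, backprop-trained model of the paper; all names are ours:

```python
import numpy as np

def train_linear_dae(clean, noisy):
    # Learn one matrix W minimising ||clean - noisy @ W||_F^2, where rows
    # are samples: a tiny linear stand-in for a denoising autoencoder.
    W, *_ = np.linalg.lstsq(noisy, clean, rcond=None)
    return W
```

On data lying near a low-dimensional subspace, the learned map shrinks the off-subspace noise, so reconstructions of freshly corrupted inputs land closer to the clean signals than the corrupted inputs themselves.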
1704.05838
2952660452
In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging as it often requires generating semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates content for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses and a semantic parsing loss, which ensures pixel faithfulness and local-global content consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.
Closest to this work is the method proposed by @cite_13 which applies an autoencoder and integrates learning visual representations with image completion. However, this approach places more emphasis on unsupervised learning of representations than on image completion. In essence, this is a chicken-and-egg problem. Despite the promising results on object detection, it is still not entirely clear whether image completion can provide sufficient supervision signals for learning high-level features. On the other hand, semantic labels or segmentations are likely to be useful for improving the completion results, especially on a certain object category. With the goal of achieving high-quality image completion, we propose to use an additional semantic parsing network to regularize the generative networks. Our model deals with severe image corruption (large regions of missing pixels), and develops a combined reconstruction, adversarial and parsing loss for face completion.
{ "cite_N": [ "@cite_13" ], "mid": [ "2963420272" ], "abstract": [ "We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders – a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods." ] }
1704.05973
2952604521
The proliferation of social media in communication and information dissemination has made it an ideal platform for spreading rumors. Automatically debunking rumors at their stage of diffusion is known as , which refers to dealing with sequential posts regarding disputed factual claims with certain variations and highly textual duplication over time. Thus, identifying trending rumors demands an efficient yet flexible model that is able to capture long-range dependencies among postings and produce distinct representations for the accurate early detection. However, it is a challenging task to apply conventional classification algorithms to rumor detection in earliness since they rely on hand-crafted features which require intensive manual efforts in the case of large amount of posts. This paper presents a deep attention model on the basis of recurrent neural networks (RNN) to learn temporal hidden representations of sequential posts for identifying rumors. The proposed model delves soft-attention into the recurrence to simultaneously pool out distinct features with particular focus and produce hidden representations that capture contextual variations of relevant posts over time. Extensive experiments on real datasets collected from social media websites demonstrate that (1) the deep attention based RNN model outperforms state-of-the-arts that rely on hand-crafted features; (2) the introduction of soft attention mechanism can effectively distill relevant parts to rumors from original posts in advance; (3) the proposed method detects rumors more quickly and accurately than competitors.
Attention is a rising technique in NLP (natural language processing) problems @cite_22 @cite_42 @cite_15 . Bahdanau et al. extended the basic encoder-decoder architecture of neural machine translation with an attention mechanism that allows the model to automatically search for parts of a source sentence that are relevant to predicting a target word @cite_6 , achieving comparable performance on the English-to-French translation task. Vinyals et al. improved the attention model of @cite_6 : their model computes an attention vector reflecting how much attention should be put on each input word, boosting performance on large-scale translation @cite_14 . In addition, Sharma et al. applied a location softmax function @cite_47 to the hidden states of the LSTM (Long Short-Term Memory) layer, thus recognizing more valuable elements in sequential inputs for action recognition. In conclusion, motivated by these successful applications of attention mechanisms, we find that attention-based techniques can help detect rumors better with regard to both effectiveness and earliness because they are sensitive to distinctive textual features.
{ "cite_N": [ "@cite_47", "@cite_14", "@cite_22", "@cite_42", "@cite_6", "@cite_15" ], "mid": [ "2172806452", "1869752048", "2118463056", "", "2133564696", "2949888546" ], "abstract": [ "We propose a soft attention based model for the task of action recognition in videos. We use multi-layered Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units which are deep both spatially and temporally. Our model learns to focus selectively on parts of the video frames and classifies videos after taking a few glimpses. The model essentially learns which parts in the frames are relevant for the task at hand and attaches higher importance to them. We evaluate the model on UCF-11 (YouTube Action), HMDB-51 and Hollywood2 datasets and analyze how the model focuses its attention depending on the scene and the action being performed.", "Syntactic constituency parsing is a fundamental problem in natural language processing and has been the subject of intensive research and engineering for decades. As a result, the most accurate parsers are domain specific, complex, and inefficient. In this paper we show that the domain agnostic attention-enhanced sequence-to-sequence model achieves state-of-the-art results on the most widely used syntactic constituency parsing dataset, when trained on a large synthetic corpus that was annotated using existing parsers. It also matches the performance of standard parsers when trained only on a small human-annotated dataset, which shows that this model is highly data-efficient, in contrast to sequence-to-sequence models without the attention mechanism. 
Our parser is also fast, processing over a hundred sentences per second with an unoptimized CPU implementation.", "While most approaches to automatically recognizing entailment relations have used classifiers employing hand engineered features derived from complex natural language processing pipelines, in practice their performance has been only slightly better than bag-of-word pair classifiers using only lexical similarity. The only attempt so far to build an end-to-end differentiable neural network for entailment failed to outperform such a simple similarity classifier. In this paper, we propose a neural model that reads two sentences to determine entailment using long short-term memory units. We extend this model with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases. Furthermore, we present a qualitative analysis of attention weights produced by this model, demonstrating such reasoning capabilities. On a large entailment dataset this model outperforms the previous best neural model and a classifier with engineered features by a substantial margin. It is the first generic end-to-end differentiable system that achieves state-of-the-art accuracy on a textual entailment dataset.", "", "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. 
In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. 
The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier." ] }
1704.05776
2953226057
Most of the recent successful methods in accurate object detection and localization used some variants of R-CNN style two stage Convolutional Neural Networks (CNN) where plausible regions were proposed in the first stage then followed by a second stage for decision refinement. Despite the simplicity of training and the efficiency in deployment, the single stage detection methods have not been as competitive when evaluated in benchmarks consider mAP for high IoU thresholds. In this paper, we proposed a novel single stage end-to-end trainable object detection network to overcome this limitation. We achieved this by introducing Recurrent Rolling Convolution (RRC) architecture over multi-scale feature maps to construct object classifiers and bounding box regressors which are "deep in context". We evaluated our method in the challenging KITTI dataset which measures methods under IoU threshold of 0.7. We showed that with RRC, a single reduced VGG-16 based model already significantly outperformed all the previously published results. At the time this paper was written our models ranked the first in KITTI car detection (the hard level), the first in cyclist detection and the second in pedestrian detection. These results were not reached by the previous single stage methods. The code is publicly available.
Convolutional neural network approaches with a region proposal stage have recently been very successful in the area of object detection. In the R-CNN paper @cite_8 , selective search @cite_12 was used to generate object proposals, and a CNN was used to extract features and feed them to the classifier. Two approaches to accelerating R-CNN were later proposed. In @cite_19 , RoI pooling was used to efficiently generate features for object proposals. In @cite_5 , the authors used a CNN instead of selective search to perform region proposal. Many authors adopted the framework of @cite_5 and proposed a number of variants that perform well in benchmarks that consider mAP at high IoU thresholds. For instance, in @cite_10 the authors proposed to use scale-dependent pooling and layerwise cascaded rejection classifiers to increase accuracy and obtained good results. Subcategory information was used in @cite_2 to enhance the region proposal stage, achieving promising results on KITTI.
{ "cite_N": [ "@cite_8", "@cite_19", "@cite_2", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "2102605133", "", "2950703487", "639708223", "2474389331", "2088049833" ], "abstract": [ "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "", "In CNN-based object detection methods, region proposal becomes a bottleneck when objects exhibit significant scale variation, occlusion or truncation. In addition, these methods mainly focus on 2D object detection and cannot estimate detailed properties of objects. In this paper, we propose subcategory-aware CNNs for object detection. We introduce a novel region proposal network that uses subcategory information to guide the proposal generating process, and a new detection network for joint detection and subcategory classification. 
By using subcategories related to object pose, we achieve state-of-the-art performance on both detection and pose estimation on commonly used benchmarks.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "In this paper, we investigate two new strategies to detect objects accurately and efficiently using deep convolutional neural network: 1) scale-dependent pooling and 2) layerwise cascaded rejection classifiers. The scale-dependent pooling (SDP) improves detection accuracy by exploiting appropriate convolutional features depending on the scale of candidate object proposals. 
The cascaded rejection classifiers (CRC) effectively utilize convolutional features and eliminate negative object proposals in a cascaded manner, which greatly speeds up the detection while maintaining high accuracy. In combination of the two, our method achieves significantly better accuracy compared to other state-of-the-arts in three challenging datasets, PASCAL object detection challenge, KITTI object detection benchmark and newly collected Inner-city dataset, while being more efficient.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html )." ] }
1704.05776
2953226057
Most of the recent successful methods in accurate object detection and localization used some variants of R-CNN style two stage Convolutional Neural Networks (CNN) where plausible regions were proposed in the first stage then followed by a second stage for decision refinement. Despite the simplicity of training and the efficiency in deployment, the single stage detection methods have not been as competitive when evaluated in benchmarks consider mAP for high IoU thresholds. In this paper, we proposed a novel single stage end-to-end trainable object detection network to overcome this limitation. We achieved this by introducing Recurrent Rolling Convolution (RRC) architecture over multi-scale feature maps to construct object classifiers and bounding box regressors which are "deep in context". We evaluated our method in the challenging KITTI dataset which measures methods under IoU threshold of 0.7. We showed that with RRC, a single reduced VGG-16 based model already significantly outperformed all the previously published results. At the time this paper was written our models ranked the first in KITTI car detection (the hard level), the first in cyclist detection and the second in pedestrian detection. These results were not reached by the previous single stage methods. The code is publicly available.
One problem with R-CNN style methods is that processing a large number of proposals makes the computation in the second stage heavy. Various single stage methods that do not rely on region proposals were proposed to accelerate the detection pipeline. SSD @cite_16 is a single stage model in which feature maps at different resolutions in the feed-forward pass are directly used to detect objects within a specified size range. This clever design saves a considerable amount of computation and runs much faster than @cite_5 . It achieved good results on datasets evaluated at an IoU threshold of 0.5. However, we will show in our experiments that the performance drops significantly when the bar for bounding box quality is raised. YOLO @cite_1 is another fast single stage method that generated promising results; however, it is not as accurate as SSD, though its customized version is faster. We note that a fully convolutional two stage method @cite_21 has been proposed to reduce the computational complexity of the second stage. However, it relies heavily on a bigger and deeper backbone network. The motivation of @cite_11 is similar to ours, but it does not exploit contextual information via a recurrent architecture.
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_5", "@cite_16", "@cite_11" ], "mid": [ "2950800384", "", "639708223", "2193145675", "2177544419" ], "abstract": [ "We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets), for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: this https URL", "", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. 
We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. 
For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "We propose a novel object localization methodology with the purpose of boosting the localization accuracy of state-of-the-art object detection systems. Our model, given a search region, aims at returning the bounding box of an object of interest inside this region. To accomplish its goal, it relies on assigning conditional probabilities to each row and column of this region, where these probabilities provide useful information regarding the location of the boundaries of the object inside the search region and allow the accurate inference of the object bounding box under a simple probabilistic framework. For implementing our localization model, we make use of a convolutional neural network architecture that is properly adapted for this task, called LocNet. We show experimentally that LocNet achieves a very significant improvement on the mAP for high IoU thresholds on PASCAL VOC2007 test set and that it can be very easily coupled with recent state-of-the-art object detection systems, helping them to boost their performance. Finally, we demonstrate that our detection approach can achieve high detection accuracy even when it is given as input a set of sliding windows, thus proving that it is independent of box proposal methods." ] }
1704.05832
2609986281
We present a novel mapping framework for robot navigation which features a multi-level querying system capable to obtain rapidly representations as diverse as a 3D voxel grid, a 2.5D height map and a 2D occupancy grid. These are inherently embedded into a memory and time efficient core data structure organized as a Tree of SkipLists. Compared to the well-known Octree representation, our approach exhibits a better time efficiency, thanks to its simple and highly parallelizable computational structure, and a similar memory footprint when mapping large workspaces. Peculiarly within the realm of mapping for robot navigation, our framework supports realtime erosion and re-integration of measurements upon reception of optimized poses from the sensor tracker, so as to improve continuously the accuracy of the map.
A quite popular memory efficient alternative to the 3D occupancy grid is the Octree @cite_10 , whereby the 3D space is recursively partitioned into octants (octants being the 3D equivalent of quadrants) until voxels reach the desired resolution. As such, this data structure is a tree in which each node has exactly 8 children; unlike the 3D grid, though, the Octree avoids modeling the empty space, as only leaf and inner nodes associated with occupied space need to be allocated, thereby yielding significant memory savings when representing large environments. Hence, well-known mapping frameworks like Octomap @cite_11 rely on this kind of data structure to build the required workspace representation. In particular, Hornung et al. @cite_11 shape their data structure so that voxels (leaves of the Octree) are the only nodes storing mapping information, all other nodes containing only references to their children. Therefore, the memory occupancy in bytes can be expressed as:
{ "cite_N": [ "@cite_10", "@cite_11" ], "mid": [ "2074954154", "2133844819" ], "abstract": [ "A geometric modeling technique called Octree Encoding is presented. Arbitrary 3-D objects can be represented to any specified resolution in a hierarchical 8-ary tree structure or “octree” Objects may be concave or convex, have holes (including interior holes), consist of disjoint parts, and possess sculptured (i.e., “free-form”) surfaces. The memory required for representation and manipulation is on the order of the surface area of the object. A complexity metric is proposed based on the number of nodes in an object's tree representation. Efficient (linear time) algorithms have been developed for the Boolean operations (union, intersection and difference), geometric operations (translation, scaling and rotation), N-dimensional interference detection, and display from any point in space with hidden surfaces removed. The algorithms require neither floating-point operations, integer multiplications, nor integer divisions. In addition, many independent sets of very simple calculations are typically generated, allowing implementation over many inexpensive high-bandwidth processors operating in parallel. Real time analysis and manipulation of highly complex situations thus becomes possible.", "Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. 
We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum." ] }
1704.05832
2609986281
We present a novel mapping framework for robot navigation which features a multi-level querying system capable to obtain rapidly representations as diverse as a 3D voxel grid, a 2.5D height map and a 2D occupancy grid. These are inherently embedded into a memory and time efficient core data structure organized as a Tree of SkipLists. Compared to the well-known Octree representation, our approach exhibits a better time efficiency, thanks to its simple and highly parallelizable computational structure, and a similar memory footprint when mapping large workspaces. Peculiarly within the realm of mapping for robot navigation, our framework supports realtime erosion and re-integration of measurements upon reception of optimized poses from the sensor tracker, so as to improve continuously the accuracy of the map.
However, using an Octree rather than a 3D grid implies a space vs. time trade-off: the memory footprint is reduced at the expense of query time, the computational complexity of a random voxel access being just @math for a 3D grid but as large as @math for an Octree ( @math denoting the depth of the tree). This issue has motivated a recent proposal by Labschutz et al. @cite_6 , who mix the two approaches into a novel data structure referred to as JiTTree and fully managed by the GPU.
{ "cite_N": [ "@cite_6" ], "mid": [ "1899193873" ], "abstract": [ "Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem are hybrid data structures which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures." ] }
1704.05550
2609080309
Due to its promise to alleviate information overload, text summarization has attracted the attention of many researchers. However, it has remained a serious challenge. Here, we first prove empirical limits on the recall (and F1-scores) of extractive summarizers on the DUC datasets under ROUGE evaluation for both the single-document and multi-document summarization tasks. Next we define the concept of compressibility of a document and present a new model of summarization, which generalizes existing models in the literature and integrates several dimensions of the summarization, viz., abstractive versus extractive, single versus multi-document, and syntactic versus semantic. Finally, we examine some new and existing single-document summarization algorithms in a single framework and compare with state of the art summarizers on DUC data.
Single-document extractive summarization. For single-document summarization, @cite_11 explicitly model extraction and compression, but their results showed wide variation on a subset of 140 documents from the DUC 2002 dataset, while @cite_21 focused on topic coherence, using a graphical structure with separate importance, coherence, and topic-coverage functions. In @cite_21 , the authors present results for single-document summarization on a subset of PLOS Medicine articles and the DUC 2002 dataset without mentioning the number of articles used. An algorithm combining syntactic and semantic features was presented by @cite_20 , and graph-based summarization methods were presented in @cite_14 @cite_13 @cite_9 @cite_34 . Several systems were compared against a newly devised supervised method on a dataset from Yahoo in @cite_16 .
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_21", "@cite_16", "@cite_34", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "", "1525595230", "2250968833", "2574066488", "202505473", "2110693578", "", "2001135856" ], "abstract": [ "", "In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications.", "We present an approach for extractive single-document summarization. Our approach is based on a weighted graphical representation of documents obtained by topic modeling. We optimize importance, coherence and non-redundancy simultaneously using ILP. We compare ROUGE scores of our system with state-of-the-art results on scientific articles from PLOS Medicine and on DUC 2002 data. Human judges evaluate the coherence of summaries generated by our system in comparision to two baselines. Our approach obtains competitive performance.", "", "Summarization mainly provides the major topics or theme of document in limited number of words. However, in extract summary we depend upon extracted sentences, while in abstract summary, each summary sentence may contain concise information from multiple sentences. The major facts which affect the quality of summary are: (1) the way of handling noisy or less important terms in document, (2) utilizing information content of terms in document (as, each term may have different levels of importance in document) and (3) finally, the way to identify the appropriate thematic facts in the form of summary. To reduce the effect of noisy terms and to utilize the information content of terms in the document, we introduce the graph theoretical model populated with semantic and statistical importance of terms. Next, we introduce the concept of weighted minimum vertex cover which helps us in identifying the most representative and thematic facts in the document. 
Additionally, to generate abstract summary, we introduce the use of vertex constrained shortest path based technique, which uses minimum vertex cover related information as valuable resource. Our experimental results on DUC-2001 and DUC-2002 dataset show that our devised system performs better than baseline systems.", "We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents.", "", "Text summarization is one of the oldest problems in natural language processing. 
Popular approaches rely on extracting relevant sentences from the original documents. As a side effect, sentences that are too long but partly relevant are doomed to either not appear in the final summary, or prevent inclusion of other relevant sentences. Sentence compression is a recent framework that aims to select the shortest subsequence of words that yields an informative and grammatical sentence. This work proposes a one-step approach for document summarization that jointly performs sentence extraction and compression by solving an integer linear program. We report favorable experimental results on newswire data." ] }
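The graph-based methods cited above rank sentences by centrality in a sentence-similarity graph (cosine similarity as edge weights, eigenvector centrality as salience). A minimal LexRank-style sketch with invented toy sentences; a real system would use TF-IDF weighting and a similarity threshold:

```python
# LexRank-style sketch: sentences are nodes, cosine similarity of
# bag-of-words vectors gives edge weights, and power iteration
# approximates eigenvector centrality (with PageRank-style damping).
import math
from collections import Counter

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a.keys() & b.keys())
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def lexrank(sentences, iters=50, damping=0.85):
    vecs = [Counter(s.lower().split()) for s in sentences]
    n = len(sentences)
    sim = [[0.0 if i == j else cosine(vecs[i], vecs[j])
            for j in range(n)] for i in range(n)]
    rowsum = [sum(row) or 1.0 for row in sim]  # guard dangling nodes
    scores = [1.0 / n] * n
    for _ in range(iters):
        scores = [(1 - damping) / n +
                  damping * sum(sim[j][i] / rowsum[j] * scores[j]
                                for j in range(n))
                  for i in range(n)]
    return scores

sents = ["the cat sat on the mat",
         "the cat chased a mouse",
         "stock prices rose sharply today"]
scores = lexrank(sents)
```

The two overlapping sentences reinforce each other, while the unrelated one receives only the damping mass; an extractive summary takes the top-scoring sentences.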
1704.05550
2609080309
Due to its promise to alleviate information overload, text summarization has attracted the attention of many researchers. However, it has remained a serious challenge. Here, we first prove empirical limits on the recall (and F1-scores) of extractive summarizers on the DUC datasets under ROUGE evaluation for both the single-document and multi-document summarization tasks. Next we define the concept of compressibility of a document and present a new model of summarization, which generalizes existing models in the literature and integrates several dimensions of the summarization, viz., abstractive versus extractive, single versus multi-document, and syntactic versus semantic. Finally, we examine some new and existing single-document summarization algorithms in a single framework and compare with state of the art summarizers on DUC data.
Multi-document extractive summarization. For multi-document summarization , sentence extraction and redundancy compression have been modeled via integer linear programming and approximation algorithms @cite_1 @cite_17 @cite_29 @cite_24 @cite_2 @cite_3 @cite_18 . Supervised and semi-supervised extractive summarization was studied in @cite_30 . Of course, single-document summarization can be considered a special case, but none of the papers cited in this paragraph presents experimental results for this important special case.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_29", "@cite_1", "@cite_3", "@cite_24", "@cite_2", "@cite_17" ], "mid": [ "", "2250483006", "2122311631", "2152992673", "1696101414", "2252239149", "2251750971", "2150869743" ], "abstract": [ "", "The most successful approaches to extractive text summarization seek to maximize bigram coverage subject to a budget constraint. In this work, we propose instead to maximize semantic volume. We embed each sentence in a semantic space and construct a summary by choosing a subset of sentences whose convex hull maximizes volume in that space. We provide a greedy algorithm based on the GramSchmidt process to efficiently perform volume maximization. Our method outperforms the state-of-the-art summarization approaches on benchmark datasets.", "We learn a joint model of sentence extraction and compression for multi-document summarization. Our model scores candidate summaries according to a combined linear model whose features factor over (1) the n-gram types in the summary and (2) the compressions used. We train the model using a margin-based objective whose loss captures end summary quality. Because of the exponentially large set of candidate summaries, we use a cutting-plane algorithm to incrementally detect and add active constraints efficiently. Inference in our model can be cast as an ILP and thereby solved in reasonable time; we also present a fast approximation scheme which achieves similar performance. Our jointly extracted and compressed summaries outperform both unlearned baselines and our learned extraction-only system on both ROUGE and Pyramid, without a drop in judged linguistic quality. We achieve the highest published ROUGE results to date on the TAC 2008 data set.", "In this work we study the theoretical and empirical properties of various global inference algorithms for multi-document summarization. We start by defining a general framework for inference in summarization. 
We then present three algorithms: The first is a greedy approximate method, the second a dynamic programming approach based on solutions to the knapsack problem, and the third is an exact algorithm that uses an Integer Linear Programming formulation of the problem. We empirically evaluate all three algorithms and show that, relative to the exact solution, the dynamic programming algorithm provides near optimal results with preferable scaling properties.", "In concept-based summarization, sentence selection is modelled as a budgeted maximum coverage problem. As this problem is NP-hard, pruning low-weight concepts is required for the solver to find optimal solutions efficiently. This work shows that reducing the number of concepts in the model leads to lower Rouge scores, and more importantly to the presence of multiple optimal solutions. We address these issues by extending the model to provide a single optimal solution, and eliminate the need for concept pruning using an approximation algorithm that achieves comparable performance to exact inference.", "We present a dual decomposition framework for multi-document summarization, using a model that jointly extracts and compresses sentences. Compared with previous work based on integer linear programming, our approach does not require external solvers, is significantly faster, and is modular in the three qualities a summary should have: conciseness, informativeness, and grammaticality. In addition, we propose a multi-task learning framework to take advantage of existing data for extractive summarization and sentence compression. Experiments in the TAC2008 dataset yield the highest published ROUGE scores to date, with runtimes that rival those of extractive summarizers.", "In this paper, we focus on the problem of using sentence compression techniques to improve multi-document summarization. 
We propose an innovative sentence compression method by considering every node in the constituent parse tree and deciding its status – remove or retain. Integer liner programming with discriminative training is used to solve the problem. Under this model, we incorporate various constraints to improve the linguistic quality of the compressed sentences. Then we utilize a pipeline summarization framework where sentences are first compressed by our proposed compression model to obtain top-n candidates and then a sentence selection module is used to generate the final summary. Compared with state-ofthe-art algorithms, our model has similar ROUGE-2 scores but better linguistic quality on TAC data.", "We present an Integer Linear Program for exact inference under a maximum coverage model for automatic summarization. We compare our model, which operates at the sub-sentence or \"concept-level, to a sentence-level model, previously solved with an ILP. Our model scales more efficiently to larger problems because it does not require a quadratic number of variables to address redundancy in pairs of selected sentences. We also show how to include sentence compression in the ILP formulation, which has the desirable property of performing compression and sentence selection simultaneously. The resulting system performs at least as well as the best systems participating in the recent Text Analysis Conference, as judged by a variety of automatic and manual content-based metrics." ] }
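Several of the ILP-based systems above optimize a budgeted maximum coverage objective over weighted concepts (e.g. bigrams); since exact inference is NP-hard, a standard greedy ratio heuristic gives a constant-factor approximation. A toy sketch, with invented sentences, concepts, and weights:

```python
# Greedy budgeted maximum coverage: repeatedly add the sentence with the
# best ratio of newly covered concept weight to word cost, while the
# summary stays within the word budget.
def greedy_summary(sentences, concept_weight, budget):
    # sentences: list of (text, set_of_concepts); budget in words.
    covered, chosen, length = set(), [], 0
    while True:
        best, best_gain = None, 0.0
        for i, (text, concepts) in enumerate(sentences):
            if i in chosen:
                continue
            cost = len(text.split())
            if length + cost > budget:
                continue
            gain = sum(concept_weight[c] for c in concepts - covered) / cost
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            return chosen
        chosen.append(best)
        covered |= sentences[best][1]
        length += len(sentences[best][0].split())

sentences = [("a b c", {"x", "y"}),      # cheap, covers x and y
             ("d e", {"x"}),             # redundant once x is covered
             ("f g h i j k", {"z"})]     # expensive, covers z alone
weights = {"x": 2, "y": 1, "z": 5}
```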
1704.05550
2609080309
Due to its promise to alleviate information overload, text summarization has attracted the attention of many researchers. However, it has remained a serious challenge. Here, we first prove empirical limits on the recall (and F1-scores) of extractive summarizers on the DUC datasets under ROUGE evaluation for both the single-document and multi-document summarization tasks. Next we define the concept of compressibility of a document and present a new model of summarization, which generalizes existing models in the literature and integrates several dimensions of the summarization, viz., abstractive versus extractive, single versus multi-document, and syntactic versus semantic. Finally, we examine some new and existing single-document summarization algorithms in a single framework and compare with state of the art summarizers on DUC data.
Metrics and Evaluation. Of course, ROUGE is not the only metric for evaluating summaries. Human evaluators were used at NIST to score summaries on seven different metrics, such as linguistic quality. There are also, for example, the Pyramid approach @cite_32 and BE @cite_10 . Our choice of ROUGE is based on its popularity, ease of use, and correlation with human assessment @cite_6 . Our choice of ROUGE configurations includes the one found to be best according to @cite_33 .
{ "cite_N": [ "@cite_33", "@cite_10", "@cite_32", "@cite_6" ], "mid": [ "2251023345", "2397230076", "2251607282", "" ], "abstract": [ "We provide an analysis of current evaluation methodologies applied to summarization metrics and identify the following areas of concern: (1) movement away from evaluation by correlation with human assessment; (2) omission of important components of human assessment from evaluations, in addition to large numbers of metric variants; (3) absence of methods of significance testing improvements over a baseline. We outline an evaluation methodology that overcomes all such challenges, providing the first method of significance testing suitable for evaluation of summarization metrics. Our evaluation reveals for the first time which metric variants significantly outperform others, optimal metric variants distinct from current recommended best variants, as well as machine translation metric BLEU to have performance on-par with ROUGE for the purpose of evaluation of summarization systems. We subsequently replicate a recent large-scale evaluation that relied on, what we now know to be, suboptimal ROUGE variants revealing distinct conclusions about the relative performance of state-of-the-art summarization systems.", "This paper describes BEwT-E (Basic Elements with Transformations for Evaluation), an automatic system for summarization evaluation. BEwT-E is a new, more sophisticated implementation of the BE framework that uses transformations to match BEs (minimal-length syntactically well-formed units) that are lexically different yet semantically similar. We demonstrate the effectiveness of BEwT-E using DUC and TAC datasets.", "The pyramid method for content evaluation of automated summarizers produces scores that are shown to correlate well with manual scores used in educational assessment of students’ summaries. This motivates the development of a more accurate automated method to compute pyramid scores. 
Of three methods tested here, the one that performs best relies on latent semantics.", "" ] }
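For reference, ROUGE-N recall (the measure the empirical limits above are stated in) reduces to clipped n-gram overlap between a system summary and a reference. A simplified reimplementation for illustration only; the official ROUGE toolkit adds stemming, stopword options, and multi-reference aggregation:

```python
# ROUGE-N recall: fraction of reference n-grams that also occur in the
# system summary, with per-n-gram counts clipped to the system counts.
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(system, reference, n=2):
    sys_counts = ngrams(system.lower().split(), n)
    ref_counts = ngrams(reference.lower().split(), n)
    overlap = sum(min(count, sys_counts[g]) for g, count in ref_counts.items())
    total = sum(ref_counts.values())
    return overlap / total if total else 0.0
```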
1704.05572
2949270275
While there has been substantial progress in factoid question-answering (QA), answering complex questions remains challenging, typically requiring both a large body of knowledge and inference techniques. Open Information Extraction (Open IE) provides a way to generate semi-structured knowledge for QA, but to date such knowledge has only been used to answer simple questions with retrieval-based methods. We overcome this limitation by presenting a method for reasoning with Open IE knowledge, allowing more complex questions to be handled. Using a recently proposed support graph optimization framework for QA, we develop a new inference model for Open IE, in particular one that can work effectively with multiple short facts, noise, and the relational structure of tuples. Our model significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty, while also removing the reliance on manually curated knowledge.
There exist several systems for retrieval-based Web QA @cite_20 @cite_10 . While structured KBs such as Freebase have been used in many systems @cite_8 @cite_18 @cite_16 , such approaches are limited by the coverage of the underlying data. QA systems using semi-structured Open IE tuples @cite_3 @cite_2 @cite_13 or automatically extracted web tables @cite_19 @cite_7 have broader coverage but are limited to simple questions with a single query.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_7", "@cite_8", "@cite_3", "@cite_19", "@cite_2", "@cite_16", "@cite_10", "@cite_20" ], "mid": [ "", "2250225488", "2101964891", "2252136820", "2131726681", "", "2090243146", "2251158365", "2115758952", "2171278097" ], "abstract": [ "", "A central challenge in semantic parsing is handling the myriad ways in which knowledge base predicates can be expressed. Traditionally, semantic parsers are trained primarily from text paired with knowledge base information. Our goal is to exploit the much larger amounts of raw text not tied to any knowledge base. In this paper, we turn semantic parsing on its head. Given an input utterance, we first use a simple method to deterministically generate a set of candidate logical forms with a canonical realization in natural language for each. Then, we use a paraphrase model to choose the realization that best paraphrases the input, and output the corresponding logical form. We present two simple paraphrase models, an association model and a vector space model, and train them jointly from question-answer pairs. Our system PARASEMPRE improves stateof-the-art accuracies on two recently released question-answering datasets.", "Two important aspects of semantic parsing for question answering are the breadth of the knowledge source and the depth of logical compositionality. While existing work trades off one aspect for another, this paper simultaneously makes progress on both fronts through a new task: answering complex questions on semi-structured tables using question-answer pairs as supervision. The central challenge arises from two compounding factors: the broader domain results in an open-ended set of relations, and the deeper compositionality results in a combinatorial explosion in the space of logical forms. We propose a logical-form driven parsing algorithm guided by strong typing constraints and show that it obtains significant improvements over natural baselines. 
For evaluation, we created a new dataset of 22,033 complex questions on Wikipedia tables, which is made publicly available.", "In this paper, we train a semantic parser that scales up to Freebase. Instead of relying on annotated logical forms, which is especially expensive to obtain at large scale, we learn from question-answer pairs. The main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question. We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus. Second, we use a bridging operation to generate additional predicates based on neighboring predicates. On the dataset of Cai and Yates (2013), despite not having annotated logical forms, our system outperforms their state-of-the-art parser. Additionally, we collected a more realistic and challenging dataset of question-answer pairs and improves over a natural baseline.", "We study question answering as a machine learning problem, and induce a function that maps open-domain questions to queries over a database of web extractions. Given a large, community-authored, question-paraphrase corpus, we demonstrate that it is possible to learn a semantic lexicon and linear ranking function without manually annotating questions. Our approach automatically generalizes a seed lexicon and includes a scalable, parallelized perceptron parameter estimation scheme. Experiments show that our approach more than quadruples the recall of the seed lexicon, with only an 8 loss in precision.", "", "We consider the problem of open-domain question answering (Open QA) over massive knowledge bases (KBs). Existing approaches use either manually curated KBs like Freebase or KBs automatically extracted from unstructured text. In this paper, we present OQA, the first approach to leverage both curated and extracted KBs. 
A key technical challenge is designing systems that are robust to the high variability in both natural language questions and massive KBs. OQA achieves robustness by decomposing the full Open QA problem into smaller sub-problems including question paraphrasing and query reformulation. OQA solves these sub-problems by mining millions of rules from an unlabeled question corpus and across multiple KBs. OQA then learns to integrate these rules by performing discriminative training on question-answer pairs using a latent-variable structured perceptron algorithm. We evaluate OQA on three benchmark question sets and demonstrate that it achieves up to twice the precision and recall of a state-of-the-art Open QA system.", "Concerned about the Turing test's ability to correctly evaluate if a system exhibits human-like intelligence, the Winograd Schema Challenge (WSC) has been proposed as an alternative. A Winograd Schema consists of a sentence and a question. The answers to the questions are intuitive for humans but are designed to be difficult for machines, as they require various forms of commonsense knowledge about the sentence. In this paper we demonstrate our progress towards addressing the WSC. We present an approach that identifies the knowledge needed to answer a challenge question, hunts down that knowledge from text repositories, and then reasons with them to come up with the answer. In the process we develop a semantic parser (www.kparser.org). We show that our approach works well with respect to a subset of Winograd schemas.", "We describe the architecture of the AskMSR question answering system and systematically evaluate contributions of different system components to accuracy. The system differs from most question answering systems in its dependency on data redundancy rather than sophisticated linguistic analyses of either questions or candidate answers. 
Because a wrong answer is often worse than no answer, we also explore strategies for predicting when the question answering system is likely to give an incorrect answer.", "IBM Research undertook a challenge to build a computer system that could compete at the human champion level in real time on the American TV Quiz show, Jeopardy! The extent of the challenge includes fielding a real-time automatic contestant on the show, not merely a laboratory exercise. The Jeopardy! Challenge helped us address requirements that led to the design of the DeepQA architecture and the implementation of Watson. After 3 years of intense research and development by a core team of about 20 researches, Watson is performing at human expert-levels in terms of precision, confidence and speed at the Jeopardy! Quiz show. Our results strongly suggest that DeepQA is an effective and extensible architecture that may be used as a foundation for combining, deploying, evaluating and advancing a wide range of algorithmic techniques to rapidly advance the field of QA." ] }
1704.05572
2949270275
While there has been substantial progress in factoid question-answering (QA), answering complex questions remains challenging, typically requiring both a large body of knowledge and inference techniques. Open Information Extraction (Open IE) provides a way to generate semi-structured knowledge for QA, but to date such knowledge has only been used to answer simple questions with retrieval-based methods. We overcome this limitation by presenting a method for reasoning with Open IE knowledge, allowing more complex questions to be handled. Using a recently proposed support graph optimization framework for QA, we develop a new inference model for Open IE, in particular one that can work effectively with multiple short facts, noise, and the relational structure of tuples. Our model significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty, while also removing the reliance on manually curated knowledge.
Elementary-level science QA tasks require reasoning to handle complex questions. Markov Logic Networks @cite_4 have been used to perform probabilistic reasoning over a small set of logical rules @cite_1 . Simple IR techniques have also been proposed for science tests @cite_15 and Gaokao tests (equivalent to the SAT exam in China) @cite_14 .
{ "cite_N": [ "@cite_1", "@cite_15", "@cite_14", "@cite_4" ], "mid": [ "2250322043", "2547185913", "2572223286", "" ], "abstract": [ "Elementary-level science exams pose significant knowledge acquisition and reasoning challenges for automatic question answering. We develop a system that reasons with knowledge derived from textbooks, represented in a subset of firstorder logic. Automatic extraction, while scalable, often results in knowledge that is incomplete and noisy, motivating use of reasoning mechanisms that handle uncertainty. Markov Logic Networks (MLNs) seem a natural model for expressing such knowledge, but the exact way of leveraging MLNs is by no means obvious. We investigate three ways of applying MLNs to our task. First, we simply use the extracted science rules directly as MLN clauses and exploit the structure present in hard constraints to improve tractability. Second, we interpret science rules as describing prototypical entities, resulting in a drastically simplified but brittle network. Our third approach, called Praline, uses MLNs to align lexical elements as well as define and control how inference should be performed in this task. Praline demonstrates a 15 accuracy boost and a 10x reduction in runtime as compared to other MLNbased methods, and comparable accuracy to word-based baseline approaches.", "What capabilities are required for an AI system to pass standard 4th Grade Science Tests? Previous work has examined the use of Markov Logic Networks (MLNs) to represent the requisite background knowledge and interpret test questions, but did not improve upon an information retrieval (IR) baseline. In this paper, we describe an alternative approach that operates at three levels of representation and reasoning: information retrieval, corpus statistics, and simple inference over a semi-automatically constructed knowledge base, to achieve substantially improved results. 
We evaluate the methods on six years of unseen, unedited exam questions from the NY Regents Science Exam (using only non-diagram, multiple choice questions), and show that our overall system's score is 71.3 , an improvement of 23.8 (absolute) over the MLN-based method described in previous work. We conclude with a detailed analysis, illustrating the complementary strengths of each method in the ensemble. Our datasets are being released to enable further research.", "Answering questions in a university's entrance examination like Gaokao in China challenges AI technology. As a preliminary attempt to take up this challenge, we focus on multiple-choice questions in Gaokao, and propose a three-stage approach that exploits and extends information retrieval techniques. Taking Wikipedia as the source of knowledge, our approach obtains knowledge relevant to a question by retrieving pages from Wikipedia via string matching and context-based disambiguation, and then ranks and filters pages using multiple strategies to draw critical evidence, based on which the truth of each option is assessed via relevance-based entailment. It achieves encouraging results on real-life questions in recent history tests, significantly outperforming baseline approaches.", "" ] }
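The "simple IR techniques" mentioned above typically score each answer option by how well the question combined with that option is supported by some retrieved corpus sentence. A toy word-overlap sketch; the corpus, question, and options are invented, and real systems use a proper retrieval engine with TF-IDF scoring:

```python
# IR-style multiple-choice baseline: for each option, form the query
# "question + option", take its best word-overlap against any corpus
# sentence, and answer with the highest-scoring option.
def overlap_score(query_tokens, sentence_tokens):
    return len(set(query_tokens) & set(sentence_tokens))

def ir_answer(question, options, corpus):
    best_opt, best_score = None, -1
    for opt in options:
        query = (question + " " + opt).lower().split()
        score = max(overlap_score(query, s.lower().split()) for s in corpus)
        if score > best_score:
            best_opt, best_score = opt, score
    return best_opt

corpus = ["plants use sunlight to make food by photosynthesis",
          "the moon orbits the earth roughly every month"]
answer = ir_answer("how do plants make their food",
                   ["photosynthesis", "respiration"], corpus)
```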
1704.05708
2952424290
This paper presents an end-to-end deep learning framework using passive WiFi sensing to classify and estimate human respiration activity. A passive radar test-bed is used with two channels where the first channel provides the reference WiFi signal, whereas the other channel provides a surveillance signal that contains reflections from the human target. Adaptive filtering is performed to make the surveillance signal source-data invariant by eliminating the echoes of the direct transmitted signal. We propose a novel convolutional neural network to classify the complex time series data and determine if it corresponds to a breathing activity, followed by a random forest estimator to determine breathing rate. We collect an extensive dataset to train the learning models and develop reference benchmarks for the future studies in the field. Based on the results, we conclude that deep learning techniques coupled with passive radars offer great potential for end-to-end human activity recognition.
Deep learning techniques using passive WiFi sensing are mostly unexplored. Yang proposed a deep CNN on multichannel time series for human activity recognition using on-body sensors @cite_9 . With the rise of deep learning techniques in the computer vision domain @cite_18 @cite_15 , various radio applications have also begun to capture the interest of deep learning researchers. O'Shea proposed CNNs for signal modulation recognition @cite_4 . Other researchers have targeted applications such as solar radio burst classification @cite_8 and CSI-based fingerprinting for indoor localization @cite_14 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_8", "@cite_9", "@cite_15" ], "mid": [ "1922126009", "", "2951759938", "", "2288074780", "2962851944" ], "abstract": [ "In this work we present an end-to-end system for text spotting--localising and recognising text in natural scene images--and text based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character classifier based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query.", "", "We study the adaptation of convolutional neural networks to the complex temporal radio signal domain. We compare the efficacy of radio modulation classification using naively learned features against using expert features which are widely used in the field today and we show significant performance improvements. 
We show that blind temporal learning on large and densely encoded time series using deep convolutional neural networks is viable and a strong candidate approach for this task especially at low signal to noise ratio.", "", "This paper focuses on human activity recognition (HAR) problem, in which inputs are multichannel time series signals acquired from a set of bodyworn inertial sensors and outputs are predefined human activities. In this problem, extracting effective features for identifying activities is a critical but challenging task. Most existing work relies on heuristic hand-crafted feature design and shallow feature learning architectures, which cannot find those distinguishing features to accurately classify different activities. In this paper, we propose a systematic feature learning method for HAR problem. This method adopts a deep convolutional neural networks (CNN) to automate feature learning from the raw inputs in a systematic way. Through the deep architecture, the learned features are deemed as the higher level abstract representation of low level raw time series signals. By leveraging the labelled information via supervised learning, the learned features are endowed with more discriminative power. Unified in one model, feature learning and classification are mutually enhanced. All these unique advantages of the CNN make it outperform other HAR algorithms, as verified in the experiments on the Opportunity Activity Recognition Challenge and other benchmark datasets.", "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. 
We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13]." ] }
1704.05239
2609075913
Optical flow estimation in the rainy scenes is challenging due to background degradation introduced by rain streaks and rain accumulation effects in the scene. Rain accumulation effect refers to poor visibility of remote objects due to the intense rainfall. Most existing optical flow methods are erroneous when applied to rain sequences because the conventional brightness constancy constraint (BCC) and gradient constancy constraint (GCC) generally break down in this situation. Based on the observation that the RGB color channels receive raindrop radiance equally, we introduce a residue channel as a new data constraint to reduce the effect of rain streaks. To handle rain accumulation, our method decomposes the image into a piecewise-smooth background layer and a high-frequency detail layer. It also enforces the BCC on the background layer only. Results on both synthetic dataset and real images show that our algorithm outperforms existing methods on different types of rain sequences. To our knowledge, this is the first optical flow method specifically dealing with rain.
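The residue-channel observation in the abstract rests on a simple identity: if a rain streak contributes (approximately) the same additive radiance to R, G, and B, then any cross-channel difference cancels it. The sketch below uses a max-minus-min construction as one such difference; this is an illustrative choice, not necessarily the paper's exact definition.

```python
import numpy as np

def residue_channel(img):
    """Per-pixel max-minus-min over the colour channels (img: H x W x 3).

    Invariant to any radiance added equally to all three channels.
    """
    return img.max(axis=2) - img.min(axis=2)

rng = np.random.default_rng(1)
clean = rng.uniform(0.0, 0.8, size=(6, 6, 3))    # rain-free scene radiance
streak = rng.uniform(0.0, 0.2, size=(6, 6, 1))   # achromatic rain radiance
rainy = clean + streak                           # added equally to R, G, B
same = np.allclose(residue_channel(rainy), residue_channel(clean))
```

Because `same` is true by construction, a data term built on such a channel is unaffected by the additive rain component, while the ordinary brightness constancy constraint is not.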
One of the popular practices in optical flow estimation is to perform some form of layer separation. The work in @cite_26 is the first to introduce structure-texture decomposition denoising @cite_1 into the computation of optical flow, with the purpose of removing shadow and shading from the texture layer. However, for rainy scenes, high-frequency rain streaks appear in the texture layer and compromise its utility for flow estimation. Recently, @cite_9 proposed a double-layer decomposition framework to handle transparency or reflection, based on the assumption that both layers obey sparse image-gradient distributions. This method cannot be used to remove the rain layer, since the rain streaks introduce many strong gradients.
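The structure-texture split described above can be sketched as follows. An iterated box blur stands in for the ROF/TV denoising of @cite_1 , and the 0.95 blending factor follows common practice in flow pipelines; both are illustrative assumptions, not the cited implementations.

```python
import numpy as np

def box_blur(img, k=5, iters=3):
    """Iterated box filtering as a cheap stand-in for TV/ROF denoising."""
    out = img.astype(float)
    h, w = img.shape
    for _ in range(iters):
        pad = np.pad(out, k // 2, mode="reflect")
        acc = np.zeros_like(out)
        for dy in range(k):
            for dx in range(k):
                acc += pad[dy:dy + h, dx:dx + w]
        out = acc / (k * k)
    return out

def structure_texture(img, alpha=0.95):
    """Split an image into a smooth structure layer and a texture residual."""
    structure = box_blur(img)
    texture = img - alpha * structure  # high-frequency layer used for matching
    return structure, texture

rng = np.random.default_rng(2)
img = rng.uniform(size=(32, 32))
s, t = structure_texture(img)
```

Flow data terms are then evaluated on the texture layer to suppress shadow and shading; as noted above, rain streaks survive into exactly this layer, which is what breaks the practice in rainy scenes.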
{ "cite_N": [ "@cite_9", "@cite_26", "@cite_1" ], "mid": [ "", "1697939221", "2103559027" ], "abstract": [ "", "Virtually all variational methods for motion estimation regularize the gradient of the flow field, which introduces a bias towards piecewise constant motions in weakly textured areas. We propose a novel regularization approach, based on decorrelated second-order derivatives, that does not suffer from this shortcoming. We then derive an efficient numerical scheme to solve the new model using projected gradient descent. A comparison to a TV regularized model shows that the proposed second-order prior exhibits superior performance, in particular in low-textured areas (where the prior becomes important). Finally, we show that the proposed model yields state-of-the-art results on the Middlebury optical flow database.", "A constrained optimization type of numerical algorithm for removing noise from images is presented. The total variation of the image is minimized subject to constraints involving the statistics of the noise. The constraints are imposed using Lagrange multipliers. The solution is obtained using the gradient-projection method. This amounts to solving a time dependent partial differential equation on a manifold determined by the constraints. As t--- 0o the solution converges to a steady state which is the denoised image. The numerical algorithm is simple and relatively fast. The results appear to be state-of-the-art for very noisy images. The method is noninvasive, yielding sharp edges in the image. The technique could be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of the level set divided by the magnitude of the gradient of the image, and a second step which projects the image back onto the constraint set." ] }
1704.05239
2609075913
Optical flow estimation in the rainy scenes is challenging due to background degradation introduced by rain streaks and rain accumulation effects in the scene. Rain accumulation effect refers to poor visibility of remote objects due to the intense rainfall. Most existing optical flow methods are erroneous when applied to rain sequences because the conventional brightness constancy constraint (BCC) and gradient constancy constraint (GCC) generally break down in this situation. Based on the observation that the RGB color channels receive raindrop radiance equally, we introduce a residue channel as a new data constraint to reduce the effect of rain streaks. To handle rain accumulation, our method decomposes the image into a piecewise-smooth background layer and a high-frequency detail layer. It also enforces the BCC on the background layer only. Results on both synthetic dataset and real images show that our algorithm outperforms existing methods on different types of rain sequences. To our knowledge, this is the first optical flow method specifically dealing with rain.
Mileva et al. @cite_19 propose an illumination-robust variational method using colour-space transformations to handle shadows and highlights. Unfortunately, neither the HSV colour space nor the other transformed colour spaces they consider yield measures that are invariant to rain streaks, and hence they cannot be directly applied to rainy scenes.
{ "cite_N": [ "@cite_19" ], "mid": [ "1590329763" ], "abstract": [ "Since years variational methods belong to the most accurate techniques for computing the optical flow in image sequences. However, if based on the grey value constancy assumption only, such techniques are not robust enough to cope with typical illumination changes in real-world data. In our paper we tackle this problem in two ways: First we discuss different photometric invariants for the design of illumination-robust variational optical flow methods. These invariants are based on colour information and include such concepts as spherical conical transforms, normalisation strategies and the differentiation of logarithms. Secondly, we embed them into a suitable multichannel generalisation of a highly accurate variational optical flow technique. This in turn allows us to access the true potential of such invariants for estimating the optical flow. Experiments with synthetic and real-world data demonstrate the success of combining accuracy and robustness: Even under strongly varying illumination, reliable and precise results are obtained." ] }
1704.05239
2609075913
Optical flow estimation in the rainy scenes is challenging due to background degradation introduced by rain streaks and rain accumulation effects in the scene. Rain accumulation effect refers to poor visibility of remote objects due to the intense rainfall. Most existing optical flow methods are erroneous when applied to rain sequences because the conventional brightness constancy constraint (BCC) and gradient constancy constraint (GCC) generally break down in this situation. Based on the observation that the RGB color channels receive raindrop radiance equally, we introduce a residue channel as a new data constraint to reduce the effect of rain streaks. To handle rain accumulation, our method decomposes the image into a piecewise-smooth background layer and a high-frequency detail layer. It also enforces the BCC on the background layer only. Results on both synthetic dataset and real images show that our algorithm outperforms existing methods on different types of rain sequences. To our knowledge, this is the first optical flow method specifically dealing with rain.
A number of single-image rain streak removal methods have been proposed @cite_8 @cite_17 @cite_2 . The method of @cite_8 decomposes an input image into low-frequency (rain-streak-free) and high-frequency components, and subsequently extracts geometric details from the high-frequency component to recover the de-rained image. The method of @cite_17 decomposes the rain image into a rain-free background layer and a rain streak layer, solving this formulation by introducing GMM priors for the background and the rain streaks. The method of @cite_2 uses a convolutional neural network to learn binary rain-region features and rain streak intensity features. In the outputs of these deraining methods @cite_8 @cite_17 @cite_2 , the rain streak regions are blurred and background geometric details can be lost. Hence, the derained sequences violate both the BCC and the GCC. Our experiments show that an existing optical flow method with a state-of-the-art deraining pre-processing step does not work properly, due to the artifacts introduced by the deraining algorithms.
{ "cite_N": [ "@cite_2", "@cite_17", "@cite_8" ], "mid": [ "2525037006", "2466666260", "2121396509" ], "abstract": [ "In this paper, we address a rain removal problem from a single image, even in the presence of heavy rain and rain accumulation. Our core ideas lie in our new rain image models and a novel deep learning architecture. We first modify the commonly used model, which is a linear combination of a rain streak layer and a background layer, by adding a binary map that locates rain streak regions. Second, we create a model consisting of a component representing rain accumulation (where individual streaks cannot be seen, and thus visually similar to mist or fog), and another component representing various shapes and directions of overlapping rain streaks, which normally happen in heavy rain. Based on the first model, we develop a multi-task deep learning architecture that learns the binary rain streak map, the appearance of rain streaks, and the clean background, which is our ultimate output. The additional binary map is critically beneficial, since its loss function can provide additional strong information to the network. In many cases though, rain streaks can be dense and large in their size, thus to obtain the clean background, we need spatial contextual information. For this, we utilize the dilated convolution. To handle rain accumulation (again, a phenomenon visually similar to mist or fog) and various shapes and directions of overlapping rain streaks, we propose an iterative information feedback (IIF) network that removes rain streaks and clears up the rain accumulation iteratively and progressively. Overall, this multi-task learning and iterative information feedback benefits each other and constitutes a network that is end-to-end trainable. 
Our extensive evaluation on real images, particularly on heavy rain, shows the effectiveness of our novel models and architecture, outperforming the state-of-the-art methods significantly.", "This paper addresses the problem of rain streak removal from a single image. Rain streaks impair visibility of an image and introduce undesirable interference that can severely affect the performance of computer vision algorithms. Rain streak removal can be formulated as a layer decomposition problem, with a rain streak layer superimposed on a background layer containing the true scene content. Existing decomposition methods that address this problem employ either dictionary learning methods or impose a low rank structure on the appearance of the rain streaks. While these methods can improve the overall visibility, they tend to leave too many rain streaks in the background image or over-smooth the background image. In this paper, we propose an effective method that uses simple patch-based priors for both the background and rain layers. These priors are based on Gaussian mixture models and can accommodate multiple orientations and scales of the rain streaks. This simple approach removes rain streaks better than the existing methods qualitatively and quantitatively. We overview our method and demonstrate its effectiveness over prior work on a number of examples.", "Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. 
Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a “rain component” and a “nonrain component” by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm." ] }
1704.05416
2608966276
We study the problem of deblurring light fields of general 3D scenes captured under 3D camera motion and present both theoretical and practical contributions. By analyzing the motion-blurred light field in the primal and Fourier domains, we develop intuition into the effects of camera motion on the light field, show the advantages of capturing a 4D light field instead of a conventional 2D image for motion deblurring, and derive simple methods of motion deblurring in certain cases. We then present an algorithm to blindly deblur light fields of general scenes without any estimation of scene geometry, and demonstrate that we can recover both the sharp light field and the 3D camera motion path of real and synthetically-blurred light fields.
Blind motion deblurring, i.e., removing the motion blur given just a noisy blurred image, is a very challenging problem that has been studied extensively (see @cite_11 for a recent review and comparison of various algorithms). Representative methods for single-image blind deblurring include the variational Bayes approaches of Fergus et al. @cite_17 and Levin et al. @cite_29 , and algorithms using novel image priors such as normalized sparsity @cite_34 , an evolving approximation to the @math norm @cite_25 , and @math norms on both image gradients and intensities @cite_1 .
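The normalized-sparsity prior @cite_34 scores an image by the l1/l2 ratio of its gradients; unlike the plain l1 norm, this ratio decreases rather than increases for sharper images, so its minimum favours the true sharp solution. A minimal sketch follows, where the test image and the box blur are illustrative stand-ins:

```python
import numpy as np

def blur1d(a, k=5, axis=0):
    """Simple box blur along one axis with edge padding."""
    pad = [(0, 0), (0, 0)]
    pad[axis] = (k // 2, k // 2)
    p = np.pad(a, pad, mode="edge")
    out = np.zeros_like(a, dtype=float)
    for i in range(k):
        out += np.take(p, range(i, i + a.shape[axis]), axis=axis)
    return out / k

def normalized_sparsity(img):
    """l1/l2 ratio of the image gradients: lower values mean sharper images."""
    g = np.concatenate([np.diff(img, axis=0).ravel(),
                        np.diff(img, axis=1).ravel()])
    return np.abs(g).sum() / (np.linalg.norm(g) + 1e-12)

sharp = np.zeros((32, 32))
sharp[8:24, 8:24] = 1.0                    # crisp square: few, strong edges
blurred = blur1d(blur1d(sharp, axis=0), axis=1)
# Blurring spreads each edge: the l1 mass of the gradients is roughly
# preserved while their l2 norm drops, so the l1/l2 score rises.
```

In the cited work this score is used as the regularizer inside the deblurring objective; here it is only evaluated to show that it separates a sharp image from its blurred copy.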
{ "cite_N": [ "@cite_11", "@cite_29", "@cite_1", "@cite_34", "@cite_25", "@cite_17" ], "mid": [ "2465552163", "2036682493", "2047123483", "1987075379", "2167307343", "" ], "abstract": [ "Numerous single image blind deblurring algorithms have been proposed to restore latent sharp images under camera motion. However, these algorithms are mainly evaluated using either synthetic datasets or few selected real blurred images. It is thus unclear how these algorithms would perform on images acquired \"in the wild\" and how we could gauge the progress in the field. In this paper, we aim to bridge this gap. We present the first comprehensive perceptual study and analysis of single image blind deblurring using real-world blurred images. First, we collect a dataset of real blurred images and a dataset of synthetically blurred images. Using these datasets, we conduct a large-scale user study to quantify the performance of several representative state-of-the-art blind deblurring algorithms. Second, we systematically analyze subject preferences, including the level of agreement, significance tests of score differences, and rationales for preferring one method over another. Third, we study the correlation between human subjective scores and several full-reference and no-reference image quality metrics. Our evaluation and analysis indicate the performance gap between synthetically blurred images and real blurred images and sheds light on future research in single image blind deblurring.", "In blind deconvolution one aims to estimate from an input blurred image y a sharp image x and an unknown blur kernel k. Recent research shows that a key to success is to consider the overall shape of the posterior distribution p(x, k) and not only its mode. This leads to a distinction between MAP_{x,k} strategies which estimate the mode pair (x, k) and often lead to undesired results, and MAP_k strategies which select the best k while marginalizing over all possible x images. 
The MAP_k principle is significantly more robust than the MAP_{x,k} one, yet, it involves a challenging marginalization over latent images. As a result, MAP_k techniques are considered complicated, and have not been widely exploited. This paper derives a simple approximated MAP_k algorithm which involves only a modest modification of common MAP_{x,k} algorithms. We show that MAP_k can, in fact, be optimized easily, with no additional computational complexity.", "We propose a simple yet effective L0-regularized prior based on intensity and gradient for text image deblurring. The proposed image prior is motivated by observing distinct properties of text images. Based on this prior, we develop an efficient optimization method to generate reliable intermediate results for kernel estimation. The proposed method does not require any complex filtering strategies to select salient edges which are critical to the state-of-the-art deblurring algorithms. We discuss the relationship with other deblurring algorithms based on edge selection and provide insight on how to select salient edges in a more principled way. In the final latent image restoration step, we develop a simple method to remove artifacts and render better deblurred images. Experimental results demonstrate that the proposed algorithm performs favorably against the state-of-the-art text image deblurring methods. In addition, we show that the proposed method can be effectively applied to deblur low-illumination images.", "Blind image deconvolution is an ill-posed problem that requires regularization to solve. However, many common forms of image prior used in this setting have a major drawback in that the minimum of the resulting cost function does not correspond to the true sharp solution. Accordingly, a range of additional methods are needed to yield good results (Bayesian methods, adaptive cost functions, alpha-matte extraction and edge localization). 
In this paper we introduce a new type of image regularization which gives lowest cost for the true sharp image. This allows a very simple cost formulation to be used for the blind deconvolution model, obviating the need for additional methods. Due to its simplicity the algorithm is fast and very robust. We demonstrate our method on real images with both spatially invariant and spatially varying blur.", "We show in this paper that the success of previous maximum a posterior (MAP) based blur removal methods partly stems from their respective intermediate steps, which implicitly or explicitly create an unnatural representation containing salient image structures. We propose a generalized and mathematically sound L0 sparse expression, together with a new effective method, for motion deblurring. Our system does not require extra filtering during optimization and demonstrates fast energy decreasing, making a small number of iterations enough for convergence. It also provides a unified framework for both uniform and non-uniform motion deblurring. We extensively validate our method and show comparison with other approaches with respect to convergence speed, running time, and result quality.", "" ] }
1704.05416
2608966276
We study the problem of deblurring light fields of general 3D scenes captured under 3D camera motion and present both theoretical and practical contributions. By analyzing the motion-blurred light field in the primal and Fourier domains, we develop intuition into the effects of camera motion on the light field, show the advantages of capturing a 4D light field instead of a conventional 2D image for motion deblurring, and derive simple methods of motion deblurring in certain cases. We then present an algorithm to blindly deblur light fields of general scenes without any estimation of scene geometry, and demonstrate that we can recover both the sharp light field and the 3D camera motion path of real and synthetically-blurred light fields.
Concurrently with our work, Dansereau et al. @cite_2 introduced a non-blind algorithm to deblur light fields captured with known camera motion.
{ "cite_N": [ "@cite_2" ], "mid": [ "2444242407" ], "abstract": [ "We generalize Richardson-Lucy deblurring to 4-D light fields by replacing the convolution steps with light field rendering of motion blur. The method deals correctly with blur caused by 6-degree-of-freedom camera motion in complex 3-D scenes, without performing depth estimation. We include a novel regularization term that maintains parallax information in the light field, and employ 4-D anisotropic total variation to reduce noise and ringing. We demonstrate the method operating effectively on rendered scenes and scenes captured using an off-the-shelf light field camera mounted on an industrial robot arm. Examples include complex 3-D geometry and cover all major classes of camera motion. Both qualitative and quantitative results confirm the effectiveness of the method over a range of conditions, including commonly occurring cases for which previously published methods fail. We include mathematical proof that the algorithm converges to the maximum-likelihood estimate of the unblurred scene under Poisson noise." ] }
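The Richardson-Lucy scheme that this concurrent work generalises to 4-D light fields reduces, in 1-D with a known kernel, to a simple multiplicative update. The kernel, signal, and iteration count below are illustrative assumptions, not the cited light-field formulation.

```python
import numpy as np

def conv_same(x, k):
    return np.convolve(x, k, mode="same")

def richardson_lucy(y, k, iters=200):
    """Classical 1-D Richardson-Lucy deconvolution (non-blind, known kernel)."""
    x = np.full_like(y, y.mean())            # flat, positive initialisation
    k_flip = k[::-1]                         # adjoint of the blur operator
    for _ in range(iters):
        est = conv_same(x, k) + 1e-12        # current blurred estimate
        x = x * conv_same(y / est, k_flip)   # multiplicative RL update
    return x

kernel = np.array([0.25, 0.5, 0.25])         # small motion-blur stand-in
truth = np.zeros(32)
truth[10], truth[20] = 1.0, 2.0              # two point sources
y = conv_same(truth, kernel)                 # blurred observation
x_hat = richardson_lucy(y, kernel)
```

The update keeps the estimate non-negative and approximately flux-preserving, which is why RL converges to the Poisson maximum-likelihood estimate mentioned in the abstract; the light-field version replaces the two convolutions with rendering of motion blur and its adjoint.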
1704.05125
2606570409
In this paper, we present a new and significant theoretical discovery. If the absolute height difference between base station (BS) antenna and user equipment (UE) antenna is larger than zero, then the network performance in terms of both the coverage probability and the area spectral efficiency (ASE) will continuously decrease toward zero as the BS density increases for ultra-dense (UD) small cell networks (SCNs). Such findings are completely different from the conclusions in existing works, both quantitatively and qualitatively. In particular, this performance behavior has a tremendous impact on the deployment of UD SCNs in the 5th-generation (5G) era. Network operators may invest large amounts of money in deploying more network infrastructure to only obtain an even less network capacity. Our study results reveal that one way to address this issue is to lower the SCN BS antenna height to the UE antenna height. However, this requires a revolutionized approach of BS architecture and deployment, which is explored in this paper too.
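The capacity crash described above has a simple geometric core: with an absolute antenna height difference L, the BS-UE distance is at least sqrt(r^2 + L^2) >= L, so the serving-link path gain is capped at L^(-alpha) while the aggregate interference keeps growing with BS density. The Monte-Carlo sketch below illustrates only this geometry; the densities, the 8.5 m height gap, and the path-loss exponent are illustrative assumptions, and fading is not modelled.

```python
import numpy as np

def coverage(density, height_diff, alpha=3.5, radius=500.0,
             sir_thresh=1.0, trials=300, seed=0):
    """Monte-Carlo P(SIR > thresh) for a UE at the centre of a BS PPP.

    Pure path loss: gains use the 3-D separation sqrt(r^2 + L^2), so the
    serving link can never be stronger than height_diff ** (-alpha).
    """
    rng = np.random.default_rng(seed)
    area = np.pi * radius ** 2
    hits = 0
    for _ in range(trials):
        n = max(rng.poisson(density * area), 2)
        r = radius * np.sqrt(rng.uniform(size=n))   # uniform on the disc
        gain = (r ** 2 + height_diff ** 2) ** (-alpha / 2)
        serving = gain.max()                        # nearest-BS association
        hits += serving / (gain.sum() - serving) > sir_thresh
    return hits / trials

sparse = coverage(1e-4, 8.5)   # 100 BS per km^2, 8.5 m antenna height gap
dense = coverage(1e-2, 8.5)    # 10000 BS per km^2: coverage collapses
```

At the sparse density the nearest BS is far closer than all interferers, so the SIR is usually favourable; at the ultra-dense one the capped serving gain is swamped by the many comparable interferers, matching the trend the abstract describes.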
Another research area related to the antenna height issue is that of unmanned aerial vehicles (UAVs), which have attracted significant attention as key enablers of rapid network deployment, where the antenna heights of drone BSs and ground UEs are usually considered @cite_9 @cite_22 @cite_6 @cite_17 . Generally speaking, the works on drone BSs put much emphasis on the 3D mobility of UAVs and try to numerically find the optimal deployment position and height for a small area involving just one or a few flying BSs. In contrast, our work considers a large-scale, randomly deployed and stationary cellular network, paying special attention to the capacity scaling law of the whole network.
{ "cite_N": [ "@cite_9", "@cite_22", "@cite_6", "@cite_17" ], "mid": [ "", "2039409843", "2963639181", "2226130968" ], "abstract": [ "", "We consider a collection of single-antenna ground nodes communicating with a multi-antenna unmanned aerial vehicle (UAV) over a multiple-access ground-to-air communications link. The UAV uses beamforming to mitigate inter-user interference and achieve spatial division multiple access (SDMA). First, we consider a simple scenario with two static ground nodes and analytically investigate the effect of the UAV's heading on the system sum rate. We then study a more general setting with multiple mobile ground-based terminals, and develop an algorithm for dynamically adjusting the UAV heading to maximize the approximate ergodic sum rate of the uplink channel, using a prediction filter to track the positions of the mobile ground nodes. For the common scenario where a strong line-of-sight (LOS) channel exists between the ground nodes and UAV, we use an asymptotic analysis to find simplified versions of the algorithm for low and high SNR. We present simulation results that demonstrate the benefits of adapting the UAV heading in order to optimize the uplink communications performance. The simulation results also show that the simplified algorithms provide near-optimal performance.", "Thanks to the recent advancements in drone technology, it has become viable and cost-effective to quickly deploy small cells in areas of urgent needs by using a drone as a cellular base station. In this paper, we explore the benefit of dynamically repositioning the drone base station in the air to reduce the distance between the BS and the mobile user equipment, thereby improving the spectral efficiency of the small cell. In particular, we propose algorithms to autonomously control the repositioning of the drone in response to users activities and movements. 
We demonstrate that, compared to a drone hovering at a fixed location, dynamic repositioning of the drone moving with high speed can increase spectral efficiency by 15%. However, considering the tradeoff between spectral efficiency and energy efficiency of the drone, we show that a 10.5% spectral efficiency gain can be obtained without negatively affecting energy consumption of the drone.", "The use of drone small cells (DSCs) which are aerial wireless base stations that can be mounted on flying devices such as unmanned aerial vehicles (UAVs), is emerging as an effective technique for providing wireless services to ground users in a variety of scenarios. The efficient deployment of such DSCs while optimizing the covered area is one of the key design challenges. In this paper, considering the low altitude platform (LAP), the downlink coverage performance of DSCs is investigated. The optimal DSC altitude which leads to a maximum ground coverage and minimum required transmit power for a single DSC is derived. Furthermore, the problem of providing a maximum coverage for a certain geographical area using two DSCs is investigated in two scenarios: interference free and full interference between DSCs. The impact of the distance between DSCs on the coverage area is studied and the optimal distance between DSCs resulting in maximum coverage is derived. Numerical results verify our analytical results on the existence of optimal DSC altitude and separation distance and provide insights on the optimal deployment of DSCs to supplement wireless network coverage." ] }
1704.05188
2951270658
Most existing weakly supervised localization (WSL) approaches learn detectors by finding positive bounding boxes based on features learned with image-level supervision. However, those features do not contain spatial location related information and usually provide poor-quality positive samples for training a detector. To overcome this issue, we propose a deep self-taught learning approach, which makes the detector learn the object-level features reliable for acquiring tight positive samples and afterwards re-train itself based on them. Consequently, the detector progressively improves its detection ability and localizes more informative positive samples. To implement such self-taught learning, we propose a seed sample acquisition method via image-to-object transferring and dense subgraph discovery to find reliable positive samples for initializing the detector. An online supportive sample harvesting scheme is further proposed to dynamically select the most confident tight positive samples and train the detector in a mutual boosting way. To prevent the detector from being trapped in poor optima due to overfitting, we propose a new relative improvement of predicted CNN scores for guiding the self-taught learning process. Extensive experiments on PASCAL 2007 and 2012 show that our approach outperforms the state-of-the-arts, strongly validating its effectiveness.
The majority of existing methods formulate WSL as an MIL problem. Given weak image-level supervision, these methods typically alternate between learning a discriminative representation of the object and selecting the positive object samples in positive images based on this representation. However, this results in a non-convex optimization problem, so these methods are prone to being trapped in local optima, and their solutions are sensitive to the initial positive samples. Many efforts have been made to address this issue. Deselaers et al. @cite_31 initialized object locations using the objectness method @cite_0 . Siva et al. @cite_5 selected positive samples by maximizing the distances between the positive samples and those in negative images. Bilen et al. @cite_25 proposed a smoothed version of MIL that softly labels object proposals instead of choosing the highest-scoring ones. Song et al. @cite_22 proposed a graph-based method to initialize object locations by solving a submodular cover problem. Wang et al. @cite_15 proposed a latent semantic clustering method that selects the most discriminative cluster for each class based on probabilistic Latent Semantic Analysis (pLSA).
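The alternation these MIL methods share (select the positive instances under the current model, then re-fit the model on the selection) can be sketched on toy data. The bag construction, the least-squares linear scorer, and all sizes below are illustrative assumptions, not any cited method.

```python
import numpy as np

rng = np.random.default_rng(3)
DIM, N_INST = 8, 5

def make_bag(positive):
    """Background instances (mean 0); positive bags hide one object (mean +2)."""
    x = rng.normal(size=(N_INST, DIM))
    if positive:
        x[rng.integers(N_INST)] += 2.0
    return x

pos_bags = [make_bag(True) for _ in range(20)]
neg_bags = [make_bag(False) for _ in range(20)]
neg_x = np.concatenate(neg_bags)           # every instance in a negative bag
w = 0.01 * rng.normal(size=DIM)

for _ in range(15):                        # MIL alternation
    # Step 1: pick the highest-scoring instance in each positive bag.
    pos_x = np.array([b[np.argmax(b @ w)] for b in pos_bags])
    # Step 2: re-fit a least-squares linear scorer on the current selection.
    X = np.vstack([pos_x, neg_x])
    y = np.concatenate([np.ones(len(pos_x)), -np.ones(len(neg_x))])
    w = np.linalg.lstsq(X, y, rcond=None)[0]

gap = (pos_x @ w).mean() - (neg_x @ w).mean()   # selected objects score higher
```

The non-convexity discussed above is visible here: the selection in step 1 depends on the scorer from step 2 and vice versa, so a poor initial `w` can lock the loop onto background instances, which is exactly what the cited initialization strategies try to prevent.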
{ "cite_N": [ "@cite_22", "@cite_15", "@cite_0", "@cite_5", "@cite_31", "@cite_25" ], "mid": [ "2952072685", "", "2066624635", "1575299770", "1952794764", "" ], "abstract": [ "Learning to localize objects with minimal supervision is an important problem in computer vision, since large fully annotated datasets are extremely costly to obtain. In this paper, we propose a new method that achieves this goal with only image-level labels of whether the objects are present or not. Our approach combines a discriminative submodular cover problem for automatically discovering a set of positive object windows with a smoothed latent SVM formulation. The latter allows us to leverage efficient quasi-Newton optimization techniques. Our experiments demonstrate that the proposed approach provides a 50% relative improvement in mean average precision over the current state-of-the-art on PASCAL VOC 2007 detection.", "", "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. 
In the first, we sample a small number of windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.", "We propose a novel approach to annotating weakly labelled data. In contrast to many existing approaches that perform annotation by seeking clusters of self-similar exemplars (minimising intra-class variance), we perform image annotation by selecting exemplars that have never occurred before in the much larger, and strongly annotated, negative training set (maximising inter-class variance). Compared to existing methods, our approach is fast, robust, and obtains state of the art results on two challenging data-sets --- voc2007 (all poses), and the msr2 action data-set, where we obtain a 10% increase. Moreover, this use of negative mining complements existing methods, that seek to minimize the intra-class variance, and can be readily integrated with many of them.", "Learning a new object class from cluttered training images is very challenging when the location of object instances is unknown. Previous works generally require objects covering a large portion of the images. We present a novel approach that can cope with extensive clutter as well as large scale and appearance variations between object instances. 
To make this possible we propose a conditional random field that starts from generic knowledge and then progressively adapts to the new class. Our approach simultaneously localizes object instances while learning an appearance model specific for the class. We demonstrate this on the challenging PASCAL VOC 2007 dataset. Furthermore, our method enables to train any state-of-the-art object detector in a weakly supervised fashion, although it would normally require object location annotations.", "" ] }
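The alternating MIL scheme described above (fit a discriminative model on the currently selected positives, then re-select the top-scoring proposal in each positive image) can be sketched with a toy linear scorer. This is a minimal illustration, not any cited paper's actual implementation: the function name, the logistic-regression step, and the naive first-proposal initialization are all assumptions; the cited methods replace that initialization with objectness priors or negative mining precisely because this loop is sensitive to it.

```python
import numpy as np

def mil_alternation(pos_bags, neg_bags, n_iters=10, lr=0.1):
    """Toy alternating optimization for MIL-based weakly supervised
    localization. pos_bags / neg_bags: lists of (n_proposals, d) arrays
    of proposal features, one array per image."""
    d = pos_bags[0].shape[1]
    w = np.zeros(d)
    # Naive initialization: pick the first proposal in each positive bag.
    # (The cited methods use objectness or negative mining instead.)
    selected = [0] * len(pos_bags)
    for _ in range(n_iters):
        # Step 1: fit a linear scorer (logistic regression by gradient
        # descent) on the selected positives vs. all negative proposals.
        X = np.vstack([bag[i] for bag, i in zip(pos_bags, selected)]
                      + [bag for bag in neg_bags])
        y = np.concatenate([np.ones(len(pos_bags)),
                            np.zeros(sum(len(b) for b in neg_bags))])
        for _ in range(100):
            p = 1.0 / (1.0 + np.exp(-X @ w))
            w -= lr * X.T @ (p - y) / len(y)
        # Step 2: re-select the highest-scoring proposal per positive image.
        selected = [int(np.argmax(bag @ w)) for bag in pos_bags]
    return w, selected
```

The loop is non-convex in exactly the sense the paragraph describes: step 2 changes the training set of step 1, so a bad initial selection can lock the scorer onto background proposals.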
1704.05188
2951270658
Most existing weakly supervised localization (WSL) approaches learn detectors by finding positive bounding boxes based on features learned with image-level supervision. However, those features do not contain spatial location related information and usually provide poor-quality positive samples for training a detector. To overcome this issue, we propose a deep self-taught learning approach, which makes the detector learn the object-level features reliable for acquiring tight positive samples and afterwards re-train itself based on them. Consequently, the detector progressively improves its detection ability and localizes more informative positive samples. To implement such self-taught learning, we propose a seed sample acquisition method via image-to-object transferring and dense subgraph discovery to find reliable positive samples for initializing the detector. An online supportive sample harvesting scheme is further proposed to dynamically select the most confident tight positive samples and train the detector in a mutual boosting way. To prevent the detector from being trapped in poor optima due to overfitting, we propose a new relative improvement of predicted CNN scores for guiding the self-taught learning process. Extensive experiments on PASCAL 2007 and 2012 show that our approach outperforms the state of the art, strongly validating its effectiveness.
Perhaps @cite_14 is the closest work to ours. @cite_14 first trains a whole-image multi-label classification network and then selects confident class-specific proposals with a mask-out strategy and MIL. Finally, a Fast R-CNN detector is trained on these proposals. However, the whole-image classification in @cite_14 may not provide suitable features for object localization, which requires tight spatial coverage of the whole object instance. Additionally, @cite_14 uses an SVM in its MIL step, which has inferior discriminative ability compared to the regional CNN detector. In contrast, our approach overcomes this weakness by performing image-to-object transferring during multi-label image classification and online supportive sample harvesting in regional CNN detector learning.
{ "cite_N": [ "@cite_14" ], "mid": [ "2441255125" ], "abstract": [ "We address the problem of weakly supervised object localization where only image-level annotations are available for training. Many existing approaches tackle this problem through object proposal mining. However, a substantial amount of noise in object proposals causes ambiguities for learning discriminative object models. Such approaches are sensitive to model initialization and often converge to an undesirable local minimum. In this paper, we address this problem by progressive domain adaptation with two main steps: classification adaptation and detection adaptation. In classification adaptation, we transfer a pre-trained network to our multi-label classification task for recognizing the presence of a certain object in an image. In detection adaptation, we first use a mask-out strategy to collect class-specific object proposals and apply multiple instance learning to mine confident candidates. We then use these selected object proposals to fine-tune all the layers, resulting in a fully adapted detection network. We extensively evaluate the localization performance on the PASCAL VOC and ILSVRC datasets and demonstrate significant performance improvement over the state-of-the-art methods." ] }
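The self-taught loop in the abstract above (initialize from seed samples, re-train, then harvest new confident positives only when the detector's score improves enough relative to the previous selection) can be sketched schematically. This is a simplified illustration under stated assumptions: `train_fn`/`score_fn` stand in for regional CNN training and scoring, the `gamma` threshold is a crude stand-in for the paper's relative-improvement criterion, and all names are hypothetical.

```python
import numpy as np

def self_taught_rounds(bags, seeds, train_fn, score_fn, n_rounds=3, gamma=0.2):
    """Schematic self-taught detector training.

    bags: list of (n_proposals, d) proposal-feature arrays, one per image.
    seeds: initial positive proposal index per image (e.g. from seed
           sample acquisition); -1 marks images with no reliable seed.
    train_fn(X) -> model; score_fn(model, bag) -> per-proposal scores.
    """
    selected = list(seeds)
    prev_scores = [None] * len(bags)
    for _ in range(n_rounds):
        # Re-train on the currently selected positive proposals.
        X = np.vstack([bag[i] for bag, i in zip(bags, selected) if i >= 0])
        model = train_fn(X)
        for k, bag in enumerate(bags):
            s = score_fn(model, bag)
            best = int(np.argmax(s))
            old = prev_scores[k]
            # Harvest the new top proposal only on sufficient relative
            # improvement, guarding against drift toward overfit selections.
            if old is None or s[best] >= (1 + gamma) * max(old, 1e-8):
                selected[k] = best
                prev_scores[k] = float(s[best])
    return selected
```

With a nearest-mean scorer as `train_fn`/`score_fn`, a couple of reliable seeds are enough for the loop to propagate selections to the unseeded images, which mirrors the mutual-boosting behavior described above.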