| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1708.00153 | 2741868055 | Being intensively studied, visual tracking has seen great recent advances in either speed (e.g., with correlation filters) or accuracy (e.g., with deep features). Real-time and high accuracy tracking algorithms, however, remain scarce. In this paper we study the problem from a new perspective and present a novel parallel tracking and verifying (PTAV) framework, by taking advantage of the ubiquity of multi-thread techniques and borrowing from the success of parallel tracking and mapping in visual SLAM. Our PTAV framework typically consists of two components, a tracker T and a verifier V, working in parallel on two separate threads. The tracker T aims to provide a super real-time tracking inference and is expected to perform well most of the time; by contrast, the verifier V checks the tracking results and corrects T when needed. The key innovation is that V does not work on every frame but only upon the requests from T; on the other end, T may adjust the tracking according to the feedback from V. With such collaboration, PTAV enjoys both the high efficiency provided by T and the strong discriminative power of V. In our extensive experiments on popular benchmarks including OTB2013, OTB2015, TC128 and UAV20L, PTAV achieves the best tracking accuracy among all real-time trackers, and in fact performs even better than many deep learning based solutions. Moreover, as a general framework, PTAV is very flexible and has great room for improvement and generalization. | Related tracking algorithms. Existing model-free visual tracking algorithms are often categorized as either discriminative or generative. Discriminative algorithms usually treat tracking as a classification problem that distinguishes the target from the ever-changing background. 
The classifiers in these methods are learned by, e.g., multiple instance learning (MIL) @cite_11 , compressive sensing @cite_41 , P-N learning @cite_37 , structured output SVMs @cite_18 , on-line boosting @cite_30 and so on. By contrast, generative trackers usually formulate tracking as searching for regions most similar to the target. To this end, various object appearance modeling approaches have been proposed, such as incremental subspace learning @cite_26 and sparse representation @cite_34 @cite_8 @cite_42 . Inspired by the power of deep features in visual recognition @cite_1 @cite_14 , some trackers @cite_6 @cite_5 @cite_33 @cite_4 utilize deep features for robust object appearance modeling. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_18",
"@cite_26",
"@cite_14",
"@cite_33",
"@cite_8",
"@cite_41",
"@cite_4",
"@cite_42",
"@cite_1",
"@cite_6",
"@cite_5",
"@cite_34",
"@cite_11"
],
"mid": [
"1807914171",
"",
"2343187456",
"2139047213",
"1686810756",
"",
"2060814785",
"2165037244",
"2118097920",
"2342873618",
"",
"2214352687",
"1857884451",
"2183648259",
"2109579504"
],
"abstract": [
"Recently, on-line adaptation of binary classifiers for tracking has been investigated. On-line learning allows for simple classifiers since only the current view of the object needs to be discriminated from its surrounding background. However, on-line adaptation faces one key problem: each update of the tracker may introduce an error which, finally, can lead to tracking failure (drifting). The contribution of this paper is a novel on-line semi-supervised boosting method which significantly alleviates the drifting problem in tracking applications. This allows limiting the drifting problem while still staying adaptive to appearance changes. The main idea is to formulate the update process in a semi-supervised fashion as a combined decision of a given prior and an on-line classifier. This comes without any parameter tuning. In the experiments, we demonstrate real-time tracking of our SemiBoost tracker on several challenging test sequences where our tracker outperforms other on-line tracking methods.",
"",
"Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we avoid the need for an intermediate classification step. Our method uses a kernelised structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow our tracker to run at high frame rates, we (a) introduce a budgeting mechanism that prevents the unbounded growth in the number of support vectors that would otherwise occur during tracking, and (b) show how to implement tracking on the GPU. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased tracking performance.",
"Visual tracking, in essence, deals with non-stationary image streams that change over time. While most existing algorithms are able to track objects well in controlled environments, they usually fail in the presence of significant variation of the object's appearance or surrounding illumination. One reason for such failures is that many algorithms employ fixed appearance models of the target. Such models are trained using only appearance data available before tracking begins, which in practice limits the range of appearances that are modeled, and ignores the large volume of information (such as shape changes or specific lighting conditions) that becomes available during tracking. In this paper, we present a tracking method that incrementally learns a low-dimensional subspace representation, efficiently adapting online to changes in the appearance of the target. The model update, based on incremental algorithms for principal component analysis, includes two important features: a method for correctly updating the sample mean, and a forgetting factor to ensure less modeling power is expended fitting older observations. Both of these features contribute measurably to improving overall tracking performance. Numerous experiments demonstrate the effectiveness of the proposed tracking algorithm in indoor and outdoor environments where the target objects undergo large changes in pose, scale, and illumination.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"",
"Recently sparse representation has been applied to visual tracking by modeling the target appearance using a sparse approximation over a template set, which leads to the so-called L1 trackers, as they need to solve an ℓ1-norm-related minimization problem many times. While these L1 trackers showed impressive tracking accuracies, they are very computationally demanding and the speed bottleneck is the solver for ℓ1-norm minimizations. This paper aims at developing an L1 tracker that not only runs in real time but also enjoys better robustness than other L1 trackers. In our proposed L1 tracker, a new ℓ1-norm-related minimization model is proposed to improve the tracking accuracy by adding an ℓ1-norm regularization on the coefficients associated with the trivial templates. Moreover, based on the accelerated proximal gradient approach, a very fast numerical solver is developed to solve the resulting ℓ1-norm-related minimization problem with guaranteed quadratic convergence. The great running time efficiency and tracking accuracy of the proposed tracker are validated with a comprehensive evaluation involving eight challenging sequences and five alternative state-of-the-art trackers.",
"It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. While much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter the drift problems. As a result of self-taught learning, these mis-aligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from the multi-scale image feature space with data-independent basis. Our appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is adopted to efficiently extract the features for the appearance model. We compress samples of foreground targets and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art algorithms on challenging sequences in terms of efficiency, accuracy and robustness.",
"In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked de-noising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU).",
"Dictionary learning for sparse representation has been increasingly applied to object tracking, however, the existing methods only utilize one modality of the object to learn a single dictionary. In this paper, we propose a robust tracking method based on multitask joint dictionary learning. Through extracting different features of the target, multiple linear sparse representations are obtained. Each sparse representation can be learned by a corresponding dictionary. Instead of separately learning the multiple dictionaries, we adopt a multitask learning approach to learn the multiple linear sparse representations, which provide additional useful information to the classification problem. Because different tasks may favor different sparse representation coefficients, yet the joint sparsity may enforce the robustness in coefficient estimation. During tracking, a classifier is constructed based on a joint linear representation, and the candidate with the smallest joint decision error is selected to be the tracked object. In addition, reliable tracking results and augmented training samples are accumulated into two sets to update the dictionaries for classification, which helps our tracker adapt to the fast time-varying object appearance. Both qualitative and quantitative evaluations on CVPR2013 visual tracking benchmark demonstrate that our method performs favorably against state-of-the-art trackers.",
"",
"Visual object tracking is challenging as target objects often undergo significant appearance changes caused by deformation, abrupt motion, background clutter and occlusion. In this paper, we exploit features extracted from deep convolutional neural networks trained on object recognition datasets to improve tracking accuracy and robustness. The outputs of the last convolutional layers encode the semantic information of targets and such representations are robust to significant appearance variations. However, their spatial resolution is too coarse to precisely localize targets. In contrast, earlier convolutional layers provide more precise localization but are less invariant to appearance changes. We interpret the hierarchies of convolutional layers as a nonlinear counterpart of an image pyramid representation and exploit these multiple levels of abstraction for visual tracking. Specifically, we adaptively learn correlation filters on each convolutional layer to encode the target appearance. We hierarchically infer the maximum response of each layer to locate targets. Extensive experimental results on a large-scale benchmark dataset show that the proposed algorithm performs favorably against state-of-the-art methods.",
"We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking groundtruths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify target in each domain. We train each domain in the network iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm illustrates outstanding performance in existing tracking benchmarks.",
"In this paper we propose a robust visual tracking method by casting tracking as a sparse approximation problem in a particle filter framework. In this framework, occlusion, corruption and other challenging issues are addressed seamlessly through a set of trivial templates. Specifically, to find the tracking target at a new frame, each target candidate is sparsely represented in the space spanned by target templates and trivial templates. The sparsity is achieved by solving an ℓ1-regularized least squares problem. Then the candidate with the smallest projection error is taken as the tracking target. After that, tracking is continued using a Bayesian state inference framework in which a particle filter is used for propagating sample distributions over time. Two additional components further improve the robustness of our approach: 1) the nonnegativity constraints that help filter out clutter that is similar to tracked targets in reversed intensity patterns, and 2) a dynamic template update scheme that keeps track of the most representative templates throughout the tracking procedure. We test the proposed approach on five challenging sequences involving heavy occlusions, drastic illumination changes, and large pose variations. The proposed approach shows excellent performance in comparison with three previously proposed trackers.",
"In this paper, we address the problem of tracking an object in a video given its location in the first frame and no other information. Recently, a class of tracking techniques called “tracking by detection” has been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrade the classifier and can cause drift. In this paper, we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems and can therefore lead to a more robust tracker with fewer parameter tweaks. We propose a novel online MIL algorithm for object tracking that achieves superior results with real-time performance. We present thorough experimental results (both qualitative and quantitative) on a number of challenging video clips."
]
} |
1708.00153 | 2741868055 | Being intensively studied, visual tracking has seen great recent advances in either speed (e.g., with correlation filters) or accuracy (e.g., with deep features). Real-time and high accuracy tracking algorithms, however, remain scarce. In this paper we study the problem from a new perspective and present a novel parallel tracking and verifying (PTAV) framework, by taking advantage of the ubiquity of multi-thread techniques and borrowing from the success of parallel tracking and mapping in visual SLAM. Our PTAV framework typically consists of two components, a tracker T and a verifier V, working in parallel on two separate threads. The tracker T aims to provide a super real-time tracking inference and is expected to perform well most of the time; by contrast, the verifier V checks the tracking results and corrects T when needed. The key innovation is that V does not work on every frame but only upon the requests from T; on the other end, T may adjust the tracking according to the feedback from V. With such collaboration, PTAV enjoys both the high efficiency provided by T and the strong discriminative power of V. In our extensive experiments on popular benchmarks including OTB2013, OTB2015, TC128 and UAV20L, PTAV achieves the best tracking accuracy among all real-time trackers, and in fact performs even better than many deep learning based solutions. Moreover, as a general framework, PTAV is very flexible and has great room for improvement and generalization. | Interestingly, tracking by itself can also be formulated as a verification problem that finds the best candidate similar to the target @cite_31 @cite_17 . Bertinetto et al. @cite_17 propose a fully-convolutional Siamese network for visual tracking. In @cite_31 , Tao et al. formulate tracking as object matching in each frame using Siamese networks. 
Despite obtaining excellent performance, the application of such trackers is severely restricted by the heavy computation required to extract deep features in each frame. By contrast, our solution treats verification only as a way to check and correct the tracker, and does not run verification on every frame. | {
"cite_N": [
"@cite_31",
"@cite_17"
],
"mid": [
"2952558221",
"2951584184"
],
"abstract": [
"In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot.",
"The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object's appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks."
]
} |
1708.00276 | 2952428427 | We present a simple distributed @math -approximation algorithm for maximum weight independent set (MaxIS) in the @math model which completes in @math rounds, where @math is the maximum degree, @math is the number of rounds needed to compute a maximal independent set (MIS) on @math , and @math is the maximum weight of a node. Whether our algorithm is randomized or deterministic depends on the MIS algorithm used as a black-box. Plugging in the best known algorithm for MIS gives a randomized solution in @math rounds, where @math is the number of nodes. We also present a deterministic @math -round algorithm based on coloring. We then show how to use our MaxIS approximation algorithms to compute a @math -approximation for maximum weight matching without incurring any additional round penalty in the @math model. We use a known reduction for simulating algorithms on the line graph while incurring congestion, but we show our algorithm is part of a broad family of algorithms for which we describe a mechanism that allows the simulation to run in the @math model without an additional overhead. Next, we show that for maximum weight matching, relaxing the approximation factor to ( @math ) allows us to devise a distributed algorithm requiring @math rounds for any constant @math . For the unweighted case, we can even obtain a @math -approximation in this number of rounds. These algorithms are the first to achieve the provably optimal round complexity with respect to dependency on @math . | As for the distributed case, @cite_10 @cite_13 give a lower bound of @math rounds for any deterministic algorithm approximating MaxIS, while @cite_41 provide randomized and deterministic approximations for planar graphs. In @cite_40 , an @math -round @math randomized algorithm for @math -approximation is presented for the unweighted case, along with a matching lower bound. | {
"cite_N": [
"@cite_13",
"@cite_41",
"@cite_40",
"@cite_10"
],
"mid": [
"",
"1869515244",
"2503706701",
"1502920553"
],
"abstract": [
"",
"We give deterministic distributed algorithms that, given ε > 0, find in a planar graph G (1±ε)-approximations of a maximum independent set, a maximum matching, and a minimum dominating set. The algorithms run in O(log*|G|) rounds. In addition, we prove that no faster deterministic approximation is possible and show that if randomization is allowed it is possible to beat the lower bound for deterministic algorithms.",
"We show that the first phase of the Linial-Saks network decomposition algorithm gives a randomized distributed O(n^ε)-approximation algorithm for the maximum independent set problem that operates in O(1/ε) rounds, and we give a matching lower bound that holds even for bipartite graphs.",
"In this paper we extend the lower bound technique by Linial for local coloring and maximal independent sets. We show that constant approximations to maximum independent sets on a ring require at least log-star time. More generally, the product of approximation quality and running time cannot be less than log-star. Using a generalized ring topology, we gain identical lower bounds for approximations to minimum dominating sets. Since our generalized ring topology is contained in a number of geometric graphs such as the unit disk graph, our bounds directly apply as lower bounds for quite a few algorithmic problems in wireless networking. Having in mind these and other results about local approximations of maximum independent sets and minimum dominating sets, one might think that the former are always at least as difficult to obtain as the latter. Conversely, we show that graphs exist, where a maximum independent set can be determined without any communication, while finding even an approximation to a minimum dominating set is as hard as in general graphs."
]
} |
1708.00276 | 2952428427 | We present a simple distributed @math -approximation algorithm for maximum weight independent set (MaxIS) in the @math model which completes in @math rounds, where @math is the maximum degree, @math is the number of rounds needed to compute a maximal independent set (MIS) on @math , and @math is the maximum weight of a node. Whether our algorithm is randomized or deterministic depends on the MIS algorithm used as a black-box. Plugging in the best known algorithm for MIS gives a randomized solution in @math rounds, where @math is the number of nodes. We also present a deterministic @math -round algorithm based on coloring. We then show how to use our MaxIS approximation algorithms to compute a @math -approximation for maximum weight matching without incurring any additional round penalty in the @math model. We use a known reduction for simulating algorithms on the line graph while incurring congestion, but we show our algorithm is part of a broad family of algorithms for which we describe a mechanism that allows the simulation to run in the @math model without an additional overhead. Next, we show that for maximum weight matching, relaxing the approximation factor to ( @math ) allows us to devise a distributed algorithm requiring @math rounds for any constant @math . For the unweighted case, we can even obtain a @math -approximation in this number of rounds. These algorithms are the first to achieve the provably optimal round complexity with respect to dependency on @math . | The first distributed algorithm that uses the local ratio technique is due to @cite_7 . The local ratio technique was also used in @cite_34 to compute a distributed @math -approximation for weighted vertex cover. In @cite_12 , a similar technique of weight grouping is used in the primal-dual framework for scheduling. | {
"cite_N": [
"@cite_34",
"@cite_12",
"@cite_7"
],
"mid": [
"1971749164",
"2074104565",
""
],
"abstract": [
"We obtain improved algorithms for finding small vertex covers in bounded degree graphs and hypergraphs. We use semidefinite programming to relax the problems and introduce new rounding techniques for these relaxations. On graphs with maximum degree at most @math , the algorithm achieves a performance ratio of @math for large @math , which improves the previously known ratio of @math obtained by Halldórsson and Radhakrishnan. Using similar techniques, we also present improved approximations for the vertex cover problem in hypergraphs. For k-uniform hypergraphs with n vertices, we achieve a ratio of @math for large n, and for k-uniform hypergraphs with maximum degree at most @math the algorithm achieves a ratio of @math for large @math . These results considerably improve the previous best ratio of @math for bounded degree k-uniform hypergraphs, and @math for general k-uniform hypergraphs, both obtained by Krivelevich. Using similar techniques, we also obtain an approximation algorithm for the weighted independent set problem, matching a recent result of Halldórsson.",
"In this paper we give an efficient distributed algorithm computing approximate solutions to a very general, and classical, scheduling problem. The approximation guarantee is within a constant factor of the optimum. By \"efficient\", we mean that the number of communication rounds is poly-logarithmic in the size of the input. In the problem, we have a bipartite graph with computing agents on one side and resources on the other. Agents that share a resource can communicate in one time step. Each agent has a list of jobs, each with its own length and profit, to be executed on a neighbouring resource within a given time-window. Resources can execute non-preemptively only one job at a time. The goal is to maximize the profit of the jobs that are scheduled. It is well known that this problem is NP-hard. A very interesting feature of our algorithm is that it is derived in a systematic manner from a primal-dual algorithm.",
""
]
} |
1708.00376 | 2741175092 | Explaining and reasoning about processes which underlie observed black-box phenomena enables the discovery of causal mechanisms, derivation of suitable abstract representations and the formulation of more robust predictions. We propose to learn high level functional programs in order to represent abstract models which capture the invariant structure in the observed data. We introduce the @math -machine (program-induction machine) -- an architecture able to induce interpretable LISP-like programs from observed data traces. We propose an optimisation procedure for program learning based on backpropagation, gradient descent and A* search. We apply the proposed method to two problems: system identification of dynamical systems and explaining the behaviour of a DQN agent. Our results show that the @math -machine can efficiently induce interpretable programs from individual data traces. | Determining how many input-output examples or execution traces are required in order to generalise well is still an open research problem. However, in this paper, we focus attention more on the explanatory power afforded by programs than on the broader problems of generalisation in the space of programs. While these characteristics are of course related, we take a view similar to that of @cite_0 , arguing that it is possible to build explanations from locally valid program fragments which provide useful insight into the black-box processes generating the data. By combining gradient descent and A* search, the @math -machine is able to learn informative and interpretable high-level LISP-like programs, even just from a single observation trace. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2282821441"
],
"abstract": [
"Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted."
]
} |
1708.00415 | 2739893875 | Generative models defining joint distributions over parse trees and sentences are useful for parsing and language modeling, but impose restrictions on the scope of features and are often outperformed by discriminative models. We propose a framework for parsing and language modeling which marries a generative model with a discriminative recognition model in an encoder-decoder setting. We provide interpretations of the framework based on expectation maximization and variational inference, and show that it enables parsing and language modeling within a single implementation. On the English Penn Treebank, our framework obtains competitive performance on constituency parsing while matching the state-of-the-art single-model language modeling score. | Our framework is related to a class of variational autoencoders @cite_0 , which use neural networks for posterior approximation in variational inference. This technique has been previously used for topic modeling @cite_9 and sentence compression @cite_7 . Another interpretation of the proposed framework is from the perspective of guided policy search in reinforcement learning @cite_12 , where a generative parser is trained to imitate the trace of a discriminative parser. Further connections can be drawn with the importance-sampling based inference of . There, a generative RNNG and a discriminative RNNG are trained separately; during language modeling, the output of the discriminative model serves as the proposal distribution of an importance sampler @math . Compared to their work, we unify the generative and discriminative RNNGs in a single framework, and adopt a joint training objective. | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_12",
"@cite_7"
],
"mid": [
"",
"2173681125",
"590442793",
"2951652470"
],
"abstract": [
"",
"Recent advances in neural variational inference have spawned a renaissance in deep latent variable models. In this paper we introduce a generic variational inference framework for generative and conditional models of text. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here we construct an inference network conditioned on the discrete text input to provide the variational distribution. We validate this framework on two very different text modelling applications, generative document modelling and supervised question answering. Our neural variational document model combines a continuous stochastic document representation with a bag-of-words generative model and achieves the lowest reported perplexities on two standard test corpora. The neural answer selection model employs a stochastic representation layer within an attention mechanism to extract the semantics between a question and answer pair. On two question answering benchmarks this model exceeds all previous published benchmarks.",
"We connect a broad class of generative models through their shared reliance on sequential decision making. Motivated by this view, we develop extensions to an existing model, and then explore the idea further in the context of data imputation - perhaps the simplest setting in which to investigate the relation between unconditional and conditional generative modelling. We formulate data imputation as an MDP and develop models capable of representing effective policies for it. We construct the models using neural networks and train them using a form of guided policy search [9]. Our models generate predictions through an iterative process of feedback and refinement. We show that this approach can learn effective policies for imputation problems of varying difficulty and across multiple datasets.",
"In this work we explore deep generative models of text in which the latent representation of a document is itself drawn from a discrete language model distribution. We formulate a variational auto-encoder for inference in this model and apply it to the task of compressing sentences. In this application the generative model first draws a latent summary sentence from a background language model, and then subsequently draws the observed sentence conditioned on this latent summary. In our empirical evaluation we show that generative formulations of both abstractive and extractive compression yield state-of-the-art results when trained on a large amount of supervised data. Further, we explore semi-supervised compression scenarios where we show that it is possible to achieve performance competitive with previously proposed supervised models while training on a fraction of the supervised data."
]
} |
1708.00079 | 2739511282 | In this work, we propose an efficient and effective approach for unconstrained salient object detection in images using deep convolutional neural networks. Instead of generating thousands of candidate bounding boxes and refining them, our network directly learns to generate the saliency map containing the exact number of salient objects. During training, we convert the ground-truth rectangular boxes to Gaussian distributions that better capture the ROI regarding individual salient objects. During inference, the network predicts Gaussian distributions centered at salient objects with an appropriate covariance, from which bounding boxes are easily inferred. Notably, our network performs saliency map prediction without pixel-level annotations, salient object detection without object proposals, and salient object subitizing simultaneously, all in a single pass within a unified framework. Extensive experiments show that our approach outperforms existing methods on various datasets by a large margin, and achieves more than 100 fps with VGG16 network on a single GPU during inference. | Salient object localization aims to mark important regions by rectangles in an image. Early works assume that there is only one dominant object in an image and utilize various hand-crafted features to detect salient objects @cite_43 @cite_26 . Salient objects are segmented out by a CRF model @cite_43 or bounding box statistics learned from a large image database @cite_26 . Some works @cite_0 @cite_8 demonstrate the ability of generating multiple overlapping bounding boxes in a single scene by combining multiple image features. Recently, Zhang et al. @cite_33 apply deep networks with object proposals to achieve state-of-the-art results. However, these methods are not scalable for real-time applications due to the use of sliding windows, complex optimization or an expensive box sampling process. | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_8",
"@cite_0",
"@cite_43"
],
"mid": [
"2154135778",
"2422471819",
"2107200795",
"2090463878",
"1996326832"
],
"abstract": [
"In this paper, we deal with the problem of detecting the existence and the location of salient objects for thumbnail images on which most search engines usually perform visual analysis in order to handle web-scale images. Different from previous techniques, such as sliding window-based or segmentation-based schemes for detecting salient objects, we propose to use a learning approach, random forest in our solution. Our algorithm exploits global features from multiple saliency indicators to directly predict the existence and the position of the salient object. To validate our algorithm, we constructed a large image database collected from Bing image search, that contains hundreds of thousands of manually labeled web images. The experimental results using this new database and the resized MSRA database [16] demonstrate that our algorithm outperforms previous state-of-the-art methods.",
"We aim at detecting salient objects in unconstrained images. In unconstrained images, the number of salient objects (if any) varies from image to image, and is not given. We present a salient object detection system that directly outputs a compact set of detection windows, if any, for an input image. Our system leverages a Convolutional-Neural-Network model to generate location proposals of salient objects. Location proposals tend to be highly overlapping and noisy. Based on the Maximum a Posteriori principle, we propose a novel subset optimization framework to generate a compact set of detection windows out of noisy proposals. In experiments, we show that our subset optimization formulation greatly enhances the performance of our system, and our system attains 16-34% relative improvement in Average Precision compared with the state-of-the-art on three challenging salient object datasets.",
"We propose a principled probabilistic formulation of object saliency as a sampling problem. This novel formulation allows us to learn, from a large corpus of unlabelled images, which patches of an image are of the greatest interest and most likely to correspond to an object. We then sample the object saliency map to propose object locations. We show that using only a single object location proposal per image, we are able to correctly select an object in over 42% of the images in the Pascal VOC 2007 dataset, substantially outperforming existing approaches. Furthermore, we show that our object proposal can be used as a simple unsupervised approach to the weakly supervised annotation problem. Our simple unsupervised approach to annotating objects of interest in images achieves a higher annotation accuracy than most weakly supervised approaches.",
"Conventional saliency analysis methods measure the saliency of individual pixels. The resulting saliency map inevitably loses information in the original image and finding salient objects in it is difficult. We propose to detect salient objects by directly measuring the saliency of an image window in the original image and adopt the well established sliding window based object detection paradigm.",
"In this paper, we study the salient object detection problem for images. We formulate this problem as a binary labeling task where we separate the salient object from the background. We propose a set of novel features, including multiscale contrast, center-surround histogram, and color spatial distribution, to describe a salient object locally, regionally, and globally. A conditional random field is learned to effectively combine these features for salient object detection. Further, we extend the proposed approach to detect a salient object from sequential images by introducing the dynamic salient features. We collected a large image database containing tens of thousands of carefully labeled images by multiple users and a video segment database, and conducted a set of experiments over them to demonstrate the effectiveness of the proposed approach."
]
} |
1708.00079 | 2739511282 | In this work, we propose an efficient and effective approach for unconstrained salient object detection in images using deep convolutional neural networks. Instead of generating thousands of candidate bounding boxes and refining them, our network directly learns to generate the saliency map containing the exact number of salient objects. During training, we convert the ground-truth rectangular boxes to Gaussian distributions that better capture the ROI regarding individual salient objects. During inference, the network predicts Gaussian distributions centered at salient objects with an appropriate covariance, from which bounding boxes are easily inferred. Notably, our network performs saliency map prediction without pixel-level annotations, salient object detection without object proposals, and salient object subitizing simultaneously, all in a single pass within a unified framework. Extensive experiments show that our approach outperforms existing methods on various datasets by a large margin, and achieves more than 100 fps with VGG16 network on a single GPU during inference. | Object proposals have been used widely in object detection; they are generated either from grouping superpixels @cite_36 @cite_47 @cite_42 or from sliding windows @cite_17 @cite_16 . However, generating a large number of proposals is a bottleneck for real-time detection @cite_46 @cite_41 . Recently, deep networks are trained to generate proposals in an end-to-end manner to improve efficiency @cite_14 @cite_12 . While both SSD @cite_48 and YOLO @cite_6 adopt a grid structure to generate candidate boxes, they still rely on a smaller set of proposals. Different from previous methods, our approach does not use any proposals. | {
"cite_N": [
"@cite_14",
"@cite_36",
"@cite_46",
"@cite_41",
"@cite_42",
"@cite_48",
"@cite_6",
"@cite_47",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"2949150497",
"1991367009",
"",
"2102605133",
"2088049833",
"2193145675",
"",
"2046382188",
"7746136",
"2953106684",
"2066624635"
],
"abstract": [
"Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012). The winning model on the localization sub-task was a network that predicts a single bounding box and a confidence score for each object category in the image. Such a model captures the whole-image context around the objects but cannot handle multiple instances of the same object in the image without naively replicating the number of outputs for each instance. In this work, we propose a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest. The model naturally handles a variable number of instances for each class and allows for cross-class generalization at the highest levels of the network. We are able to obtain competitive recognition performance on VOC2007 and ILSVRC2012, while using only the top few predicted locations in each image and a small number of neural network evaluations.",
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.",
"",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html).",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512×512 input, SSD achieves 76.9% mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.",
"",
"We present a novel framework to generate and rank plausible hypotheses for the spatial extent of objects in images using bottom-up computational processes and mid-level selection cues. The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge of the properties of individual object classes, by solving a sequence of Constrained Parametric Min-Cut problems (CPMC) on a regular image grid. In a subsequent step, we learn to rank the corresponding segments by training a continuous model to predict how likely they are to exhibit real-world regularities (expressed as putative overlap with ground truth) based on their mid-level region properties, then diversify the estimated overlap score using maximum marginal relevance measures. We show that this algorithm significantly outperforms the state of the art for low-level segmentation in the VOC 2009 and 2010 data sets. In our companion papers [1], [2], we show that the algorithm can be used, successfully, in a segmentation-based visual object category recognition pipeline. This architecture ranked first in the VOC2009 and VOC2010 image segmentation and labeling challenges.",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96% object recall at overlap threshold of 0.5 and over 75% recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small number of windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image."
]
} |
1708.00079 | 2739511282 | In this work, we propose an efficient and effective approach for unconstrained salient object detection in images using deep convolutional neural networks. Instead of generating thousands of candidate bounding boxes and refining them, our network directly learns to generate the saliency map containing the exact number of salient objects. During training, we convert the ground-truth rectangular boxes to Gaussian distributions that better capture the ROI regarding individual salient objects. During inference, the network predicts Gaussian distributions centered at salient objects with an appropriate covariance, from which bounding boxes are easily inferred. Notably, our network performs saliency map prediction without pixel-level annotations, salient object detection without object proposals, and salient object subitizing simultaneously, all in a single pass within a unified framework. Extensive experiments show that our approach outperforms existing methods on various datasets by a large margin, and achieves more than 100 fps with VGG16 network on a single GPU during inference. | Salient object existence detection addresses the existence problem by learning an external binary classifier @cite_31 @cite_26 . Zhang et al. @cite_18 present a salient object subitizing model to remove detected boxes in images with no salient object. While the method in @cite_33 addresses the existence and localization problems at the same time, it still requires generating proposals recursively, which is inefficient. | {
"cite_N": [
"@cite_31",
"@cite_26",
"@cite_18",
"@cite_33"
],
"mid": [
"2125554047",
"2154135778",
"2952588030",
"2422471819"
],
"abstract": [
"In robotics and computer vision, saliency maps are frequently used to identify regions that contain potential objects of interest and to restrict object detection to those regions only. However, common saliency approaches do not provide information as to whether there really is an interesting object triggering saliency and therefore tend to highlight needless background as potential regions of interest. This paper addresses the problem by exploiting histogram features extracted from saliency maps to predict the existence of interesting objects in images and to quickly prune uninteresting images. To validate our approach, we constructed a database that consists of 1000 background and object images captured in the working environment of our robot. Experimental results demonstrate that our approach achieves good detection performance and outperforms an existing existence detection approach [1].",
"In this paper, we deal with the problem of detecting the existence and the location of salient objects for thumbnail images on which most search engines usually perform visual analysis in order to handle web-scale images. Different from previous techniques, such as sliding window-based or segmentation-based schemes for detecting salient objects, we propose to use a learning approach, random forest in our solution. Our algorithm exploits global features from multiple saliency indicators to directly predict the existence and the position of the salient object. To validate our algorithm, we constructed a large image database collected from Bing image search, that contains hundreds of thousands of manually labeled web images. The experimental results using this new database and the resized MSRA database [16] demonstrate that our algorithm outperforms previous state-of-the-art methods.",
"We study the problem of Salient Object Subitizing, i.e. predicting the existence and the number of salient objects in an image using holistic cues. This task is inspired by the ability of people to quickly and accurately identify the number of items within the subitizing range (1-4). To this end, we present a salient object subitizing image dataset of about 14K everyday images which are annotated using an online crowdsourcing marketplace. We show that using an end-to-end trained Convolutional Neural Network (CNN) model, we achieve prediction accuracy comparable to human performance in identifying images with zero or one salient object. For images with multiple salient objects, our model also provides significantly better than chance performance without requiring any localization process. Moreover, we propose a method to improve the training of the CNN subitizing model by leveraging synthetic images. In experiments, we demonstrate the accuracy and generalizability of our CNN subitizing model and its applications in salient object detection and image retrieval.",
"We aim at detecting salient objects in unconstrained images. In unconstrained images, the number of salient objects (if any) varies from image to image, and is not given. We present a salient object detection system that directly outputs a compact set of detection windows, if any, for an input image. Our system leverages a Convolutional-Neural-Network model to generate location proposals of salient objects. Location proposals tend to be highly overlapping and noisy. Based on the Maximum a Posteriori principle, we propose a novel subset optimization framework to generate a compact set of detection windows out of noisy proposals. In experiments, we show that our subset optimization formulation greatly enhances the performance of our system, and our system attains 16-34% relative improvement in Average Precision compared with the state-of-the-art on three challenging salient object datasets."
]
} |
1708.00079 | 2739511282 | In this work, we propose an efficient and effective approach for unconstrained salient object detection in images using deep convolutional neural networks. Instead of generating thousands of candidate bounding boxes and refining them, our network directly learns to generate the saliency map containing the exact number of salient objects. During training, we convert the ground-truth rectangular boxes to Gaussian distributions that better capture the ROI regarding individual salient objects. During inference, the network predicts Gaussian distributions centered at salient objects with an appropriate covariance, from which bounding boxes are easily inferred. Notably, our network performs saliency map prediction without pixel-level annotations, salient object detection without object proposals, and salient object subitizing simultaneously, all in a single pass within a unified framework. Extensive experiments show that our approach outperforms existing methods on various datasets by a large margin, and achieves more than 100 fps with VGG16 network on a single GPU during inference. | Salient object segmentation produces a binary mask to segment salient objects from background. While both bottom-up methods using low-level image features @cite_38 @cite_13 @cite_40 @cite_19 and top-down methods @cite_43 @cite_26 have been proposed for decades, many recent works utilize deep neural networks for this task @cite_1 @cite_37 @cite_3 @cite_39 @cite_49 @cite_29 . Li et al. @cite_37 propose a model for visual saliency using multi-scale deep features computed by CNNs. Wang et al. @cite_3 develop two deep neural networks to learn local features and global contrast with geometric features to predict the saliency score of each region. In @cite_1 , both global and local context are combined into a single deep network, while a fully convolutional network is applied in @cite_49 .
Note that existing methods heavily rely on pixel-level annotations @cite_1 @cite_39 @cite_49 or external semantic information, e.g., superpixels @cite_29, which is not feasible for large-scale problems, where human labeling is extremely sparse. In contrast, our weakly-supervised approach only requires bounding box annotations and produces promising results as a free by-product, along with salient object detection and subitizing. | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_26",
"@cite_29",
"@cite_1",
"@cite_3",
"@cite_39",
"@cite_19",
"@cite_40",
"@cite_43",
"@cite_49",
"@cite_13"
],
"mid": [
"",
"1894057436",
"2154135778",
"2147347517",
"1942214758",
"1947031653",
"2461475918",
"2128340050",
"2037954058",
"1996326832",
"2519528544",
"2039313011"
],
"abstract": [
"",
"Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this paper, we discover that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for feature extraction at three different scales. We then propose a refinement method to enhance the spatial coherence of our saliency results. Finally, aggregating multiple saliency maps computed for different levels of image segmentation can further boost the performance, yielding saliency maps better than those generated from a single segmentation. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks, improving the F-Measure by 5.0% and 13.2% respectively on the MSRA-B dataset and our new dataset (HKU-IS), and lowering the mean absolute error by 5.7% and 35.1% respectively on these two datasets.",
"In this paper, we deal with the problem of detecting the existence and the location of salient objects for thumbnail images on which most search engines usually perform visual analysis in order to handle web-scale images. Different from previous techniques, such as sliding window-based or segmentation-based schemes for detecting salient objects, we propose to use a learning approach, random forest in our solution. Our algorithm exploits global features from multiple saliency indicators to directly predict the existence and the position of the salient object. To validate our algorithm, we constructed a large image database collected from Bing image search, that contains hundreds of thousands of manually labeled web images. The experimental results using this new database and the resized MSRA database [16] demonstrate that our algorithm outperforms previous state-of-the-art methods.",
"A key problem in salient object detection is how to effectively model the semantic properties of salient objects in a data-driven manner. In this paper, we propose a multi-task deep saliency model based on a fully convolutional neural network with global input (whole raw images) and global output (whole saliency maps). In principle, the proposed saliency model takes a data-driven strategy for encoding the underlying saliency prior information, and then sets up a multi-task learning scheme for exploring the intrinsic correlations between saliency detection and semantic image segmentation. Through collaborative feature learning from such two correlated tasks, the shared fully convolutional layers produce effective features for object perception. Moreover, it is capable of capturing the semantic information on salient objects across different levels using the fully convolutional layers, which investigate the feature-sharing properties of salient object detection with a great reduction of feature redundancy. Finally, we present a graph Laplacian regularized nonlinear regression model for saliency refinement. Experimental results demonstrate the effectiveness of our approach in comparison with the state-of-the-art approaches.",
"Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.",
"This paper presents a saliency detection algorithm by integrating both local estimation and global search. In the local estimation stage, we detect local saliency by using a deep neural network (DNN-L) which learns local patch features to determine the saliency value of each pixel. The estimated local saliency maps are further refined by exploring the high level object concepts. In the global search stage, the local saliency map together with global contrast and geometric information are used as global features to describe a set of object candidate regions. Another deep neural network (DNN-G) is trained to predict the saliency score of each object region based on the global features. The final saliency map is generated by a weighted sum of salient object regions. Our method presents two interesting insights. First, local features learned by a supervised scheme can effectively capture local contrast, texture and shape information for saliency detection. Second, the complex relationship between different global saliency cues can be captured by deep networks and exploited principally rather than heuristically. Quantitative and qualitative experiments on several benchmark data sets demonstrate that our algorithm performs favorably against the state-of-the-art methods.",
"Traditional salient object detection models often use hand-crafted features to formulate contrast and various prior knowledge, and then combine them artificially. In this work, we propose a novel end-to-end deep hierarchical saliency network (DHSNet) based on convolutional neural networks for detecting salient objects. DHSNet first makes a coarse global prediction by automatically learning various global structured saliency cues, including global contrast, objectness, compactness, and their optimal combination. Then a novel hierarchical recurrent convolutional neural network (HRCNN) is adopted to further hierarchically and progressively refine the details of saliency maps step by step via integrating local context information. The whole architecture works in a global to local and coarse to fine manner. DHSNet is directly trained using whole images and corresponding ground truth saliency masks. When testing, saliency maps can be generated by directly and efficiently feedforwarding testing images through the network, without relying on any other techniques. Evaluations on four benchmark datasets and comparisons with other 11 state-of-the-art algorithms demonstrate that DHSNet not only shows its significant superiority in terms of performance, but also achieves a real-time speed of 23 FPS on modern GPUs.",
"In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via super pixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and robust to background noise.",
"Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.",
"In this paper, we study the salient object detection problem for images. We formulate this problem as a binary labeling task where we separate the salient object from the background. We propose a set of novel features, including multiscale contrast, center-surround histogram, and color spatial distribution, to describe a salient object locally, regionally, and globally. A conditional random field is learned to effectively combine these features for salient object detection. Further, we extend the proposed approach to detect a salient object from sequential images by introducing the dynamic salient features. We collected a large image database containing tens of thousands of carefully labeled images by multiple users and a video segment database, and conducted a set of experiments over them to demonstrate the effectiveness of the proposed approach.",
"Deep networks have been proved to encode high level semantic features and delivered superior performance in saliency detection. In this paper, we go one step further by developing a new saliency model using recurrent fully convolutional networks (RFCNs). Compared with existing deep network based methods, the proposed network is able to incorporate saliency prior knowledge for more accurate inference. In addition, the recurrent architecture enables our method to automatically learn to refine the saliency map by correcting its previous errors. To train such a network with numerous parameters, we propose a pre-training strategy using semantic segmentation data, which simultaneously leverages the strong supervision of segmentation tasks for better training and enables the network to capture generic representations of objects for saliency detection. Through extensive experimental evaluations, we demonstrate that the proposed method compares favorably against state-of-the-art approaches, and that the proposed recurrent deep model as well as the pre-training method can significantly improve performance.",
"Most existing bottom-up methods measure the foreground saliency of a pixel or region based on its contrast within a local context or the entire image, whereas a few methods focus on segmenting out background regions and thereby salient objects. Instead of considering the contrast between the salient objects and their surrounding regions, we consider both foreground and background cues in a different way. We rank the similarity of the image elements (pixels or regions) with foreground cues or background cues via graph-based manifold ranking. The saliency of the image elements is defined based on their relevances to the given seeds or queries. We represent the image as a close-loop graph with super pixels as nodes. These nodes are ranked based on the similarity to background and foreground queries, based on affinity matrices. Saliency detection is carried out in a two-stage scheme to extract background regions and foreground salient objects efficiently. Experimental results on two large benchmark databases demonstrate the proposed method performs well when against the state-of-the-art methods in terms of accuracy and speed. We also create a more difficult benchmark database containing 5,172 images to test the proposed saliency model and make this database publicly available with this paper for further studies in the saliency field."
]
} |
1708.00072 | 2739616762 | The design of a complex system warrants a compositional methodology, i.e., composing simple components to obtain a larger system that exhibits their collective behavior in a meaningful way. We propose an automaton-based paradigm for compositional design of such systems where an action is accompanied by one or more preferences. At run-time, these preferences provide a natural fallback mechanism for the component, while at design-time they can be used to reason about the behavior of the component in an uncertain physical world. Using structures that tell us how to compose preferences and actions, we can compose formal representations of individual components or agents to obtain a representation of the composed system. We extend Linear Temporal Logic with two unary connectives that reflect the compositional structure of the actions, and show how it can be used to diagnose undesired behavior by tracing the falsification of a specification back to one or more culpable components. | The automata formalism used in this paper generalizes @cite_26 @cite_4 . The latter were originally proposed to give descriptions of Web Services @cite_4 ; in @cite_32 , they were used to model fault-tolerant, compositional autonomous agents. Using preference values to specify the behavior of autonomous agents is also explored from the perspective of rewriting logic in the @cite_31 @cite_18 . Recent experiments with the Soft Agent Framework show that behavior based on soft constraints can indeed contribute robustness @cite_12 . | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_32",
"@cite_31",
"@cite_12"
],
"mid": [
"2470154314",
"1988680012",
"1665497870",
"2565998050",
"71507039",
"2786468265"
],
"abstract": [
"We are interested in systems of cyber-physical agents that operate in unpredictable, possibly hostile, environments using locally obtainable information. How can we specify robust agents that are able to operate alone and/or in cooperation with other agents? What properties are important? How can they be verified? In this tutorial we describe a framework called Soft Agents, formalized in the Maude rewriting logic system. Features of the framework include: explicit representation of the physical state as well as the cyber perception of this state; robust communication via sharing of partially ordered knowledge, and robust behavior based on soft constraints. Using Maude functionality, the soft agent framework supports experimenting with, formally testing, and reasoning about specifications of agent systems. The tutorial begins with a discussion of desiderata for soft agent models. Use of the soft agent framework for specification and formal analysis of agent systems is illustrated in some detail by a case-study involving simple patrolling bots. A more complex case study involving surveillance drones is also discussed.",
"In this paper we introduce constraint automata and propose them as an operational model for Reo, an exogenous coordination language for compositional construction of component connectors based on a calculus of channels. By providing composition operators for constraint automata and defining notions of equivalence and refinement relations for them, this paper covers the foundations for building tools to address concerns such as the automated construction of the automaton for a given component connector, equivalence checking or containment checking of the behavior of two given connectors, and verification of coordination mechanisms.",
"We extend Constraint Automata by replacing boolean constraints with semiring-based soft constraints. The obtained general formal tool can be used to represent preference-based and similarity-based queries, which allow a user more freedom in choosing the behavior of the service to finally use, among all possible choices. A user states his preferences through a “soft” query, and obtains results that satisfy this query with different levels of preference. The soft requirements may involve a parameter data of the service operations, or the (names of the) operations themselves. Moreover, we introduce a first implementation of the search procedure by using declarative (soft) Constraint Programming.",
"A formal description of a Cyber-Physical system should include a rigorous specification of the computational and physical components involved, as well as their interaction. Such a description, thus, lends itself to a compositional model where every module in the model specifies the behavior of a (computational or physical) component or the interaction between different components. We propose a framework based on Soft Constraint Automata that facilitates the component-wise description of such systems and includes the tools necessary to compose subsystems in a meaningful way, to yield a description of the entire system. Most importantly, Soft Constraint Automata allow the description and composition of components’ preferences as well as environmental constraints in a uniform fashion. We illustrate the utility of our framework using a detailed description of a patrolling robot, while highlighting methods of composition as well as possible techniques to employ them.",
"We are interested in principles for designing and building open distributed systems consisting of multiple cyber-physical agents, specifically, where a coherent global view is unattainable and timely consensus is impossible. Such agents attempt to contribute to a system goal by making local decisions to sense and effect their environment based on local information. In this paper we propose a model, formalized in the Maude rewriting logic system, that allows experimenting with and reasoning about designs of such systems. Features of the model include communication via sharing of partially ordered knowledge, making explicit the physical state as well as the cyber perception of this state, and the use of a notion of soft constraints developed by Martin Wirsing and his team to specify agent behavior. The paper begins with a discussion of desiderata for such models and concludes with a small case study to illustrate the use of the modeling framework.",
"Unmanned aerial vehicles (UAVs), a.k.a. drones, are becoming increasingly popular due to great advancements in their control mechanisms and price reduction. UAVs are being used in applications such as package delivery, plantation and railroad track monitoring, where UAVs carry out tasks in an automated fashion. Devising how UAVs achieve a task is challenging as the environment where UAVs are deployed is normally unpredictable, for example, due to winds. Formal methods can help engineers to specify flight strategies and to evaluate how well UAVs are going to perform to achieve a task. This paper proposes a formal framework where engineers can raise the confidence in their UAV specification by using symbolic, simulation and statistical and model checking methods. Our framework is constructed over three main components: the behavior of UAVs and the environment are specified in a formal executable language; the UAV’s physical model is specified by a simulator; and statistical model checking algorithms are used for the analysis of system behaviors. We demonstrate the effectiveness of our framework by means of several scenarios involving multiple drones."
]
} |
1708.00072 | 2739616762 | The design of a complex system warrants a compositional methodology, i.e., composing simple components to obtain a larger system that exhibits their collective behavior in a meaningful way. We propose an automaton-based paradigm for compositional design of such systems where an action is accompanied by one or more preferences. At run-time, these preferences provide a natural fallback mechanism for the component, while at design-time they can be used to reason about the behavior of the component in an uncertain physical world. Using structures that tell us how to compose preferences and actions, we can compose formal representations of individual components or agents to obtain a representation of the composed system. We extend Linear Temporal Logic with two unary connectives that reflect the compositional structure of the actions, and show how it can be used to diagnose undesired behavior by tracing the falsification of a specification back to one or more culpable components. | @cite_3 discuss methods to detect unobservable errors based on a model of the system and a trace of observable events; others extended this approach @cite_22 @cite_29 to a multi-component setting. @cite_33 addresses fault localization in a system where some components are unobservable, based on which computations (tasks involving multiple components) fail. In these paradigms, one tries to find out where a fault occurs; in contrast, we try to find out which component is responsible for undesired behavior, i.e., behavior that is allowed by the system but not desired by the specification. | {
"cite_N": [
"@cite_29",
"@cite_33",
"@cite_22",
"@cite_3"
],
"mid": [
"1988549452",
"1973871646",
"2504811584",
"2099395647"
],
"abstract": [
"Automata-theoretic models have been used successfully in model-based process supervision and diagnosis. From a practical viewpoint, their main drawback is their complexity, which increases fast with the size of the original discrete-event system. This complexity can be reduced by compositional modelling resulting in an automata network. The reduced complexity of the network leads to a complexity reduction of the diagnostic algorithm, as the fault diagnosis can be performed in a decentralised way. The paper develops such a diagnostic method for nondeterministic and stochastic automata networks.",
"Availability is an increasingly important quality for today's software-based systems and it has been successfully addressed by the use of closed-loop control systems in self-adaptive systems. Probes are inserted into a running system to obtain information and the information is fed to a controller that, through provided interfaces, acts on the system to alter its behavior. When a failure is detected, pinpointing the source of the failure is a critical step for a repair action. However, information obtained from a running system is commonly incomplete due to probing costs or unavailability of probes. In this paper we address the problem of fault localization in the presence of incomplete system monitoring. We may not be able to directly observe a component but we may be able to infer its health state. We provide formal criteria to determine when health states of unobservable components can be inferred and establish formal theoretical bounds for accuracy when using any spectrum-based fault localization algorithm.",
"We address the problem of failure diagnosis in discrete event systems with decentralized information. We propose a coordinated decentralized architecture consisting of local sites communicating with a coordinator that is responsible for diagnosing the failures occurring in the system. We extend the notion of diagnosability, originally introduced in (1995) for centralized systems, to the proposed coordinated decentralized architecture. We specify three protocols, i.e. the diagnostic information generated at the local sites, the communication rules used by the local sites, and the coordinator's decision rule, that realize the proposed architecture. We analyze the diagnostic properties of each protocol. We also state and prove necessary and sufficient conditions for a language to be diagnosable under each protocol. These conditions are checkable off-line. The online diagnostic process is carried out using the diagnosers introduced in the above article or a slight variation of these diagnosers. The key features of the proposed protocols are: (i) they achieve, each under a set of assumptions, the same diagnostic performance as the centralized diagnoser; and (ii) they highlight the performance vs. complexity tradeoff that arises in coordinated decentralized architectures. The correctness of two of the protocols relies on some stringent global ordering assumptions on message reception at the coordinator's site, the relaxation of which is briefly discussed.",
"Detection and isolation of failures in large, complex systems is a crucial and challenging task. The increasingly stringent requirements on performance and reliability of complex technological systems have necessitated the development of sophisticated and systematic methods for the timely and accurate diagnosis of system failures. We propose a discrete-event systems (DES) approach to the failure diagnosis problem. This approach is applicable to systems that fall naturally in the class of DES; moreover, for the purpose of diagnosis, continuous-variable dynamic systems can often be viewed as DES at a higher level of abstraction. We present a methodology for modeling physical systems in a DES framework and illustrate this method with examples. We discuss the notion of diagnosability, the construction procedure of the diagnoser, and necessary and sufficient conditions for diagnosability. Finally, we illustrate our approach using realistic models of two different heating, ventilation, and air conditioning (HVAC) systems, one diagnosable and the other not diagnosable. While the modeling methodology presented here has been developed for the purpose of failure diagnosis, its scope is not restricted to this problem; it can also be used to develop DES models for other purposes such as control."
]
} |
1708.00072 | 2739616762 | The design of a complex system warrants a compositional methodology, i.e., composing simple components to obtain a larger system that exhibits their collective behavior in a meaningful way. We propose an automaton-based paradigm for compositional design of such systems where an action is accompanied by one or more preferences. At run-time, these preferences provide a natural fallback mechanism for the component, while at design-time they can be used to reason about the behavior of the component in an uncertain physical world. Using structures that tell us how to compose preferences and actions, we can compose formal representations of individual components or agents to obtain a representation of the composed system. We extend Linear Temporal Logic with two unary connectives that reflect the compositional structure of the actions, and show how it can be used to diagnose undesired behavior by tracing the falsification of a specification back to one or more culpable components. | A general framework for fault ascription in concurrent systems based on counterfactual analysis is presented in @cite_30 @cite_25. Formal definitions are given for failures in a given set of components to be a necessary and/or sufficient cause of a system violating a given property. Components are specified by sets of sets of events (analogous to actions) representing possible correct behaviors. A parallel (asynchronous) composition operation is defined on components, but there is no notion of composition of events or explicit interaction between components. A system is given by a global behavior (a set of event sets) together with a set of system component specifications. The global behavior, which must be provided separately, includes component events, but may also have other events, and may violate component specifications (hence the faulty components). In our approach, global behavior is obtained by component composition.
Undesired behavior may be local to a component or emerge as the result of interactions. | {
"cite_N": [
"@cite_30",
"@cite_25"
],
"mid": [
"2107934344",
"2399784584"
],
"abstract": [
"In component-based safety-critical real-time systems it is crucial to determine which component(s) caused the violation of a required system-level safety property, be it to issue a precise alert, or to determine liability of component providers. In this paper we present an approach for blaming in real-time systems whose component specifications are given as timed automata. The analysis is based on a single execution trace violating a safety property P. We formalize blaming using counterfactual reasoning (\"what would have been the outcome if component C had behaved correctly?\") to distinguish component failures that actually contributed to the outcome from failures that had no impact on the violation of P. We then show how to effectively implement blaming by reducing it to a model-checking problem for timed automata, and demonstrate the feasibility of our approach on the models of a pacemaker and of a chemical reactor.",
"Fault diagnosis is becoming increasingly important and difficult with the growing pervasiveness and complexity of computer systems. We propose in this paper a general semantic framework for fault ascription, a precise form of fault diagnosis that relies on counterfactual analysis for identifying necessary and sufficient causes of faults in component-based systems. Our framework relies on configuration structures to handle concurrent systems, partial and distributed observations in a uniform way. It defines basic conditions for a counterfactual analysis of necessary and sufficient causes, and it presents a refined analysis that conforms to our basic conditions while avoiding various infelicities."
]
} |
1708.00072 | 2739616762 | The design of a complex system warrants a compositional methodology, i.e., composing simple components to obtain a larger system that exhibits their collective behavior in a meaningful way. We propose an automaton-based paradigm for compositional design of such systems where an action is accompanied by one or more preferences. At run-time, these preferences provide a natural fallback mechanism for the component, while at design-time they can be used to reason about the behavior of the component in an uncertain physical world. Using structures that tell us how to compose preferences and actions, we can compose formal representations of individual components or agents to obtain a representation of the composed system. We extend Linear Temporal Logic with two unary connectives that reflect the compositional structure of the actions, and show how it can be used to diagnose undesired behavior by tracing the falsification of a specification back to one or more culpable components. | In LTL, a counterexample to a negative result arises naturally if one employs automata-based verification techniques @cite_23 @cite_11. In this paper, we further exploit counterexamples to gain information about the component or components involved in violating the specification. The application of LTL to Constraint Automata is inspired by an earlier use of LTL for Constraint Automata @cite_24. | {
"cite_N": [
"@cite_24",
"@cite_23",
"@cite_11"
],
"mid": [
"1556837901",
"2152700389",
"1512310098"
],
"abstract": [
"The feasibility of formal methods for the analysis of complex systems crucially depends on a modeling framework that supports compositional design, stepwise refinement and abstractions. An important feature is the clear separation of coordination and computation which permits to apply various verification techniques for the computation performed by components and interactions as well as dependencies between the components. We report here on a model-checking approach using the tool Vereofy that is based on an exogenous coordination model, where the components are represented by their behavioral interfaces. Vereofy supports the verification of the components and their communication structure. Our approach is illustrated by means of a case study with a sensor network where Vereofy has been used to establish several properties of the sensor nodes and their routing procedures.",
"The authors give a very simple uniform explanation of the persistence of exponential decidability. They follow M. Vardi and P. Wolper's theory (1986) that, given a formula gamma of a temporal or dynamic logic, it is important to construct an equivalent automaton M_gamma. They characterize the weak monadic theory of the tree; it turns out that weak alternating automata greatly simplify design procedures.",
"The automata-theoretic approach to linear temporal logic uses the theory of automata as a unifying paradigm for program specification, verification, and synthesis. Both programs and specifications are in essence descriptions of computations. These computations can be viewed as words over some alphabet. Thus, programs and specifications can be viewed as descriptions of languages over some alphabet. The automata-theoretic perspective considers the relationships between programs and their specifications as relationships between languages. By translating programs and specifications to automata, questions about programs and their specifications can be reduced to questions about automata. More specifically, questions such as satisfiability of specifications and correctness of programs with respect to their specifications can be reduced to questions such as nonemptiness and containment of automata."
]
} |
1708.00225 | 2949764466 | Discriminative correlation filters (DCFs) have been shown to perform superiorly in visual tracking. They only need a small set of training samples from the initial frame to generate an appearance model. However, existing DCFs learn the filters separately from feature extraction, and update these filters using a moving average operation with an empirical weight. These DCF trackers hardly benefit from the end-to-end training. In this paper, we propose the CREST algorithm to reformulate DCFs as a one-layer convolutional neural network. Our method integrates feature extraction, response map generation as well as model update into the neural networks for an end-to-end training. To reduce model degradation during online update, we apply residual learning to take appearance changes into account. Extensive experiments on the benchmark datasets demonstrate that our CREST tracker performs favorably against state-of-the-art trackers. | There are extensive surveys of visual tracking in the literature @cite_24 @cite_38 @cite_23 . In this section, we mainly discuss tracking methods that are based on correlation filters and CNNs . | {
"cite_N": [
"@cite_24",
"@cite_38",
"@cite_23"
],
"mid": [
"2158592639",
"1985560977",
"2126302311"
],
"abstract": [
"Object tracking has been one of the most important and active research areas in the field of computer vision. A large number of tracking algorithms have been proposed in recent years with demonstrated success. However, the set of sequences used for evaluation is often not sufficient or is sometimes biased for certain types of algorithms. Many datasets do not have common ground-truth object positions or extents, and this makes comparisons among the reported quantitative results difficult. In addition, the initial conditions or parameters of the evaluated tracking algorithms are not the same, and thus, the quantitative results reported in literature are incomparable or sometimes contradictory. To address these issues, we carry out an extensive evaluation of the state-of-the-art online object-tracking algorithms with various evaluation criteria to understand how these methods perform within the same framework. In this work, we first construct a large dataset with ground-truth object positions and extents for tracking and introduce the sequence attributes for the performance analysis. Second, we integrate most of the publicly available trackers into one code library with uniform input and output formats to facilitate large-scale performance evaluation. Third, we extensively evaluate the performance of 31 algorithms on 100 sequences with different initialization settings. By analyzing the quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.",
"Long-term video tracking is of great importance for many applications in real-world scenarios. A key component for achieving long-term tracking is the tracker's capability of updating its internal representation of targets (the appearance model) to changing conditions. Given the rapid but fragmented development of this research area, we propose a unified conceptual framework for appearance model adaptation that enables a principled comparison of different approaches. Moreover, we introduce a novel evaluation methodology that enables simultaneous analysis of tracking accuracy and tracking success, without the need of setting application-dependent thresholds. Based on the proposed framework and this novel evaluation methodology, we conduct an extensive experimental comparison of trackers that perform appearance model adaptation. Theoretical and experimental analyses allow us to identify the most effective approaches as well as to highlight design choices that favor resilience to errors during the update process. We conclude the paper with a list of key open research challenges that have been singled out by means of our experimental comparison.",
"There is a large variety of trackers, which have been proposed in the literature during the last two decades with some mixed success. Object tracking in realistic scenarios is a difficult problem, therefore, it remains a most active area of research in computer vision. A good tracker should perform well in a large number of videos involving illumination changes, occlusion, clutter, camera motion, low contrast, specularities, and at least six more aspects. However, the performance of proposed trackers have been evaluated typically on less than ten videos, or on the special purpose datasets. In this paper, we aim to evaluate trackers systematically and experimentally on 315 video fragments covering above aspects. We selected a set of nineteen trackers to include a wide variety of algorithms often cited in literature, supplemented with trackers appearing in 2010 and 2011 for which the code was publicly available. We demonstrate that trackers can be evaluated objectively by survival curves, Kaplan Meier statistics, and Grubs testing. We find that in the evaluation practice the F-score is as effective as the object tracking accuracy (OTA) score. The analysis under a large variety of circumstances provides objective insight into the strengths and weaknesses of trackers."
]
} |
1708.00225 | 2949764466 | Discriminative correlation filters (DCFs) have been shown to perform superiorly in visual tracking. They only need a small set of training samples from the initial frame to generate an appearance model. However, existing DCFs learn the filters separately from feature extraction, and update these filters using a moving average operation with an empirical weight. These DCF trackers hardly benefit from the end-to-end training. In this paper, we propose the CREST algorithm to reformulate DCFs as a one-layer convolutional neural network. Our method integrates feature extraction, response map generation as well as model update into the neural networks for an end-to-end training. To reduce model degradation during online update, we apply residual learning to take appearance changes into account. Extensive experiments on the benchmark datasets demonstrate that our CREST tracker performs favorably against state-of-the-art trackers. | Tracking by Correlation Filters. Correlation filters for visual tracking have attracted considerable attention due to the computational efficiency in the Fourier domain. Tracking methods based on correlation filters regress all the circular-shifted versions of the input features to a Gaussian function. They do not need multiple samples of target appearance. The MOSSE tracker @cite_13 encodes target appearance through an adaptive correlation filter by optimizing the output sum of squared error. Several extensions have been proposed to considerably improve tracking accuracy. The examples include kernelized correlation filters @cite_14 , multiple dimensional features @cite_39 @cite_30 , context learning @cite_28 , scale estimation @cite_18 , re-detection @cite_41 , subspace learning @cite_22 , short-term and long-term memory @cite_15 , reliable collection @cite_20 and spatial regularization @cite_44 . 
Different from existing correlation-filter-based frameworks that formulate the correlation operation as an element-wise multiplication in the Fourier domain, we formulate the correlation filter as a convolution operation in the spatial domain. It is represented by one convolutional layer in a CNN. In this sense, we demonstrate that feature extraction, response generation as well as model update can be integrated into one network for end-to-end prediction and optimization. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_28",
"@cite_41",
"@cite_39",
"@cite_44",
"@cite_15",
"@cite_13",
"@cite_20"
],
"mid": [
"",
"1997121481",
"161114242",
"1908905119",
"29474918",
"",
"",
"",
"1915599933",
"",
"1937954682"
],
"abstract": [
"",
"Robust scale estimation is a challenging problem in visual object tracking. Most existing methods fail to handle large scale variations in complex image sequences. This paper presents a novel approach for robust scale estimation in a tracking-by-detection framework. The proposed approach works by learning discriminative correlation filters based on a scale pyramid representation. We learn separate filters for translation and scale estimation, and show that this improves the performance compared to an exhaustive scale search. Our scale estimation approach is generic as it can be incorporated into any tracking method with no inherent scale estimation. Experiments are performed on 28 benchmark sequences with significant scale variations. Our results show that the proposed approach significantly improves the performance by 18.8% in median distance precision compared to our baseline. Finally, we provide both quantitative and qualitative comparison of our approach with state-of-the-art trackers in literature. The proposed method is shown to outperform the best existing tracker by 16.6% in median distance precision, while operating at real-time.",
"Recent years have seen greater interest in the use of discriminative classifiers in tracking systems, owing to their success in object detection. They are trained online with samples collected during tracking. Unfortunately, the potentially large number of samples becomes a computational burden, which directly conflicts with real-time requirements. On the other hand, limiting the samples may sacrifice performance. Interestingly, we observed that, as we add more and more samples, the problem acquires circulant structure. Using the well-established theory of Circulant matrices, we provide a link to Fourier analysis that opens up the possibility of extremely fast learning and detection with the Fast Fourier Transform. This can be done in the dual space of kernel machines as fast as with linear classifiers. We derive closed-form solutions for training and detection with several types of kernels, including the popular Gaussian and polynomial kernels. The resulting tracker achieves performance competitive with the state-of-the-art, can be implemented with only a few lines of code and runs at hundreds of frames-per-second. MATLAB code is provided in the paper (see Algorithm 1).",
"Robust object tracking is a challenging task in computer vision. To better solve the partial occlusion issue, part-based methods are widely used in visual object trackers. However, due to the complicated online training and updating process, most of these part-based trackers cannot run in real-time. Correlation filters have been used in tracking tasks recently because of the high efficiency. However, the conventional correlation filter based trackers cannot deal with occlusion. Furthermore, most correlation filter based trackers fix the scale and rotation of the target which makes the trackers unreliable in long-term tracking tasks. In this paper, we propose a novel tracking method which tracks objects based on parts with multiple correlation filters. Our method can run in real-time. Additionally, the Bayesian inference framework and a structural constraint mask are adopted to enable our tracker to be robust to various appearance changes. Extensive experiments have been done to prove the effectiveness of our method.",
"In this paper, we present a simple yet fast and robust algorithm which exploits the dense spatio-temporal context for visual tracking. Our approach formulates the spatio-temporal relationships between the object of interest and its locally dense contexts in a Bayesian framework, which models the statistical correlation between the simple low-level features (i.e., image intensity and position) from the target and its surrounding regions. The tracking problem is then posed by computing a confidence map which takes into account the prior information of the target location and thereby alleviates target location ambiguity effectively. We further propose a novel explicit scale adaptation scheme, which is able to deal with target scale variations efficiently and effectively. The Fast Fourier Transform (FFT) is adopted for fast learning and detection in this work, which only needs 4 FFT operations. Implemented in MATLAB without code optimization, the proposed tracker runs at 350 frames per second on an i7 machine. Extensive experimental results show that the proposed algorithm performs favorably against state-of-the-art methods in terms of efficiency, accuracy and robustness.",
"",
"",
"",
"Variations in the appearance of a tracked object, such as changes in geometry, photometry, camera viewpoint, illumination, or partial occlusion, pose a major challenge to object tracking. Here, we adopt cognitive psychology principles to design a flexible representation that can adapt to changes in object appearance during tracking. Inspired by the well-known Atkinson-Shiffrin Memory Model, we propose MUlti-Store Tracker (MUSTer), a dual-component approach consisting of short- and long-term memory stores to process target appearance memories. A powerful and efficient Integrated Correlation Filter (ICF) is employed in the short-term store for short-term tracking. The integrated long-term component, which is based on keypoint matching-tracking and RANSAC estimation, can interact with the long-term memory and provide additional information for output control. MUSTer was extensively evaluated on the CVPR2013 Online Object Tracking Benchmark (OOTB) and ALOV++ datasets. The experimental results demonstrated the superior performance of MUSTer in comparison with other state-of-the-art trackers.",
"",
"Most modern trackers typically employ a bounding box given in the first frame to track visual objects, where their tracking results are often sensitive to the initialization. In this paper, we propose a new tracking method, Reliable Patch Trackers (RPT), which attempts to identify and exploit the reliable patches that can be tracked effectively through the whole tracking process. Specifically, we present a tracking reliability metric to measure how reliably a patch can be tracked, where a probability model is proposed to estimate the distribution of reliable patches under a sequential Monte Carlo framework. As the reliable patches are distributed over the image, we exploit the motion trajectories to distinguish them from the background. Therefore, the visual object can be defined as the clustering of homo-trajectory patches, where a Hough voting-like scheme is employed to estimate the target state. Encouraging experimental results on a large set of sequences showed that the proposed approach is very effective in comparison to the state-of-the-art trackers. The full source code of our implementation will be publicly available."
]
} |
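The Fourier-domain machinery behind the MOSSE-style trackers surveyed in the row above can be illustrated with a toy, stdlib-only 1-D sketch. A naive DFT and a single training sample are used; the function names and the regularizer value are illustrative assumptions, not taken from any cited implementation:

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (stdlib only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def train_filter(f, g, lam=1e-4):
    """MOSSE-style filter from one sample: H = (G . conj(F)) / (|F|^2 + lam).
    lam is a small regularizer that avoids division by near-zero bins."""
    F, G = dft(f), dft(g)
    return [G[k] * F[k].conjugate() / (abs(F[k]) ** 2 + lam)
            for k in range(len(f))]

def respond(H, z):
    """Correlation response: element-wise product in the Fourier domain."""
    Z = dft(z)
    return idft([H[k] * Z[k] for k in range(len(z))])

# Toy 1-D "template" and a desired response peaked at index 3.
f = [0.2, 1.0, 0.5, 2.0, 0.3, 0.8, 1.5, 0.1]
g = [1.0 if i == 3 else 0.0 for i in range(8)]

H = train_filter(f, g)
r = respond(H, f)
peak = max(range(len(r)), key=lambda i: r[i])  # recovers index 3
```

With more training samples, MOSSE accumulates the numerator and denominator across frames; the kernelized and multi-channel variants cited in the row (KCF, DSST) extend the same closed form.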
1708.00159 | 2739587484 | Is it possible to recover an image from its noisy version using convolutional neural networks? This is an interesting problem as convolutional layers are generally used as feature detectors for tasks like classification, segmentation and object detection. We present a new CNN architecture for blind image denoising which synergically combines three architecture components, a multi-scale feature extraction layer which helps in reducing the effect of noise on feature maps, an l_p regularizer which helps in selecting only the appropriate feature maps for the task of reconstruction, and finally a three step training approach which leverages adversarial training to give the final performance boost to the model. The proposed model shows competitive denoising performance when compared to the state-of-the-art approaches. | The non-local means algorithm @cite_14 modifies the averaging step to be a weighted averaging, where the weights are given by the similarity measure. BM3D @cite_9 uses collaborative filtering of all the similar patches to achieve superior results. Weighted Nuclear Norm Minimization @cite_22 exploits the fact that a set of similar patches would be of low rank if they were noise-free. Simply solving for a set which gives a lower weighted nuclear norm removes the noise from the data. | {
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_22"
],
"mid": [
"1485262465",
"",
"2048695508"
],
"abstract": [
"We propose an image denoising method that exploits nonlocal image modeling, principal component analysis (PCA), and local shape-adaptive anisotropic estimation. The nonlocal modeling is exploited by grouping similar image patches in 3-D groups. The denoising is performed by shrinkage of the spectrum of a 3-D transform applied on such groups. The effectiveness of the shrinkage depends on the ability of the transform to sparsely represent the true-image data, thus separating it from the noise. We propose to improve the sparsity in two aspects. First, we employ image patches (neighborhoods) which can have data-adaptive shape. Second, we propose PCA on these adaptive-shape neighborhoods as part of the employed 3-D transform. The PCA bases are obtained by eigenvalue decomposition of empirical second-moment matrices that are estimated from groups of similar adaptive-shape neighborhoods. We show that the proposed method is competitive and outperforms some of the current best denoising methods, especially in preserving image details and introducing very few artifacts.",
"",
"As a convex relaxation of the low rank matrix factorization problem, the nuclear norm minimization has been attracting significant research interest in recent years. The standard nuclear norm minimization regularizes each singular value equally to pursue the convexity of the objective function. However, this greatly restricts its capability and flexibility in dealing with many practical problems (e.g., denoising), where the singular values have clear physical meanings and should be treated differently. In this paper we study the weighted nuclear norm minimization (WNNM) problem, where the singular values are assigned different weights. The solutions of the WNNM problem are analyzed under different weighting conditions. We then apply the proposed WNNM algorithm to image denoising by exploiting the image nonlocal self-similarity. Experimental results clearly show that the proposed WNNM algorithm outperforms many state-of-the-art denoising algorithms such as BM3D in terms of both quantitative measure and visual perception quality."
]
} |
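The weighted-averaging idea of non-local means described in the row above can be sketched in a few lines of plain Python on a 1-D signal. The patch half-width and bandwidth h below are illustrative choices, not values from the cited paper:

```python
import math

def nlm_1d(noisy, half=1, h=0.3):
    """Non-local means on a 1-D signal: every sample is replaced by a
    weighted average over ALL samples, where the weight of sample j for
    sample i depends on how similar their surrounding patches are."""
    n = len(noisy)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            # squared distance between patches centred at i and j (circular)
            d2 = sum((noisy[(i + k) % n] - noisy[(j + k) % n]) ** 2
                     for k in range(-half, half + 1))
            w = math.exp(-d2 / (h * h))   # similar patches get weight ~1
            num += w * noisy[j]
            den += w
        out.append(num / den)
    return out

# Piecewise-constant signal with small deterministic "noise"
clean = [0.0] * 4 + [1.0] * 4
noisy = [c + (0.05 if i % 2 == 0 else -0.05) for i, c in enumerate(clean)]
denoised = nlm_1d(noisy)

mse = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
```

On this toy signal, samples whose patches recur elsewhere are averaged with their look-alikes, so the reconstruction error drops relative to the noisy input; patches at the region boundaries match little and are left nearly unchanged.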
1708.00159 | 2739587484 | Is it possible to recover an image from its noisy version using convolutional neural networks? This is an interesting problem as convolutional layers are generally used as feature detectors for tasks like classification, segmentation and object detection. We present a new CNN architecture for blind image denoising which synergically combines three architecture components, a multi-scale feature extraction layer which helps in reducing the effect of noise on feature maps, an l_p regularizer which helps in selecting only the appropriate feature maps for the task of reconstruction, and finally a three step training approach which leverages adversarial training to give the final performance boost to the model. The proposed model shows competitive denoising performance when compared to the state-of-the-art approaches. | Assuming a prior on image patches has led to denoising methods which do not involve finding similar patches at all. The K-SVD @cite_0 method applies a sparse dictionary model to noisy patches, which essentially removes the noise from them. The sparse dictionary used in this method was 'learned' from a large corpus of natural or clean images. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2153663612"
],
"abstract": [
"We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality image database. Since the K-SVD is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm. This leads to a state-of-the-art denoising performance, equivalent and sometimes surpassing recently published leading alternative denoising methods"
]
} |
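K-SVD alternates a greedy sparse-coding step with a dictionary-update step. Only the former is sketched below, using plain matching pursuit (a simpler relative of the orthogonal matching pursuit typically used) on a hand-built orthonormal dictionary; everything here is an illustrative assumption, not the cited method's actual dictionary or solver:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(D, y, k):
    """Greedy sparse coding: pick the atom most correlated with the
    residual, record its coefficient, subtract, repeat k times.
    D is a list of unit-norm atoms; returns ({atom_index: coeff}, residual)."""
    residual = list(y)
    coeffs = {}
    for _ in range(k):
        j = max(range(len(D)), key=lambda a: abs(dot(D[a], residual)))
        c = dot(D[j], residual)
        coeffs[j] = coeffs.get(j, 0.0) + c
        residual = [r - c * d for r, d in zip(residual, D[j])]
    return coeffs, residual

# Hand-built orthonormal dictionary (4 atoms in R^4) -- illustrative only.
D = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]

# "Noisy patch": 2 * atom0 - 1 * atom2, plus a small perturbation on atom3.
y = [2.0, 0.0, -1.0, 0.02]
coeffs, residual = matching_pursuit(D, y, k=2)
# Keeping only k=2 atoms reconstructs the signal part and drops the noise.
denoised = [sum(c * D[j][i] for j, c in coeffs.items()) for i in range(4)]
```

Denoising follows because the k retained atoms reconstruct the signal content while the residual, which carries the noise, is discarded; K-SVD additionally re-learns the atoms themselves from the data.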
1708.00159 | 2739587484 | Is it possible to recover an image from its noisy version using convolutional neural networks? This is an interesting problem as convolutional layers are generally used as feature detectors for tasks like classification, segmentation and object detection. We present a new CNN architecture for blind image denoising which synergically combines three architecture components, a multi-scale feature extraction layer which helps in reducing the effect of noise on feature maps, an l_p regularizer which helps in selecting only the appropriate feature maps for the task of reconstruction, and finally a three step training approach which leverages adversarial training to give the final performance boost to the model. The proposed model shows competitive denoising performance when compared to the state-of-the-art approaches. | The first attempt to learn a generic image prior was given by Product-of-Experts @cite_4 which was later extended to image denoising and inpainting by Field-of-Experts @cite_19 . Both methods involve learning a prior from a generic image database and then using the prior for iterating towards a noise free patch. Minimizing the expected Patch Log Likelihood @cite_1 also used a learned Gaussian mixture prior. | {
"cite_N": [
"@cite_1",
"@cite_19",
"@cite_4"
],
"mid": [
"2172275395",
"2131686571",
"2079182758"
],
"abstract": [
"Learning good image priors is of utmost importance for the study of vision, computer vision and image processing applications. Learning priors and optimizing over whole images can lead to tremendous computational challenges. In contrast, when we work with small image patches, it is possible to learn priors and perform patch restoration very efficiently. This raises three questions - do priors that give high likelihood to the data also lead to good performance in restoration? Can we use such patch based priors to restore a full image? Can we learn better patch priors? In this work we answer these questions. We compare the likelihood of several patch models and show that priors that give high likelihood to data perform better in patch restoration. Motivated by this result, we propose a generic framework which allows for whole image restoration using any patch based prior for which a MAP (or approximate MAP) estimate can be calculated. We show how to derive an appropriate cost function, how to optimize it and how to use it to restore whole images. Finally, we present a generic, surprisingly simple Gaussian Mixture prior, learned from a set of natural images. When used with the proposed framework, this Gaussian Mixture Model outperforms all other generic prior methods for image denoising, deblurring and inpainting.",
"We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach extends traditional Markov random field (MRF) models by learning potential functions over extended pixel neighborhoods. Field potentials are modeled using a Products-of-Experts framework that exploits nonlinear functions of many linear filter responses. In contrast to previous MRF approaches all parameters, including the linear filters themselves, are learned from training data. We demonstrate the capabilities of this Field of Experts model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. While the model is trained on a generic image database and is not tuned toward a specific application, we obtain results that compete with and even outperform specialized techniques.",
"It is possible to combine multiple probabilistic models of the same data by multiplying the probabilities together and then renormalizing. This is a very efficient way to model high-dimensional data which simultaneously satisfies many different low dimensional constraints. Each individual expert model can focus on giving high probability to data vectors that satisfy just one of the constraints. Data vectors that satisfy this one constraint but violate other constraints will be ruled out by their low probability under the other expert models. Training a product of models appears difficult because, in addition to maximizing the probabilities that the individual models assign to the observed data, it is necessary to make the models disagree on unobserved regions of the data space. However, if the individual models are tractable there is a fairly efficient way to train a product of models. This training algorithm suggests a biologically plausible way of learning neural population codes."
]
} |
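The prior-based methods in the row above compute MAP estimates of the clean patch under a learned prior. For the simplest degenerate choice, a single zero-mean Gaussian prior per coefficient, the MAP problem has a closed-form shrinkage solution, sketched below. EPLL's actual prior is a learned Gaussian mixture optimized iteratively; the numbers here are purely illustrative:

```python
def map_denoise_gaussian(y, sigma2, tau2):
    """MAP estimate under a prior x ~ N(0, tau2) per coefficient and
    additive Gaussian noise of variance sigma2:
        x* = argmin (x - y)^2 / (2 * sigma2) + x^2 / (2 * tau2)
    Setting the derivative to zero gives linear (Wiener-style) shrinkage."""
    shrink = tau2 / (tau2 + sigma2)
    return [shrink * v for v in y]

# A "patch" observed under noise variance sigma2 = 0.25, with a prior
# variance tau2 = 0.75, so shrink = 0.75 and every coefficient moves
# toward the prior mean 0.
noisy_patch = [0.9, -0.4, 0.1, 0.05]
estimate = map_denoise_gaussian(noisy_patch, sigma2=0.25, tau2=0.75)
```

Roughly speaking, EPLL generalizes this: for each patch it picks the most responsible component of a learned Gaussian mixture and applies the corresponding Wiener-style update inside an alternating optimization over the whole image.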
1708.00159 | 2739587484 | Is it possible to recover an image from its noisy version using convolutional neural networks? This is an interesting problem as convolutional layers are generally used as feature detectors for tasks like classification, segmentation and object detection. We present a new CNN architecture for blind image denoising which synergically combines three architecture components, a multi-scale feature extraction layer which helps in reducing the effect of noise on feature maps, an l_p regularizer which helps in selecting only the appropriate feature maps for the task of reconstruction, and finally a three step training approach which leverages adversarial training to give the final performance boost to the model. The proposed model shows competitive denoising performance when compared to the state-of-the-art approaches. | But with deep learning techniques, new methods have been devised which can learn an image prior implicitly as model parameters and simply compute the noise-free patch. A network resembling a fully convolutional network was used in @cite_18 to get a denoiser model. In @cite_12 , a 5-layer fully connected network gave state-of-the-art performance. But both these models require different parameters to be specifically trained for each noise level. | {
"cite_N": [
"@cite_18",
"@cite_12"
],
"mid": [
"2098477387",
"2037642501"
],
"abstract": [
"We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind de-noising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.",
"Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with a plain multi layer perceptron (MLP) applied to image patches. While this has been done before, we will show that by training on large image databases we are able to compete with the current state-of-the-art image denoising methods. Furthermore, our approach is easily adapted to less extensively studied types of noise (by merely exchanging the training data), for which we achieve excellent results as well."
]
} |
1708.00159 | 2739587484 | Is it possible to recover an image from its noisy version using convolutional neural networks? This is an interesting problem as convolutional layers are generally used as feature detectors for tasks like classification, segmentation and object detection. We present a new CNN architecture for blind image denoising which synergically combines three architecture components, a multi-scale feature extraction layer which helps in reducing the effect of noise on feature maps, an l_p regularizer which helps in selecting only the appropriate feature maps for the task of reconstruction, and finally a three step training approach which leverages adversarial training to give the final performance boost to the model. The proposed model shows competitive denoising performance when compared to the state-of-the-art approaches. | In @cite_11 , the authors have used an end-to-end trainable network which uses a Gaussian conditional random field. This model uses successive steps of denoising and noise parameter estimation to eventually give a model which can do blind denoising. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2963366932"
],
"abstract": [
"We propose a novel end-to-end trainable deep network architecture for image denoising based on a Gaussian Conditional Random Field (GCRF) model. In contrast to the existing discriminative denoising methods that train a separate model for each individual noise level, the proposed deep network explicitly models the input noise variance and hence is capable of handling a range of noise levels. Our deep network, which we refer to as deep GCRF network, consists of two sub-networks: (i) a parameter generation network that generates the pairwise potential parameters based on the noisy input image, and (ii) an inference network whose layers perform the computations involved in an iterative GCRF inference procedure. We train two deep GCRF networks (each network operates over a range of noise levels: one for low input noise levels and one for high input noise levels) discriminatively by maximizing the peak signal-to-noise ratio measure. Experiments on Berkeley segmentation and PASCALVOC datasets show that the proposed approach produces results on par with the state-of-the-art without training a separate network for each individual noise level."
]
} |
1708.00315 | 2950359091 | Generative Adversarial Networks (GANs) have recently achieved significant improvement on paired/unpaired image-to-image translation, such as photo @math sketch and artist painting style transfer. However, existing models are only capable of transferring low-level information (e.g. color or texture changes), but fail to edit high-level semantic meanings (e.g., geometric structure or content) of objects. On the other hand, while some works can synthesize compelling real-world images given a class label or caption, they cannot condition on arbitrary shapes or structures, which largely limits their application scenarios and the interpretive capability of model results. In this work, we focus on a more challenging semantic manipulation task, which aims to modify the semantic meaning of an object while preserving its own characteristics (e.g. viewpoints and shapes), such as cow @math sheep, motor @math bicycle, cat @math dog. To tackle such large semantic changes, we introduce a contrasting GAN (contrast-GAN) with a novel adversarial contrasting objective. Instead of directly making the synthesized samples close to target data as previous GANs did, our adversarial contrasting objective optimizes over distance comparisons between samples, that is, enforcing the manipulated data to be semantically closer to the real data of the target category than the input data. Equipped with the new contrasting objective, a novel mask-conditional contrast-GAN architecture is proposed to enable disentangling the image background from object semantic changes. Experiments on several semantic manipulation tasks on the ImageNet and MSCOCO datasets show considerable performance gains of our contrast-GAN over other conditional GANs. Quantitative results further demonstrate the superiority of our model in generating manipulated results with high visual fidelity and reasonable object semantics. | There have been a large number of GAN-family methods since the seminal work by @cite_9 .
Impressive progress has been achieved on a wide variety of image generation @cite_0 @cite_19 @cite_31 , image editing @cite_22 , text generation @cite_16 and conditional image generation such as text2image @cite_20 , image inpainting @cite_26 , and image translation @cite_25 tasks. The key to GANs' success lies in the variants of the adversarial loss that force the synthesized images to be indistinguishable from the real data distribution. To handle the well-known mode collapse issue of GANs and make their training more stable, diverse training objectives have been developed, such as the Earth Mover Distance in WGAN @cite_23 , the feature matching loss @cite_31 , and the loss-sensitive GAN @cite_14 . However, unlike existing GAN objectives that seek an appropriate criterion between synthesized samples and target outputs, we propose a tailored adversarial contrasting objective for image semantic manipulation. Our contrast-GAN is inspired by the strategy of learning by comparison, that is, aiming to learn the mapping function such that the semantic features of manipulated images are much closer to the feature distributions of the target domain than to those of the original domain. | {
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_22",
"@cite_9",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_31",
"@cite_16",
"@cite_25",
"@cite_20"
],
"mid": [
"2342877626",
"2580360036",
"2951021768",
"2099471712",
"2125389028",
"2530372461",
"",
"2432004435",
"2951684117",
"2552465644",
"2949999304"
],
"abstract": [
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.",
"In this paper, we present the Lipschitz regularization theory and algorithms for a novel Loss-Sensitive Generative Adversarial Network (LS-GAN). Specifically, it trains a loss function to distinguish between real and fake samples by designated margins, while learning a generator alternately to produce realistic samples by minimizing their losses. The LS-GAN further regularizes its loss function with a Lipschitz regularity condition on the density of real data, yielding a regularized model that can better generalize to produce new data from a reasonable number of training examples than the classic GAN. We will further present a Generalized LS-GAN (GLS-GAN) and show it contains a large family of regularized GAN models, including both LS-GAN and Wasserstein GAN, as its special cases. Compared with the other GAN models, we will conduct experiments to show both LS-GAN and GLS-GAN exhibit competitive ability in generating new images in terms of the Minimum Reconstruction Error (MRE) assessed on a separate test set. We further extend the LS-GAN to a conditional form for supervised and semi-supervised learning problems, and demonstrate its outstanding performance on image classification tasks.",
"Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result. Unless the user has considerable artistic skill, it is easy to \"fall off\" the manifold of natural images while editing. In this paper, we propose to learn the natural image manifold directly from data using a generative adversarial neural network. We then define a class of image editing operations, and constrain their output to lie on that learned manifold at all times. The model automatically adjusts the output keeping all edits as realistic as possible. All our manipulations are expressed in terms of constrained optimization and are applied in near-real time. We evaluate our algorithm on the task of realistic photo manipulation of shape and color. The presented method can further be used for changing one image to look like the other, as well as generating novel imagery from scratch based on user's scribbles.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers. While existing models can synthesize images based on global constraints such as a class label or caption, they do not provide control over pose or object location. We propose a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location. We show high-quality 128 × 128 image synthesis on the Caltech-UCSD Birds dataset, conditioned on both informal text descriptions and also object location. Our system exposes control over both the bounding box around the bird and its constituent parts. By modeling the conditional distributions over part locations, our system also enables conditioning on arbitrary subsets of parts (e.g. only the beak and tail), yielding an efficient interface for picking part locations.",
"",
"We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.",
"A natural image usually conveys rich semantic content and can be viewed from different angles. Existing image description methods are largely restricted by small sets of biased visual paragraph annotations, and fail to cover rich underlying semantics. In this paper, we investigate a semi-supervised paragraph generative framework that is able to synthesize diverse and semantically coherent paragraph descriptions by reasoning over local semantic regions and exploiting linguistic knowledge. The proposed Recurrent Topic-Transition Generative Adversarial Network (RTT-GAN) builds an adversarial framework between a structured paragraph generator and multi-level paragraph discriminators. The paragraph generator generates sentences recurrently by incorporating region-based visual and language attention mechanisms at each step. The quality of generated paragraph sentences is assessed by multi-level adversarial discriminators from two aspects, namely, plausibility at sentence level and topic-transition coherence at paragraph level. The joint adversarial training of RTT-GAN drives the model to generate realistic paragraphs with smooth logical transition between sentence topics. Extensive quantitative experiments on image and video paragraph datasets demonstrate the effectiveness of our RTT-GAN in both supervised and semi-supervised settings. Qualitative results on telling diverse stories for an image also verify the interpretability of RTT-GAN.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image model- ing, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions."
]
} |
1708.00315 | 2950359091 | Generative Adversarial Networks (GANs) have recently achieved significant improvement on paired/unpaired image-to-image translation, such as photo @math sketch and artist painting style transfer. However, existing models are only capable of transferring low-level information (e.g. color or texture changes), but fail to edit high-level semantic meanings (e.g., geometric structure or content) of objects. On the other hand, while some works can synthesize compelling real-world images given a class label or caption, they cannot condition on arbitrary shapes or structures, which largely limits their application scenarios and the interpretive capability of model results. In this work, we focus on a more challenging semantic manipulation task, which aims to modify the semantic meaning of an object while preserving its own characteristics (e.g. viewpoints and shapes), such as cow @math sheep, motor @math bicycle, cat @math dog. To tackle such large semantic changes, we introduce a contrasting GAN (contrast-GAN) with a novel adversarial contrasting objective. Instead of directly making the synthesized samples close to target data as previous GANs did, our adversarial contrasting objective optimizes over distance comparisons between samples, that is, enforcing the manipulated data to be semantically closer to the real data of the target category than the input data. Equipped with the new contrasting objective, a novel mask-conditional contrast-GAN architecture is proposed to enable disentangling the image background from object semantic changes. Experiments on several semantic manipulation tasks on the ImageNet and MSCOCO datasets show considerable performance gains of our contrast-GAN over other conditional GANs. Quantitative results further demonstrate the superiority of our model in generating manipulated results with high visual fidelity and reasonable object semantics. 
| GANs have shown great success on a variety of image-conditional models such as style transfer @cite_12 @cite_11 and general-purpose image-to-image translation @cite_25 . More recent approaches @cite_24 @cite_8 @cite_29 @cite_10 have tackled the unpaired setting for cross-domain image translation and also conducted experiments on simple semantic translation (e.g. horse @math zebra and apple @math orange), where only color and texture changes are required. Compared to prior approaches that only transfer low-level information, we focus on high-level semantic manipulation on images given a desired category. The unified mask-controllable contrast-GAN is introduced to disentangle the image background from object parts, comprising one shared conditional generator and several semantic-aware discriminators within an adversarial optimization. Our model can be posed as a general-purpose solution for high-level semantic manipulation, which can facilitate many image understanding tasks, such as unsupervised and semi-supervised activity recognition and object recognition. | {
"cite_N": [
"@cite_8",
"@cite_29",
"@cite_24",
"@cite_10",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"2608015370",
"2592480533",
"2605287558",
"2471149695",
"2552465644",
"2950689937",
"2605028456"
],
"abstract": [
"Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation, we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.",
"Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets. Code and additional results are available in this https URL .",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain @math to a target domain @math in the absence of paired examples. Our goal is to learn a mapping @math such that the distribution of images from @math is indistinguishable from the distribution @math using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping @math and introduce a cycle consistency loss to push @math (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.",
"We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.",
"Many problems in image processing and computer vision (e.g. colorization, style transfer) can be posed as 'manipulating' an input image into a corresponding output image given a user-specified guiding signal. A holy-grail solution towards generic image manipulation should be able to efficiently alter an input image with any personalized signals (even signals unseen during training), such as diverse paintings and arbitrary descriptive attributes. However, existing methods are either inefficient to simultaneously process multiple signals (let alone generalize to unseen signals), or unable to handle signals from other modalities. In this paper, we make the first attempt to address the zero-shot image manipulation task. We cast this problem as manipulating an input image according to a parametric model whose key parameters can be conditionally generated from any guiding signal (even unseen ones). To this end, we propose the Zero-shot Manipulation Net (ZM-Net), a fully-differentiable architecture that jointly optimizes an image-transformation network (TNet) and a parameter network (PNet). The PNet learns to generate key transformation parameters for the TNet given any guiding signal while the TNet performs fast zero-shot image manipulation according to both signal-dependent parameters from the PNet and signal-invariant parameters from the TNet itself. Extensive experiments show that our ZM-Net can perform high-quality image manipulation conditioned on different forms of guiding signals (e.g. style images and attributes) in real-time (tens of milliseconds per image) even for unseen signals. Moreover, a large-scale style dataset with over 20,000 style images is also constructed to promote further research."
]
} |
1708.00391 | 2741990823 | A major challenge in paraphrase research is the lack of parallel corpora. In this paper, we present a new method to collect large-scale sentential paraphrases from Twitter by linking tweets through shared URLs. The main advantage of our method is its simplicity, as it gets rid of the classifier or human in the loop needed to select data before annotation and subsequent application of paraphrase identification algorithms in the previous work. We present the largest human-labeled paraphrase corpus to date of 51,524 sentence pairs and the first cross-domain benchmarking for automatic paraphrase identification. In addition, we show that more than 30,000 new sentential paraphrases can be easily and continuously captured every month at 70% precision, and demonstrate their utility for downstream NLP tasks through phrasal paraphrase extraction. We make our code and data freely available. | Researchers have found several data sources from which to collect sentential paraphrases: multiple news agencies reporting the same event (MSRP) @cite_37 @cite_5 , multiple translated versions of a foreign novel @cite_28 @cite_53 or other texts @cite_42 , multiple definitions of the same concept @cite_22 , descriptions of the same video clip from multiple workers @cite_48 , or rephrased sentences @cite_9 @cite_43 . However, all these data collection methods are incapable of obtaining sentential paraphrases on a large scale (e.g., the limited number of news agencies or books with multiple translated versions), and/or lack meaningful negative examples. Both of these properties are crucial for developing machine learning models that identify paraphrases and measure semantic similarities. | {
"cite_N": [
"@cite_37",
"@cite_22",
"@cite_28",
"@cite_48",
"@cite_53",
"@cite_42",
"@cite_9",
"@cite_43",
"@cite_5"
],
"mid": [
"1980776243",
"",
"2061235289",
"2164290393",
"2129468719",
"2169813772",
"2008127487",
"2511538013",
"131533222"
],
"abstract": [
"We investigate unsupervised techniques for acquiring monolingual sentence-level paraphrases from a corpus of temporally and topically clustered news articles collected from thousands of web-based news sources. Two techniques are employed: (1) simple string edit distance, and (2) a heuristic strategy that pairs initial (presumably summary) sentences from different news stories in the same cluster. We evaluate both datasets using a word alignment algorithm and a metric borrowed from machine translation. Results show that edit distance data is cleaner and more easily-aligned than the heuristic data, with an overall alignment error rate (AER) of 11.58 on a similarly-extracted test set. On test data extracted by the heuristic strategy, however, performance of the two training sets is similar, with AERs of 13.2 and 14.7 respectively. Analysis of 100 pairs of sentences from each set reveals that the edit distance data lacks many of the complex lexical and syntactic alternations that characterize monolingual paraphrase. The summary sentences, while less readily alignable, retain more of the non-trivial alternations that are of greatest interest learning paraphrase relationships.",
"",
"We address the problem of sentence alignment for monolingual corpora, a phenomenon distinct from alignment in parallel corpora. Aligning large comparable corpora automatically would provide a valuable resource for learning of text-to-text rewriting rules. We incorporate context into the search for an optimal alignment in two complementary ways: learning rules for matching paragraphs using topic structure and further refining the matching through local alignment to find good sentence pairs. Evaluation shows that our alignment method outperforms state-of-the-art systems developed for the same task.",
"A lack of standard datasets and evaluation metrics has prevented the field of paraphrasing from making the kind of rapid progress enjoyed by the machine translation community over the last 15 years. We address both problems by presenting a novel data collection framework that produces highly parallel text data relatively inexpensively and on a large scale. The highly parallel nature of this data allows us to use simple n-gram comparisons to measure both the semantic adequacy and lexical dissimilarity of paraphrase candidates. In addition to being simple and efficient to compute, experiments show that these metrics correlate highly with human judgments.",
"We address the text-to-text generation problem of sentence-level paraphrasing --- a phenomenon distinct from and more difficult than word- or phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems.",
"Automatic paraphrasing is an important component in many natural language processing tasks. In this article we present a new parallel corpus with paraphrase annotations. We adopt a definition of paraphrase based on word alignments and show that it yields high inter-annotator agreement. As Kappa is suited to nominal data, we employ an alternative agreement statistic which is appropriate for structured alignment tasks. We discuss how the corpus can be usefully employed in evaluating paraphrase systems automatically (e.g., by measuring precision, recall, and F1) and also in developing linguistically rich paraphrase models based on syntactic structure.",
"To paraphrase means to rewrite content while preserving the original meaning. Paraphrasing is important in fields such as text reuse in journalism, anonymizing work, and improving the quality of customer-written reviews. This article contributes to paraphrase acquisition and focuses on two aspects that are not addressed by current research: (1) acquisition via crowdsourcing, and (2) acquisition of passage-level samples. The challenge of the first aspect is automatic quality assurance; without such a means the crowdsourcing paradigm is not effective, and without crowdsourcing the creation of test corpora is unacceptably expensive for realistic order of magnitudes. The second aspect addresses the deficit that most of the previous work in generating and evaluating paraphrases has been conducted using sentence-level paraphrases or shorter; these short-sample analyses are limited in terms of application to plagiarism detection, for example. We present the Webis Crowd Paraphrase Corpus 2011 (Webis-CPC-11), which recently formed part of the PAN 2010 international plagiarism detection competition. This corpus comprises passage-level paraphrases with 4067 positive samples and 3792 negative samples that failed our criteria, using Amazon's Mechanical Turk for crowdsourcing. In this article, we review the lessons learned at PAN 2010, and explain in detail the method used to construct the corpus. The empirical contributions include machine learning experiments to explore if passage-level paraphrases can be identified in a two-class classification problem using paraphrase similarity features, and we find that a k-nearest-neighbor classifier can correctly distinguish between paraphrased and nonparaphrased samples with 0.980 precision at 0.523 recall. 
This result implies that just under half of our samples must be discarded (remaining 0.477 fraction), but our cost analysis shows that the automation we introduce results in a 18p financial saving and over 100 hours of time returned to the researchers when repeating a similar corpus design. On the other hand, when building an unrelated corpus requiring, say, 25p training data for the automated component, we show that the financial outcome is cost neutral, while still returning over 70 hours of time to the researchers. The work presented here is the first to join the paraphrasing and plagiarism communities.",
"We introduce a manually-created, multireference dataset for abstractive sentence and short paragraph compression. First, we examine the impact of singleand multi-sentence level editing operations on human compression quality as found in this corpus. We observe that substitution and rephrasing operations are more meaning preserving than other operations, and that compressing in context improves quality. Second, we systematically explore the correlations between automatic evaluation metrics and human judgments of meaning preservation and grammaticality in the compression task, and analyze the impact of the linguistic units used and precision versus recall measures on the quality of the metrics. Multi-reference evaluation metrics are shown to offer significant advantage over single reference-based metrics.",
"An obstacle to research in automatic paraphrase identification and generation is the lack of large-scale, publiclyavailable labeled corpora of sentential paraphrases. This paper describes the creation of the recently-released Microsoft Research Paraphrase Corpus, which contains 5801 sentence pairs, each hand-labeled with a binary judgment as to whether the pair constitutes a paraphrase. The corpus was created using heuristic extraction techniques in conjunction with an SVM-based classifier to select likely sentence-level paraphrases from a large corpus of topicclustered news data. These pairs were then submitted to human judges, who confirmed that 67 were in fact semantically equivalent. In addition to describing the corpus itself, we explore a number of issues that arose in defining guidelines for the human raters."
]
} |
1708.00391 | 2741990823 | A major challenge in paraphrase research is the lack of parallel corpora. In this paper, we present a new method to collect large-scale sentential paraphrases from Twitter by linking tweets through shared URLs. The main advantage of our method is its simplicity, as it gets rid of the classifier or human in the loop needed to select data before annotation and subsequent application of paraphrase identification algorithms in the previous work. We present the largest human-labeled paraphrase corpus to date of 51,524 sentence pairs and the first cross-domain benchmarking for automatic paraphrase identification. In addition, we show that more than 30,000 new sentential paraphrases can be easily and continuously captured every month at 70 precision, and demonstrate their utility for downstream NLP tasks through phrasal paraphrase extraction. We make our code and data freely available. | There are other phrasal and syntactic paraphrase data, such as DIRT @cite_52 , POLY @cite_0 , PATTY @cite_32 , DEFIE @cite_31 , and PPDB @cite_45 @cite_8 . Most of these works focus on news or web data. Other earlier works on Twitter paraphrase extraction used unsupervised approaches @cite_1 @cite_24 or small datasets @cite_10 @cite_16 . | {
"cite_N": [
"@cite_8",
"@cite_1",
"@cite_32",
"@cite_52",
"@cite_0",
"@cite_24",
"@cite_45",
"@cite_31",
"@cite_16",
"@cite_10"
],
"mid": [
"2561311854",
"2116900724",
"1483236033",
"1965605789",
"2566302710",
"2250293945",
"",
"",
"1822046502",
"1498924130"
],
"abstract": [
"",
"We present a new and unique paraphrase resource, which contains meaningpreserving transformations between informal user-generated text. Sentential paraphrases are extracted from a comparable corpus of temporally and topically related messages on Twitter which often express semantically identical information through distinct surface forms. We demonstrate the utility of this new resource on the task of paraphrasing and normalizing noisy text, showing improvement over several state-of-the-art paraphrase and normalization systems 1 .",
"This paper presents PATTY: a large resource for textual patterns that denote binary relations between entities. The patterns are semantically typed and organized into a subsumption taxonomy. The PATTY system is based on efficient algorithms for frequent itemset mining and can process Web-scale corpora. It harnesses the rich type system and entity population of large knowledge bases. The PATTY taxonomy comprises 350,569 pattern synsets. Random-sampling-based evaluation shows a pattern accuracy of 84.7 . PATTY has 8,162 subsumptions, with a random-sampling-based precision of 75 . The PATTY resource is freely available for interactive access and download.",
"In this paper, we propose an unsupervised method for discovering inference rules from text, such as \"X is author of Y a X wrote Y\", \"X solved Y a X found a solution to Y\", and \"X caused Y a Y is triggered by X\". Inference rules are extremely important in many fields such as natural language processing, information retrieval, and artificial intelligence in general. Our algorithm is based on an extended version of Harris' Distributional Hypothesis, which states that words that occurred in the same contexts tend to be similar. Instead of using this hypothesis on words, we apply it to paths in the dependency trees of a parsed corpus.",
"",
"Compared to the edited genres that have played a central role in NLP research, microblog texts use a more informal register with nonstandard lexical items, abbreviations, and free orthographic variation. When confronted with such input, conventional text analysis tools often perform poorly. Normalization — replacing orthographically or lexically idiosyncratic forms with more standard variants — can improve performance. We propose a method for learning normalization rules from machine translations of a parallel corpus of microblog messages. To validate the utility of our approach, we evaluate extrinsically, showing that normalizing English tweets and then translating improves translation quality (compared to translating unnormalized text) using three standard web translation services as well as a phrase-based translation system trained on parallel microblog data.",
"",
"",
"We present an approach for automatically learning synonyms from a corpus of paraphrased tweets. The synonyms are learned by using shallow parse chunks to create candidate synonyms and their context windows, and the synonyms are substituted back into a paraphrase detection system that uses machine translation metrics as features for a classifier. We find a 2.29 improvement in F1 when we train and test on the paraphrase training set, demonstrating the importance of discovering high quality synonyms. We also find 9.8 better coverage of the paraphrase corpus using our synonyms rather than larger, existing synonym resources, demonstrating the power of extracting synonyms that are representative of the topics in the test set.",
"In the last few years, the interest of the research community in micro-blogs and social media services, such as Twitter, is growing exponentially. Yet, so far not much attention has been paid on a key characteristic of micro-blogs: the high level of information redundancy. The aim of this paper is to systematically approach this problem by providing an operational definition of redundancy. We cast redundancy in the framework of Textual Entailment Recognition. We also provide quantitative evidence on the pervasiveness of redundancy in Twitter, and describe a dataset of redundancy-annotated tweets. Finally, we present a general purpose system for identifying redundant tweets. An extensive quantitative evaluation shows that our system successfully solves the redundancy detection task, improving over baseline systems with statistical significance."
]
} |
1708.00300 | 2949981766 | The problem of finding a next best viewpoint for 3D modeling or scene mapping has been explored in computer vision over the last decade. This paper tackles a similar problem, but with different characteristics. It proposes a method for dynamic next best viewpoint recovery of a target point while avoiding possible occlusions. Since the environment can change, the method has to iteratively find the next best view with a global understanding of the free and occupied parts. We model the problem as a set of possible viewpoints which correspond to the centers of the facets of a virtual tessellated hemisphere covering the scene. Taking into account occlusions, distances between current and future viewpoints, quality of the viewpoint and joint constraints (robot arm joint distances or limits), we evaluate the next best viewpoint. The proposal has been evaluated on 8 different scenarios with different occlusions and a short 3D video sequence to validate its dynamic performance. | The problem of NBV is still a challenging problem to reduce the number of views needed to capture the whole object. @cite_11 uses a labeling of the object in levels of perception quality given by the angle between camera and the object and ray tracing techniques. With this information, mean-shift clustering is applied and the cluster with the highest value is chosen as the next best view. Other approaches use contours to calculate the unseen parts @cite_3 . Vasquez- @cite_0 used a two stage system that improves the quality of the modeling by predicting a next-best-view and evaluating a set of neighbor views, eventually selecting the best among all of them. | {
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_11"
],
"mid": [
"2141847241",
"",
"2290291499"
],
"abstract": [
"Three-dimensional (3D) object reconstruction is the process of building a 3D model of a real object. This task is performed by taking several scans of an object from different locations (views). Due to the limited field of view of the sensor and the object's self-occlusions, it is a difficult problem to solve. In addition, sensor positioning by robots is not perfect, making the actual view different from the expected one. We propose a next best view (NBV) algorithm that determines each view to reconstruct an arbitrary object. Furthermore, we propose a method to deal with the uncertainty in sensor positioning. The algorithm fulfills all the constraints of a reconstruction process, such as new information, positioning constraints, sensing constraints and registration constraints. Moreover, it improves the scan's quality and reduces the navigation distance. The algorithm is based on a search-based paradigm where a set of candidate views is generated and then each candidate view is evaluated to determine whic...",
"",
"In this paper, we present a new method to determine the “next best view” (NBV) solution for accurate 3D reconstruction of an object with minimum prior information about the object's geometry. The proposed method determines the best visible surface of unknown objects using an adaptive mean shift algorithm which avoids the inaccessible position. The proposed method automatically generates the 3D model of objects in real time with a minimum number of best visible surface patches while the objects are moving on a turntable. By generating a set of potential next views, the proposed method ensures proper avoidance of unreachable positions. The number of views required to reconstruct a 3D model of objects depends upon their complexity. The proposed method is applicable to all kinds of range sensors and experimental results validate the proposed method for 3D modeling of real objects and prove its robustness."
]
} |
1708.00300 | 2949981766 | The problem of finding a next best viewpoint for 3D modeling or scene mapping has been explored in computer vision over the last decade. This paper tackles a similar problem, but with different characteristics. It proposes a method for dynamic next best viewpoint recovery of a target point while avoiding possible occlusions. Since the environment can change, the method has to iteratively find the next best view with a global understanding of the free and occupied parts. We model the problem as a set of possible viewpoints which correspond to the centers of the facets of a virtual tessellated hemisphere covering the scene. Taking into account occlusions, distances between current and future viewpoints, quality of the viewpoint and joint constraints (robot arm joint distances or limits), we evaluate the next best viewpoint. The proposal has been evaluated on 8 different scenarios with different occlusions and a short 3D video sequence to validate its dynamic performance. | Our problem includes dynamic elements in the scenario, so we have to deal with moving occlusions. @cite_2 use the information of the occluding element and assigns the next best view by maximizing the perception of the surface out of the occluded region. This solution is not applicable to our problem since our environment is dynamic so the occlusion changes over time. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2588625298"
],
"abstract": [
"How to determine the camera’s next best view is a challenging problem in vision field. A next best view approach is proposed based on occlusion information in a single depth image. First, the occlu..."
]
} |
1708.00300 | 2949981766 | The problem of finding a next best viewpoint for 3D modeling or scene mapping has been explored in computer vision over the last decade. This paper tackles a similar problem, but with different characteristics. It proposes a method for dynamic next best viewpoint recovery of a target point while avoiding possible occlusions. Since the environment can change, the method has to iteratively find the next best view with a global understanding of the free and occupied parts. We model the problem as a set of possible viewpoints which correspond to the centers of the facets of a virtual tessellated hemisphere covering the scene. Taking into account occlusions, distances between current and future viewpoints, quality of the viewpoint and joint constraints (robot arm joint distances or limits), we evaluate the next best viewpoint. The proposal has been evaluated on 8 different scenarios with different occlusions and a short 3D video sequence to validate its dynamic performance. | The dynamic visual tracking problem is a completely different problem to NBV. A camera mounted on a moving or stationary base is used to track an object. In @cite_6 the authors defined visual tracking in 2D of a single feature point as the translation @math and rotation @math with respect to the camera frame that keeps @math stationary, where @math is the area in the image plane where the target is projected. Many authors e.g. @cite_6 @cite_8 have addressed this problem using different approaches. | {
"cite_N": [
"@cite_6",
"@cite_8"
],
"mid": [
"2125858574",
"1551781742"
],
"abstract": [
"The authors present algorithms for robotic (eye-in-hand configuration) real-time visual tracking of arbitrary 3D objects traveling at unknown velocities in a 2D space (depth is given as known). Visual tracking is formulated as a problem of combining control with computer vision. A mathematical formulation of the control problem that includes information from a novel feedback vision sensor and represents everything with respect to the camera frame is presented. The sum-of-squared differences (SSD) optical flow is used to compute the vector of discrete displacements each instant of time. These displacements can be fed either directly to a PI (proportional-integral) controller or to a pole assignment controller or discrete steady-state Kalman filter. In the latter case, the Kalman filter calculates the estimated values of the system's states and the exogenous disturbances, and a discrete LQG (linear-quadratic Gaussian) controller computes the desired motion of the robotic system. The outputs of the controllers are sent to the Cartesian robotic controller. Performance results are presented. >",
"Abstract : This work represents a more general approach to robotic system design than one based on predefined responses in a controlled environment. An implementing of a vision-based robotic tracking system is presented in which target trajectory predictions enable the robot to track and intercept a moving target. A host microcomputer receives target position information from a vision module, predicts the target's trajectory, and issues tracking commands to the robot controlled. Five predictive algorithms are derived for implementation in the system, including a Kalman and an augmented Kalman filter. The use of one-step as well as absolute and relative n-step predictions is investigated. The best predictor algorithm is presented, by which one of the five predictions is selected to be used as the robotic tracking command. Using data from experimental trials, predictor results are compared and robotic tracking performance and interception success are evaluated for the target both moving and after it comes to rest. Constraints limiting the applicability of this implementation are discussed and possible improvements and extensions suggested. (Author)"
]
} |
1708.00300 | 2949981766 | The problem of finding a next best viewpoint for 3D modeling or scene mapping has been explored in computer vision over the last decade. This paper tackles a similar problem, but with different characteristics. It proposes a method for dynamic next best viewpoint recovery of a target point while avoiding possible occlusions. Since the environment can change, the method has to iteratively find the next best view with a global understanding of the free and occupied parts. We model the problem as a set of possible viewpoints which correspond to the centers of the facets of a virtual tessellated hemisphere covering the scene. Taking into account occlusions, distances between current and future viewpoints, quality of the viewpoint and joint constraints (robot arm joint distances or limits), we evaluate the next best viewpoint. The proposal has been evaluated on 8 different scenarios with different occlusions and a short 3D video sequence to validate its dynamic performance. | @cite_6 combined visual tracking and control. In particular they use sum of square differences of the optical flow to compute the discrete displacement which is fed to either a PI pole assigned controller or a Kalman filter to improve the state estimates. These estimates are then used to move the robot arm. According to @cite_6 , @cite_8 predicted mathematically the position of the object's centroid, in order to visually track the object. Their algorithm could only work for slowly moving objects, since it had to compute the coordinates of the centroid. | {
"cite_N": [
"@cite_6",
"@cite_8"
],
"mid": [
"2125858574",
"1551781742"
],
"abstract": [
"The authors present algorithms for robotic (eye-in-hand configuration) real-time visual tracking of arbitrary 3D objects traveling at unknown velocities in a 2D space (depth is given as known). Visual tracking is formulated as a problem of combining control with computer vision. A mathematical formulation of the control problem that includes information from a novel feedback vision sensor and represents everything with respect to the camera frame is presented. The sum-of-squared differences (SSD) optical flow is used to compute the vector of discrete displacements each instant of time. These displacements can be fed either directly to a PI (proportional-integral) controller or to a pole assignment controller or discrete steady-state Kalman filter. In the latter case, the Kalman filter calculates the estimated values of the system's states and the exogenous disturbances, and a discrete LQG (linear-quadratic Gaussian) controller computes the desired motion of the robotic system. The outputs of the controllers are sent to the Cartesian robotic controller. Performance results are presented. >",
"Abstract : This work represents a more general approach to robotic system design than one based on predefined responses in a controlled environment. An implementing of a vision-based robotic tracking system is presented in which target trajectory predictions enable the robot to track and intercept a moving target. A host microcomputer receives target position information from a vision module, predicts the target's trajectory, and issues tracking commands to the robot controlled. Five predictive algorithms are derived for implementation in the system, including a Kalman and an augmented Kalman filter. The use of one-step as well as absolute and relative n-step predictions is investigated. The best predictor algorithm is presented, by which one of the five predictions is selected to be used as the robotic tracking command. Using data from experimental trials, predictor results are compared and robotic tracking performance and interception success are evaluated for the target both moving and after it comes to rest. Constraints limiting the applicability of this implementation are discussed and possible improvements and extensions suggested. (Author)"
]
} |
1708.00300 | 2949981766 | The problem of finding a next best viewpoint for 3D modeling or scene mapping has been explored in computer vision over the last decade. This paper tackles a similar problem, but with different characteristics. It proposes a method for dynamic next best viewpoint recovery of a target point while avoiding possible occlusions. Since the environment can change, the method has to iteratively find the next best view with a global understanding of the free and occupied parts. We model the problem as a set of possible viewpoints which correspond to the centers of the facets of a virtual tessellated hemisphere covering the scene. Taking into account occlusions, distances between current and future viewpoints, quality of the viewpoint and joint constraints (robot arm joint distances or limits), we evaluate the next best viewpoint. The proposal has been evaluated on 8 different scenarios with different occlusions and a short 3D video sequence to validate its dynamic performance. | In our case, we are not focused on tracking the occlusions themselves. Our goal is to track the free views. Some authors have studied planning methods to achieve best camera positions, also called camera planning. @cite_13 presented a proposal with a recursive path planner to perform NBV. However, the environment does not change in their case, whereas we have to re-estimate the new position in terms of new occlusions, distance between current and future point and viewpoint quality. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2133591555"
],
"abstract": [
"We propose an approach for acquiring geometric 3D models using cameras mounted on autonomous vehicles and robots. Our method uses structure from motion techniques from computer vision to obtain the geometric structure of the scene. To achieve an efficient goal-driven resource deployment, we develop an incremental approach, which alternates between an accuracy-driven next best view determination and recursive path planning. The next best view is determined by a novel cost function that quantifies the expected contribution of future viewing configurations. A sensing path for robot motion towards the next best view is then achieved by a cost-driven recursive search of intermediate viewing configurations. We discuss some of the properties of our view cost function in the context of an iterative view planning process and present experimental results on a synthetic environment."
]
} |
1708.00335 | 2740333062 | People learn whenever and wherever possible, and whatever they like or encounter--Mathematics, Drama, Art, Languages, Physics, Philosophy, and so on. With the bursting of knowledge, evaluation of one's understanding of conceptual knowledge becomes increasingly difficult. There are a lot of demands for evaluating one's understanding of a piece of knowledge, e.g., facilitating personalized recommendations; discovering one's expertises and deficiencies in a field; recommending topics for a conversation between people with different educational or cultural backgrounds in their first encounter; recommending a learning material to practice a meaningful learning etc. Assessment of understanding of knowledge is conventionally practiced through tests or interviews, but they have some limitations such as low-efficiency and not-comprehensive. We propose a method to estimate one's understanding of conceptual knowledge, by keeping track of his her learning activities. It overcomes some limitations of traditional methods, hence complements traditional methods. | Many research fields focus on the collection of personal information, such as lifelogging, expertise finding, and personal informatics. Bush envisioned the memex' system, in which individuals could compress and store personally experienced information, such as books, records, and communications @cite_55 . Inspired by memex', developed a project called MyLifeBits to store all of a person's digital media, including documents, images, audio, and video @cite_36 . In @cite_30 , a person's reading history about an electronic document is used as attributes for re-finding the document. ICKEM is similar to memex' and MyLifeBits in that it records an individual's digital history, although for a different purpose. Memex' and MyLifeBits are mainly for re-finding or reviewing personal data; ICKEM is for quantitatively evaluating a person's knowledge. | {
"cite_N": [
"@cite_36",
"@cite_55",
"@cite_30"
],
"mid": [
"",
"2015720094",
"2065602119"
],
"abstract": [
"",
"As Director of the Office of Scientific Research and Development, Dr. Vannevar Bush has coordinated the activities of some six thousand leading American scientists in the application of science to warfare. In this significant article he holds up an incentive for scientists when the fighting has ceased. He urges that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge. For years inventions have extended man's physical powers rather than the powers of his mind. Trip hammers that multiply the fists, microscopes that sharpen the eye, and engines of destruction and detection are new results, but not the end results, of modern science. Now, says Dr. Bush, instruments are at hand which, if properly developed, will give man access to and command over the inherited knowledge of the ages. The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work. Like Emerson's famous address of 1837 on \"The American Scholar,\" this paper by Dr. Bush calls for a new relationship between thinking man and the sum of our knowledge.",
"This paper addresses the problem of identifying local experts in social media systems like Twitter. Local experts -- in contrast to general topic experts -- have specialized knowledge focused around a particular location, and are important for many applications including answering local information needs and interacting with community experts. And yet identifying these experts is difficult. Hence in this paper, we propose a geo-spatial-driven approach for identifying local experts that leverages the fine-grained GPS coordinates of millions of Twitter users. We propose a local expertise framework that integrates both users' topical expertise and their local authority. Concretely, we estimate a user's local authority via a novel spatial proximity expertise approach that leverages over 15 million geo-tagged Twitter lists. We estimate a user's topical expertise based on expertise propagation over 600 million geo-tagged social connections on Twitter. We evaluate the proposed approach across 56 queries coupled with over 11,000 individual judgments from Amazon Mechanical Turk. We find significant improvement over both general (non-local) expert approaches and comparable local expert finding approaches."
]
} |
1708.00335 | 2740333062 | People learn whenever and wherever possible, and whatever they like or encounter--Mathematics, Drama, Art, Languages, Physics, Philosophy, and so on. With the bursting of knowledge, evaluation of one's understanding of conceptual knowledge becomes increasingly difficult. There are a lot of demands for evaluating one's understanding of a piece of knowledge, e.g., facilitating personalized recommendations; discovering one's expertises and deficiencies in a field; recommending topics for a conversation between people with different educational or cultural backgrounds in their first encounter; recommending a learning material to practice a meaningful learning etc. Assessment of understanding of knowledge is conventionally practiced through tests or interviews, but they have some limitations such as low-efficiency and not-comprehensive. We propose a method to estimate one's understanding of conceptual knowledge, by keeping track of his her learning activities. It overcomes some limitations of traditional methods, hence complements traditional methods. | Personal informatics is a class of tools that help people collect personally relevant information for the purpose of self-reflection and gaining self-knowledge @cite_4 @cite_51 @cite_11 . Various tools have been developed to help people collect and analyze different kinds of personal information, such as location @cite_9 , finances @cite_13 , food @cite_6 , weight @cite_8 @cite_45 , and physical activity @cite_0 . ICKEM facilitates a new type of personal informatics tool that helps people discover their expertise and deficiencies in a more accurate way, by quantitatively assessing an individual's understanding of knowledge. | {
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_0",
"@cite_45",
"@cite_51",
"@cite_11"
],
"mid": [
"2061432818",
"2143512323",
"2153772840",
"2099489485",
"2103239268",
"",
"1992532072",
"107980871",
""
],
"abstract": [
"How do people keep track of their money? In this paper we present a preliminary scoping study of how 14 individuals in the San Francisco Bay Area earn, save, spend and understand money and their personal and family finances. We describe the practices we developed for exploring the sensitive topic of money, and then discuss three sets of findings. The first is the emotional component of the relationship people have with their finances. Second, we discuss the tools and processes people used to keep track of their financial situation. Finally we discuss how people account for the unknown and unpredictable nature of the future through their financial decisions. We conclude by discussing the future of studies of money and finance in HCI, and reflect on the opportunities for improving tools to aid people in managing and planning their finances.",
"People strive to obtain self-knowledge. A class of systems called personal informatics is appearing that help people collect and reflect on personal information. However, there is no comprehensive list of problems that users experience using these systems, and no guidance for making these systems more effective. To address this, we conducted surveys and interviews with people who collect and reflect on personal information. We derived a stage-based model of personal informatics systems composed of five stages (preparation, collection, integration, reflection, and action) and identified barriers in each of the stages. These stages have four essential properties: barriers cascade to later stages; they are iterative; they are user-driven and or system-driven; and they are uni-faceted or multi-faceted. From these properties, we recommend that personal informatics systems should 1) be designed in a holistic manner across the stages; 2) allow iteration between stages; 3) apply an appropriate balance of automated technology and user control within each stage to facilitate the user experience; and 4) explore support for associating multiple facets of people's lives to enrich the value of systems.",
"The weight scale is perhaps the most ubiquitous health sensor of all and is important to many health and lifestyle decisions, but its fundamental interface--a single numerical estimate of a person's current weight--has remained largely unchanged for 100 years. An opportunity exists to impact public health by re-considering this pervasive interface. Toward that end, we investigated the correspondence between consumers' perceptions of weight data and the realities of weight fluctuation. Through an analysis of online product reviews, a journaling study on weight fluctuations, expert interviews, and a large-scale survey of scale users, we found that consumers' perception of weight scale behavior is often disconnected from scales' capabilities and from clinical relevance, and that accurate understanding of weight fluctuation is associated with greater trust in the scale itself. We propose significant changes to how weight data should be presented and discuss broader implications for the design of other ubiquitous health sensing devices.",
"There have been many location sharing systems developed over the past two decades, and only recently have they started to be adopted by consumers. In this paper, we present the results of three studies focusing on the foursquare check-in system. We conducted interviews and two surveys to understand, both qualitatively and quantitatively, how and why people use location sharing applications, as well as how they manage their privacy. We also document surprising uses of foursquare, and discuss implications for design of mobile social services.",
"Although food journaling is understood to be both important and difficult, little work has empirically documented the specific challenges people experience with food journals. We identify key challenges in a qualitative study combining a survey of 141 current and lapsed food journalers with analysis of 5,526 posts in community forums for three mobile food journals. Analyzing themes in this data, we find and discuss barriers to reliable food entry, negative nudges caused by current techniques, and challenges with social features. Our results motivate research exploring a wider range of approaches to food journal design and technology.",
"",
"The problems of obesity and overweight are commonly cited as the motivation behind recent efforts to develop technology that promotes physical activity. Prompted by the social nature of many of the emerging applications, this paper presents our investigation of the sociality of weight management as experienced by a broad demographic of individuals. Our findings highlight the broad scope of peer involvement, and provide insight into the context and mechanics of related interaction that may prove valuable in informing the next generation of peer-based weight management technology for use in everyday life.",
"",
""
]
} |
1708.00335 | 2740333062 | People learn whenever and wherever possible, and whatever they like or encounter--Mathematics, Drama, Art, Languages, Physics, Philosophy, and so on. With the bursting of knowledge, evaluation of one's understanding of conceptual knowledge becomes increasingly difficult. There are a lot of demands for evaluating one's understanding of a piece of knowledge, e.g., facilitating personalized recommendations; discovering one's expertises and deficiencies in a field; recommending topics for a conversation between people with different educational or cultural backgrounds in their first encounter; recommending a learning material to practice a meaningful learning etc. Assessment of understanding of knowledge is conventionally practiced through tests or interviews, but they have some limitations such as low-efficiency and not-comprehensive. We propose a method to estimate one's understanding of conceptual knowledge, by keeping track of his her learning activities. It overcomes some limitations of traditional methods, hence complements traditional methods. | Expertise is one's expert skill or knowledge in a particular field. Expertise finding is the use of tools for finding and assessing individual expertise @cite_28 @cite_37 @cite_29 . As an important link of knowledge sharing, expertise finding has been heavily studied in many research communities @cite_17 @cite_41 @cite_40 @cite_20 @cite_52 @cite_54 . Many sources of data have been exploited to assess an individual's expertise, such as one's publications, documents, emails, web search behavior, other people's recommendations, social media etc. ICKEM provides a new source of data to analyze one's expertise--one's learning history about a topic, which is more comprehensive and straightforward than other data sources. 
Because one's expertise is mainly obtained through learning (Including "Informal Learning", which occurs through the experience of day-to-day situations, such as a casual conversation, play, exploring, etc.) | {
"cite_N": [
"@cite_37",
"@cite_28",
"@cite_41",
"@cite_29",
"@cite_54",
"@cite_52",
"@cite_40",
"@cite_20",
"@cite_17"
],
"mid": [
"1480112030",
"",
"",
"2133665779",
"2132255457",
"2091425614",
"2026917949",
"2022322548",
"1993404314"
],
"abstract": [
"In this paper we describe two systems designed to connect users to distributed, continuously changing experts and their knowledge. Using information retrieval, information extraction, and collaborative filtering techniques, these systems are able to enhance corporate knowledge management by overcoming traditional problems of knowledge acquisition and maintenance and associated (human and financial) costs. We describe the purpose of these two systems, how they work, and current deployment in a global corporate environment to enable end users to directly discover experts and their knowledge.",
"",
"",
"The problem of finding someone who might be able to help with a particular task or knowledge area exists everywhere, be it in groups of students or corporate settings. Time and effort are spent looking for relevant information when another person in the community could easily provide assistance. We have chosen to tackle this problem. Our approach to addressing the problem of finding people who can help is to use software agents to assist the search for expertise and mediate the information exchange process. Issues of availability, user profiling, privacy and incentives are involved. We chose the Java Programming domain for initial implementation and testing of the system. Other researchers have taken stabs at this problem, most without the use of agent technology. We are building agents, called Expert Finders, to help people find help.",
"The rising popularity of social media in the enterprise presents new opportunities for one of the organization's most important needs--expertise location. Social media data can be very useful for expertise mining due to the variety of existing applications, the rich metadata, and the diversity of user associations with content. In this work, we provide an extensive study that explores the use of social media to infer expertise within a large global organization. We examine eight different social media applications by evaluating the data they produce through a large user survey, with 670 enterprise social media users. We distinguish between two semantics that relate a user to a topic: expertise in the topic and interest in it and compare these two semantics across the different social media applications.",
"Finding experts in question answering platforms has important applications, such as question routing or identification of best answers. Addressing the problem of ranking users with respect to their expertise, we propose Competition-Based Expertise Networks (CBEN), a novel community expertise network structure based on the principle of competition among the answerers of a question. We evaluate our approach on a very large dataset from Yahoo! Answers using a variety of centrality measures. We show that it outperforms state-of-the-art network structures and, unlike previous methods, is able to consistently outperform simple metrics like best answer count. We also analyse question answering forums in Yahoo! Answers, and show that they can be characterised by factual or subjective information seeking behavior, social discussions and the conducting of polls or surveys. We find that the ability to identify experts greatly depends on the type of forum, which is directly reflected in the structural properties of the expertise networks.",
"This article describes automated tools for increasing organizational awareness within a global enterprise. The MITRE Corporation is the context for this work; however, the tools and techniques are general and should apply to a wide variety of distributed, heterogeneous organizations. These tools provide awareness of team members and materials in virtual collaboration environments as well as support for automated discovery of distributed experts. The results are embodied in 3 systems: MITRE's Collaborative Virtual Workspace (CVW), Expert Finder, and XpertNet. CVW is a place-based collaboration environment that enables team members to find one another and work together. Expert Finder is an expert skill finder that exploits the intellectual products created within an organization to support automated expertise identification. XpertNet addresses the problem of detecting extant or emerging classes of expertise without a priori knowledge of their existence. Both Expert Finder and XpertNet combine to detect and ...",
"This paper addresses several key issues in the ArnetMiner system, which aims at extracting and mining academic social networks. Specifically, the system focuses on: 1) Extracting researcher profiles automatically from the Web; 2) Integrating the publication data into the network from existing digital libraries; 3) Modeling the entire academic network; and 4) Providing search services for the academic network. So far, 448,470 researcher profiles have been extracted using a unified tagging approach. We integrate publications from online Web databases and propose a probabilistic framework to deal with the name ambiguity problem. Furthermore, we propose a unified modeling approach to simultaneously model topical aspects of papers, authors, and publication venues. Search services such as expertise search and people association search have been provided based on the modeling results. In this paper, we describe the architecture and main features of the system. We also present the empirical evaluation of the proposed methods.",
"Knowledge Management (KM) is a diffuse and controversial term, which has been used by a large number of research disciplines. CSCW, over the last 20 years, has taken a critical stance towards most of these approaches, and instead, CSCW shifted the focus towards a practice-based perspective. This paper surveys CSCW researchers' viewpoints on what has become called 'knowledge sharing' and 'expertise sharing'. These are based in an understanding of the social contexts of knowledge work and practices, as well as in an emphasis on communication among knowledgeable humans. The paper provides a summary and overview of the two strands of knowledge and expertise sharing in CSCW, which, from an analytical standpoint, roughly represent 'generations' of research: an 'object-centric' and a 'people-centric' view. We also survey the challenges and opportunities ahead."
]
} |
1707.09870 | 2739789140 | Although deep learning models are highly effective for various learning tasks, their high computational costs prohibit the deployment to scenarios where either memory or computational resources are limited. In this paper, we focus on compressing and accelerating deep models with network weights represented by very small numbers of bits, referred to as extremely low bit neural network. We model this problem as a discretely constrained optimization problem. Borrowing the idea from Alternating Direction Method of Multipliers (ADMM), we decouple the continuous parameters from the discrete constraints of network, and cast the original hard problem into several subproblems. We propose to solve these subproblems using extragradient and iterative quantization algorithms that lead to considerably faster convergency compared to conventional optimization methods. Extensive experiments on image recognition and object detection verify that the proposed algorithm is more effective than state-of-the-art approaches when coming to extremely low bit neural network. | Low bits quantization of neural network The research of low bits quantization of neural networks can be traced back to the 1990s @cite_7 @cite_30 . Most of the benefits of low bits quantization, such as memory efficiency and multiplication-free computation, had already been explored in these papers. However, networks were shallow at that time, so these approaches did not verify their validity on deep networks and large-scale datasets. | {
"cite_N": [
"@cite_30",
"@cite_7"
],
"mid": [
"1992348535",
"2046542508"
],
"abstract": [
"Multilayer perceptrons (MLPs) with weight values restricted to powers of two or sums of powers of two are introduced. In a digital implementation, these neural networks do not need multipliers but only shift registers when computing in forward mode, thus saving chip area and computation time. A learning procedure, based on backpropagation, is presented for such neural networks. This learning procedure requires full real arithmetic and therefore must be performed offline. Some test cases are presented, concerning MLPs with hidden layers of different sizes, on pattern recognition problems. Such tests demonstrate the validity and the generalization capability of the method and give some insight into the behavior of the learning algorithm.",
"Neural networks are a primary candidate architecture for optical computing. One of the major problems in using neural networks for optical computers is that the information holders: the interconnection strengths (or weights) are normally real valued (continuous), whereas optics (light) is only capable of representing a few distinguishable intensity levels (discrete). In this paper a weight discretization paradigm is presented for back(ward error) propagation neural networks which can work with a very limited number of discretization levels. The number of interconnections in a (fully connected) neural network grows quadratically with the number of neurons of the network. Optics can handle a large number of interconnections because of the fact that light beams do not interfere with each other. A vast amount of light beams can therefore be used per unit of area. However the number of different values one can represent in a light beam is very limited. A flexible, portable (machine independent) neural network software package which is capable of weight discretization, is presented. The development of the software and some experiments have been done on personal computers. The major part of the testing, which requires a lot of computation, has been done using a CRAY X-MP 24 super computer."
]
} |
1707.09870 | 2739789140 | Although deep learning models are highly effective for various learning tasks, their high computational costs prohibit the deployment to scenarios where either memory or computational resources are limited. In this paper, we focus on compressing and accelerating deep models with network weights represented by very small numbers of bits, referred to as extremely low bit neural network. We model this problem as a discretely constrained optimization problem. Borrowing the idea from Alternating Direction Method of Multipliers (ADMM), we decouple the continuous parameters from the discrete constraints of network, and cast the original hard problem into several subproblems. We propose to solve these subproblems using extragradient and iterative quantization algorithms that lead to considerably faster convergency compared to conventional optimization methods. Extensive experiments on image recognition and object detection verify that the proposed algorithm is more effective than state-of-the-art approaches when coming to extremely low bit neural network. | In recent years, with the explosion of deep learning in various tasks, low bits quantization techniques have been revisited. Some early works quantize the pretrained weights with 4-12 bits and find such approximations do not decrease predictive performance @cite_34 @cite_10 @cite_9 @cite_17 . More recent works focus on training extremely low bits networks from scratch with binary or ternary weights. Among these works, BinaryConnect @cite_19 is the most representative one. BinaryConnect directly optimizes the loss of the network with weights @math replaced by @math . In order to avoid the zero-gradient problem of the sign function, the authors approximate it with the "hard tanh" function in the backward process. This simple idea inspired many following works. 
BinaryConnect only achieves good results on simple datasets such as MNIST, CIFAR10 and SVHN, but suffers a large degradation on challenging datasets like ImageNet. | {
"cite_N": [
"@cite_9",
"@cite_19",
"@cite_34",
"@cite_10",
"@cite_17"
],
"mid": [
"2177838837",
"2963114950",
"2469490737",
"2198190323",
"2952936791"
],
"abstract": [
"The creation of practical deep learning data-products often requires parallelization across processors and computers to make deep learning feasible on large data sets, but bottlenecks in communication bandwidth make it difficult to attain good speedups through parallelism. Here we develop and test 8-bit approximation algorithms which make better use of the available bandwidth by compressing 32-bit gradients and nonlinear activations to 8-bit approximations. We show that these approximations do not decrease predictive performance on MNIST, CIFAR10, and ImageNet for both model and data parallelism and provide a data transfer speedup of 2x relative to 32-bit parallelism. We build a predictive model for speedups based on our experimental data, verify its validity on known speedup data, and show that we can obtain a speedup of 50x and more on a system of 96 GPUs compared to a speedup of 23x for 32-bit. We compare our data types with other methods and show that 8-bit approximations achieve state-of-the-art speedups for model parallelism. Thus 8-bit approximation is an efficient method to parallelize convolutional networks on very large systems of GPUs.",
"Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.",
"We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward backward passes can now operate on low bitwidth weights and activations gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1 top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.",
"For most deep learning algorithms training is notoriously time consuming. Since most of the computation in training neural networks is typically spent on floating point multiplications, we investigate an approach to training that eliminates the need for most of these. Our method consists of two parts: First we stochastically binarize weights to convert multiplications involved in computing hidden states to sign changes. Second, while back-propagating error derivatives, in addition to binarizing the weights, we quantize the representations at each layer to convert the remaining multiplications into binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10, SVHN) show that this approach not only does not hurt classification performance but can result in even better performance than standard stochastic gradient descent training, paving the way to fast, hardware-friendly training of neural networks.",
"In recent years increasingly complex architectures for deep convolution networks (DCNs) have been proposed to boost the performance on image recognition tasks. However, the gains in performance have come at a cost of substantial increase in computation and model storage resources. Fixed point implementation of DCNs has the potential to alleviate some of these complexities and facilitate potential deployment on embedded hardware. In this paper, we propose a quantizer design for fixed point implementation of DCNs. We formulate and solve an optimization problem to identify optimal fixed point bit-width allocation across DCN layers. Our experiments show that in comparison to equal bit-width settings, the fixed point DCNs with optimized bit width allocation offer >20 reduction in the model size without any loss in accuracy on CIFAR-10 benchmark. We also demonstrate that fine-tuning can further enhance the accuracy of fixed point DCNs beyond that of the original floating point model. In doing so, we report a new state-of-the-art fixed point performance of 6.78 error-rate on CIFAR-10 benchmark."
]
} |
1707.09870 | 2739789140 | Although deep learning models are highly effective for various learning tasks, their high computational costs prohibit the deployment to scenarios where either memory or computational resources are limited. In this paper, we focus on compressing and accelerating deep models with network weights represented by very small numbers of bits, referred to as extremely low bit neural network. We model this problem as a discretely constrained optimization problem. Borrowing the idea from Alternating Direction Method of Multipliers (ADMM), we decouple the continuous parameters from the discrete constraints of network, and cast the original hard problem into several subproblems. We propose to solve these subproblems using extragradient and iterative quantization algorithms that lead to considerably faster convergency compared to conventional optimization methods. Extensive experiments on image recognition and object detection verify that the proposed algorithm is more effective than state-of-the-art approaches when coming to extremely low bit neural network. | Many efforts have been devoted to improve the performance of BinaryConnect. For example, Binary Weight Network (BWN) @cite_25 proposes to improve the performance of BinaryConnect with a better approximation by introducing scale factors for the weights during binarization. Ternary Weight Network (TWN) @cite_23 extends the idea of BWN to network with ternary weights and achieves a better performance. Inspired by BinaryConnect, in order to avoid the zero-gradient problem, both BWN and TWN modify the backward process by applying the gradients of the loss at the quantized weights. | {
"cite_N": [
"@cite_25",
"@cite_23"
],
"mid": [
"2588254292",
"2405920868"
],
"abstract": [
"We consider the problem of handling binary constraints in optimization problems. We review methods for solving binary quadratic programs (BQPs), such as the spectral method and semidefinite programming relaxations. We then discuss two new methods for handling these constraints. The first involves the introduction of an extra unconstrained variable along with the alternating direction method of multipliers (ADMM), which has now also appeared in other recent literature. This allows the effect of the binary variables to be decoupled when they are minimized over. The second involves rewarding the one-norm while restricting the infinity-norm, based on a reformulation of the original problem. The piecewise linearity of the negative penalty results in the problem being convex until it hits a critical point, at which point the parameters of this linear term can be changed. These two methods can be applied to any problems which are convex except for binary constraints. In addition to testing them on BQPs, we show the efficacy of these approaches on point segmentation and image segmentation problems.",
"We introduce ternary weight networks (TWNs) - neural networks with weights constrained to +1, 0 and -1. The Euclidian distance between full (float or double) precision weights and the ternary weights along with a scaling factor is minimized. Besides, a threshold-based ternary function is optimized to get an approximated solution which can be fast and easily computed. TWNs have stronger expressive abilities than the recently proposed binary precision counterparts and are thus more effective than the latter. Meanwhile, TWNs achieve up to 16 @math or 32 @math model compression rate and need fewer multiplications compared with the full precision counterparts. Benchmarks on MNIST, CIFAR-10, and large scale ImageNet datasets show that the performance of TWNs is only slightly worse than the full precision counterparts but outperforms the analogous binary precision counterparts a lot."
]
} |
1707.09870 | 2739789140 | Although deep learning models are highly effective for various learning tasks, their high computational costs prohibit the deployment to scenarios where either memory or computational resources are limited. In this paper, we focus on compressing and accelerating deep models with network weights represented by very small numbers of bits, referred to as extremely low bit neural network. We model this problem as a discretely constrained optimization problem. Borrowing the idea from Alternating Direction Method of Multipliers (ADMM), we decouple the continuous parameters from the discrete constraints of network, and cast the original hard problem into several subproblems. We propose to solve these subproblems using extragradient and iterative quantization algorithms that lead to considerably faster convergency compared to conventional optimization methods. Extensive experiments on image recognition and object detection verify that the proposed algorithm is more effective than state-of-the-art approaches when coming to extremely low bit neural network. | ADMM and its nonconvex extension Alternating Direction Method of Multipliers (ADMM) @cite_27 is an algorithm that is intended to blend the decomposability of dual ascent with the superior convergence properties of the method of multipliers. The algorithm solves problems in the form: with variables @math and @math , where @math , @math and @math . | {
"cite_N": [
"@cite_27"
],
"mid": [
"2164278908"
],
"abstract": [
"Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations."
]
} |
1707.09939 | 2740728105 | In this paper, we provide a systematic analysis of the Twitter discussion on the 2016 Austrian presidential elections. In particular, we extracted and analyzed a data-set consisting of 343645 Twitter messages related to the 2016 Austrian presidential elections. Our analysis combines methods from network science, sentiment analysis, as well as bot detection. Among other things, we found that: a) the winner of the election (Alexander Van der Bellen) was considerably more popular and influential on Twitter than his opponent, b) the Twitter followers of Van der Bellen substantially participated in the spread of misinformation about him, c) there was a clear polarization in terms of the sentiments spread by Twitter followers of the two presidential candidates, d) the in-degree and out-degree distributions of the underlying communication network are heavy-tailed, and e) compared to other recent events, such as the 2016 Brexit referendum or the 2016 US presidential elections, only a very small number of bots participated in the Twitter discussion on the 2016 Austrian presidential election. | In @cite_64 , Stieglitz and Dang-Xuan applied the SentiStrength algorithm for sentiment analysis to study the spread of tweets during the German elections in 2012. In particular, they quantified the impact of positive and negative tweets in terms of the re-tweet count and the speed of re-tweeting. In @cite_52 , Diaz- studied the public opinion about the presidents of 18 Latin American countries by applying sentiment analysis techniques to Spanish language tweets and short blog posts. To determine how people feel about each president, the authors carried out a part-of-speech tagging to extract the list of nouns and adjectives which they later mapped to a corresponding emotion score in the NRC emotion lexicon. 
Additional non-English language studies have been conducted for the 2011 Nigerian presidential elections @cite_80 , the Indonesian presidential elections @cite_50 , as well as the Bulgarian parliamentary elections @cite_9 . | {
"cite_N": [
"@cite_64",
"@cite_9",
"@cite_52",
"@cite_50",
"@cite_80"
],
"mid": [
"2072322795",
"2184009014",
"2033008743",
"2034345962",
"2060968456"
],
"abstract": [
"As a new communication paradigm, social media has promoted information dissemination in social networks. Previous research has identified several content-related features as well as user and network characteristics that may drive information diffusion. However, little research has focused on the relationship between emotions and information diffusion in a social media setting. In this paper, we examine whether sentiment occurring in social media content is associated with a user's information sharing behavior. We carry out our research in the context of political communication on Twitter. Based on two data sets of more than 165,000 tweets in total, we find that emotionally charged Twitter messages tend to be retweeted more often and more quickly compared to neutral ones. As a practical implication, companies should pay more attention to the analysis of sentiment related to their brands and products in social media communication as well as in designing advertising content that triggers emotions.",
"We present a generic approach to real-time monitoring of the Twitter sentiment and show its application to the Bulgarian parliamentary elections in May 2013. Our approach is based on building high quality sentiment classification models from manually annotated tweets. In particular, we have developed a user-friendly annotation platform, a feature selection procedure based on maximizing prediction accuracy, and a binary SVM classifier extended with a neutral zone. We have also considerably improved the language detection in tweets. The evaluation results show that before and after the Bulgarian elections, negative sentiment about political parties prevailed. Both, the volume and the difference between the negative and positive tweets for individual parties closely match the election results. The later result is somehow surprising, but consistent with the prevailing negative sentiment during the elections.",
"Social media services have become increasingly popular and their penetration is worldwide. Micro-blogging services, such as Twitter, allow users to express themselves, share their emotions and discuss their daily life affairs in real-time, covering a variety of different points of view and opinions, including political and event-related topics such as immigration, economic issues, tax policy or election campaigns. On the other hand, traditional methods tracking public opinion still heavily rely upon opinion polls, which are usually limited to small sample sizes and can incur in significant costs in terms of time and money. In this paper, we leverage state-of-the-art techniques of sentiment analysis for real-time political emotion tracking. In particular, we analyze mentions of personal names of 18 presidents in Latin America, and measure each political figure's effect in the emotions reflected on the social web.",
"the purpose of this research is to find the opinion on Twitter about the 2014 president candidates and find the correlation between the opinion on Twitter and on digital newspaper. To perform this, tweets are extracted. Some tweets will be labelled president candidates name and the positive and negative sentiment for the training set. A training will be conducted to test whether the training set is enough to perform classification or not. The next step is to calculate the sentiment results and compare to the results from digital newspaper by using a web-based application called Tirto. Deep analysing conducted to analyse the relation between the issues on Twitter and on digital newspaper.",
"This paper analyzes a corpus of Nigerian Tweets collected during the run-up to the 2011 Nigerian Presidential election and compares it with official election returns and polling data. We found that counts of the mentions on Twitter of the two major candidates correlated strongly with polling and election results when compared across the country's geopolitical regions, though the same data over represented two other candidates who did not fare as well in the polls or at the ballot box. Sentiment extracted from Twitter was less accurate in capturing mean levels of support for the two major candidates and, in particular, showed a strong negativity bias against the incumbent president. Twitter sentiment did mirror regional trends in the polling and election results though not as strongly as seen for mention counts. We demonstrate methodologically how to sample these data, extract sentiment, and compare this sentiment with ground-truth polling data and election results. Although social media clearly capture opinion about contentious issues, our results suggest that the opinion represented there may not always accurately reflect true public opinion."
]
} |
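The lexicon-based sentiment scoring described in the row above (mapping words to emotion scores, plus the neutral zone used by the SVM classifier of @cite_9) can be sketched in a few lines. This is an illustrative sketch only: the tiny `LEXICON` and the `neutral_zone` threshold below are hypothetical stand-ins, not the NRC emotion lexicon or the cited papers' actual classifiers.

```python
# Hypothetical mini-lexicon standing in for the NRC emotion lexicon.
LEXICON = {
    "good": 1, "great": 2, "win": 1,
    "bad": -1, "terrible": -2, "lose": -1,
}

def sentiment_score(text):
    """Sum the polarity of every lexicon word found in the text."""
    return sum(LEXICON.get(tok, 0) for tok in text.lower().split())

def polarity(text, neutral_zone=0):
    """Classify text as positive/negative, keeping a neutral zone
    around zero (analogous to the SVM neutral zone of @cite_9)."""
    score = sentiment_score(text)
    if abs(score) <= neutral_zone:
        return "neutral"
    return "positive" if score > 0 else "negative"
```

In practice such scores would be aggregated per candidate and per day, as in the election-monitoring studies above.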
1707.09939 | 2740728105 | In this paper, we provide a systematic analysis of the Twitter discussion on the 2016 Austrian presidential elections. In particular, we extracted and analyzed a data-set consisting of 343645 Twitter messages related to the 2016 Austrian presidential elections. Our analysis combines methods from network science, sentiment analysis, as well as bot detection. Among other things, we found that: a) the winner of the election (Alexander Van der Bellen) was considerably more popular and influential on Twitter than his opponent, b) the Twitter followers of Van der Bellen substantially participated in the spread of misinformation about him, c) there was a clear polarization in terms of the sentiments spread by Twitter followers of the two presidential candidates, d) the in-degree and out-degree distributions of the underlying communication network are heavy-tailed, and e) compared to other recent events, such as the 2016 Brexit referendum or the 2016 US presidential elections, only a very small number of bots participated in the Twitter discussion on the 2016 Austrian presidential election. | Some studies combine sentiment analysis and social network analysis. For example, the authors of @cite_14 study jihadists' radicalization over social networks. They took a lexicon-based approach to identify sentiment polarities in YouTube comments and combined it with the network aspects of information sharing. In particular, they applied betweenness centrality to identify influential users in the YouTube-sphere, analyzed the network density, and determined the average communication speed. In another study, @cite_19 examined the differences in opinions among Twitter communities by reconstructing a follower-followee network of over 60 Twitter channels and assigning sentiment polarity scores to each vertex (user). | {
"cite_N": [
"@cite_19",
"@cite_14"
],
"mid": [
"2574442347",
"2124296369"
],
"abstract": [
"Twitter is a platform which may contain opinions, thoughts, facts and other information. Within it, many and various communities are originated by users with common interests, or with similar ways to feel part of the community. This paper presents a possible combined approach between Social Network Analysis and Sentiment Analysis. In particular, we have tried to associate a sentiment to the nodes of the graphs showing the social connections, and this may highlight the potential correlations. The idea behind it is that, on the one hand, the network topology can contextualize and then, in part, unmask some incorrect results of the Sentiment Analysis; on the other hand, the polarity of the feeling on the network can highlight the role of semantic connections in the hierarchy of the communities that are present in the network. In this work, we illustrate the approach to the issue, together with the system architecture and, then, we discuss our first results.",
"The increased online presence of jihadists has raised the possibility of individuals being radicalised via the Internet. To date, the study of violent radicalisation has focused on dedicated jihadist websites and forums. This may not be the ideal starting point for such research, as participants in these venues may be described as \"already made-up minds\". Crawling a global social networking platform, such as YouTube, on the other hand, has the potential to unearth content and interaction aimed at radicalisation of those with little or no apparent prior interest in violent jihadism. This research explores whether such an approach is indeed fruitful. We collected a large dataset from a group within YouTube that we identified as potentially having a radicalising agenda. We analysed this data using social network analysis and sentiment analysis tools, examining the topics discussed and what the sentiment polarity (positive or negative) is towards these topics. In particular, we focus on gender differences in this group of users, suggesting most extreme and less tolerant views among female users."
]
} |
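Betweenness centrality, used in @cite_14 to identify influential users in the YouTube-sphere, can be computed exactly with Brandes' algorithm. Below is a minimal pure-Python sketch for unweighted, undirected graphs given as adjacency dicts; the graph in the test is a toy example, not data from the cited study.

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm for unweighted, undirected graphs.
    graph: dict mapping each node to a list of neighbours."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        # BFS from s, counting shortest paths (sigma) and predecessors.
        stack, pred = [], {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1
        dist = {v: -1 for v in graph}; dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Back-propagate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in graph}
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each undirected shortest path was counted from both endpoints.
    return {v: c / 2 for v, c in bc.items()}
```

On a path graph a-b-c, only b lies on a shortest path between other nodes, so it receives all the centrality.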
1707.09939 | 2740728105 | In this paper, we provide a systematic analysis of the Twitter discussion on the 2016 Austrian presidential elections. In particular, we extracted and analyzed a data-set consisting of 343645 Twitter messages related to the 2016 Austrian presidential elections. Our analysis combines methods from network science, sentiment analysis, as well as bot detection. Among other things, we found that: a) the winner of the election (Alexander Van der Bellen) was considerably more popular and influential on Twitter than his opponent, b) the Twitter followers of Van der Bellen substantially participated in the spread of misinformation about him, c) there was a clear polarization in terms of the sentiments spread by Twitter followers of the two presidential candidates, d) the in-degree and out-degree distributions of the underlying communication network are heavy-tailed, and e) compared to other recent events, such as the 2016 Brexit referendum or the 2016 US presidential elections, only a very small number of bots participated in the Twitter discussion on the 2016 Austrian presidential election. | Some studies merely focused on the application of network analysis methods. For example, in @cite_70 Burgess and Bruns collected tweets about the 2010 Australian elections containing the #ausvotes hashtag. In particular, they investigated the topics people tweeted about and reconstructed a network of replies. The authors distinguished between a passive (broadcast only) and an interactive user behavior, and identified important users in the network by applying the betweenness centrality measure. In @cite_32 , the authors applied latent Dirichlet allocation over a set of tweets to identify a list of topics discussed during the 2012 Korean presidential elections. They examined the occurrences of each topic within a time period and categorized them as a rising (trending) or a falling topic.
In addition, they studied the topics related to each presidential candidate and constructed a network of term co-occurrences. | {
"cite_N": [
"@cite_70",
"@cite_32"
],
"mid": [
"1542020161",
"1970576589"
],
"abstract": [
"This paper draws on a larger study of the uses of Australian user-created content and online social networks to examine the relationships between professional journalists and highly engaged Australian users of political media within the wider media ecology, with a particular focus on Twitter. It uses an analysis of topic-based conversation networks using the #ausvotes hashtag on Twitter around the 2010 federal election to explore the key themes and issues addressed by this Twitter community during the campaign, and finds that Twitter users were largely commenting on the performance of mainstream media and politicians rather than engaging in direct political discussion. The often critical attitude of Twitter users towards the political establishment mirrors the approach of news and political bloggers to political actors, nearly a decade earlier, but the increasing adoption of Twitter as a communication tool by politicians, journalists, and everyday users alike makes a repetition of the polarisation experie...",
"Social media is changing existing information behavior by giving users access to real-time online information channels without the constraints of time and space. Social media, therefore, has created an enormous data analysis challenge for scientists trying to keep pace with developments in their field. Most previous studies have adopted broad-brush approaches that typically result in limited analysis possibilities. To address this problem, we applied text-mining techniques to Twitter data related to the 2012 Korean presidential election. We use three primary techniques: topic modeling to track changes in topical trends, mention-direction-based user network analysis, and term co-occurrence retrieval for further content analysis. Our study reveals that Twitter could be a useful way to detect and trace the advent of and changes in social issues, while analyzing mention-based user networks could show different aspects of user behaviors."
]
} |
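The network of term co-occurrences constructed in @cite_32 can be sketched as a simple pair count over documents: two terms are linked by an edge whose weight is the number of tweets in which they appear together. This is an illustrative sketch of the co-occurrence step only, not the cited paper's full pipeline (which also involves LDA topic modeling and mention-direction networks).

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(documents):
    """Count how often two distinct terms appear in the same document.
    Returns a Counter keyed by sorted term pairs (undirected edges)."""
    edges = Counter()
    for doc in documents:
        terms = sorted(set(doc.lower().split()))
        for pair in combinations(terms, 2):
            edges[pair] += 1
    return edges
```

The resulting weighted edge list can be fed directly into any graph library for community detection or centrality analysis.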
1707.09939 | 2740728105 | In this paper, we provide a systematic analysis of the Twitter discussion on the 2016 Austrian presidential elections. In particular, we extracted and analyzed a data-set consisting of 343645 Twitter messages related to the 2016 Austrian presidential elections. Our analysis combines methods from network science, sentiment analysis, as well as bot detection. Among other things, we found that: a) the winner of the election (Alexander Van der Bellen) was considerably more popular and influential on Twitter than his opponent, b) the Twitter followers of Van der Bellen substantially participated in the spread of misinformation about him, c) there was a clear polarization in terms of the sentiments spread by Twitter followers of the two presidential candidates, d) the in-degree and out-degree distributions of the underlying communication network are heavy-tailed, and e) compared to other recent events, such as the 2016 Brexit referendum or the 2016 US presidential elections, only a very small number of bots participated in the Twitter discussion on the 2016 Austrian presidential election. | In the light of recent events (such as the 2016 US presidential elections and the 2016 Brexit referendum), numerous authors reported on the misuse of social media channels aiming to manipulate the voters' opinions (cf. @cite_31 @cite_22 @cite_62 ). However, such misuse had already been reported much earlier. For example, the authors of @cite_47 discuss the issue of political abuse by studying the spread of misinformation. They consider mood scores based on Google's Profile of Mood States (calm, alert, sure, vital, kind, happy) (see @cite_20 ) and basic properties of the hashtag network and user-mention network topology to automatically classify messages as truthful or fake. Another study by @cite_68 investigates the characteristics of the spread of misinformation concerning Ebola by applying epidemiological modeling (see also @cite_27 ).
Moreover, in @cite_57 Howard and Kollanyi report that bots have been used to amplify messages by raising their re-tweet count in order to influence the 2016 Brexit referendum. Even though Howard and Kollanyi identified only a small number of bots participating in discussions about Brexit, those bots created a comparatively large share of the overall content (about one third of all messages). | {
"cite_N": [
"@cite_31",
"@cite_62",
"@cite_22",
"@cite_57",
"@cite_27",
"@cite_47",
"@cite_68",
"@cite_20"
],
"mid": [
"2541054861",
"",
"",
"2466146218",
"2086627840",
"202178741",
"2018444966",
"2027860007"
],
"abstract": [
"",
"",
"",
"Bots are social media accounts that automate interaction with other users, and they are active on the StrongerIn-Brexit conversation happening over Twitter. These automated scripts generate content through these platforms and then interact with people. Political bots are automated accounts that are particularly active on public policy issues, elections, and political crises. In this preliminary study on the use of political bots during the UK referendum on EU membership, we analyze the tweeting patterns for both human users and bots. We find that political bots have a small but strategic role in the referendum conversations: (1) the family of hashtags associated with the argument for leaving the EU dominates, (2) different perspectives on the issue utilize different levels of automation, and (3) less than 1 percent of sampled accounts generate almost a third of all the messages.",
"The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to estimate quantitatively population average parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. We estimate the effectiveness of adoption of the idea in the three communities and find values for parameters reflecting both intentional social organization and long lifetimes for the idea. These features are probably general characteristics of the spread of ideas, but not of common epidemics.",
"We study astroturf political campaigns on microblogging platforms: politically-motivated individuals and organizations that use multiple centrally-controlled accounts to create the appearance of widespread support for a candidate or opinion. We describe a machine learning framework that combines topological, content-based and crowdsourced features of information diffusion networks on Twitter to detect the early stages of viral spreading of political misinformation. We present promising preliminary results with better than 96 accuracy in the detection of astroturf content in the run-up to the 2010 U.S. midterm elections.",
"A quantitative analysis of tweets during the Ebola crisis reveals that lies, half-truths, and rumors can spread just like true news.",
"Behavioral finance researchers can apply computational methods to large-scale social media data to better understand and predict markets."
]
} |
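The epidemiological modeling of information spread referenced above (@cite_68, @cite_27) is typically based on compartment models such as SIR, where "infection" stands for adopting and sharing a rumor. Below is a minimal forward-Euler sketch of the SIR dynamics; all parameter values are illustrative assumptions, not estimates fitted to any of the cited data.

```python
def sir_step(s, i, r, beta, gamma, dt=0.1):
    """One forward-Euler step of the SIR equations
    (s, i, r are population fractions; s + i + r stays constant)."""
    new_infections = beta * s * i * dt   # S -> I: contact-driven spread
    recoveries = gamma * i * dt          # I -> R: losing interest
    return s - new_infections, i + new_infections - recoveries, r + recoveries

def simulate(s=0.99, i=0.01, r=0.0, beta=0.5, gamma=0.1, steps=1000):
    """Run the model for steps * dt time units and return final fractions."""
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
    return s, i, r
```

With beta/gamma = 5 the "rumor" reaches most of the population before dying out, mirroring the epidemic-like cascades the cited studies describe.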
1707.09872 | 2742038226 | The current state-of-the-art for image annotation and image retrieval tasks is obtained through deep neural networks, which combine an image representation and a text representation into a shared embedding space. In this paper we evaluate the impact of using the Full-Network embedding in this setting, replacing the original image representation in a competitive multimodal embedding generation scheme. Unlike the one-layer image embeddings typically used by most approaches, the Full-Network embedding provides a multi-scale representation of images, which results in richer characterizations. To measure the influence of the Full-Network embedding, we evaluate its performance on three different datasets, and compare the results with the original multimodal embedding generation scheme when using a one-layer image embedding, and with the rest of the state-of-the-art. Results for image annotation and image retrieval tasks indicate that the Full-Network embedding is consistently superior to the one-layer embedding. These results motivate the integration of the Full-Network embedding on any multimodal embedding generation scheme, something feasible thanks to the flexibility of the approach. | Similarly to the approach of , most image annotation and image retrieval approaches rely on the use of CNN features for image representation. The current best overall performing model (considering both image annotation and image retrieval tasks) is the Fisher Vector (FV) @cite_24 , although its performance is most competitive on the image retrieval task. FVs are computed with respect to the parameters of a Gaussian Mixture Model (GMM) and a Hybrid Gaussian-Laplacian Mixture Model (HGLMM). For both images and text, FVs are built using deep neural network features: a VGG @cite_6 CNN for image features, and word2vec @cite_9 for text features.
For the specific problem of image annotation, the current state-of-the-art is obtained with the Word2VisualVec (W2VV) model @cite_23 . This approach uses the visual space in which images are represented as the multimodal embedding space, which requires deeper text processing. Finally, for the largest dataset we consider (MSCOCO), the best results on certain metrics are obtained by MatchCNN (m-CNN) @cite_17 , which is based on the use of CNNs to encode both image and text. | {
"cite_N": [
"@cite_9",
"@cite_6",
"@cite_24",
"@cite_23",
"@cite_17"
],
"mid": [
"2950133940",
"1686810756",
"1957706851",
"2558358930",
"2950012948"
],
"abstract": [
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"In recent years, the problem of associating a sentence with an image has gained a lot of attention. This work continues to push the envelope and makes further progress in the performance of image annotation and image search by a sentence tasks. In this work, we are using the Fisher Vector as a sentence representation by pooling the word2vec embedding of each word in the sentence. The Fisher Vector is typically taken as the gradients of the log-likelihood of descriptors, with respect to the parameters of a Gaussian Mixture Model (GMM). In this work we present two other Mixture Models and derive their Expectation-Maximization and Fisher Vector expressions. The first is a Laplacian Mixture Model (LMM), which is based on the Laplacian distribution. The second Mixture Model presented is a Hybrid Gaussian-Laplacian Mixture Model (HGLMM) which is based on a weighted geometric mean of the Gaussian and Laplacian distribution. Finally, by using the new Fisher Vectors derived from HGLMMs to represent sentences, we achieve state-of-the-art results for both the image annotation and the image search by a sentence tasks on four benchmarks: Pascal1K, Flickr8K, Flickr30K, and COCO.",
"This paper strives to find the sentence best describing the content of an image or video. Different from existing works, which rely on a joint subspace for image video to sentence matching, we propose to do so in a visual space only. We contribute Word2VisualVec, a deep neural network architecture that learns to predict a deep visual encoding of textual input based on sentence vectorization and a multi-layer perceptron. We thoroughly analyze its architectural design, by varying the sentence vectorization strategy, network depth and the deep feature to predict for image to sentence matching. We also generalize Word2VisualVec for matching a video to a sentence, by extending the predictive abilities to 3-D ConvNet features as well as a visual-audio representation. Experiments on four challenging image and video benchmarks detail Word2VisualVec's properties, capabilities for image and video to sentence matching, and on all datasets its state-of-the-art results.",
"In this paper, we propose multimodal convolutional neural networks (m-CNNs) for matching image and sentence. Our m-CNN provides an end-to-end framework with convolutional architectures to exploit image representation, word composition, and the matching relations between the two modalities. More specifically, it consists of one image CNN encoding the image content, and one matching CNN learning the joint representation of image and sentence. The matching CNN composes words to different semantic fragments and learns the inter-modal relations between image and the composed fragments at different levels, thus fully exploit the matching relations between image and sentence. Experimental results on benchmark databases of bidirectional image and sentence retrieval demonstrate that the proposed m-CNNs can effectively capture the information necessary for image and sentence matching. Specifically, our proposed m-CNNs for bidirectional image and sentence retrieval on Flickr30K and Microsoft COCO databases achieve the state-of-the-art performances."
]
} |
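The Fisher Vector with respect to GMM parameters mentioned in the row above (@cite_24) can be sketched for its mean-gradient part under a diagonal-covariance GMM: each descriptor contributes its responsibility-weighted, variance-normalized offset from each component mean. This is a simplified illustration (no power/L2 normalization, no Laplacian or HGLMM components, and the function name is ours, not from the cited work).

```python
import numpy as np

def fisher_vector_means(X, weights, means, sigmas):
    """Mean-gradient part of the Fisher Vector of descriptors X with
    respect to a diagonal-covariance GMM.
    X: (N, D); weights: (K,); means, sigmas: (K, D)."""
    N = X.shape[0]
    diff = X[:, None, :] - means[None, :, :]                  # (N, K, D)
    # Posterior responsibilities, computed in log space for stability.
    log_p = -0.5 * np.sum((diff / sigmas) ** 2
                          + np.log(2 * np.pi * sigmas ** 2), axis=2)
    log_post = np.log(weights)[None, :] + log_p               # (N, K)
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)
    # G_mu_k = (1 / (N * sqrt(w_k))) * sum_n post_nk * (x_n - mu_k) / sigma_k
    grad = np.einsum('nk,nkd->kd', post, diff / sigmas)
    return (grad / (N * np.sqrt(weights)[:, None])).ravel()   # (K * D,)
```

In the cited pipeline the descriptors X would be CNN activations (for images) or word2vec embeddings (for text), and the resulting K*D-dimensional vectors would then be normalized and matched across modalities.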
1707.09472 | 2951165067 | This paper introduces a novel approach for modeling visual relations between pairs of objects. We call relation a triplet of the form (subject, predicate, object) where the predicate is typically a preposition (eg. 'under', 'in front of') or a verb ('hold', 'ride') that links a pair of objects (subject, object). Learning such relations is challenging as the objects have different spatial configurations and appearances depending on the relation in which they occur. Another major challenge comes from the difficulty to get annotations, especially at box-level, for all possible triplets, which makes both learning and evaluation difficult. The contributions of this paper are threefold. First, we design strong yet flexible visual features that encode the appearance and spatial configuration for pairs of objects. Second, we propose a weakly-supervised discriminative clustering model to learn relations from image-level labels only. Third we introduce a new challenging dataset of unusual relations (UnRel) together with an exhaustive annotation, that enables accurate evaluation of visual relation retrieval. We show experimentally that our model results in state-of-the-art results on the visual relationship dataset significantly improving performance on previously unseen relations (zero-shot learning), and confirm this observation on our newly introduced UnRel dataset. | Learning correspondences between fragments of sentences and image regions has been addressed via visual-semantic alignment, which has been used for applications in image retrieval and caption generation @cite_24 @cite_49 @cite_10 . With the appearance of new datasets providing box-level natural language annotations @cite_36 @cite_26 @cite_33 @cite_39 , recent works have also investigated caption generation at the level of image regions for the tasks of natural language object retrieval @cite_19 @cite_33 @cite_34 or dense captioning @cite_20 .
Our approach is similar in the sense that we aim at aligning a language triplet with a pair of boxes in the image. Typically, existing approaches do not explicitly represent relations between noun phrases in a sentence to improve visual-semantic alignment. We believe that understanding these relations is the next step towards image understanding with potential applications in tasks such as Visual Question Answering @cite_43 . | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_36",
"@cite_39",
"@cite_24",
"@cite_19",
"@cite_43",
"@cite_49",
"@cite_34",
"@cite_10",
"@cite_20"
],
"mid": [
"2949474740",
"2144960104",
"2251512949",
"",
"2118714046",
"2963735856",
"2416885651",
"2951805548",
"",
"2953276893",
"2963758027"
],
"abstract": [
"Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked \"What vehicle is the person riding?\", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) in order to answer correctly that \"the person is riding a horse-drawn carriage\". In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 100K images where each image has an average of 21 objects, 18 attributes, and 18 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answers.",
"We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https: github.com mjhucla Google_Refexp_toolbox.",
"In this paper we introduce a new game to crowd-source natural language referring expressions. By designing a two player game, we can both collect and verify referring expressions directly within the game. To date, the game has produced a dataset containing 130,525 expressions, referring to 96,654 distinct objects, in 19,894 photographs of natural scenes. This dataset is larger and more varied than previous REG datasets and allows us to study referring expressions in real-world scenes. We provide an in depth analysis of the resulting dataset. Based on our findings, we design a new optimization based model for generating referring expressions and perform experimental evaluations on 3 test sets.",
"",
"The ability to map descriptions of scenes to 3D geometric representations has many applications in areas such as art, education, and robotics. However, prior work on the text to 3D scene generation task has used manually specified object categories and language that identifies them. We introduce a dataset of 3D scenes annotated with natural language descriptions and learn from this data how to ground textual descriptions to physical objects. Our method successfully grounds a variety of lexical terms to concrete referents, and we show quantitatively that our method improves 3D scene generation over previous work using purely rule-based methods. We evaluate the fidelity and plausibility of 3D scenes generated with our grounding approach through human judgments. To ease evaluation on this task, we also introduce an automated metric that strongly correlates with human judgments.",
"In this paper, we address the task of natural language object retrieval, to localize a target object within a given image based on a natural language query of the object. Natural language object retrieval differs from text-based image retrieval task as it involves spatial information about objects within the scene and global scene context. To address this issue, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as scoring function on candidate boxes for object retrieval, integrating spatial configurations and global scene-level contextual information into the network. Our model processes query text, local image descriptors, spatial configurations and global context features through a recurrent network, outputs the probability of the query text conditioned on each candidate box as a score for the box, and can transfer visual-linguistic knowledge from image captioning domain to our task. Experimental results demonstrate that our method effectively utilizes both local and global information, outperforming previous baseline methods significantly on different datasets and scenarios, and can exploit large scale vision and language datasets for knowledge transfer.",
"Visual question answering is fundamentally compositional in nature---a question like \"where is the dog?\" shares substructure with questions like \"what color is the dog?\" and \"where is the cat?\" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural \"modules\" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
"",
"We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit.",
"We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings."
]
} |
1707.09472 | 2951165067 | This paper introduces a novel approach for modeling visual relations between pairs of objects. We call relation a triplet of the form (subject, predicate, object) where the predicate is typically a preposition (eg. 'under', 'in front of') or a verb ('hold', 'ride') that links a pair of objects (subject, object). Learning such relations is challenging as the objects have different spatial configurations and appearances depending on the relation in which they occur. Another major challenge comes from the difficulty to get annotations, especially at box-level, for all possible triplets, which makes both learning and evaluation difficult. The contributions of this paper are threefold. First, we design strong yet flexible visual features that encode the appearance and spatial configuration for pairs of objects. Second, we propose a weakly-supervised discriminative clustering model to learn relations from image-level labels only. Third we introduce a new challenging dataset of unusual relations (UnRel) together with an exhaustive annotation, that enables accurate evaluation of visual relation retrieval. We show experimentally that our model results in state-of-the-art results on the visual relationship dataset significantly improving performance on previously unseen relations (zero-shot learning), and confirm this observation on our newly introduced UnRel dataset. | Most of the work on weakly-supervised learning for visual recognition has focused on learning objects @cite_12 @cite_41 @cite_21 . Here, we want to tackle the task of weakly-supervised detection of relations. This task is more complex as we need to detect the individual objects that satisfy the specific relation. We assume that pre-trained detectors for individual objects are available and learn relations among objects with image-level labels. 
Our work uses a discriminative clustering objective @cite_17 , which has been successful in several computer vision tasks @cite_1 @cite_50 , but has not, to the best of our knowledge, been used so far for modeling relations. | {
"cite_N": [
"@cite_41",
"@cite_21",
"@cite_1",
"@cite_50",
"@cite_12",
"@cite_17"
],
"mid": [
"2949769367",
"1994488211",
"2949594863",
"",
"",
"2108282816"
],
"abstract": [
"This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.",
"Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.",
"We are given a set of video clips, each one annotated with an ordered list of actions, such as \"walk\" then \"sit\" then \"answer phone\" extracted from, for example, the associated text script. We seek to temporally localize the individual actions in each clip as well as to learn a discriminative classifier for each action. We formulate the problem as a weakly supervised temporal assignment with ordering constraints. Each video clip is divided into small time intervals and each time interval of each video clip is assigned one action label, while respecting the order in which the action labels appear in the given annotations. We show that the action label assignment can be determined together with learning a classifier for each action in a discriminative manner. We evaluate the proposed model on a new and challenging dataset of 937 video clips with a total of 787720 frames containing sequences of 16 different actions from 69 Hollywood movies.",
"",
"",
"We present a novel linear clustering framework (DIFFRAC) which relies on a linear discriminative cost function and a convex relaxation of a combinatorial optimization problem. The large convex optimization problem is solved through a sequence of lower dimensional singular value decompositions. This framework has several attractive properties: (1) although apparently similar to K-means, it exhibits superior clustering performance than K-means, in particular in terms of robustness to noise. (2) It can be readily extended to non linear clustering if the discriminative cost function is based on positive definite kernels, and can then be seen as an alternative to spectral clustering. (3) Prior information on the partition is easily incorporated, leading to state-of-the-art performance for semi-supervised learning, for clustering or classification. We present empirical evaluations of our algorithms on synthetic and real medium-scale datasets."
]
} |
1707.09472 | 2951165067 | This paper introduces a novel approach for modeling visual relations between pairs of objects. We call relation a triplet of the form (subject, predicate, object) where the predicate is typically a preposition (eg. 'under', 'in front of') or a verb ('hold', 'ride') that links a pair of objects (subject, object). Learning such relations is challenging as the objects have different spatial configurations and appearances depending on the relation in which they occur. Another major challenge comes from the difficulty to get annotations, especially at box-level, for all possible triplets, which makes both learning and evaluation difficult. The contributions of this paper are threefold. First, we design strong yet flexible visual features that encode the appearance and spatial configuration for pairs of objects. Second, we propose a weakly-supervised discriminative clustering model to learn relations from image-level labels only. Third we introduce a new challenging dataset of unusual relations (UnRel) together with an exhaustive annotation, that enables accurate evaluation of visual relation retrieval. We show experimentally that our model results in state-of-the-art results on the visual relationship dataset significantly improving performance on previously unseen relations (zero-shot learning), and confirm this observation on our newly introduced UnRel dataset. | Zero-shot learning has been mostly explored for object classification @cite_45 @cite_31 @cite_2 @cite_8 and recently for the task of describing images with novel objects @cite_6 @cite_35 . In our work, we address zero-shot learning of relations in the form of triplets @math , where each term has already been seen independently during training, but not in that specific combination. We develop a model to detect and localize such zero-shot relations. | {
"cite_N": [
"@cite_35",
"@cite_8",
"@cite_6",
"@cite_45",
"@cite_2",
"@cite_31"
],
"mid": [
"2463508871",
"",
"2952155606",
"2123024445",
"2950276680",
"2252238675"
],
"abstract": [
"Recent captioning models are limited in their ability to scale and describe concepts unseen in paired image-text corpora. We propose the Novel Object Captioner (NOC), a deep visual semantic captioning model that can describe a large number of object categories not present in existing image-caption datasets. Our model takes advantage of external sources -- labeled images from object recognition datasets, and semantic knowledge extracted from unannotated text. We propose minimizing a joint objective which can learn from these diverse data sources and leverage distributional semantic embeddings, enabling the model to generalize and describe novel objects outside of image-caption datasets. We demonstrate that our model exploits semantic information to generate captions for hundreds of object categories in the ImageNet object recognition dataset that are not observed in MSCOCO image-caption training data, as well as many categories that are observed very rarely. Both automatic evaluations and human judgements show that our model considerably outperforms prior work in being able to describe many more categories of objects.",
"",
"While recent deep neural network models have achieved promising results on the image captioning task, they rely largely on the availability of corpora with paired image and sentence captions to describe objects in context. In this work, we propose the Deep Compositional Captioner (DCC) to address the task of generating descriptions of novel objects which are not present in paired image-sentence datasets. Our method achieves this by leveraging large object recognition datasets and external text corpora and by transferring knowledge between semantically similar concepts. Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet. In contrast, our model can compose sentences that describe novel objects and their interactions with other objects. We demonstrate our model's ability to describe novel concepts by empirically evaluating its performance on MSCOCO and show qualitative results on ImageNet images of objects for which no paired image-caption data exist. Further, we extend our approach to generate descriptions of objects in video clips. Our results show that DCC has distinct advantages over existing image and video captioning approaches for generating descriptions of new objects in context.",
"Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18% across thousands of novel labels never seen by the visual model.",
"This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.",
"Following up on recent work on establishing a mapping between vector-based semantic embeddings of words and the visual representations of the corresponding objects from natural images, we first present a simple approach to cross-modal vector-based semantics for the task of zero-shot learning, in which an image of a previously unseen object is mapped to a linguistic representation denoting its word. We then introduce fast mapping, a challenging and more cognitively plausible variant of the zero-shot task, in which the learner is exposed to new objects and the corresponding words in very limited linguistic contexts. By combining prior linguistic and visual knowledge acquired about words and their objects, as well as exploiting the limited new evidence available, the learner must learn to associate new objects with words. Our results on this task pave the way to realistic simulations of how children or robots could use existing knowledge to bootstrap grounded semantic knowledge about new concepts."
]
} |
1707.09376 | 2610200611 | Face deidentification is an active topic amongst privacy and security researchers. Early deidentification methods relying on image blurring or pixelisation have been replaced in recent years with techniques based on formal anonymity models that provide privacy guaranties and retain certain characteristics of the data even after deidentification. The latter aspect is important, as it allows the deidentified data to be used in applications for which identity information is irrelevant. In this work, the authors present a novel face deidentification pipeline, which ensures anonymity by synthesising artificial surrogate faces using generative neural networks (GNNs). The generated faces are used to deidentify subjects in images or videos, while preserving non-identity-related aspects of the data and consequently enabling data utilisation. Since generative networks are highly adaptive and can utilise diverse parameters (pertaining to the appearance of the generated output in terms of facial expressions, gender, race etc.), they represent a natural choice for the problem of face deidentification. To demonstrate the feasibility of the authors’ approach, they perform experiments using automated recognition tools and human annotators. Their results show that the recognition performance on deidentified images is close to chance, suggesting that the deidentification process based on GNNs is effective. | In this section we review the most important work related to our deidentification pipeline. For a more comprehensive review please refer to the surveys by Ribarić et al @cite_13 , @cite_4 . | {
"cite_N": [
"@cite_13",
"@cite_4"
],
"mid": [
"2416918403",
"1659411517"
],
"abstract": [
"Privacy is one of the most important social and political issues in our information society, characterized by a growing range of enabling and supporting technologies and services. Amongst these are communications, multimedia, biometrics, big data, cloud computing, data mining, internet, social networks, and audio-video surveillance. Each of these can potentially provide the means for privacy intrusion. De-identification is one of the main approaches to privacy protection in multimedia contents (text, still images, audio and video sequences and their combinations). It is a process for concealing or removing personal identifiers, or replacing them by surrogate personal identifiers in personal information in order to prevent the disclosure and use of data for purposes unrelated to the purpose for which the information was originally obtained. Based on the proposed taxonomy inspired by the Safe Harbour approach, the personal identifiers, i.e., the personal identifiable information, are classified as non-biometric, physiological and behavioural biometric, and soft biometric identifiers. In order to protect the privacy of an individual, all of the above identifiers will have to be de-identified in multimedia content. This paper presents a review of the concepts of privacy and the linkage among privacy, privacy protection, and the methods and technologies designed specifically for privacy protection in multimedia contents. The study provides an overview of de-identification approaches for non-biometric identifiers (text, hairstyle, dressing style, license plates), as well as for the physiological (face, fingerprint, iris, ear), behavioural (voice, gait, gesture) and soft-biometric (body silhouette, gender, age, race, tattoo) identifiers in multimedia documents. 
Privacy protection in multimedia. Taxonomy of the personal identifiers in multimedia contents. De-identification of non-biometrical identifiers. De-identification of physiological, behavioural and soft biometric identifiers.",
"Face-based identification is used in various application scenarios - from identification of a person based on still images in passport or identity card, to identification based on face images captured by a surveillance system without the cooperation of the person. In many application scenarios, especially in video surveillance, privacy can be compromised. One of the approaches to the preservation of privacy is de-identification, where de-identification is the process of concealing or removing personal identifiers, or replacing them with surrogate personal identifiers in personal information, captured in a multimedia content, in order to prevent the disclosure and use of data for purposes unrelated to the purpose for which the information was originally obtained. This paper presents a survey of approaches, methods and solutions for face de-identification in still images and videos."
]
} |
1707.09376 | 2610200611 | Face deidentification is an active topic amongst privacy and security researchers. Early deidentification methods relying on image blurring or pixelisation have been replaced in recent years with techniques based on formal anonymity models that provide privacy guaranties and retain certain characteristics of the data even after deidentification. The latter aspect is important, as it allows the deidentified data to be used in applications for which identity information is irrelevant. In this work, the authors present a novel face deidentification pipeline, which ensures anonymity by synthesising artificial surrogate faces using generative neural networks (GNNs). The generated faces are used to deidentify subjects in images or videos, while preserving non-identity-related aspects of the data and consequently enabling data utilisation. Since generative networks are highly adaptive and can utilise diverse parameters (pertaining to the appearance of the generated output in terms of facial expressions, gender, race etc.), they represent a natural choice for the problem of face deidentification. To demonstrate the feasibility of the authors’ approach, they perform experiments using automated recognition tools and human annotators. Their results show that the recognition performance on deidentified images is close to chance, suggesting that the deidentification process based on GNNs is effective. | Existing approaches to deidentification often implement formal privacy protection models such as @math -anonymity @cite_21 , @math -diversity @cite_11 , or @math -closeness @cite_25 . Among these, the @math -anonymity models have likely received the most attention in the area of face deidentification and resulted in the so-called @math -same family of algorithms @cite_1 , @cite_24 , @cite_19 . 
These algorithms operate on a closed set of static facial images and substitute each image in the set with the average of the closest @math identities computed from the same closed set of images. Because several images are replaced with the same average face, a certain level of data anonymity is guaranteed. A number of @math -same variants were presented in the literature, including the original @math -same algorithm @cite_1 , @math -same-select @cite_24 , and @math -same-model @cite_8 to name a few. The majority of these techniques are implemented using Active Appearance Models (AAMs). | {
"cite_N": [
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_24",
"@cite_19",
"@cite_25",
"@cite_11"
],
"mid": [
"",
"2159024459",
"2103958416",
"2897263486",
"2112941959",
"2136114025",
""
],
"abstract": [
"",
"Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k- anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.",
"In the context of sharing video surveillance data, a significant threat to privacy is face recognition software, which can automatically identify known people, such as from a database of drivers' license photos, and thereby track people regardless of suspicion. This paper introduces an algorithm to protect the privacy of individuals in video surveillance data by deidentifying faces such that many facial characteristics remain but the face cannot be reliably recognized. A trivial solution to deidentifying faces involves blacking out each face. This thwarts any possible face recognition, but because all facial details are obscured, the result is of limited use. Many ad hoc attempts, such as covering eyes, fail to thwart face recognition because of the robustness of face recognition methods. This work presents a new privacy-enabling algorithm, named k-Same, that guarantees face recognition software cannot reliably recognize deidentified faces, even though many facial details are preserved. The algorithm determines similarity between faces based on a distance metric and creates new faces by averaging image components, which may be the original image pixels (k-Same-Pixel) or eigenvectors (k-Same-Eigen). Results are presented on a standard collection of real face images with varying k.",
"With the proliferation of inexpensive video surveillance and face recognition technologies, it is increasingly possible to track and match people as they move through public spaces. To protect the privacy of subjects visible in video sequences, prior research suggests using ad hoc obfuscation methods, such as blurring or pixelation of the face. However, there has been little investigation into how obfuscation influences the usability of images, such as for classification tasks. In this paper, we demonstrate that at high obfuscation levels, ad hoc methods fail to preserve utility for various tasks, whereas at low obfuscation levels, they fail to prevent recognition. To overcome the implied tradeoff between privacy and utility, we introduce a new algorithm, k-Same-Select, which is a formal privacy protection schema based on k-anonymity that provably protects privacy and preserves data utility. We empirically validate our findings through evaluations on the FERET database, a large real world dataset of facial images.",
"With the emergence of new applications centered around the sharing of image data, questions concerning the protection of the privacy of people visible in the scene arise. Recently, formal methods for the de-identification of images have been proposed which would benefit from multi-factor coding to separate identity and non-identity related factors. However, existing multi-factor models require complete labels during training which are often not available in practice. In this paper we propose a new multi-factor framework which unifies linear, bilinear, and quadratic models. We describe a new fitting algorithm which jointly estimates all model parameters and show that it outperforms the standard alternating algorithm. We furthermore describe how to avoid overfitting the model and how to train the model in a semi-supervised manner. In experiments on a large expression-variant face database we show that data coded using our multi-factor model leads to improved data utility while providing the same privacy protection.",
"The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain \"identifying\" attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of l-diversity has been proposed to address this; l-diversity requires that each equivalence class has at least l well-represented values for each sensitive attribute. In this paper we show that l-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. We propose a novel privacy notion called t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We choose to use the earth mover distance measure for our t-closeness requirement. We discuss the rationale for t-closeness and illustrate its advantages through examples and experiments.",
""
]
} |
1707.09376 | 2610200611 | Face deidentification is an active topic amongst privacy and security researchers. Early deidentification methods relying on image blurring or pixelisation have been replaced in recent years with techniques based on formal anonymity models that provide privacy guaranties and retain certain characteristics of the data even after deidentification. The latter aspect is important, as it allows the deidentified data to be used in applications for which identity information is irrelevant. In this work, the authors present a novel face deidentification pipeline, which ensures anonymity by synthesising artificial surrogate faces using generative neural networks (GNNs). The generated faces are used to deidentify subjects in images or videos, while preserving non-identity-related aspects of the data and consequently enabling data utilisation. Since generative networks are highly adaptive and can utilise diverse parameters (pertaining to the appearance of the generated output in terms of facial expressions, gender, race etc.), they represent a natural choice for the problem of face deidentification. To demonstrate the feasibility of the authors’ approach, they perform experiments using automated recognition tools and human annotators. Their results show that the recognition performance on deidentified images is close to chance, suggesting that the deidentification process based on GNNs is effective. | Different from AAM-based deidentification approaches, Brkić et al @cite_16 propose a deidentification method based on style-transfer. The authors describe a pipeline that enables altering the appearance of faces in videos in such a way that an artistic style replacement is performed on the input data, thus making automatic recognition more difficult.
Another interesting deidentification approach was presented in @cite_0 , which, contrary to most deidentification methods, hinders recognition only for automatic recognition algorithms, but not for human observers. The authors utilize projections on hyperspheres in order to defeat classifiers, while preserving enough visual information to enable human viewers to correctly identify individuals. | {
"cite_N": [
"@cite_0",
"@cite_16"
],
"mid": [
"1612002800",
"2484482231"
],
"abstract": [
"A major issue that arises from mass visual media distribution in modern video sharing, social media and cloud services, is the issue of privacy. Malicious users can use these services to track the actions of certain individuals and/or groups thus violating their privacy. As a result the need to hinder automatic facial image identification in images and videos arises. In this paper we propose a method for de-identifying facial images. Contrary to most de-identification methods, this method manipulates facial images so that humans can still recognize the individual or individuals in an image or video frame, but at the same time common automatic identification algorithms fail to do so. This is achieved by projecting the facial images on a hypersphere. From the conducted experiments it can be verified that this method is effective in reducing the classification accuracy under 10%. Furthermore, in the resulting images the subject can be identified by human viewers.",
"We propose a computer vision-based pipeline that enables altering the appearance of faces in videos. Assuming a surveillance scenario, we combine GMM-based background subtraction with an improved version of the GrabCut algorithm to find and segment pedestrians. Independently, we detect faces using a standard face detector. We apply the neural art algorithm, utilizing the responses of a deep neural network to obfuscate the detected faces through style mixing with reference images. The altered faces are combined with the original frames using the extracted pedestrian silhouettes as a guideline. Experimental evaluation indicates that our method has potential in producing de-identified versions of the input frames while preserving the utility of the de-identified data."
]
} |
1707.09402 | 2742166337 | The NP-complete problem Feedback Vertex Set is that of deciding whether or not it is possible, for a given integer @math , to delete at most @math vertices from a given graph so that what remains is a forest. The variant in which the deleted vertices must form an independent set is called Independent Feedback Vertex Set and is also NP-complete. In fact, even deciding if an independent feedback vertex set exists is NP-complete and this problem is closely related to the @math -Colouring problem, or equivalently, to the problem of deciding whether or not a graph has an independent odd cycle transversal, that is, an independent set of vertices whose deletion makes the graph bipartite. We initiate a systematic study of the complexity of Independent Feedback Vertex Set for @math -free graphs. We prove that it is NP-complete if @math contains a claw or cycle. Tamura, Ito and Zhou proved that it is polynomial-time solvable for @math -free graphs. We show that it remains polynomial-time solvable for @math -free graphs. We prove analogous results for the Independent Odd Cycle Transversal problem, which asks whether or not a graph has an independent odd cycle transversal of size at most @math for a given integer @math . Finally, in line with our underlying research aim, we compare the complexity of Independent Feedback Vertex Set for @math -free graphs with the complexity of @math -Colouring, Independent Odd Cycle Transversal and other related problems. | Not every graph admits an independent feedback vertex set (consider complete graphs on at least four vertices). Graphs that do admit an independent feedback vertex set are said to be near-bipartite , and we can ask about the decision problem of recognising such graphs. Near-Bipartiteness. Instance: a graph @math . Question: is @math near-bipartite (that is, does @math have an independent feedback vertex set)? 
Near-Bipartiteness is NP-complete even for graphs of maximum degree @math @cite_0 and for graphs of diameter @math @cite_1 (see @cite_37 for a proof). Hence, by setting @math , we find that Independent Feedback Vertex Set is NP-complete for these two graph classes. The Independent Feedback Vertex Set problem is even NP-complete for planar bipartite graphs of maximum degree @math (see @cite_31 ). As bipartite graphs are near-bipartite, this result shows that there are classes of graphs where Independent Feedback Vertex Set is harder than Near-Bipartiteness . To obtain tractability results for Independent Feedback Vertex Set , we need to make some further assumptions. | {
"cite_N": [
"@cite_0",
"@cite_31",
"@cite_37",
"@cite_1"
],
"mid": [
"2068819940",
"1491760138",
"2741305621",
"2726678275"
],
"abstract": [
"For a given graph G, if the vertices of G can be partitioned into an independent set and an acyclic set, then we call G a near-bipartite graph. This paper studies the recognition of near-bipartite graphs. We give simple characterizations for those near-bipartite graphs having maximum degree at most 3 and those having diameter 2. We also show that the recognition of near-bipartite graphs is NP-complete even for graphs where the maximum degree is 4 or where the diameter is 4.",
"",
"The Near-Bipartiteness problem is that of deciding whether or not the vertices of a graph can be partitioned into sets @math and @math , where @math is an independent set and @math induces a forest. The set @math in such a partition is said to be an independent feedback vertex set. Yang and Yuan proved that Near-Bipartiteness is polynomial-time solvable for graphs of diameter 2 and NP-complete for graphs of diameter 4. We show that Near-Bipartiteness is NP-complete for graphs of diameter 3, resolving their open problem. We also generalise their result for diameter 2 by proving that even the problem of computing a minimum independent feedback vertex is polynomial-time solvable for graphs of diameter 2.",
"We continue research into a well-studied family of problems that ask if the vertices of a graph can be partitioned into sets A and B, where A is an independent set and B induces a graph from some specified graph class G. We let G be the class of k-degenerate graphs. The problem is known to be polynomial-time solvable if k=0 (bipartite graphs) and NP-complete if k=1 (near-bipartite graphs) even for graphs of diameter 4, as shown by Yang and Yuan, who also proved polynomial-time solvability for graphs of diameter 2. We show that recognizing near-bipartite graphs of diameter 3 is NP-complete resolving their open problem. To answer another open problem, we consider graphs of maximum degree D on n vertices. We show how to find A and B in O(n) time for k=1 and D=3, and in O(n^2) time for k >= 2 and D >= 4. These results also provide an algorithmic version of a result of Catlin [JCTB, 1979] and enable us to complete the complexity classification of another problem: finding a path in the vertex colouring reconfiguration graph between two given k-colourings of a graph of bounded maximum degree."
]
} |
1707.09402 | 2742166337 | The NP-complete problem Feedback Vertex Set is that of deciding whether or not it is possible, for a given integer @math , to delete at most @math vertices from a given graph so that what remains is a forest. The variant in which the deleted vertices must form an independent set is called Independent Feedback Vertex Set and is also NP-complete. In fact, even deciding if an independent feedback vertex set exists is NP-complete and this problem is closely related to the @math -Colouring problem, or equivalently, to the problem of deciding whether or not a graph has an independent odd cycle transversal, that is, an independent set of vertices whose deletion makes the graph bipartite. We initiate a systematic study of the complexity of Independent Feedback Vertex Set for @math -free graphs. We prove that it is NP-complete if @math contains a claw or cycle. Tamura, Ito and Zhou proved that it is polynomial-time solvable for @math -free graphs. We show that it remains polynomial-time solvable for @math -free graphs. We prove analogous results for the Independent Odd Cycle Transversal problem, which asks whether or not a graph has an independent odd cycle transversal of size at most @math for a given integer @math . Finally, in line with our underlying research aim, we compare the complexity of Independent Feedback Vertex Set for @math -free graphs with the complexity of @math -Colouring, Independent Odd Cycle Transversal and other related problems. | One way is to consider the problem from a parameterized point of view. Taking @math as the parameter, Misra et al. @cite_42 proved that Independent Feedback Vertex Set is fixed-parameter tractable by giving a cubic kernel. This is in line with the fixed-parameter tractability of the general Feedback Vertex Set problem (see @cite_13 for the fastest known algorithm). Later, Agrawal et al. 
@cite_33 gave a faster algorithm for Independent Feedback Vertex Set and also obtained an upper bound on the number of minimal independent feedback vertex sets of a graph. | {
"cite_N": [
"@cite_42",
"@cite_13",
"@cite_33"
],
"mid": [
"2020889718",
"2084595628",
"2592840164"
],
"abstract": [
"We investigate a generalization of the classical Feedback Vertex Set (FVS) problem from the point of view of parameterized algorithms. Independent Feedback Vertex Set (IFVS) is the ''independent'' variant of the FVS problem and is defined as follows: given a graph G and an integer k, decide whether there exists F ⊆ V(G), |F| ≤ k, such that G[V(G) ∖ F] is a forest and G[F] is an independent set; the parameter is k. Note that the similarly parameterized version of the FVS problem, where there is no restriction on the graph G[F], has been extensively studied in the literature. The connected variant CFVS, where G[F] is required to be connected, has received some attention as well. The FVS problem easily reduces to the IFVS problem in a manner that preserves the solution size, and so any algorithmic result for IFVS directly carries over to FVS. We show that IFVS can be solved in O(5^k n^O(1)) time, where n is the number of vertices in the input graph G, and obtain a cubic (O(k^3)) kernel for the problem. Note the contrast with the CFVS problem, which does not admit a polynomial kernel unless coNP ⊆ NP/Poly.",
"We present a new deterministic algorithm for the Feedback Vertex Set problem parameterized by the solution size. Our algorithm runs in O*((2+φ)^k) time, where φ < 1.619 is the golden ratio, surpassing the previously fastest O*((1+2√2)^k)-time deterministic algorithm due to (2010) [6]. In our development we follow the approach of ; however, thanks to a new reduction rule, we obtain not only better dependency on the parameter in the running time, but also a solution with simple analysis and only a single branching rule.",
"In this paper we study the \"independent\" version of the classic Feedback Vertex Set problem in the realm of parameterized algorithms and moderately exponential time algorithms. More precisely, we study the Independent Feedback Vertex Set problem, where we are given an undirected graph G on n vertices and a positive integer k, and the objective is to check if there is an independent feedback vertex set of size at most k. A set S ⊆ V(G) is called an independent feedback vertex set (ifvs) if S is an independent set and G − S is a forest. In this paper we design two deterministic exact algorithms for Independent Feedback Vertex Set with running times O*(4.1481^k) and O*(1.5981^n). In fact, the algorithm with O*(1.5981^n) running time finds the smallest sized ifvs, if an ifvs exists. Both the algorithms are based on interesting measures and improve the best known algorithms for the problem in their respective domains. In particular, the algorithm with running time O*(4.1481^k) is an improvement over the previous algorithm that ran in time O*(5^k). On the other hand, the algorithm with running time O*(1.5981^n) is the first moderately exponential time algorithm that improves over the naive algorithm that enumerates all the subsets of V(G). Additionally, we show that the number of minimal ifvses in any graph on n vertices is upper bounded by 1.7485^n."
]
} |
1707.09402 | 2742166337 | The NP-complete problem Feedback Vertex Set is that of deciding whether or not it is possible, for a given integer @math , to delete at most @math vertices from a given graph so that what remains is a forest. The variant in which the deleted vertices must form an independent set is called Independent Feedback Vertex Set and is also NP-complete. In fact, even deciding if an independent feedback vertex set exists is NP-complete and this problem is closely related to the @math -Colouring problem, or equivalently, to the problem of deciding whether or not a graph has an independent odd cycle transversal, that is, an independent set of vertices whose deletion makes the graph bipartite. We initiate a systematic study of the complexity of Independent Feedback Vertex Set for @math -free graphs. We prove that it is NP-complete if @math contains a claw or cycle. Tamura, Ito and Zhou proved that it is polynomial-time solvable for @math -free graphs. We show that it remains polynomial-time solvable for @math -free graphs. We prove analogous results for the Independent Odd Cycle Transversal problem, which asks whether or not a graph has an independent odd cycle transversal of size at most @math for a given integer @math . Finally, in line with our underlying research aim, we compare the complexity of Independent Feedback Vertex Set for @math -free graphs with the complexity of @math -Colouring, Independent Odd Cycle Transversal and other related problems. | Another way to obtain tractability results is to restrict the input to special graph classes in order to determine graph properties that make the problem polynomial-time solvable. We already mentioned some classes for which Independent Feedback Vertex Set is NP-complete. In a companion paper @cite_37 , we show that the problem is polynomial-time solvable for graphs of diameter @math , and as stated above, the problem is NP-complete on graphs of diameter @math . Tamura et al. 
@cite_31 showed that Independent Feedback Vertex Set is polynomial-time solvable for chordal graphs, graphs of bounded treewidth and for cographs. The latter graphs are also known as @math -free graphs ( @math denotes the path on @math vertices and a graph is @math -free if it has no induced subgraph isomorphic to @math ), and this strengthened a result of Brandstädt et al. @cite_26 , who proved that Near-Bipartiteness is polynomial-time solvable for @math -free graphs. | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_31"
],
"mid": [
"2741305621",
"2057355493",
"1491760138"
],
"abstract": [
"The Near-Bipartiteness problem is that of deciding whether or not the vertices of a graph can be partitioned into sets @math and @math , where @math is an independent set and @math induces a forest. The set @math in such a partition is said to be an independent feedback vertex set. Yang and Yuan proved that Near-Bipartiteness is polynomial-time solvable for graphs of diameter 2 and NP-complete for graphs of diameter 4. We show that Near-Bipartiteness is NP-complete for graphs of diameter 3, resolving their open problem. We also generalise their result for diameter 2 by proving that even the problem of computing a minimum independent feedback vertex is polynomial-time solvable for graphs of diameter 2.",
"A cycle transversal (or feedback vertex set) of a graph G is a subset T ⊆ V(G) such that T ∩ V(C) ≠ ∅ for every cycle C of G. This work considers the problem of finding special cycle transversals in perfect graphs and cographs. We prove that finding a minimum cycle transversal T in a perfect graph G is NP-hard, even for bipartite graphs with maximum degree four. Since G-T is acyclic, this result implies that finding a maximum acyclic induced subgraph of a perfect graph is also NP-hard. Other special properties of T are considered. A clique (stable, respectively) cycle transversal, or simply cct (sct, respectively) is a cycle transversal which is a clique (stable set, respectively). Recognizing graphs which admit a cct can be done in polynomial time; however, no structural characterization of such graphs is known, even for perfect graphs. We characterize cographs with cct in terms of forbidden induced subgraphs and describe their structure. This leads to linear time recognition of cographs with cct. We also prove that deciding whether a perfect graph admits an sct is NP-complete. We characterize cographs with sct in terms of forbidden induced subgraphs; this characterization also leads to linear time recognition.",
""
]
} |
1707.09668 | 2741528525 | Nested parallelism exists in scientific codes that are searching multi-dimensional spaces. However, implementations of nested parallelism often have overhead and load balance issues. The Orbital Analysis code we present exhibits a sparse search space, significant load imbalances, and stopping when the first solution is reached. All these aspects of the algorithm exacerbate the problem of using nested parallelism effectively. In this paper, we present an inspector executor strategy for chunking such computations into parallel wavefronts. The presented shared memory parallelization is no longer nested and exhibits significantly less load imbalance. We evaluate this approach on an Orbital analysis code, and we improve the execution time from the original implementation by an order of magnitude. As part of a Graduate Computer Science course in Parallel Programming models, we show how the approach can be implemented in parallel Perl, Python, Chapel, Pthreads, and OpenMP. Future work includes investigating how to automate and generalize the parallelization approach. | There has been much work on improving the performance of nested parallelism. Blikberg and Sørevik @cite_19 argued for flattening nested parallelism into a single level of parallelism. They then have an approach for load balancing when there is some model of how much work each task is doing. In the orbital analysis application the amount of work that will be needed per particle cannot be modeled ahead of time. @cite_4 are dealing with the problem of expressing data affinity to the underlying runtime system. The orbital analysis program has severe load imbalance issues, not data locality issues. @cite_3 created a nested parallelism benchmark and then showed that many nested parallelism implementations of OpenMP have a lot of overhead. | {
"cite_N": [
"@cite_19",
"@cite_4",
"@cite_3"
],
"mid": [
"2124295435",
"2109566345",
"1534721345"
],
"abstract": [
"Many problems have multiple layers of parallelism. The outer-level may consist of few and coarse-grained tasks. Next, each of these tasks may also be rich in parallelism, and be split into a number of fine-grained tasks, which again may consist of even finer subtasks, and so on. Here we argue and demonstrate by examples that utilizing multiple layers of parallelism may give much better scaling than if one restricts oneself to only one level of parallelism. Two non-trivial issues for multi-level parallelism are load balancing and implementation. In this paper we provide an algorithm for finding good distributions of threads to tasks and discuss how to implement nested parallelism in OpenMP.",
"Exploiting the full computational power of always deeper hierarchical multiprocessor machines requires a very careful distribution of threads and data among the underlying non-uniform architecture. The emergence of multi-core chips and NUMA machines makes it important to minimize the number of remote memory accesses, to favor cache affinities, and to guarantee fast completion of synchronization steps. By using the BubbleSched platform as a threading backend for the GOMP OpenMP compiler, we are able to easily transpose affinities of thread teams into scheduling hints using abstractions called bubbles. We then propose a scheduling strategy suited to nested OpenMP parallelism. The resulting preliminary performance evaluations show an important improvement of the speedup on a typical NAS OpenMP benchmark application.",
"In this work we present a microbenchmark methodology for assessing the overheads associated with nested parallelism in OpenMP. Our techniques are based on extensions to the well known EPCC microbenchmark suite that allow measuring the overheads of OpenMP constructs when they are effected in inner levels of parallelism. The methodology is simple but powerful enough and has enabled us to gain interesting insight into problems related to implementing and supporting nested parallelism. We measure and compare a number of commercial and freeware compilation systems. Our general conclusion is that while nested parallelism is fortunately supported by many current implementations, the performance of this support is rather problematic. There seem to exist issues which have not yet been addressed effectively, as most OpenMP systems do not exhibit a graceful reaction when made to execute inner levels of concurrency."
]
} |
1707.09668 | 2741528525 | Nested parallelism exists in scientific codes that are searching multi-dimensional spaces. However, implementations of nested parallelism often have overhead and load balance issues. The Orbital Analysis code we present exhibits a sparse search space, significant load imbalances, and stopping when the first solution is reached. All these aspects of the algorithm exacerbate the problem of using nested parallelism effectively. In this paper, we present an inspector executor strategy for chunking such computations into parallel wavefronts. The presented shared memory parallelization is no longer nested and exhibits significantly less load imbalance. We evaluate this approach on an Orbital analysis code, and we improve the execution time from the original implementation by an order of magnitude. As part of a Graduate Computer Science course in Parallel Programming models, we show how the approach can be implemented in parallel Perl, Python, Chapel, Pthreads, and OpenMP. Future work includes investigating how to automate and generalize the parallelization approach. | A significant amount of research investigates the advantages and disadvantages of various programming languages in the context of scientific computing. @cite_5 describe various libraries and capabilities in Python such as NumPY and the ease of calling Fortran and C and how those impact the performance of stencil computations that occur when solving partial differential equation solvers. Many others have compared various parallel programming languages in terms of their performance and programmability with various benchmarks and applications @cite_7 @cite_1 @cite_11 @cite_0 @cite_21 @cite_12 . This study focuses on characterizing the workload and performance alternatives for implementing a specific analysis needed for the Large Synoptic Survey Telescope (LSST) project. | {
"cite_N": [
"@cite_7",
"@cite_21",
"@cite_1",
"@cite_0",
"@cite_5",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"2132896973",
"2062785300",
"2090409324",
"2133386804",
"2130604611",
"2148590584"
],
"abstract": [
"",
"As high-end computer systems present users with rapidly increasing numbers of processors, possibly also incorporating attached co-processors, programmers are increasingly challenged to express the necessary levels of concurrency with the dominant parallel programming model, Fortran+MPI+OpenMP (or minor variations). In this paper, we examine the languages developed under the DARPA High-Productivity Computing Systems (HPCS) program (Chapel, Fortress, and X10) as representatives of a different parallel programming model which might be more effective on emerging high-performance systems. The application used in this study is the Hartree-Fock method from quantum chemistry, which combines access to distributed data with a task-parallel algorithm and is characterized by significant irregularity in the computational tasks. We present several different implementation strategies for load balancing of the task-parallel computation, as well as distributed array operations, in each of the three languages. We conclude that the HPCS languages provide a wide variety of mechanisms for expressing parallelism, which can be combined at multiple levels, making them quite expressive for this problem.",
"Historically, high performance computing has been measured in terms of peak or delivered performance, and to a lesser extent to performance to cost. Such metrics fail to capture the impact on the usefulness and ease of use of such systems. Productivity has been identified as a new parameter for high end computing systems that include both delivered system performance and the programmability of the system. System productivity is directly affected by many factors contributing to the achieved performance, the speed with which users construct application programs, and the availability of the system to perform user applications. This paper explores the concept of productivity as a quantifiable; parameter through a series of analytical models and considers the factors that contribute to it.",
"In this paper we consider productivity challenges for parallel programmers and explore ways that parallel language design might help improve end-user productivity. We offer a candidate list of desirable qualities for a parallel programming language, and describe how these qualities are addressed in the design of the Chapel language. In doing so, we provide an overview of Chapel's features and how they help address parallel productivity. We also survey current techniques for parallel programming and describe ways in which we consider them to fall short of our idealized productive programming model.",
"This article addresses the performance of scientific applications that use the Python programming language. First, we investigate several techniques for improving the computational efficiency of serial Python codes. Then, we discuss the basic programming techniques in Python for parallelizing serial scientific applications. It is shown that an efficient implementation of the array-related operations is essential for achieving good parallel performance, as for the serial case. Once the array-related operations are efficiently implemented, probably using a mixed-language implementation, good serial and parallel performance become achievable. This is confirmed by a set of numerical experiments. Python is also shown to be well suited for writing high-level parallel programs.",
"Parallel machines are becoming more complex with increasing core counts and more heterogeneous architectures. However, the commonly used parallel programming models, C C++ with MPI and or OpenMP, make it difficult to write source code that is easily tuned for many targets. Newer language approaches attempt to ease this burden by providing optimization features such as automatic load balancing, overlap of computation and communication, message-driven execution, and implicit data layout optimizations. In this paper, we compare several implementations of LULESH, a proxy application for shock hydrodynamics, to determine strengths and weaknesses of different programming models for parallel computation. We focus on four traditional (OpenMP, MPI, MPI+OpenMP, CUDA) and four emerging (Chapel, Charm++, Liszt, Loci) programming models. In evaluating these models, we focus on programmer productivity, performance and ease of applying optimizations.",
"Summary form only given. Parallel programming paradigms, over the past decade, have focused on how to harness the computational power of contemporary parallel machines. Ease of use and code development productivity, has been a secondary goal. Recently, however, there has been a growing interest in understanding the code development productivity issues and their implications for the overall time-to-solution. Unified Parallel C (UPC) is a recently developed language which has been gaining rising attention. UPC holds the promise of leveraging the ease of use of the shared memory model and the performance benefit of locality exploitation. The performance potential for UPC has been extensively studied in recent research efforts. The aim of this study, however, is to examine the impact of UPC on programmer productivity. We propose several productivity metrics and consider a wide array of high performance applications. Further, we compare UPC to the most widely used parallel programming paradigm, MPI. The results show that UPC compares favorably with MPI in programmers productivity."
]
} |
1707.09668 | 2741528525 | Nested parallelism exists in scientific codes that are searching multi-dimensional spaces. However, implementations of nested parallelism often have overhead and load balance issues. The Orbital Analysis code we present exhibits a sparse search space, significant load imbalances, and stopping when the first solution is reached. All these aspects of the algorithm exacerbate the problem of using nested parallelism effectively. In this paper, we present an inspector executor strategy for chunking such computations into parallel wavefronts. The presented shared memory parallelization is no longer nested and exhibits significantly less load imbalance. We evaluate this approach on an Orbital analysis code, and we improve the execution time from the original implementation by an order of magnitude. As part of a Graduate Computer Science course in Parallel Programming models, we show how the approach can be implemented in parallel Perl, Python, Chapel, Pthreads, and OpenMP. Future work includes investigating how to automate and generalize the parallelization approach. | @cite_8 describe a parallel algorithm implemented in MPI for the analysis of objects that are close to the earth. The algorithms in question are different than those we study in this paper. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2074345167"
],
"abstract": [
"Collision threat is becoming a serious problem since the amount of space debris increases fast recently. Besides, the detective capabilities are enhanced year by year. Thus calculation time cost on orbital data processing and close approach analysis gets much longer. Single processor cannot meet the requirements to keep the analysis results up to date. We review the procedure of close approach analysis and its time cost on single processor computer, and introduce several methods we've adopted to boost up the entire procedure. Finally we show a system of parallel computing environment on a high performance computer realized in National Astronomical Observatory for the purpose of collision risk assessment."
]
} |
1707.09695 | 2736329636 | 3D human articulated pose recovery from monocular image sequences is very challenging due to the diverse appearances, viewpoints, occlusions, and also the human 3D pose is inherently ambiguous from the monocular imagery. It is thus critical to exploit rich spatial and temporal long-range dependencies among body joints for accurate 3D pose sequence prediction. Existing approaches usually manually design some elaborate prior terms and human body kinematic constraints for capturing structures, which are often insufficient to exploit all intrinsic structures and not scalable for all scenarios. In contrast, this paper presents a Recurrent 3D Pose Sequence Machine (RPSM) to automatically learn the image-dependent structural constraint and sequence-dependent temporal context by using a multi-stage sequential refinement. At each stage, our RPSM is composed of three modules to predict the 3D pose sequences based on the previously learned 2D pose representations and 3D poses: (i) a 2D pose module extracting the image-dependent pose representations, (ii) a 3D pose recurrent module regressing 3D poses and (iii) a feature adaption module serving as a bridge between module (i) and (ii) to enable the representation transformation from 2D to 3D domain. These three modules are then assembled into a sequential prediction framework to refine the predicted poses with multiple recurrent stages. Extensive evaluations on the Human3.6M dataset and HumanEva-I dataset show that our RPSM outperforms all state-of-the-art approaches for 3D pose estimation. | Considerable research has addressed the challenge of 3D human pose estimation. Early research on 3D monocular pose estimation from videos involves frame-to-frame pose tracking and dynamic models that rely on Markov dependencies among previous frames, e.g., @cite_39 @cite_21 . The main drawbacks of these approaches are the requirement of the initialization pose and the inability to recover from tracking failure. 
To overcome these drawbacks, more recent approaches @cite_13 @cite_9 focus on detecting candidate poses in each individual frame, with a post-processing step that attempts to establish temporally consistent poses. Yasin et al. @cite_36 proposed a dual-source approach for 3D pose estimation from a single image. They combined 3D pose data from a motion capture system with an image source annotated with 2D poses, transforming the estimation into a 3D pose retrieval problem. One major limitation of this approach is its time efficiency: it takes more than 20 seconds to process an image. Sanzari et al. @cite_16 proposed a hierarchical Bayesian non-parametric model, which relies on a representation of the idiosyncratic motion of human skeleton joint groups and takes the consistency of the connected group poses into account when reconstructing the full-body pose. Their approach achieved state-of-the-art performance on the Human3.6M @cite_3 dataset. | {
"cite_N": [
"@cite_36",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_39",
"@cite_16",
"@cite_13"
],
"mid": [
"",
"2005124214",
"2071882725",
"2101032778",
"2039262381",
"2520324844",
"1997500560"
],
"abstract": [
"",
"Numerous ‘non-maximum suppression’ (NMS) post-processing schemes have been proposed for merging multiple independent object detections. We propose a generalization of NMS beyond bounding boxes to merge multiple pose estimates in a single frame. The final estimates are centroids rather than medoids as in standard NMS, thus being more accurate than any of the individual candidates. Using the same mathematical framework, we extend our approach to the multi-frame setting, merging multiple independent pose estimates across space and time and outputting both the number and pose of the objects present in a scene. Our approach sidesteps many of the inherent challenges associated with full tracking (e.g. objects entering leaving a scene, extended periods of occlusion, etc.). We show its versatility by applying it to two distinct state-of-the-art pose estimation algorithms in three domains: human bodies, faces and mice. Our approach improves both detection accuracy (by helping disambiguate correspondences) as well as pose estimation quality and is computationally efficient.",
"We formulate the problem of 3D human pose estimation and tracking as one of inference in a graphical model. Unlike traditional kinematic tree representations, our model of the body is a collection of loosely-connected body-parts. In particular, we model the body using an undirected graphical model in which nodes correspond to parts and edges to kinematic, penetration, and temporal constraints imposed by the joints and the world. These constraints are encoded using pair-wise statistical distributions, that are learned from motion-capture training data. Human pose and motion estimation is formulated as inference in this graphical model and is solved using Particle Message Passing (PaMPas). PaMPas is a form of non-parametric belief propagation that uses a variation of particle filtering that can be applied over a general graphical model with loops. The loose-limbed model and decentralized graph structure allow us to incorporate information from \"bottom-up\" visual cues, such as limb and head detectors, into the inference process. These detectors enable automatic initialization and aid recovery from transient tracking failures. We illustrate the method by automatically tracking people in multi-view imagery using a set of calibrated cameras and present quantitative evaluation using the HumanEva dataset.",
"We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20 improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http: vision.imar.ro human3.6m .",
"Human pose estimation is a key step to action recognition. We propose a method of estimating 3D human poses from a single image, which works in conjunction with an existing 2D pose joint detector. 3D pose estimation is challenging because multiple 3D poses may correspond to the same 2D pose after projection due to the lack of depth information. Moreover, current 2D pose estimators are usually inaccurate which may cause errors in the 3D estimation. We address the challenges in three ways: (i) We represent a 3D pose as a linear combination of a sparse set of bases learned from 3D human skeletons. (ii) We enforce limb length constraints to eliminate anthropomorphically implausible skeletons. (iii) We estimate a 3D pose by minimizing the 1-norm error between the projection of the 3D pose and the corresponding 2D detection. The 1-norm loss term is robust to inaccurate 2D joint estimations. We use the alternating direction method (ADM) to solve the optimization problem efficiently. Our approach outperforms the state-of-the-arts on three benchmark datasets.",
"We introduce a 3D human pose estimation method from single image, based on a hierarchical Bayesian non-parametric model. The proposed model relies on a representation of the idiosyncratic motion of human body parts, which is captured by a subdivision of the human skeleton joints into groups. A dictionary of motion snapshots for each group is generated. The hierarchy ensures to integrate the visual features within the pose dictionary. Given a query image, the learned dictionary is used to estimate the likelihood of the group pose based on its visual features. The full-body pose is reconstructed taking into account the consistency of the connected group poses. The results show that the proposed approach is able to accurately reconstruct the 3D pose of previously unseen subjects.",
"Automatic recovery of 3D human pose from monocular image sequences is a challenging and important research topic with numerous applications. Although current methods are able to recover 3D pose for a single person in controlled environments, they are severely challenged by real-world scenarios, such as crowded street scenes. To address this problem, we propose a three-stage process building on a number of recent advances. The first stage obtains an initial estimate of the 2D articulation and viewpoint of the person from single frames. The second stage allows early data association across frames based on tracking-by-detection. These two stages successfully accumulate the available 2D image evidence into robust estimates of 2D limb positions over short image sequences (= tracklets). The third and final stage uses those tracklet-based estimates as robust image observations to reliably recover 3D pose. We demonstrate state-of-the-art performance on the HumanEva II benchmark, and also show the applicability of our approach to articulated 3D tracking in realistic street conditions."
]
} |
1707.09695 | 2736329636 | 3D human articulated pose recovery from monocular image sequences is very challenging due to the diverse appearances, viewpoints, occlusions, and also the human 3D pose is inherently ambiguous from the monocular imagery. It is thus critical to exploit rich spatial and temporal long-range dependencies among body joints for accurate 3D pose sequence prediction. Existing approaches usually manually design some elaborate prior terms and human body kinematic constraints for capturing structures, which are often insufficient to exploit all intrinsic structures and not scalable for all scenarios. In contrast, this paper presents a Recurrent 3D Pose Sequence Machine(RPSM) to automatically learn the image-dependent structural constraint and sequence-dependent temporal context by using a multi-stage sequential refinement. At each stage, our RPSM is composed of three modules to predict the 3D pose sequences based on the previously learned 2D pose representations and 3D poses: (i) a 2D pose module extracting the image-dependent pose representations, (ii) a 3D pose recurrent module regressing 3D poses and (iii) a feature adaption module serving as a bridge between module (i) and (ii) to enable the representation transformation from 2D to 3D domain. These three modules are then assembled into a sequential prediction framework to refine the predicted poses with multiple recurrent stages. Extensive evaluations on the Human3.6M dataset and HumanEva-I dataset show that our RPSM outperforms all state-of-the-art approaches for 3D pose estimation. | Recently, deep learning has proven its ability in many computer vision tasks, such as 3D human pose estimation. Li and Chan @cite_11 first used CNNs to regress the 3D human pose from monocular images and proposed two training strategies to optimize the network. 
Li et al. @cite_8 proposed to integrate structure learning into the deep learning framework, using a convolutional neural network to extract image features, followed by two subnetworks that transform the image features and the pose into a joint embedding. Tekin et al. @cite_38 proposed to exploit motion information from consecutive frames and applied a deep learning network to regress the 3D pose. Zhou et al. @cite_1 proposed a 3D pose estimation framework for videos that consists of a novel synthesis between a deep-learning-based 2D part detector, a sparsity-driven 3D reconstruction approach and a 3D temporal smoothness prior. Zhou et al. @cite_4 proposed to directly embed a kinematic object model into deep learning. Du et al. @cite_0 introduced additional built-in knowledge for reconstructing the 2D pose and formulated a new objective function to estimate the 3D pose from the detected 2D pose. | {
"cite_N": [
"@cite_38",
"@cite_4",
"@cite_8",
"@cite_1",
"@cite_0",
"@cite_11"
],
"mid": [
"2270288817",
"2951206572",
"2949812103",
"2285449971",
"2519469348",
"2293220651"
],
"abstract": [
"We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Previous approaches typically compute candidate poses in individual frames and then link them in a post-processing step to resolve ambiguities. By contrast, we directly regress from a spatio-temporal volume of bounding boxes to a 3D pose in the central frame. We further show that, for this approach to achieve its full potential, it is essential to compensate for the motion in consecutive frames so that the subject remains centered. This then allows us to effectively overcome ambiguities and improve upon the state-of-the-art by a large margin on the Human3.6m, HumanEva, and KTH Multiview Football 3D human pose estimation benchmarks.",
"Learning articulated object pose is inherently difficult because the pose is high dimensional but has many structural constraints. Most existing work do not model such constraints and does not guarantee the geometric validity of their pose estimation, therefore requiring a post-processing to recover the correct geometry if desired, which is cumbersome and sub-optimal. In this work, we propose to directly embed a kinematic object model into the deep neutral network learning for general articulated object pose estimation. The kinematic function is defined on the appropriately parameterized object motion variables. It is differentiable and can be used in the gradient descent based optimization in network training. The prior knowledge on the object geometric model is fully exploited and the structure is guaranteed to be valid. We show convincing experiment results on a toy example and the 3D human pose estimation problem. For the latter we achieve state-of-the-art result on Human3.6M dataset.",
"This paper focuses on structured-output learning using deep neural networks for 3D human pose estimation from monocular images. Our network takes an image and 3D pose as inputs and outputs a score value, which is high when the image-pose pair matches and low otherwise. The network structure consists of a convolutional neural network for image feature extraction, followed by two sub-networks for transforming the image features and pose into a joint embedding. The score function is then the dot-product between the image and pose embeddings. The image-pose embedding and score function are jointly trained using a maximum-margin cost function. Our proposed framework can be interpreted as a special form of structured support vector machines where the joint feature space is discriminatively learned using deep neural networks. We test our framework on the Human3.6m dataset and obtain state-of-the-art results compared to other recent methods. Finally, we present visualizations of the image-pose embedding space, demonstrating the network has learned a high-level embedding of body-orientation and pose-configuration.",
"This paper addresses the challenge of 3D full-body human pose estimation from a monocular image sequence. Here, two cases are considered: (i) the image locations of the human joints are provided and (ii) the image locations of joints are unknown. In the former case, a novel approach is introduced that integrates a sparsity-driven 3D geometric prior and temporal smoothness. In the latter case, the former case is extended by treating the image locations of the joints as latent variables. A deep fully convolutional network is trained to predict the uncertainty maps of the 2D joint locations. The 3D pose estimates are realized via an Expectation-Maximization algorithm over the entire sequence, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Empirical evaluation on the Human3.6M dataset shows that the proposed approaches achieve greater 3D pose estimation accuracy over state-of-the-art baselines. Further, the proposed approach outperforms a publicly available 2D pose estimation baseline on the challenging PennAction dataset.",
"The recovery of 3D human pose with monocular camera is an inherently ill-posed problem due to the large number of possible projections from the same 2D image to 3D space. Aimed at improving the accuracy of 3D motion reconstruction, we introduce the additional built-in knowledge, namely height-map, into the algorithmic scheme of reconstructing the 3D pose motion under a single-view calibrated camera. Our novel proposed framework consists of two major contributions. Firstly, the RGB image and its calculated height-map are combined to detect the landmarks of 2D joints with a dual-stream deep convolution network. Secondly, we formulate a new objective function to estimate 3D motion from the detected 2D joints in the monocular image sequence, which reinforces the temporal coherence constraints on both the camera and 3D poses. Experiments with HumanEva, Human3.6M, and MCAD dataset validate that our method outperforms the state-of-the-art algorithms on both 2D joints localization and 3D motion recovery. Moreover, the evaluation results on HumanEva indicates that the performance of our proposed single-view approach is comparable to that of the multi-view deep learning counterpart.",
"In this paper, we propose a deep convolutional neural network for 3D human pose estimation from monocular images. We train the network using two strategies: (1) a multi-task framework that jointly trains pose regression and body part detectors; (2) a pre-training strategy where the pose regressor is initialized using a network trained for body part detection. We compare our network on a large data set and achieve significant improvement over baseline methods. Human pose estimation is a structured prediction problem, i.e., the locations of each body part are highly correlated. Although we do not add constraints about the correlations between body parts to the network, we empirically show that the network has disentangled the dependencies among different body parts, and learned their correlations."
]
} |
1707.09531 | 2964209717 | Since convolutional neural network (CNN) lacks an inherent mechanism to handle large scale variations, we always need to compute feature maps multiple times for multiscale object detection, which has the bottleneck of computational cost in practice. To address this, we devise a recurrent scale approximation (RSA) to compute feature map once only, and only through this map can we approximate the rest maps on other levels. At the core of RSA is the recursive rolling out mechanism: given an initial map on a particular scale, it generates the prediction on a smaller scale that is half the size of input. To further increase efficiency and accuracy, we (a): design a scale-forecast network to globally predict potential scales in the image since there is no need to compute maps on all levels of the pyramid. (b): propose a landmark retracing network (LRN) to retrace back locations of the regressed landmarks and generate a confidence score for each landmark; LRN can effectively alleviate false positives due to the accumulated error in RSA. The whole system could be trained end-to-end in a unified CNN framework. Experiments demonstrate that our proposed algorithm is superior against state-of-the-arts on face detection benchmarks and achieves comparable results for generic proposal generation. The source code of our system is available. | A single-scale detector detects the target at a typical scale and cannot handle features at other scales. An image pyramid is thus formulated and each level in the pyramid is fed into the detector. Such a framework appeared in the pre-deep-learning era @cite_8 @cite_35 and usually involves hand-crafted features such as HOG @cite_34 or SIFT @cite_6, together with a classifier such as AdaBoost @cite_13, to verify whether the context at each scale contains a target object. Recently, some CNN-based methods @cite_29 @cite_17 have also followed this strategy to predict the objectness and class within a sliding window at each scale. 
In this way, the detector only handles features within a certain range of scales, while the scale variance is absorbed by the image pyramid; this reduces the fitting difficulty for the detector but potentially increases the computational cost. | {
"cite_N": [
"@cite_35",
"@cite_8",
"@cite_29",
"@cite_6",
"@cite_34",
"@cite_13",
"@cite_17"
],
"mid": [
"204612701",
"2036989445",
"1934410531",
"2151103935",
"2161969291",
"2137401668",
"2963542991"
],
"abstract": [
"We present a new state-of-the-art approach for face detection. The key idea is to combine face alignment with detection, observing that aligned face shapes provide better features for face classification. To make this combination more effective, our approach learns the two tasks jointly in the same cascade framework, by exploiting recent advances in face alignment. Such joint learning greatly enhances the capability of cascade detection and still retains its realtime performance. Extensive experiments show that our approach achieves the best accuracy on challenging datasets, where all existing solutions are either inaccurate or too slow.",
"We describe a general method for building cascade classifiers from part-based deformable models such as pictorial structures. We focus primarily on the case of star-structured models and show how a simple algorithm based on partial hypothesis pruning can speed up object detection by more than one order of magnitude without sacrificing detection accuracy. In our algorithm, partial hypotheses are pruned with a sequence of thresholds. In analogy to probably approximately correct (PAC) learning, we introduce the notion of probably approximately admissible (PAA) thresholds. Such thresholds provide theoretical guarantees on the performance of the cascade method and can be computed from a small sample of positive examples. Finally, we outline a cascade detection algorithm for a general class of models defined by a grammar formalism. This class includes not only tree-structured pictorial structures but also richer models that can represent each part recursively as a mixture of other parts.",
"In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.",
"Abstract: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat."
]
} |
1707.09531 | 2964209717 | Since convolutional neural network (CNN) lacks an inherent mechanism to handle large scale variations, we always need to compute feature maps multiple times for multiscale object detection, which has the bottleneck of computational cost in practice. To address this, we devise a recurrent scale approximation (RSA) to compute feature map once only, and only through this map can we approximate the rest maps on other levels. At the core of RSA is the recursive rolling out mechanism: given an initial map on a particular scale, it generates the prediction on a smaller scale that is half the size of input. To further increase efficiency and accuracy, we (a): design a scale-forecast network to globally predict potential scales in the image since there is no need to compute maps on all levels of the pyramid. (b): propose a landmark retracing network (LRN) to retrace back locations of the regressed landmarks and generate a confidence score for each landmark; LRN can effectively alleviate false positives due to the accumulated error in RSA. The whole system could be trained end-to-end in a unified CNN framework. Experiments demonstrate that our proposed algorithm is superior against state-of-the-arts on face detection benchmarks and achieves comparable results for generic proposal generation. The source code of our system is available. | A multi-scale detector takes one shot of the image and generates detection results across all scales. RPN @cite_33 and YOLO @cite_0 have a fixed input scale, and proposals for all scales are generated in the final layer using multiple classifiers. However, it is not easy to detect objects at various scales based on the final feature map alone. Liu et al. @cite_25 addressed this problem via a multi-level combination of predictions from feature maps at different scales, yet this still requires a large model with a large receptive field for detection. 
Other works @cite_19 @cite_15 proposed to merge deep and shallow features in a conv-deconv structure and to merge boxes for objects from different scales. These methods are usually faster than single-scale detectors since they take only one shot of the image, but the large-scale invariance has to be learned by an expensive feature classifier, which is unstable and heavy. | {
"cite_N": [
"@cite_33",
"@cite_0",
"@cite_19",
"@cite_15",
"@cite_25"
],
"mid": [
"2613718673",
"2963037989",
"2949533892",
"2593079592",
"2193145675"
],
"abstract": [
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"In this paper, we propose a zoom-out-and-in network for generating object proposals. We utilize different resolutions of feature maps in the network to detect object instances of various sizes. Specifically, we divide the anchor candidates into three clusters based on the scale size and place them on feature maps of distinct strides to detect small, medium and large objects, respectively. Deeper feature maps contain region-level semantics which can help shallow counterparts to identify small objects. Therefore we design a zoom-in sub-network to increase the resolution of high level features via a deconvolution operation. The high-level features with high resolution are then combined and merged with low-level features to detect objects. Furthermore, we devise a recursive training pipeline to consecutively regress region proposals at the training stage in order to match the iterative regression at the testing stage. We demonstrate the effectiveness of the proposed method on ILSVRC DET and MS COCO datasets, where our algorithm performs better than the state-of-the-arts in various evaluation metrics. It also increases average precision by around 2% in the detection system.",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512×512 input, SSD achieves 76.9% mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd."
]
} |
1707.09531 | 2964209717 | Since convolutional neural network (CNN) lacks an inherent mechanism to handle large scale variations, we always need to compute feature maps multiple times for multiscale object detection, which has the bottleneck of computational cost in practice. To address this, we devise a recurrent scale approximation (RSA) to compute feature map once only, and only through this map can we approximate the rest maps on other levels. At the core of RSA is the recursive rolling out mechanism: given an initial map on a particular scale, it generates the prediction on a smaller scale that is half the size of input. To further increase efficiency and accuracy, we (a): design a scale-forecast network to globally predict potential scales in the image since there is no need to compute maps on all levels of the pyramid. (b): propose a landmark retracing network (LRN) to retrace back locations of the regressed landmarks and generate a confidence score for each landmark; LRN can effectively alleviate false positives due to the accumulated error in RSA. The whole system could be trained end-to-end in a unified CNN framework. Experiments demonstrate that our proposed algorithm is superior against state-of-the-arts on face detection benchmarks and achieves comparable results for generic proposal generation. The source code of our system is available. | Recent years have witnessed a performance boost in face detection, which takes advantage of the development in fully convolutional network @cite_31 @cite_16 @cite_21 @cite_1 . Multi-task RPN is applied @cite_3 @cite_32 @cite_12 @cite_30 to generate face confidence and landmarks together. Both single-scale and multi-scale strategies are introduced in these methods. For example, Chen @cite_3 propose a supervised spatial transform layer to utilize landmark information and thus enhance the quality of detector by a large margin. | {
"cite_N": [
"@cite_30",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_31",
"@cite_16",
"@cite_12"
],
"mid": [
"2963770578",
"2504335775",
"345900524",
"2963377935",
"2495387757",
"1970456555",
"2417750831",
"2585123518"
],
"abstract": [
"Convolutional neural network (CNN) based face detectors are inefficient in handling faces of diverse scales. They rely on either fitting a large single model to faces across a large scale range or multi-scale testing. Both are computationally expensive. We propose Scale-aware Face Detection (SAFD) to handle scale explicitly using CNN, and achieve better performance with less computation cost. Prior to detection, an efficient CNN predicts the scale distribution histogram of the faces. Then the scale histogram guides the zoom-in and zoom-out of the image. Since the faces will be approximately in uniform scale after zoom, they can be detected accurately even with much smaller CNN. Actually, more than 99% of the faces in AFW can be covered with less than two zooms per image. Extensive experiments on FDDB, MALF and AFW show advantages of SAFD.",
"In present object detection systems, the deep convolutional neural networks (CNNs) are utilized to predict bounding boxes of object candidates, and have gained performance advantages over the traditional region proposal methods. However, existing deep CNN methods assume the object bounds to be four independent variables, which could be regressed by the l2 loss separately. Such an oversimplified assumption is contrary to the well-received observation, that those variables are correlated, resulting to less accurate localization. To address the issue, we firstly introduce a novel Intersection over Union (IoU) loss function for bounding box prediction, which regresses the four bounds of a predicted box as a whole unit. By taking the advantages of IoU loss and deep fully convolutional networks, the UnitBox is introduced, which performs accurate and efficient localization, shows robust to objects of varied shapes and scales, and converges fast. We apply UnitBox on face detection task and achieve the best performance among all published methods on the FDDB benchmark.",
"Deep learning methods are powerful tools but often suffer from expensive computation and limited flexibility. An alternative is to combine light-weight models with deep representations. As successful cases exist in several visual problems, a unified framework is absent. In this paper, we revisit two widely used approaches in computer vision, namely filtered channel features and Convolutional Neural Networks (CNN), and absorb merits from both by proposing an integrated method called Convolutional Channel Features (CCF). CCF transfers low-level features from pre-trained CNN models to feed the boosting forest model. With the combination of CNN features and boosting forest, CCF benefits from the richer capacity in feature representation compared with channel features, as well as lower cost in computation and storage compared with end-to-end CNN methods. We show that CCF serves as a good way of tailoring pre-trained CNN models to diverse tasks without fine-tuning the whole network to each task by achieving state-of-the-art performances in pedestrian detection, face detection, edge detection and object proposal generation.",
"We present an algorithm for simultaneous face detection, landmarks localization, pose estimation and gender recognition using deep convolutional neural networks (CNN). The proposed method called, HyperFace, fuses the intermediate layers of a deep CNN using a separate CNN followed by a multi-task learning algorithm that operates on the fused features. It exploits the synergy among the tasks which boosts up their individual performances. Additionally, we propose two variants of HyperFace: (1) HyperFace-ResNet that builds on the ResNet-101 model and achieves significant improvement in performance, and (2) Fast-HyperFace that uses a high recall fast face detector for generating region proposals to improve the speed of the algorithm. Extensive experiments show that the proposed models are able to capture both global and local information in faces and performs significantly better than many competitive algorithms for each of these four tasks.",
"Large pose variations remain to be a challenge that confronts real-world face detection. We propose a new cascaded Convolutional Neural Network, dubbed the name Supervised Transformer Network, to address this challenge. The first stage is a multi-task Region Proposal Network (RPN), which simultaneously predicts candidate face regions along with associated facial landmarks. The candidate regions are then warped by mapping the detected facial landmarks to their canonical positions to better normalize the face patterns. The second stage, which is a RCNN, then verifies if the warped candidate regions are valid faces or not. We conduct end-to-end learning of the cascaded network, including optimizing the canonical positions of the facial landmarks. This supervised learning of the transformations automatically selects the best scale to differentiate face/non-face patterns. By combining feature maps from both stages of the network, we achieve state-of-the-art detection accuracies on several public benchmarks. For real-time performance, we run the cascaded network only on regions of interests produced from a boosting cascade face detector. Our detector runs at 30 FPS on a single CPU core for a VGA-resolution image.",
"In this paper we consider the problem of multi-view face detection. While there has been significant research on this problem, current state-of-the-art approaches for this task require annotation of facial landmarks, e.g. TSM [25], or annotation of face poses [28, 22]. They also require training dozens of models to fully capture faces in all orientations, e.g. 22 models in HeadHunter method [22]. In this paper we propose Deep Dense Face Detector (DDFD), a method that does not require pose landmark annotation and is able to detect faces in a wide range of orientations using a single model based on deep convolutional neural networks. The proposed method has minimal complexity; unlike other recent deep learning object detection methods [9], it does not require additional components such as segmentation, bounding-box regression, or SVM classifiers. Furthermore, we analyzed scores of the proposed face detector for faces in different orientations and found that 1) the proposed method is able to detect faces from different angles and can handle occlusion to some extent, 2) there seems to be a correlation between distribution of positive examples in the training set and scores of the proposed face detector. The latter suggests that the proposed method's performance can be further improved by using better sampling strategies and more sophisticated data augmentation techniques. Evaluations on popular face detection benchmark datasets show that our single-model face detector algorithm has similar or better performance compared to the previous methods, which are more complex and require annotations of either different poses or facial landmarks.",
"This paper presents a method for face detection in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. The 3D mean face model is predefined and fixed (e.g., we used the one provided in the AFLW dataset). The ConvNet consists of two components: (i) The face proposal component computes face bounding box proposals via estimating facial key-points and the 3D transformation (rotation and translation) parameters for each predicted key-point w.r.t. the 3D mean face model. (ii) The face verification component computes detection results by pruning and refining proposals based on facial key-points based configuration pooling. The proposed method addresses two issues in adapting state-of-the-art generic object detection ConvNets (e.g., faster R-CNN) for face detection: (i) One is to eliminate the heuristic design of predefined anchor boxes in the region proposals network (RPN) by exploiting a 3D mean face model. (ii) The other is to replace the generic RoI (Region-of-Interest) pooling layer with a configuration pooling layer to respect underlying object structures. The multi-task loss consists of three terms: the classification Softmax loss and the location smooth-l1 losses of both the facial key-points and the face bounding boxes. In experiments, our ConvNet is trained on the AFLW dataset only and tested on the FDDB benchmark with fine-tuning and on the AFW benchmark without fine-tuning. The proposed method obtains very competitive state-of-the-art performance in the two benchmarks.",
"In this paper, we present a new face detection scheme using deep learning and achieve the state-of-the-art detection performance on the well-known FDDB face detection benchmark evaluation. In particular, we improve the state-of-the-art Faster RCNN framework by combining a number of strategies, including feature concatenation, hard negative mining, multi-scale training, model pre-training, and proper calibration of key parameters. As a consequence, the proposed scheme obtained the state-of-the-art face detection performance and was ranked as one of the best models in terms of ROC curves of the published methods on the FDDB benchmark."
]
} |
1707.09538 | 2739558176 | We propose a framework for multimodal sentiment analysis and emotion recognition using convolutional neural network-based feature extraction from text and visual modalities. We obtain a performance improvement of 10% over the state of the art by combining visual, text and audio features. We also discuss some major issues frequently ignored in multimodal sentiment analysis research: the role of speaker-independent models, importance of the modalities and generalizability. The paper thus serves as a new benchmark for further research in multimodal sentiment analysis and also demonstrates the different facets of analysis to be considered while performing such tasks. | Text-based sentiment analysis systems can be broadly categorized into knowledge-based and statistics-based systems @cite_11 . While the use of knowledge bases was initially more popular for the identification of emotions and polarity in text, sentiment analysis researchers have recently been using statistics-based approaches, with a special focus on supervised statistical methods @cite_8 @cite_13 . | {
"cite_N": [
"@cite_8",
"@cite_13",
"@cite_11"
],
"mid": [
"2166706824",
"2251939518",
""
],
"abstract": [
"We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging.",
"Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.",
""
]
} |
1707.09538 | 2739558176 | We propose a framework for multimodal sentiment analysis and emotion recognition using convolutional neural network-based feature extraction from text and visual modalities. We obtain a performance improvement of 10% over the state of the art by combining visual, text and audio features. We also discuss some major issues frequently ignored in multimodal sentiment analysis research: the role of speaker-independent models, importance of the modalities and generalizability. The paper thus serves as a new benchmark for further research in multimodal sentiment analysis and also demonstrates the different facets of analysis to be considered while performing such tasks. | In 1970, @cite_15 carried out extensive studies on facial expressions. Their research showed that universal facial expressions are able to provide sufficient clues to detect emotions. Recent studies on speech-based emotion analysis @cite_0 have focused on identifying relevant acoustic features, such as fundamental frequency (pitch), intensity of utterance, bandwidth, and duration. | {
"cite_N": [
"@cite_0",
"@cite_15"
],
"mid": [
"1601218598",
"99130875"
],
"abstract": [
"The paper describes a novel technique for the recognition of emotions from multimodal data. We focus on the recognition of the six prototypic emotions. The results from the facial expression recognition and from the emotion recognition from speech are combined using a bi-modal multimodal semantic data fusion model that determines the most probable emotion of the subject. Two types of models based on geometric face features for facial expression recognition are being used, depending on the presence or absence of speech. In our approach we define an algorithm that is robust to changes of face shape that occur during regular speech. The influence of phoneme generation on the face shape during speech is removed by using features that are only related to the eyes and the eyebrows. The paper includes results from testing the presented models.",
"A device for use with a sewing machine using bobbins to determine either when a predetermined amount of bobbin thread remains on the bobbin or, alternatively, when the bobbin is completely empty. The device employs a probe which is inserted into the bobbin when the sewing machine is not operating and will thereafter produce a signal if the bobbin is low on thread or empty."
]
} |
1707.09538 | 2739558176 | We propose a framework for multimodal sentiment analysis and emotion recognition using convolutional neural network-based feature extraction from text and visual modalities. We obtain a performance improvement of 10% over the state of the art by combining visual, text and audio features. We also discuss some major issues frequently ignored in multimodal sentiment analysis research: the role of speaker-independent models, importance of the modalities and generalizability. The paper thus serves as a new benchmark for further research in multimodal sentiment analysis and also demonstrates the different facets of analysis to be considered while performing such tasks. | While there are many research papers on audio-visual fusion for emotion recognition, only a few research works have been devoted to multimodal emotion or sentiment analysis using textual clues along with visual and audio modalities. @cite_22 and @cite_12 fused information from audio, visual and textual modalities to extract emotion and sentiment. Met @cite_4 and @cite_19 fused audio and textual modalities for emotion recognition. Both approaches relied on feature-level fusion. @cite_18 fused audio and textual clues at decision level. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_19",
"@cite_12"
],
"mid": [
"",
"2137454998",
"2079725295",
"1973270182",
"2533262878"
],
"abstract": [
"",
"Emotion expression associated with human communication is known to be a multimodal process. In this work, we investigate the way that emotional information is conveyed by facial and vocal modalities, and how these modalities can be effectively combined to achieve improved emotion recognition accuracy. In particular, the behaviors of different facial regions are studied in detail. We analyze an emotion database recorded from ten speakers (five female, five male), which contains speech and facial marker data. Each individual modality is modeled by Gaussian mixture models (GMMs). Multiple modalities are combined using two different methods: a Bayesian classifier weighting scheme and support vector machines that use post classification accuracies as features. Individual modality recognition performances indicate that anger and sadness have comparable accuracies for facial and vocal modalities, while happiness seems to be more accurately transmitted by facial expressions than voice. The neutral state has the lowest performance, possibly due to the vague definition of neutrality. Cheek regions achieve better emotion recognition accuracy compared to other facial regions. Moreover, classifier combination leads to significantly higher performance, which confirms that training detailed single modality classifiers and combining them at a later stage is an effective approach.",
"This work focuses on automatically analyzing a speaker's sentiment in online videos containing movie reviews. In addition to textual information, this approach considers adding audio features as typically used in speech-based emotion recognition as well as video features encoding valuable valence information conveyed by the speaker. Experimental results indicate that training on written movie reviews is a promising alternative to exclusively using (spoken) in-domain data for building a system that analyzes spoken movie review videos, and that language-independent audio-visual analysis can compete with linguistic analysis.",
"For many applications of emotion recognition, such as virtual agents, the system must select responses while the user is speaking. This requires reliable on-line recognition of the user’s affect. However most emotion recognition systems are based on turnwise processing. We present a novel approach to on-line emotion recognition from speech using Long Short-Term Memory Recurrent Neural Networks. Emotion is recognised frame-wise in a two-dimensional valence-activation continuum. In contrast to current state-of-the-art approaches, recognition is performed on low-level signal frames, similar to those used for speech recognition. No statistical functionals are applied to low-level feature contours. Framing at a higher level is therefore unnecessary and regression outputs can be produced in real-time for every low-level input frame. We also investigate the benefits of including linguistic features on the signal frame level obtained by a keyword spotter.",
"In this paper we address the sentence-level multi-modal emotion recognition problem. We formulate the emotion recognition task as a multi-category classification problem and propose an innovative solution based on the automatically generated ensemble of trees with binary support vector machines (SVM) classifiers in the tree nodes. We demonstrate the efficacy of our approach by performing four-way (anger, happiness, sadness, neutral) and five-way (including excitement) emotion recognition on the University of Southern California's Interactive Emotional Motion Capture (USC-IEMOCAP) corpus using combinations of acoustic features, lexical features extracted from automatic speech recognition (ASR) output and visual features extracted from facial markers traced by a motion capture system. The experiments show that the proposed ensemble of trees of binary SVM classifiers outperforms classical multi-way SVM classification with one-vs-one voting scheme and achieves state-of-the-art results for all feature combinations."
]
} |
1707.09661 | 2740516376 | ANGELINA is an automated game design system which has previously been built as a single software block which designs games from start to finish. In this paper we outline a roadmap for the development of a new version of ANGELINA, designed to iterate on games in different ways to produce a continuous creative process that will improve the quality of its work, but more importantly improve the perception of the software as being an independently creative piece of software. We provide an initial report of the system's structure here as well as results from the first working module of the system. | Automated game design is often conflated with the notion of ruleset design. One explanation for this is the bias in games research towards a particular 'classical' concept of what a game is. Early research in AI focused on abstract games that are almost entirely described by a set of rules and nothing else (Checkers, Chess and Go being the best examples here). As games research moved more into digital games, classic arcade stereotypes replaced this notion; games with very strong notions of winning, losing, scoring. We can see this trend continuing to the modern day in the design and influences of the Video Game Description Language, VGDL @cite_6 . | {
"cite_N": [
"@cite_6"
],
"mid": [
"2028208918"
],
"abstract": [
"We propose a powerful new tool for conducting research on computational intelligence and games. PyVGDL is a simple, high-level description language for 2D video games, and the accompanying software library permits parsing and instantly playing those games. The streamlined design of the language is based on defining locations and dynamics for simple building blocks, and the interaction effects when such objects collide, all of which are provided in a rich ontology. It can be used to quickly design games, without needing to deal with control structures, and the concise language is also accessible to generative approaches. We show how the dynamics of many classical games can be generated from a few lines of PyVGDL. The main objective of these generated games is to serve as diverse benchmark problems for learning and planning algorithms; so we provide a collection of interfaces for different types of learning agents, with visual or abstract observations, from a global or first-person viewpoint. To demonstrate the library's usefulness in a broad range of learning scenarios, we show how to learn competent behaviors when a model of the game dynamics is available or when it is not, when full state information is given to the agent or just subjective observations, when learning is interactive or in batch-mode, and for a number of different learning algorithms, including reinforcement learning and evolutionary search."
]
} |
1707.09661 | 2740516376 | ANGELINA is an automated game design system which has previously been built as a single software block which designs games from start to finish. In this paper we outline a roadmap for the development of a new version of ANGELINA, designed to iterate on games in different ways to produce a continuous creative process that will improve the quality of its work, but more importantly improve the perception of the software as being an independently creative piece of software. We provide an initial report of the system's structure here as well as results from the first working module of the system. | In terms of digital games, the Game-O-Matic is a vital part of automated game design history @cite_5 . Ostensibly developed as a mixed-initiative tool for journalists to rapidly create newsgames, the Game-O-Matic is effectively an entirely autonomous game designer, with a very broad understanding of game mechanics and also how to convey messages through the combination of game rules. This meaning-driven game design is reinforced by several mechanisms that allow the Game-O-Matic to retrieve artistic assets to reinforce the game's systems visually. Along similar lines, work by Nelson and Mateas in @cite_12 also shows a concerted effort to build a system which can combine systems, meaning and visuals to convey something through an interactive experience. | {
"cite_N": [
"@cite_5",
"@cite_12"
],
"mid": [
"2008373880",
"2020362239"
],
"abstract": [
"Micro-rhetorics are the representational units of meaning that emerge from the rhetorical affordances of videogame mechanics, abstract gameplay patterns, and thematic depiction. This paper explains the concept of micro-rhetorics, how game dynamics can be interpreted, and how designers can make use of game mechanics to express ideas through simple videogames. This theoretical framework is informed by the design of Game-O-Matic, a videogame authoring tool that generates games to represent ideas. It takes a network of basic relationships between actors and assembles simple arcade-style game mechanics into videogames that are able to make arguments and depict ideas.",
"Game-design novices increasingly hope to use game-like expression as a way to express content such as opinions and educational material. Existing game-design toolkits such as Game Maker ease the programming burden, bringing the design of small games within the technical reach of low-budget, non-expert groups. The design process itself remains a roadblock, however: It is not at all obvious how to present topics such as political viewpoints or bike safety in a way that makes use of the unique qualities of the interactive game medium. There are no tools to assist in that aspect of the game design process, and as a result virtually all expressive games come from a small number of game studios founded by experts in designing such games. We propose a game-design assistant that acts in a mixed-initiative fashion, helping the author understand the content of her design-in-progress, providing suggestions or automating the process where possible, and even offering the possibility for parts of the game to be dynamically generated at runtime in response to player interaction. We describe a prototype system that interactively helps authors define spaces of games in terms of common-sense constraints on their real-world references, provides support for them to understand and iteratively refine such spaces, and realizes specific games from the spaces as playable mobile-phone games in response to user input."
]
} |
1707.09661 | 2740516376 | ANGELINA is an automated game design system which has previously been built as a single software block which designs games from start to finish. In this paper we outline a roadmap for the development of a new version of ANGELINA, designed to iterate on games in different ways to produce a continuous creative process that will improve the quality of its work, but more importantly improve the perception of the software as being an independently creative piece of software. We provide an initial report of the system's structure here as well as results from the first working module of the system. | There are also a range of tools that, while not built as autonomous designers, are sufficiently close in nature that they might only be a few small changes from being such. Tools like @cite_10 use a lot of automation and self-analysis to provide support to a human user, or the @cite_7 which amplifies smaller creative inputs with embellishments and analysis, and @cite_11 , a similar tool targeting physics-based mobile games like . These tools are performing a lot of the design work going on in their domain, even if the project's intention is not to fully automate the task. Often, the need to support humans in any part of a creative task means the system must, by definition, be able to perform every part of the task on its own. Such systems are usually only lacking in some higher-level control and production, and are otherwise very close to automated game design tools. | {
"cite_N": [
"@cite_10",
"@cite_7",
"@cite_11"
],
"mid": [
"2111393845",
"2395774634",
"2228991311"
],
"abstract": [
"Tanagra is a prototype mixed-initiative design tool for 2D platformer level design, in which a human and computer can work together to produce a level. The human designer can place constraints on a continuously running level generator, in the form of exact geometry placement and manipulation of the level's pacing. The computer then fills in the rest of the level with geometry that guarantees playability, or informs the designer that there is no level that meets their requirements. This paper presents the design of Tanagra, a discussion of the editing operations it provides to the designer, and an evaluation of the expressivity of its generator.",
"This paper introduces the Sentient Sketchbook, a tool which supports a designer in the creation of game levels. Using map sketches to alleviate designer effort, the tool automates playability checks and evaluations and visualizes significant gameplay properties. This paper also introduces constrained novelty search via a two-population paradigm for generating, in real-time, alternatives to the author’s design and evaluates its potential against current approaches. The paper concludes with a small-scale user study in which industry experts interact with the Sentient Sketchbook to design game levels. Results demonstrate the tool’s potential and provide directions for its improvement.",
"We present a demonstration of Ropossum, an authoring tool for the generation and testing of levels of the physics-based game, Cut the Rope. Ropossum integrates many features: (1) automatic design of complete solvable content, (2) incorporation of designer's input through the creation of complete or partial designs, (3) automatic check for playability and (4) optimization of a given design based on playability. The system includes a physics engine to simulate the game and an evolutionary framework to evolve content as well as an AI reasoning agent to check for playability. The system is optimised to allow on-line feedback and realtime interaction."
]
} |
1707.09112 | 2973620064 | Matrix recovery is raised in many areas. In this paper, we build up a framework for almost everywhere matrix recovery which means to recover almost all the @math from @math where @math . We mainly focus on the following question: how many measurements are needed to recover almost all the matrices in @math ? For the case where both @math and @math are algebraic varieties, we use the tools from algebraic geometry to study the question and present some results to address it under many different settings. | In the context of matrix recovery, many conditions have already been presented under which @math has the @math -recovery property @cite_3 @cite_9 @cite_7 @cite_0 . In @cite_3 , it is proved that if @math and @math are Gaussian random matrices, then @math has the @math -recovery property with probability 1. In @cite_3 , Eldar, Needell and Plan conjecture that the measurement number @math is tight. In @cite_8 , Xu confirms the conjecture for the case @math and disproves it for @math . | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_0"
],
"mid": [
"2167077875",
"2963848703",
"",
"1968843188",
"1657130172"
],
"abstract": [
"In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered includes those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases from many technical fields such as sparse vectors (signal processing, statistics) and low-rank matrices (control, statistics), as well as several others including sums of a few permutation matrices (ranked elections, multiobject tracking), low-rank tensors (computer vision, neuroscience), orthogonal matrices (machine learning), and atomic measures (system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery, and this tradeoff is characterized via some examples. Thus this work extends the catalog of simple models (beyond sparse vectors and low-rank matrices) that can be recovered from limited linear information via tractable convex programming.",
"Abstract The paper presents several results that address a fundamental question in low-rank matrix recovery: how many measurements are needed to recover low-rank matrices? We begin by investigating the complex matrices case and show that 4 n r − 4 r 2 generic measurements are both necessary and sufficient for the recovery of rank-r matrices in C n × n . Thus, we confirm a conjecture which is raised by Eldar, Needell and Plan for the complex case. We next consider the real case and prove that the bound 4 n r − 4 r 2 is tight provided n = 2 k + r , k ∈ Z + . Motivated by Vinzant's work [19] , we construct 11 matrices in R 4 × 4 by computer random search and prove they define injective measurements on rank-1 matrices in R 4 × 4 . This disproves the conjecture raised by Eldar, Needell and Plan for the real case. Finally, we use the results in this paper to investigate the phase retrieval by projection and show fewer than 2 n − 1 orthogonal projections are possible for the recovery of x ∈ R n from the norm of them, which gives a negative answer for a question raised in [1] .",
"",
"Abstract Low-rank matrix recovery addresses the problem of recovering an unknown low-rank matrix from few linear measurements. There has been a large influx of literature deriving conditions under which certain tractable methods will succeed in recovery, demonstrating that m ⩾ C n r Gaussian measurements are often sufficient to recover any rank-r n × n matrix. In this paper we address the theoretical question of how many measurements are needed via any method whatsoever — tractable or not. We show that for a family of random measurement ensembles, m ⩾ 4 n r − 4 r 2 and m ⩾ 2 n r − r 2 + 1 measurements are sufficient to guarantee strong recovery and weak recovery, respectively, by rank minimization. These results give a benchmark to which we may compare the efficacy of tractable methods such as nuclear-norm minimization.",
"This paper establishes information-theoretic limits for estimating a finite-field low-rank matrix given random linear measurements of it. These linear measurements are obtained by taking inner products of the low-rank matrix with random sensing matrices. Necessary and sufficient conditions on the number of measurements required are provided. It is shown that these conditions are sharp and the minimum-rank decoder is asymptotically optimal. The reliability function of this decoder is also derived by appealing to de Caen's lower bound on the probability of a union. The sufficient condition also holds when the sensing matrices are sparse-a scenario that may be amenable to efficient decoding. More precisely, it is shown that if the n × n-sensing matrices contain, on average, Ω(nlog n) entries, the number of measurements required is the same as that when the sensing matrices are dense and contain entries drawn uniformly at random from the field. Analogies are drawn between the aforementioned results and rank-metric codes in the coding theory literature. In fact, we are also strongly motivated by understanding when minimum rank distance decoding of random rank-metric codes succeeds. To this end, we derive minimum distance properties of equiprobable and sparse rank-metric codes. These distance properties provide a precise geometric interpretation of the fact that the sparse ensemble requires as few measurements as the dense one."
]
} |
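The measurement counts quoted in the abstracts above (4nr − 4r² for strong recovery and 2nr − r² + 1 for weak recovery of rank-r n × n matrices) can be tabulated directly; a minimal sketch, with the example values of n and r chosen arbitrarily:

```python
# Measurement-count formulas quoted in the cited abstracts for recovering
# rank-r n x n matrices (n and r below are arbitrary example values).

def strong_bound(n, r):
    """Measurements sufficient for strong (uniform) recovery: 4nr - 4r^2."""
    return 4 * n * r - 4 * r * r

def weak_bound(n, r):
    """Measurements sufficient for weak (almost everywhere) recovery: 2nr - r^2 + 1."""
    return 2 * n * r - r * r + 1

n, r = 100, 5
m_strong = strong_bound(n, r)   # 1900
m_weak = weak_bound(n, r)       # 976
ambient = n * n                 # 10000 entries in the full matrix
print(m_strong, m_weak, ambient)
```

Both counts are far below the n² ambient dimension, which is the point of low-rank recovery: far fewer linear measurements than matrix entries suffice.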
1707.09112 | 2973620064 | Matrix recovery is raised in many areas. In this paper, we build up a framework for almost everywhere matrix recovery which means to recover almost all the @math from @math where @math . We mainly focus on the following question: how many measurements are needed to recover almost all the matrices in @math ? For the case where both @math and @math are algebraic varieties, we use the tools from algebraic geometry to study the question and present some results to address it under many different settings. | Under the setting of @math and @math with @math , @math has the almost everywhere @math -recovery property if and only if @math has the almost phase retrieval property. It is an active topic to present the smallest @math for which @math has the almost phase retrieval property @cite_18 @cite_19 @cite_2 @cite_14 . For the case where @math , it is known that @math is necessary and sufficient. For @math , it is known that @math generic measurements are sufficient for almost phase retrieval (see @cite_18 ). However, it is still unknown whether @math is tight. | {
"cite_N": [
"@cite_19",
"@cite_18",
"@cite_14",
"@cite_2"
],
"mid": [
"2964095152",
"2167850383",
"1543885334",
"1854343180"
],
"abstract": [
"Abstract In many applications, signals are measured according to a linear process, but the phases of these measurements are often unreliable or not available. To reconstruct the signal, one must perform a process known as phase retrieval. This paper focuses on completely determining signals with as few intensity measurements as possible, and on efficient phase retrieval algorithms from such measurements. For the case of complex M -dimensional signals, we construct a measurement ensemble of size 4 M − 4 which yields injective intensity measurements; this is conjectured to be the smallest such ensemble. For the case of real signals, we devise a theory of “almost” injective intensity measurements, and we characterize such ensembles. Later, we show that phase retrieval from M + 1 almost injective intensity measurements is NP -hard, indicating that computationally efficient phase retrieval must come at the price of measurement redundancy.",
"We will construct new classes of Parseval frames for a Hilbert space which allow signal reconstruction from the absolute value of the frame coefficients. As a consequence, signal reconstruction can be done without using phase or its estimation. This verifies a longstanding conjecture of the speech processing community.",
"Consider a scenario in which an unknown signal is transformed by a known linear operator, and then the pointwise absolute value of the unknown output function is reported. This scenario appears in several applications, and the goal is to recover the unknown signal – this is called phase retrieval. Phase retrieval has been a popular subject of research in the last few years, both in determining whether complete information is available with a given linear operator and in finding efficient and stable phase retrieval algorithms in the cases where complete information is available. Interestingly, there are a few ways to measure information completeness, and each way appears to be governed by a phase transition of sorts. This chapter will survey the state of the art with some of these phase transitions, and identify a few open problems for further research.",
"I construct a positive-operator-valued measure (POVM) which has 2d rank-1 elements and which is informationally complete for generic pure states in d dimensions, thus confirming a conjecture made by Flammia, Silberfarb, and Caves (e-print quant-ph 0404137). I show that if a rank-1 POVM is required to be informationally complete for all pure states in d dimensions, it must have at least 3d-2 elements. I also show that, in a POVM which is informationally complete for all pure states in d dimensions, for any vector there must be at least 2d-1 POVM elements which do not annihilate that vector."
]
} |
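The phaseless measurements discussed above, |⟨a_i, x⟩|, cannot distinguish x from e^{iθ}x, so any recovery is at best up to a global phase; a small stdlib-only check (the sizes, seed, and rotation angle are arbitrary):

```python
import cmath
import random

random.seed(0)
m, d = 9, 4   # number of measurements and signal dimension (illustrative)
A = [[complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(d)] for _ in range(m)]
x = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(d)]

def measure(v):
    # phaseless measurements |<a_i, v>| for each row a_i of A
    return [abs(sum(a * vj for a, vj in zip(row, v))) for row in A]

meas = measure(x)
x_rot = [cmath.exp(1j * 0.7) * xj for xj in x]   # global phase rotation of x
gap = max(abs(p - q) for p, q in zip(meas, measure(x_rot)))
print(gap)   # numerically zero: the rotation is invisible to the measurements
```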
1707.09315 | 2741803124 | In recent years, the networks of low-power devices have gained popularity. Typically these devices are wireless and interact to form large networks such as the Machine to Machine (M2M) networks, Internet of Things (IoT), Wearable Computing, and Wireless Sensor Networks. The collaboration among these devices is a key to achieving the full potential of these networks. A major problem in this field is to guarantee robust communication between elements while keeping the whole network energy efficient. In this paper, we introduce an extended and improved emergent broadcast slot (EBS) scheme, which facilitates collaboration for robust communication and is energy efficient. In the EBS, nodes communication unit remains in sleeping mode and are awake just to communicate. The EBS scheme is fully decentralized, that is, nodes coordinate their wake-up window in partially overlapped manner within each duty-cycle to avoid message collisions. We show the theoretical convergence behavior of the scheme, which is confirmed through real test-bed experimentation. | In general, low-power nodes consist of relatively inexpensive hardware components, e.g., clocks that typically drift. Therefore, network-wide synchronization is required, which increases message overheads as well as temporal and spatial instability in the network @cite_18 @cite_17 . Using a single central device to enforce synchronization by dictating the time is not ideal, as it imposes extra message overhead @cite_10 @cite_48 . Moreover, the delay of the synchronization message grows with every hop. An additional challenge is that lossy wireless links mean synchronization packets may be dropped and have to be sent multiple times. | {
"cite_N": [
"@cite_48",
"@cite_18",
"@cite_10",
"@cite_17"
],
"mid": [
"2122721472",
"2143626467",
"",
"2606426586"
],
"abstract": [
"A cluster based hierarchical wireless sensor network architecture is proposed to facilitate more than one application sharing the whole or a part of a wireless sensor network, where each application may have its own network, processing requirements and protocols. Such a network is divided into clusters and the clusters are organized hierarchically. For synchronizing the CHWSN there is the requirement for a cluster based hierarchical time synchronization algorithm. None of the already existing time synchronization algorithms satisfy the needs of time synchronization in our CHWSN architecture. Thus the existing time synchronization algorithms TPSN (time synchronization protocol for sensor networks) and FTSP (Flooding Time Synchronization Protocol) are modified to fulfill the needs of time synchronization in CHWSN architecture and developed the cluster based hierarchical, flooding time synchronization algorithm (CBH-FTS). It is a hybrid algorithm, where instead of flooding the synchronization messages to neighbor's node, the root node multicasts the time-sync message to selected cluster-heads using the relevant semantics. Hierarchy of cluster-heads could transmit the synchronization messages down the hierarchy of cluster-heads thus synchronizing only the required part of the network associated with an application. The synchronization could be a result of a decision at root node or could be a result of a request for synchronization from a node to a cluster-head.The CBH-FTS Protocol is semantic driven and covers multiple levels in hierarchy.",
"Recently, a time synchronization algorithm called pairwise broadcast synchronization (PBS) is proposed. With PBS, a sensor can be synchronized by overhearing synchronization packet exchange among its neighbouring sensors without sending out any packet itself. In an one-hop sensor network where every node is a neighbour of each other, a single PBS message exchange between two nodes would facilitate all nodes to synchronize. However, in a multi-hop sensor network, PBS message exchanges in several node pairs are needed in order to achieve network-wide synchronization. To reduce the number of message exchanges, these node pairs should be carefully chosen. In this paper, we investigate how to choose these ldquoappropriaterdquo sensors aiming at reducing the number of PBS message exchanges while allowing every node to synchronize. This selection problem is shown to be NP-complete, for which the greedy heuristic is a good polynomial-time approximation algorithm. Nevertheless, a centralized algorithm is not suitable for wireless sensor networks. Therefore, we develop a distributed heuristic algorithm allowing a sensor to determine how to synchronize itself based on its neighbourhood information only. The protocol is tested through extensive simulations. The simulation results reveal that the proposed protocol gives consistent performance under different conditions with its performance comparable to that of the centralized algorithm.",
"",
"In this paper, we review advanced synchronization techniques for the internet of things. Our study can be directly applied to the single-carrier IEEE 802.15.4 and IEEE 802.15.6 standards. In particular we display the advantages of using a Code-Aided approach and a Bayesian packet-oriented approach. The performances of several phase-locked loop techniques are compared to different Cramer Rao Bounds so that one can see the advantage of respectively the Bayesian approach compared to the on-line approach and of the Code Aided technique compared to the Non Data Aided Data Aided approach."
]
} |
1707.09315 | 2741803124 | In recent years, the networks of low-power devices have gained popularity. Typically these devices are wireless and interact to form large networks such as the Machine to Machine (M2M) networks, Internet of Things (IoT), Wearable Computing, and Wireless Sensor Networks. The collaboration among these devices is a key to achieving the full potential of these networks. A major problem in this field is to guarantee robust communication between elements while keeping the whole network energy efficient. In this paper, we introduce an extended and improved emergent broadcast slot (EBS) scheme, which facilitates collaboration for robust communication and is energy efficient. In the EBS, nodes communication unit remains in sleeping mode and are awake just to communicate. The EBS scheme is fully decentralized, that is, nodes coordinate their wake-up window in partially overlapped manner within each duty-cycle to avoid message collisions. We show the theoretical convergence behavior of the scheme, which is confirmed through real test-bed experimentation. | Both distributed coordination and local synchronization are used to increase stability and facilitate broadcast message transfer @cite_45 @cite_32 @cite_4 . Examples of such schemes include gradient-based @cite_37 and bio-inspired algorithms @cite_5 . Typically, bio-inspired algorithms have higher overheads than gradient-based algorithms @cite_41 @cite_42 @cite_0 . However, gradient-based schemes can be rigid and unable to cope with dynamics such as node failures @cite_13 . On the other hand, the emergent nature of firefly-based synchronization can cope with failure. Additionally, with firefly-based synchronization, a failure in another part of the network does not affect the local cluster. | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_41",
"@cite_42",
"@cite_32",
"@cite_0",
"@cite_45",
"@cite_5",
"@cite_13"
],
"mid": [
"2171436899",
"2142332462",
"",
"1116550701",
"130696423",
"2293123809",
"156208468",
"",
"1982567271"
],
"abstract": [
"Accurately synchronized clocks are crucial for many applications in sensor networks. Existing time synchronization algorithms provide on average good synchronization between arbitrary nodes, however, as we show in this paper, close-by nodes in a network may be synchronized poorly. We propose the Gradient Time Synchronization Protocol (GTSP) which is designed to provide accurately synchronized clocks between neighbors. GTSP works in a completely decentralized fashion: Every node periodically broadcasts its time information. Synchronization messages received from direct neighbors are used to calibrate the logical clock. The algorithm requires neither a tree topology nor a reference node, which makes it robust against link and node failures. The protocol is implemented on the Mica2 platform using TinyOS. We present an evaluation of GTSP on a 20-node testbed setup and simulations on larger network topologies.",
"Industrial wireless sensor network (IWSN) is a key enabling technology for the Internet-of-things (IoT). IWSN acts as one of the fundamental elements of the IoT infrastructure to bridge the physical sensors and actuators in field and backbone systems in the Internet. For deterministic performances, all mainstream IWSN standards utilize the slotted media access control (MAC) where the communication is allocated based on the superframe that comprises a number of slots in either contention-based access or contention-free access modes. In this paper, the planning of the superframe structure of the slotted MAC is investigated by two means: 1) a mathematical model of the MAC access latency based on the queue theory; and 2) an easy-to-use software tool based on packet-level simulation. The mathematical model gives an overall estimation of the average MAC access latency of the whole network. The software tool gives the exact latency of each packet and then can derive the optimal superframe structure of the network. The two means are validated correspondingly. With the methods proposed in this paper, IWSN designers can minimize the MAC access latency while satisfying the requirements at different generating rates of packet, number of nodes in the network, and packet buffer length of each node.",
"",
"People have always tried to understand natural phenomena. In computer science natural phenomena are mostly used as a source of inspiration for solving various problems in distributed systems such as optimization, clustering, and data processing. In this paper we will give an overview of research in field of computer science where fireflies in nature are used as role models for time synchronization. We will compare two models of oscillators that explain firefly synchronization along with other phenomena of synchrony in nature (e.g., synchronization of pacemaker cells of the heart and synchronization of neuron networks of the circadian pacemaker). Afterwards, we will present Mirollo and Strogatz's pulse coupled oscillator model together with its limitations. As discussed by the authors of the model, this model lacks of explanation what happens when oscillators are nonidentical. It also does not support mobile and faulty oscillators. Finally, it does not take into consideration that in communication among oscillators there are communication delays. Since these limitations prevent Mirollo and Strogatz's model to be used in real-world environments (such as Machine-to-Machine systems), we will sum up related work in which scholars investigated how to modify the model in order for it to be applicable in distributed systems. However, one has to bear in mind that there are usually large differences between mathematical models in theory and their implementation in practice. Therefore, we give an overview of both mathematical models and mechanisms in distributed systems that were designed after them.",
"Distributed optimization algorithms are highly attractive for solving big data problems. In particular, many machine learning problems can be formulated as the global consensus optimization problem, which can then be solved in a distributed manner by the alternating direction method of multipliers (ADMM) algorithm. However, this suffers from the straggler problem as its updates have to be synchronized. In this paper, we propose an asynchronous ADMM algorithm by using two conditions to control the asynchrony: partial barrier and bounded delay. The proposed algorithm has a simple structure and good convergence guarantees (its convergence rate can be reduced to that of its synchronous counterpart). Experiments on different distributed ADMM applications show that asynchrony reduces the time on network waiting, and achieves faster convergence than its synchronous counterpart in terms of the wall clock time.",
"Fraglets represent an execution model for communication protocols that resembles the chemical reactions in living organisms. The strong connection between their way of transforming and reacting and formal rewriting systems makes a fraglet program amenable to automatic verification. Grounded on past work, this paper investigates feasibility of adopting fraglets as model for specifying security protocols and analysing their properties. In particular, we give concrete sample analyses over a secure RFID protocol, showing evolution of the protocol run as chemical dynamics and simulating an adversary trying to circumvent the intended steps. The results of our analysis confirm the effectiveness of the cryptofraglets framework for the model and analysis of security properties and eventually show its potential to identify and uncover protocol flaws.",
"Traditionally, synchronization barriers ensure that no cooperating process advances beyond a specified point until all processes have reached that point. In heterogeneous large-scale distributed computing environments, with unreliable network links and machines that may become overloaded and unresponsive, traditional barrier semantics are too strict to be effective for a range of emerging applications. In this paper, we explore several relaxations, and introduce a partial barrier, a synchronization primitive designed to enhance liveness in loosely coupled networked systems. Partial barriers are robust to variable network conditions; rather than attempting to hide the asynchrony inherent to wide-area settings, they enable appropriate application-level responses. We evaluate the improved performance of partial barriers by integrating them into three publicly available distributed applications running across PlanetLab. Further, we show how partial barriers simplify a re-implementation of MapReduce that targets wide-area environments.",
"",
"Decentralized synchronization requires cooperation among network participants so that all nodes agree on a common reference timing. What happens if one node does not follow local synchronization rules and randomly transmits? This paper studies the resilience of two classes of decentralized slot synchronization against random disturbances, and quantifies the impact of the random behavior. It is shown that the coupling strength is a key factor for resilience, and that the synchronization approach based on the theory of coupled oscillators generally behaves better and is more robust than the approach that updates clocks based on the average neighboring timings."
]
} |
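Gradient-style schemes such as GTSP above have every node repeatedly pull its logical clock toward the times broadcast by its neighbours; the sketch below shows only that averaging idea on a ring (topology, offsets, and round count are illustrative assumptions, not the GTSP protocol itself):

```python
import random

random.seed(0)
n = 8
offsets = [random.uniform(0.0, 10.0) for _ in range(n)]   # initial clock offsets (ms)
mean0 = sum(offsets) / n

# each round, every node replaces its offset with the average of its own
# value and its two ring neighbours' values (all nodes update in lockstep)
for _ in range(200):
    offsets = [(offsets[i - 1] + offsets[i] + offsets[(i + 1) % n]) / 3.0
               for i in range(n)]

spread = max(offsets) - min(offsets)
print(spread)   # shrinks toward zero as the clocks reach consensus
```

The equal-weight average preserves the mean offset, so the nodes converge to a common time rather than drifting toward any single master clock.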
1707.09315 | 2741803124 | In recent years, the networks of low-power devices have gained popularity. Typically these devices are wireless and interact to form large networks such as the Machine to Machine (M2M) networks, Internet of Things (IoT), Wearable Computing, and Wireless Sensor Networks. The collaboration among these devices is a key to achieving the full potential of these networks. A major problem in this field is to guarantee robust communication between elements while keeping the whole network energy efficient. In this paper, we introduce an extended and improved emergent broadcast slot (EBS) scheme, which facilitates collaboration for robust communication and is energy efficient. In the EBS, nodes communication unit remains in sleeping mode and are awake just to communicate. The EBS scheme is fully decentralized, that is, nodes coordinate their wake-up window in partially overlapped manner within each duty-cycle to avoid message collisions. We show the theoretical convergence behavior of the scheme, which is confirmed through real test-bed experimentation. | The pulse-coupled oscillators (PCO) model is used to capture the synchronization behavior observed in fireflies flashing in unison @cite_20 . In this model, each firefly is described as a periodic oscillator running at its natural frequency -- the frequency defines the number of times the firefly flashes per unit time, and the period defines the time interval between two consecutive flashes. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2154953441"
],
"abstract": [
"A simple model for synchronous firing of biological oscillators based on Peskin's model of the cardiac pacemaker (Mathematical aspects of heart physiology, Courant Institute of Mathematical Sciences, New York University, New York, 1975, pp. 268-278) is studied. The model consists of a population of identical integrate-and-fire oscillators. The coupling between oscillators is pulsatile: when a given oscillator fires, it pulls the others up by a fixed amount, or brings them to the firing threshold, whichever is less. The main result is that for almost all initial conditions, the population evolves to a state in which all the oscillators are firing synchronously. The relationship between the model and real communities of biological oscillators is discussed; examples include populations of synchronously flashing fireflies, crickets that chirp in unison, electrically synchronous pacemaker cells, and groups of women whose menstrual cycles become mutually synchronized."
]
} |
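The pulse-coupled oscillator model above can be sketched with the Mirollo–Strogatz rules: phases advance uniformly, and when an oscillator reaches threshold it fires, kicking every other oscillator upward through a concave state map f (the choice f(φ) = √φ, the coupling strength, and the population size are all illustrative):

```python
import random

def f(phi):        # concave state map, f(0) = 0, f(1) = 1 (Mirollo-Strogatz condition)
    return phi ** 0.5

def f_inv(x):
    return x * x

def simulate(n=5, eps=0.2, events=2000, seed=0):
    random.seed(seed)
    phases = [random.random() for _ in range(n)]
    for _ in range(events):
        dt = 1.0 - max(phases)                 # advance until the leader fires
        phases = [p + dt for p in phases]
        fired = [p >= 1.0 - 1e-12 for p in phases]
        pending = sum(fired)                   # kicks not yet delivered
        while pending:                         # chain reaction of absorptions
            newly = 0
            for i in range(n):
                if not fired[i]:
                    phases[i] = f_inv(min(1.0, f(phases[i]) + pending * eps))
                    if phases[i] >= 1.0 - 1e-12:
                        fired[i] = True
                        newly += 1
            pending = newly
        phases = [0.0 if fired[i] else phases[i] for i in range(n)]  # reset together
    return phases

phases = simulate()
groups = {round(p, 9) for p in phases}
print(len(groups))
```

With a strictly concave f, almost every random initial condition ends with all oscillators firing in unison, which is the emergent synchrony that firefly-inspired schemes build on.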
1707.09102 | 2742056655 | When approaching a novel visual recognition problem in a specialized image domain, a common strategy is to start with a pre-trained deep neural network and fine-tune it to the specialized domain. If the target domain covers a smaller visual space than the source domain used for pre-training (e.g. ImageNet), the fine-tuned network is likely to be over-parameterized. However, applying network pruning as a post-processing step to reduce the memory requirements has drawbacks: fine-tuning and pruning are performed independently; pruning parameters are set once and cannot adapt over time; and the highly parameterized nature of state-of-the-art pruning methods make it prohibitive to manually search the pruning parameter space for deep networks, leading to coarse approximations. We propose a principled method for jointly fine-tuning and compressing a pre-trained convolutional network that overcomes these limitations. Experiments on two specialized image domains (remote sensing images and describable textures) demonstrate the validity of the proposed approach. | Network pruning refers to the process of reducing the number of weights (connections) in a pre-trained neural network. The motivation behind this process is to make neural networks more compact and energy efficient for operation on resource-constrained devices such as mobile phones. Network pruning can also improve network generalization by reducing overfitting. The earliest methods @cite_14 @cite_2 prune weights based on the second-order derivatives of the network loss. Data-free parameter pruning @cite_6 provides a data-independent method for discovering and removing entire neurons from the network. Deep compression @cite_7 integrates the complementary techniques of weight pruning, scalar quantization to encode the remaining weights with fewer bits, and Huffman coding. Dynamic network surgery @cite_38 iteratively prunes and splices network weights.
The novel splicing operation allows previously pruned weights to be reintroduced. Weights are pruned or spliced based on thresholding their absolute value. All weights, including pruned ones, are updated during backpropagation. | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_7",
"@cite_6",
"@cite_2"
],
"mid": [
"",
"2125389748",
"2119144962",
"992687842",
"2114766824"
],
"abstract": [
"",
"We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and in some cases enable rule extraction. Our method, Optimal Brain Surgeon (OBS), is significantly better than magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Solla, 1990], which often remove the wrong weights. OBS permits the pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H^-1 from training data and structural information of the net. OBS permits a 90%, a 76%, and a 62% reduction in weights over backpropagation with weight decay on three benchmark MONK's problems [, 1991]. Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case. Finally, whereas Sejnowski and Rosenberg [1987] used 18,000 weights in their NETtalk network, we used OBS to prune a network to just 1560 weights, yielding better generalization.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.",
"Deep Neural nets (NNs) with millions of parameters are at the heart of many state-of-the-art computer vision systems today. However, recent works have shown that much smaller models can achieve similar levels of performance. In this work, we address the problem of pruning parameters in a trained NN model. Instead of removing individual weights one at a time as done in previous works, we remove one neuron at a time. We show how similar neurons are redundant, and propose a systematic way to remove them. Our experiments in pruning the densely connected layers show that we can remove up to 85% of the total parameters in an MNIST-trained network, and about 35% for AlexNet without significantly affecting performance. Our method can be applied on top of most networks with a fully connected layer to give a smaller network.",
"We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application."
]
} |
1707.09102 | 2742056655 | When approaching a novel visual recognition problem in a specialized image domain, a common strategy is to start with a pre-trained deep neural network and fine-tune it to the specialized domain. If the target domain covers a smaller visual space than the source domain used for pre-training (e.g. ImageNet), the fine-tuned network is likely to be over-parameterized. However, applying network pruning as a post-processing step to reduce the memory requirements has drawbacks: fine-tuning and pruning are performed independently; pruning parameters are set once and cannot adapt over time; and the highly parameterized nature of state-of-the-art pruning methods make it prohibitive to manually search the pruning parameter space for deep networks, leading to coarse approximations. We propose a principled method for jointly fine-tuning and compressing a pre-trained convolutional network that overcomes these limitations. Experiments on two specialized image domains (remote sensing images and describable textures) demonstrate the validity of the proposed approach. | Network pruning is one way to approach neural network compression. Other effective strategies include weight binarization @cite_32 @cite_21 , architectural improvements @cite_25 , weight quantization @cite_7 , sparsity constraints @cite_27 @cite_22 , guided knowledge distillation @cite_37 @cite_29 , and replacement of fully connected layers with structured projections @cite_3 @cite_17 @cite_33 . Many of these network compression methods can train compact neural networks from scratch, or compress pre-trained networks for testing in the same domain. However, since they assume particular types of weights, mimic networks trained in the same domain, or modify the network structure, most of these methods are not easily extended to the task of fine-tuning a pre-trained network to a specialized domain. | {
"cite_N": [
"@cite_37",
"@cite_22",
"@cite_7",
"@cite_33",
"@cite_29",
"@cite_21",
"@cite_32",
"@cite_3",
"@cite_27",
"@cite_25",
"@cite_17"
],
"mid": [
"1821462560",
"2520760693",
"2119144962",
"2949825606",
"1690739335",
"",
"2963114950",
"2949560654",
"566555209",
"2279098554",
""
],
"abstract": [
"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.",
"To attain a favorable performance on large-scale datasets, convolutional neural networks (CNNs) are usually designed to have very high capacity involving millions of parameters. In this work, we aim at optimizing the number of neurons in a network, thus the number of parameters. We show that, by incorporating sparse constraints into the objective function, it is possible to decimate the number of neurons during the training stage. As a result, the number of parameters and the memory footprint of the neural network are also reduced, which is also desirable at the test time. We evaluated our method on several well-known CNN structures including AlexNet, and VGG over different datasets including ImageNet. Extensive experimental results demonstrate that our method leads to compact networks. Taking the first fully connected layer as an example, our compact CNN contains only 30% of the original neurons without any degradation of the top-1 classification accuracy.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.",
"The fully connected layers of a deep convolutional neural network typically contain over 90% of the network parameters, and consume the majority of the memory required to store the network parameters. Reducing the number of parameters while preserving essentially the same predictive performance is critically important for operating deep neural networks in memory constrained environments such as GPUs or embedded devices. In this paper we show how kernel methods, in particular a single Fastfood layer, can be used to replace all fully connected layers in a deep convolutional neural network. This novel Fastfood layer is also end-to-end trainable in conjunction with convolutional layers, allowing us to combine them into a new architecture, named deep fried convolutional networks, which substantially reduces the memory footprint of convolutional networks trained on MNIST and ImageNet with no drop in predictive performance.",
"While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.",
"",
"Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.",
"We explore the redundancy of parameters in deep neural networks by replacing the conventional linear projection in fully-connected layers with the circulant projection. The circulant structure substantially reduces memory footprint and enables the use of the Fast Fourier Transform to speed up the computation. Considering a fully-connected neural network layer with d input nodes and d output nodes, this method improves the time complexity from O(d^2) to O(d log d) and space complexity from O(d^2) to O(d). The space savings are particularly important for modern deep convolutional neural network architectures, where fully-connected layers typically contain more than 90% of the network parameters. We further show that the gradient computation and optimization of the circulant projections can be performed very efficiently. Our experiments on three standard datasets show that the proposed approach achieves this significant gain in storage and efficiency with minimal increase in error rate compared to neural networks with unstructured projections.",
"We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speed up convolutional layers in ConvNets. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion. After such pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. We investigate different ways to add group-wise pruning to the learning process, and show that severalfold speedups of convolutional layers can be attained using group-sparsity regularizers. Our approach can adjust the shapes of the receptive fields in the convolutional layers, and even prune excessive feature maps from ConvNets, all in a data-driven way.",
"Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).",
""
]
} |
1707.09068 | 2742044963 | Tartan (TRT), a hardware accelerator for inference with Deep Neural Networks (DNNs), is presented and evaluated on Convolutional Neural Networks. TRT exploits the variable per layer precision requirements of DNNs to deliver execution time that is proportional to the precision p in bits used per layer for convolutional and fully-connected layers. Prior art has demonstrated an accelerator with the same execution performance only for convolutional layers. Experiments on image classification CNNs show that on average across all networks studied, TRT outperforms a state-of-the-art bit-parallel accelerator by 1.90x without any loss in accuracy while it is 1.17x more energy efficient. TRT requires no network retraining while it enables trading off accuracy for additional improvements in execution performance and energy efficiency. For example, if a 1% relative loss in accuracy is acceptable, TRT is on average 2.04x faster and 1.25x more energy efficient than the bit-parallel accelerator. This revision includes post-layout results and a better configuration that processes 2 bits at a time, resulting in better efficiency and lower area overhead. | In general, hardware acceleration for DNNs has recently progressed in two directions: 1) considering more general purpose accelerators that can support additional machine learning algorithms, and 2) considering further improvements primarily for convolutional neural networks and the two layer types most dominant in terms of execution time: convolutional and fully-connected. In the first category there are accelerators such as Cambricon @cite_8 and Cambricon-X @cite_21 . While targeting support for more machine learning algorithms is desirable, work on further optimizing performance for specific algorithms such as CNNs is valuable and needs to be pursued as it will affect future iterations of such general purpose accelerators. | {
"cite_N": [
"@cite_21",
"@cite_8"
],
"mid": [
"2565851976",
"2515287984"
],
"abstract": [
"Neural networks (NNs) have been demonstrated to be useful in a broad range of applications such as image recognition, automatic translation and advertisement recommendation. State-of-the-art NNs are known to be both computationally and memory intensive, due to the ever-increasing deep structure, i.e., multiple layers with massive neurons and connections (i.e., synapses). Sparse neural networks have emerged as an effective solution to reduce the amount of computation and memory required. Though existing NN accelerators are able to efficiently process dense and regular networks, they cannot benefit from the reduction of synaptic weights. In this paper, we propose a novel accelerator, Cambricon-X, to exploit the sparsity and irregularity of NN models for increased efficiency. The proposed accelerator features a PE-based architecture consisting of multiple Processing Elements (PE). An Indexing Module (IM) efficiently selects and transfers needed neurons to connected PEs with reduced bandwidth requirement, while each PE stores irregular and compressed synapses for local computation in an asynchronous fashion. With 16 PEs, our accelerator is able to achieve at most 544 GOP/s in a small form factor (6.38 mm^2 and 954 mW at 65 nm). Experimental results over a number of representative sparse networks show that our accelerator achieves, on average, 7.23x speedup and 6.43x energy saving against the state-of-the-art NN accelerator.",
"Neural Networks (NN) are a family of models for a broad range of emerging machine learning and pattern recognition applications. NN techniques are conventionally executed on general-purpose processors (such as CPU and GPGPU), which are usually not energy-efficient since they invest excessive hardware resources to flexibly support various workloads. Consequently, application-specific hardware accelerators for neural networks have been proposed recently to improve the energy-efficiency. However, such accelerators were designed for a small set of NN techniques sharing similar computational patterns, and they adopt complex and informative instructions (control signals) directly corresponding to high-level functional blocks of an NN (such as layers), or even an NN as a whole. Although straightforward and easy-to-implement for a limited set of similar NN techniques, the lack of agility in the instruction set prevents such accelerator designs from supporting a variety of different NN techniques with sufficient flexibility and efficiency. In this paper, we propose a novel domain-specific Instruction Set Architecture (ISA) for NN accelerators, called Cambricon, which is a load-store architecture that integrates scalar, vector, matrix, logical, data transfer, and control instructions, based on a comprehensive analysis of existing NN techniques. Our evaluation over a total of ten representative yet distinct NN techniques has demonstrated that Cambricon exhibits strong descriptive capacity over a broad range of NN techniques, and provides higher code density than general-purpose ISAs such as x86, MIPS, and GPGPU. Compared to the latest state-of-the-art NN accelerator design DaDianNao [5] (which can only accommodate 3 types of NN techniques), our Cambricon-based accelerator prototype implemented in TSMC 65nm technology incurs only negligible latency/power/area overheads, with a versatile coverage of 10 different NN benchmarks."
]
} |
1707.09068 | 2742044963 | Tartan (TRT), a hardware accelerator for inference with Deep Neural Networks (DNNs), is presented and evaluated on Convolutional Neural Networks. TRT exploits the variable per layer precision requirements of DNNs to deliver execution time that is proportional to the precision p in bits used per layer for convolutional and fully-connected layers. Prior art has demonstrated an accelerator with the same execution performance only for convolutional layers. Experiments on image classification CNNs show that on average across all networks studied, TRT outperforms a state-of-the-art bit-parallel accelerator by 1.90x without any loss in accuracy while it is 1.17x more energy efficient. TRT requires no network retraining while it enables trading off accuracy for additional improvements in execution performance and energy efficiency. For example, if a 1% relative loss in accuracy is acceptable, TRT is on average 2.04x faster and 1.25x more energy efficient than the bit-parallel accelerator. This revision includes post-layout results and a better configuration that processes 2 bits at a time, resulting in better efficiency and lower area overhead. | TRT is closely related to Stripes (STR) @cite_13 @cite_14 , whose execution time scales with precision but only for CVLs. STR does not improve performance for FCLs. TRT improves upon STR by enabling: 1) performance improvements for FCLs, and 2) slicing the activation computation across multiple SIPs, thus preventing under-utilization for layers with fewer than 4K outputs. Pragmatic uses a similar-in-spirit organization to TRT but its performance on CVLs depends only on the number of activation bits that are 1 @cite_2 . It should be possible to apply the TRT extensions to Pragmatic; however, performance in FCLs will still be dictated by weight precision. The area and energy overheads would need to be amortized by a commensurate performance improvement, necessitating a dedicated evaluation study. | {
"cite_N": [
"@cite_14",
"@cite_13",
"@cite_2"
],
"mid": [
"",
"2563587242",
"2952901036"
],
"abstract": [
"",
"Motivated by the variance in the numerical precision requirements of Deep Neural Networks (DNNs) [1], [2], Stripes (STR), a hardware accelerator is presented whose execution time scales almost proportionally with the length of the numerical representation used. STR relies on bit-serial compute units and on the parallelism that is naturally present within DNNs to improve performance and energy with no accuracy loss. In addition, STR provides a new degree of adaptivity enabling on-the-fly trade-offs among accuracy, performance, and energy. Experimental measurements over a set of DNNs for image classification show that STR improves performance over a state-of-the-art accelerator [3] from 1.30x to 4.51x and by 1.92x on average with no accuracy loss. STR is 57% more energy efficient than the baseline at a cost of 32% additional area. Additionally, by enabling configurable, per-layer and per-bit precision control, STR allows the user to trade accuracy for further speedup and energy efficiency.",
"We quantify a source of ineffectual computations when processing the multiplications of the convolutional layers in Deep Neural Networks (DNNs) and propose Pragmatic (PRA), an architecture that exploits it improving performance and energy efficiency. The source of these ineffectual computations is best understood in the context of conventional multipliers which generate internally multiple terms, that is, products of the multiplicand and powers of two, which added together produce the final product [1]. At runtime, many of these terms are zero as they are generated when the multiplicand is combined with the zero-bits of the multiplicator. While conventional bit-parallel multipliers calculate all terms in parallel to reduce individual product latency, PRA calculates only the non-zero terms using a) on-the-fly conversion of the multiplicator representation into an explicit list of powers of two, and b) hybrid bit-parallel multiplicand bit-serial multiplicator processing units. PRA exploits two sources of ineffectual computations: 1) the aforementioned zero product terms which are the result of the lack of explicitness in the multiplicator representation, and 2) the excess in the representation precision used for both multiplicands and multiplicators, e.g., [2]. Measurements demonstrate that for the convolutional layers, a straightforward variant of PRA improves performance by 2.6x over the DaDianNao (DaDN) accelerator [3] and by 1.4x over STR [4]. Similarly, PRA improves energy efficiency by 28% and 10% on average compared to DaDN and STR. An improved cross-lane synchronization scheme boosts performance improvements to 3.1x over DaDN. Finally, Pragmatic benefits persist even with an 8-bit quantized representation [5]."
]
} |
1707.09068 | 2742044963 | Tartan (TRT), a hardware accelerator for inference with Deep Neural Networks (DNNs), is presented and evaluated on Convolutional Neural Networks. TRT exploits the variable per layer precision requirements of DNNs to deliver execution time that is proportional to the precision p in bits used per layer for convolutional and fully-connected layers. Prior art has demonstrated an accelerator with the same execution performance only for convolutional layers. Experiments on image classification CNNs show that on average across all networks studied, TRT outperforms a state-of-the-art bit-parallel accelerator by 1.90x without any loss in accuracy while it is 1.17x more energy efficient. TRT requires no network retraining while it enables trading off accuracy for additional improvements in execution performance and energy efficiency. For example, if a 1% relative loss in accuracy is acceptable, TRT is on average 2.04x faster and 1.25x more energy efficient than the bit-parallel accelerator. This revision includes post-layout results and a better configuration that processes 2 bits at a time, resulting in better efficiency and lower area overhead. | The Efficient Inference Engine (EIE) uses synapse pruning, weight compression, zero activation elimination, and network retraining to drastically reduce the amount of computation and data communication when processing fully-connected layers @cite_15 . An appropriately configured EIE will outperform TRT for FCLs, provided that the network is pruned and retrained. However, the two approaches attack a different component of FCL processing and there should be synergy between them. Specifically, EIE currently does not exploit the per layer precision variability of DNNs and relies on retraining the network. It would be interesting to study how EIE would benefit from a TRT-like compute engine where EIE's data compression and pruning is used to create vectors of weights and activations to be processed in parallel.
EIE uses single-lane units whereas TRT uses a coarser-grain lane arrangement and thus would be prone to more imbalance. A middle ground may be able to offer some performance improvement while compensating for cross-lane imbalance. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2285660444"
],
"abstract": [
"State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations, and dominates the required power. Previously proposed 'Deep Compression' makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE 120× energy saving; exploiting sparsity saves 10×; weight sharing gives 8×; skipping zero activations from ReLU saves another 3×. Evaluated on nine DNN benchmarks, EIE is 189× and 13× faster when compared to CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS working directly on a compressed network, corresponding to 3 TOPS on an uncompressed network, and processes FC layers of AlexNet at 1.88×10^4 frames/sec with a power dissipation of only 600mW. It is 24,000× and 3,400× more energy efficient than a CPU and GPU respectively. Compared with DaDianNao, EIE has 2.9×, 19× and 3× better throughput, energy efficiency and area efficiency."
]
} |
1707.08608 | 2888689907 | Practitioners apply neural networks to increasingly complex problems in natural language processing, such as syntactic parsing and semantic role labeling that have rich output structures. Many such structured-prediction problems require deterministic constraints on the output values; for example, in sequence-to-sequence syntactic parsing, we require that the sequential outputs encode valid trees. While hidden units might capture such properties, the network is not always able to learn such constraints from the training data alone, and practitioners must then resort to post-processing. In this paper, we present an inference method for neural networks that enforces deterministic constraints on outputs without performing rule-based post-processing or expensive discrete search. Instead, in the spirit of gradient-based training, we enforce constraints with gradient-based inference (GBI): for each input at test-time, we nudge continuous model weights until the network's unconstrained inference procedure generates an output that satisfies the constraints. We study the efficacy of GBI on three tasks with hard constraints: semantic role labeling, syntactic parsing, and sequence transduction. In each case, the algorithm not only satisfies constraints but improves accuracy, even when the underlying network is state-of-the-art. | Finally, as previously mentioned, our method highly resembles dual decomposition and more generally Lagrangian relaxation for structured prediction @cite_14 @cite_1 @cite_0 . In such techniques, it is assumed that a computationally efficient inference algorithm can maximize over a superset of the feasible region (this assumption parallels our case because unconstrained inference in the neural network is efficient, but might violate constraints). Then, the method employs gradient descent to concentrate this superset onto the feasible region. 
However, these techniques are not directly applicable to our non-linear problem with global constraints. | {
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_1"
],
"mid": [
"2132678463",
"",
"1812474519"
],
"abstract": [
"Dual decomposition, and more generally Lagrangian relaxation, is a classical method for combinatorial optimization; it has recently been applied to several inference problems in natural language processing (NLP). This tutorial gives an overview of the technique. We describe example algorithms, describe formal guarantees for the method, and describe practical issues in implementing the algorithms. While our examples are predominantly drawn from the NLP literature, the material should be of general relevance to inference problems in machine learning. A central theme of this tutorial is that Lagrangian relaxation is naturally applied in conjunction with a broad class of combinatorial algorithms, allowing inference in models that go significantly beyond previous work on Lagrangian relaxation for inference in graphical models.",
"",
"This paper introduces dual decomposition as a framework for deriving inference algorithms for NLP problems. The approach relies on standard dynamic-programming algorithms as oracle solvers for sub-problems, together with a simple method for forcing agreement between the different oracles. The approach provably solves a linear programming (LP) relaxation of the global inference problem. It leads to algorithms that are simple, in that they use existing decoding algorithms; efficient, that they avoid exact algorithms for the full model; and often exact, in that empirically they often recover the correct solution in spite of using an LP relaxation. We give experimental results on two problems: 1) the combination of two lexicalized parsing models; and 2) the combination of a lexicalized parsing model and a trigram part-of-speech tagger."
]
} |
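As a rough illustration of the gradient-based inference (GBI) idea in the entry above, the numpy sketch below nudges the weights of a toy linear "network" at test time until its unconstrained output satisfies a hard constraint. The linear model, the sum-to-one constraint, and the step size are all illustrative stand-ins, not the paper's actual models or tasks:

```python
import numpy as np

def gbi(W, x, lr=0.05, max_steps=1000, tol=1e-8):
    """Nudge weights W until y = W @ x satisfies sum(y) == 1 (a toy constraint)."""
    W = W.copy()
    for _ in range(max_steps):
        y = W @ x                    # unconstrained inference
        viol = y.sum() - 1.0         # constraint violation
        if viol ** 2 < tol:          # constraint (approximately) satisfied
            break
        # analytic gradient of the violation loss (sum(y) - 1)^2 w.r.t. W
        grad = 2.0 * viol * np.outer(np.ones(len(y)), x)
        W -= lr * grad
    return W, W @ x

W0 = 0.1 * np.arange(9, dtype=float).reshape(3, 3)  # arbitrary trained weights
x = np.array([0.5, -0.2, 0.1])                      # one test input
W1, y = gbi(W0, x)
print(y.sum())  # close to 1.0: the output now satisfies the constraint
```

The weight nudge is per-input and discarded afterwards, mirroring the test-time nature of GBI; in the paper's setting the violation loss would encode a structural constraint such as tree validity.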
1707.08860 | 2741452876 | (n, k) fork-join queues are prevalent in popular distributed systems, erasure coding based cloud storages, and modern network protocols like multipath routing, estimating the sojourn time of such queues is thus critical for the performance measurement and resource plan of computer clusters. However, the estimating keeps to be a well-known open challenge for years, and only rough bounds for a limited range of load factors have been given. This paper developed a closed-form linear transformation technique for jointly-identical random variables: An order statistic can be represented by a linear combination of maxima. This brand-new technique is then used to transform the sojourn time of non-purging (n, k) fork-join queues into a linear combination of the sojourn times of basic (k, k), (k + 1, k + 1), ..., (n, n) fork-join queues. Consequently, existing approximations for basic fork-join queues can be bridged to the approximations for non-purging (n, k) fork-join queues. The uncovered approximations are then used to improve the upper bounds for purging (n, k) fork-join queues. Simulation experiments show that this linear transformation approach is practiced well for moderate n and relatively large k. | @cite_14 gave tight bounds on the expectation of the @math order statistic given first and second moment information on @math real-valued rvs; we instead derive exact linear transformations for the @math order statistic rather than bounds. @cite_2 proposed a dynamic programming algorithm to compute the order statistics of a family of correlated rvs whose distributions need not be identical. That algorithm relies on the existence of computing methods for the distributions of both the minimum and the maximum of the target rvs. By contrast, our work is more formal and easier to apply, since it reveals a closed-form linear transformation and relies only on a computing method for the distribution of the maximum.
For @math fork-join queues under a Poisson job arrival process: 1) @cite_18 gave the queue length distribution for exponential queues in steady state; 2) Baccelli @cite_11 extended Flatto's work to queues with general service time distributions; 3) @cite_20 proposed an exact closed-form solution for the expected sojourn time of exponential queues in steady state. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_2",
"@cite_20",
"@cite_11"
],
"mid": [
"2073121338",
"2110959113",
"2142666223",
"1998066559",
"1554196368"
],
"abstract": [
"We consider the double queue that arises when arriving customers simultaneously place two demands handled independently by two servers. It is assumed that the customer arrivals form a Poisson process with mean 1, the servers have exponential service times with rate @math , and @math , which insures stability of the queue. Let @math be the respective lengths of the @math - and @math -queues, and @math at equilibrium. In a previous paper with the same title we obtained a formula for the generating function @math . We use this to derive the asymptotic behavior of @math as @math . The asymptotic results are employed to study the interdependence of @math . We derive limit laws for the expectation and distribution for either of these random variables conditioned on the other.",
"In this article, we study the problem of finding tight bounds on the expected value of the kth-order statistic E [Xk:n] under first and second moment information on n real-valued random variables. Given means E [Xi] = mi and variances Var[Xi] = σi2, we show that the tight upper bound on the expected value of the highest-order statistic E [Xn:n] can be computed with a bisection search algorithm. An extremal discrete distribution is identified that attains the bound, and two closed-form bounds are proposed. Under additional covariance information Cov[Xi,Xj] = Qij, we show that the tight upper bound on the expected value of the highest-order statistic can be computed with semidefinite optimization. We generalize these results to find bounds on the expected value of the kth-order statistic under mean and variance information. For k < n, this bound is shown to be tight under identical means and variances. All of our results are distribution-free with no explicit assumption of independence made. Particularly, using optimization methods, we develop tractable approaches to compute bounds on the expected value of order statistics.",
"Although order statistics have been studied for several decades, most of the results are based on the assumption of independent and identically distributed (i.i.d.) random variables. In the literature, how to compute the mth order statistics of n correlated random variables is still a problem. This article proposes a recursive algorithm based on statistical min max operations to compute order statistics for general correlated and not necessarily identically distributed random variables. The algorithm has an O(mn) time complexity and O(m p n) space complexity. A binary tree-based data structure is further developed to allow selective update of the order statistics with O(nm2) time. As a vehicle to demonstrate the algorithm, we apply it to the path selection algorithm in at-speed testing. A novel metric multilayer process space coverage metric is proposed to quantitatively gauge the quality of path selection. We then show that such a metric is directly linked to the order statistics, and our recursive algorithm can thus be applied. By employing a branch-and-bound path selection algorithm with these techniques, this article shows that selecting an optimal set of paths for a multimillion-gate design can be performed efficiently. Compared to the state of the art, experimental results show both the efficiency of our algorithms and better quality of our path selection.",
"An approximation technique, called scaling approximation, is introduced and applied to the analysis of homogeneous fork join queuing systems consisting of K>or=2 servers. The development of the scaling approximation technique is guided by both experimental and theoretical considerations. The approximation is based on the observation that there exist upper and lower bounds on the mean response time that grow at the same rate as a function of K. Simple, closed-form approximate expressions for the mean response time are derived and compared to simulation results. The relative error in the approximation is less than 5 for K >",
""
]
} |
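The entry above describes expressing an order statistic as a linear combination of maxima of jointly-identical rvs. One classical identity of this kind, for exchangeable rvs, is Jordan's inclusion-exclusion formula E[X_(k:n)] = sum_{j=k}^{n} (-1)^(j-k) C(j-1, k-1) C(n, j) E[max of j]; whether this matches the paper's exact coefficients is an assumption here. The sketch below checks the identity for iid Uniform(0, 1) rvs, where both sides have closed forms (E[X_(k:n)] = k/(n+1), E[max of j] = j/(j+1)):

```python
from math import comb

def kth_from_maxima(k, n, max_mean):
    """E[k-th smallest of n exchangeable rvs] as a linear combination of
    expected maxima over j-subsets, j = k..n (Jordan's formula)."""
    return sum((-1) ** (j - k) * comb(j - 1, k - 1) * comb(n, j) * max_mean(j)
               for j in range(k, n + 1))

# Check against iid Uniform(0, 1): E[X_(k:n)] = k / (n + 1).
n = 5
for k in range(1, n + 1):
    lhs = kth_from_maxima(k, n, lambda j: j / (j + 1))
    print(k, round(lhs, 6), k / (n + 1))
```

In the fork-join reading, E[max of j] plays the role of the sojourn time of a basic (j, j) queue, so the k-th order statistic (the (n, k) sojourn time) is bridged to the (k, k), ..., (n, n) cases.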
1707.08860 | 2741452876 | (n, k) fork-join queues are prevalent in popular distributed systems, erasure coding based cloud storages, and modern network protocols like multipath routing, estimating the sojourn time of such queues is thus critical for the performance measurement and resource plan of computer clusters. However, the estimating keeps to be a well-known open challenge for years, and only rough bounds for a limited range of load factors have been given. This paper developed a closed-form linear transformation technique for jointly-identical random variables: An order statistic can be represented by a linear combination of maxima. This brand-new technique is then used to transform the sojourn time of non-purging (n, k) fork-join queues into a linear combination of the sojourn times of basic (k, k), (k + 1, k + 1), ..., (n, n) fork-join queues. Consequently, existing approximations for basic fork-join queues can be bridged to the approximations for non-purging (n, k) fork-join queues. The uncovered approximations are then used to improve the upper bounds for purging (n, k) fork-join queues. Simulation experiments show that this linear transformation approach is practiced well for moderate n and relatively large k. | For @math exponential fork-join queues, the most influential approximation was proposed by Nelson in 1988 @cite_20 , which is based on the fact that the sojourn times @math of sub-tasks @math are associated rvs @cite_7 , whose maximum can be bounded by the maximum of their iid equivalents. The lower bound is obtained by neglecting queueing effects. The approximation is a linear mixture of the upper and lower bounds, with the mixture parameters learned from the mean sojourn times of jobs sampled from simulated basic fork-join queues.
@cite_3 improved the lower bound by using a staging analysis technique @cite_16 based on the memoryless property of the exponential service time distribution, and used the mean of Nelson's upper bound and the staging lower bound as the approximation. According to the experiments in @cite_4 , Nelson's approximation remains the most reliable one for exponential queues, compared to subsequent works including @cite_22 and @cite_3 . | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_3",
"@cite_16",
"@cite_20"
],
"mid": [
"618449037",
"1968846052",
"2013002076",
"2152474925",
"1992580876",
"1998066559"
],
"abstract": [
"Fork-join queueing networks model a network of parallel servers in which an arriving job splits into a number of subtasks that are serviced in parallel. Fork-join queues can be used to model disk arrays. A response time approximation of the fork-join queue is presented that attempts to comply with the additional constraints of modelling a disk array. This approximation is compared with existing analytical approximations of the fork-join queueing network.",
"We propose a family of heuristic approximations for the expected response time of K-dimensional symmetric Fork-Join systems in statistical equilibrium with general inter-arrival and service time distributions. To do this, we rely on the light traffic interpolation technique popularized by Reiman and Simon. Our starting point is a formula for the heavy traffic limit of two-dimensional Fork-Join queues that was obtained by the authors in [17,20]. By observing a fortuitous agreement between the light traffic derivative and the heavy traffic limit for this system under Markovian assumptions, we are able to obtain an approximation to the heavy traffic limit for K-dimensional systems with general inter-arrival and service distributions. By combining this heavy traffic limit with light traffic limits, we generate interpolation approximations for the Fork-Join queue, which agree extremely well with simulation results.",
"",
"Fork-join structures have gained increased importance in recent years as a means of modeling parallelism in computer and storage systems. The basic fork-join model is one in which a job arriving at a parallel system splits into K independent tasks that are assigned to K unique, homogeneous servers. In the paper, a simple response time approximation is derived for parallel systems with exponential service time distributions. The approximation holds for networks modeling several devices, both parallel and nonparallel. (In the case of closed networks containing a stand-alone parallel system, a mean response time bound is derived.) In addition, the response time approximation is extended to cover the more realistic case wherein a job splits into an arbitrary number of tasks upon arrival at a parallel system. Simulation results for closed networks with stand-alone parallel subsystems and exponential service time distributions indicate that the response time approximation is, on average, within 3 percent of the seeded response times. Similarly, simulation results with nonexponential distributions also indicate that the response time approximation is close to the seeded values. Potential applications of our results include the modeling of data placement in disk arrays and the execution of parallel programs in multiprocessor and distributed systems.",
"Probability and Statistics with Reliability, Queuing and Computer Science Applications, Second Edition, offers a comprehensive introduction to probabiliby, stochastic processes, and statistics for students of computer science, electrical and computer engineering, and applied mathematics. Its wealth of practical examples and up-to-date information makes it an excellent resource for practitioners as well.",
"An approximation technique, called scaling approximation, is introduced and applied to the analysis of homogeneous fork join queuing systems consisting of K>or=2 servers. The development of the scaling approximation technique is guided by both experimental and theoretical considerations. The approximation is based on the observation that there exist upper and lower bounds on the mean response time that grow at the same rate as a function of K. Simple, closed-form approximate expressions for the mean response time are derived and compared to simulation results. The relative error in the approximation is less than 5 for K >"
]
} |
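The upper bound in Nelson's approximation, as described in the entry above, rests on the property that the maximum of positively associated rvs is stochastically dominated by the maximum of their iid equivalents with the same marginal. The following Monte Carlo sketch uses equicorrelated Gaussians as a hypothetical stand-in for correlated sub-task sojourn times; it only illustrates the direction of the inequality in expectation, not the paper's actual queueing model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, samples = 10, 0.8, 20000

# Equicorrelated Gaussians: X_i = sqrt(rho)*Z0 + sqrt(1-rho)*Z_i.
# These are positively associated with a common N(0, 1) marginal.
z0 = rng.standard_normal((samples, 1))
zi = rng.standard_normal((samples, n))
associated = np.sqrt(rho) * z0 + np.sqrt(1 - rho) * zi

# iid equivalents: same N(0, 1) marginal, no correlation.
iid = rng.standard_normal((samples, n))

mean_max_assoc = associated.max(axis=1).mean()
mean_max_iid = iid.max(axis=1).mean()
print(mean_max_assoc, mean_max_iid)  # associated maximum is smaller on average
```

The gap is large under strong positive correlation, which is why the iid bound can be loose and why Nelson mixes it with a lower bound.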
1707.08860 | 2741452876 | (n, k) fork-join queues are prevalent in popular distributed systems, erasure coding based cloud storages, and modern network protocols like multipath routing, estimating the sojourn time of such queues is thus critical for the performance measurement and resource plan of computer clusters. However, the estimating keeps to be a well-known open challenge for years, and only rough bounds for a limited range of load factors have been given. This paper developed a closed-form linear transformation technique for jointly-identical random variables: An order statistic can be represented by a linear combination of maxima. This brand-new technique is then used to transform the sojourn time of non-purging (n, k) fork-join queues into a linear combination of the sojourn times of basic (k, k), (k + 1, k + 1), ..., (n, n) fork-join queues. Consequently, existing approximations for basic fork-join queues can be bridged to the approximations for non-purging (n, k) fork-join queues. The uncovered approximations are then used to improve the upper bounds for purging (n, k) fork-join queues. Simulation experiments show that this linear transformation approach is practiced well for moderate n and relatively large k. | @cite_22 extended Nelson's approximation to general service time distributions using a light traffic interpolation technique. @cite_17 employed linear regression over the statistics of simulated fork-join jobs to find the parameters of their approximation equation for the expected sojourn time; however, any change in the service time distribution requires re-simulation and re-regression. Recently, @cite_12 proposed the first computable bounds on the waiting and sojourn times of fork-join queues with general service time distributions by using martingales; however, the upper bound is looser than Nelson's for the exponential service time distribution.
@cite_9 considered the multi-stage nature of many fork-join queueing networks and derived end-to-end delay bounds for them. | {
"cite_N": [
"@cite_9",
"@cite_22",
"@cite_12",
"@cite_17"
],
"mid": [
"2963083228",
"1968846052",
"2015774390",
"2094799119"
],
"abstract": [
"Parallel systems have received increasing attention with numerous recent applications such as fork-join systems, load-balancing, and l-out-of-fc redundancy. Common to these systems is a join or resequencing stage, where tasks that have finished service may have to wait for the completion of other tasks so that they leave the system in a predefined order. These synchronization constraints make the analysis of parallel systems challenging and few explicit results are known. In this work, we model parallel systems using a max-plus approach that enables us to derive statistical bounds of waiting and sojourn times. Taking advantage of max-plus system theory, we also show end-to-end delay bounds for multi-stage fork-join networks. We contribute solutions for basic G|G|1 fork-join systems, parallel systems with load-balancing, as well as general (k, l) fork-join systems with redundancy. Our results provide insights into the respective advantages of l-out-of-k redundancy vs. load-balancing.",
"We propose a family of heuristic approximations for the expected response time of K-dimensional symmetric Fork-Join systems in statistical equilibrium with general inter-arrival and service time distributions. To do this, we rely on the light traffic interpolation technique popularized by Reiman and Simon. Our starting point is a formula for the heavy traffic limit of two-dimensional Fork-Join queues that was obtained by the authors in [17,20]. By observing a fortuitous agreement between the light traffic derivative and the heavy traffic limit for this system under Markovian assumptions, we are able to obtain an approximation to the heavy traffic limit for K-dimensional systems with general inter-arrival and service distributions. By combining this heavy traffic limit with light traffic limits, we generate interpolation approximations for the Fork-Join queue, which agree extremely well with simulation results.",
"In a Fork-Join (FJ) queueing system an upstream fork station splits incoming jobs into N tasks to be further processed by N parallel servers, each with its own queue; the response time of one job is determined, at a downstream join station, by the maximum of the corresponding tasks' response times. This queueing system is useful to the modelling of multi-service systems subject to synchronization constraints, such as MapReduce clusters or multipath routing. Despite their apparent simplicity, FJ systems are hard to analyze. This paper provides the first computable stochastic bounds on the waiting and response time distributions in FJ systems. We consider four practical scenarios by combining 1a) renewal and 1b) non-renewal arrivals, and 2a) non-blocking and 2b) blocking servers. In the case of non blocking servers we prove that delays scale as O(logN), a law which is known for first moments under renewal input only. In the case of blocking servers, we prove that the same factor of log N dictates the stability region of the system. Simulation results indicate that our bounds are tight, especially at high utilizations, in all four scenarios. A remarkable insight gained from our results is that, at moderate to high utilizations, multipath routing 'makes sense' from a queueing perspective for two paths only, i.e., response times drop the most when N = 2; the technical explanation is that the resequencing (delay) price starts to quickly dominate the tempting gain due to multipath transmissions.",
"Approximation techniques are developed to evaluate the performance of symmetric fork-join synchronization delays for K M G 1 queues. For a server utilization spl rho , the mean response time for fork-join requests is expressed as the sum of the mean response time at one of the queues and the mean synchronization delay as follows: R sub K sup F 1 ( spl rho )=R sub 1 ( spl rho )+F sub K spl alpha sub K ( spl rho ) spl sigma sub 1 ( spl rho ), where F sub K is obtained from the previous equation at spl rho =0, R sub 1 ( spl rho ) and spl sigma sub 1 ( spl rho ) are the mean and the standard deviation of response time at any one of the queues, respectively, and spl alpha sub K ( spl rho ) is a low-degree service-time distribution dependent polynomial in spl rho , whose coefficients are determined from simulation results. We also use simulation results to show that when fork-join requests share the servers with local requests, a good approximation (and an upper bound) to the fork-join response time is obtained by treating the components of fork-join response time as independent, i.e., the mean fork-join response time can be approximated by the expected value of the maximum of the response times at the K queues."
]
} |
1707.08860 | 2741452876 | (n, k) fork-join queues are prevalent in popular distributed systems, erasure coding based cloud storages, and modern network protocols like multipath routing, estimating the sojourn time of such queues is thus critical for the performance measurement and resource plan of computer clusters. However, the estimating keeps to be a well-known open challenge for years, and only rough bounds for a limited range of load factors have been given. This paper developed a closed-form linear transformation technique for jointly-identical random variables: An order statistic can be represented by a linear combination of maxima. This brand-new technique is then used to transform the sojourn time of non-purging (n, k) fork-join queues into a linear combination of the sojourn times of basic (k, k), (k + 1, k + 1), ..., (n, n) fork-join queues. Consequently, existing approximations for basic fork-join queues can be bridged to the approximations for non-purging (n, k) fork-join queues. The uncovered approximations are then used to improve the upper bounds for purging (n, k) fork-join queues. Simulation experiments show that this linear transformation approach is practiced well for moderate n and relatively large k. | We refer readers to @cite_6 for a more comprehensive survey on fork-join queuing systems. To conclude, our work is orthogonal to existing approximation methods for basic fork-join queues. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2029718725"
],
"abstract": [
"Fork join (F J) requests arise in contexts such as parallel computing, query processing in parallel databases, and parallel disk access in RAID. F J requests spawn K tasks that are sent to K parallel servers, and the completion of all K tasks marks the completion of an F J request. The exact formula for the mean response time of K e 2-way F J requests derived under Markovian assumptions (RF J2) served as the starting point for an approximate expression for RF JK for 2"
]
} |
1707.08860 | 2741452876 | (n, k) fork-join queues are prevalent in popular distributed systems, erasure coding based cloud storages, and modern network protocols like multipath routing, estimating the sojourn time of such queues is thus critical for the performance measurement and resource plan of computer clusters. However, the estimating keeps to be a well-known open challenge for years, and only rough bounds for a limited range of load factors have been given. This paper developed a closed-form linear transformation technique for jointly-identical random variables: An order statistic can be represented by a linear combination of maxima. This brand-new technique is then used to transform the sojourn time of non-purging (n, k) fork-join queues into a linear combination of the sojourn times of basic (k, k), (k + 1, k + 1), ..., (n, n) fork-join queues. Consequently, existing approximations for basic fork-join queues can be bridged to the approximations for non-purging (n, k) fork-join queues. The uncovered approximations are then used to improve the upper bounds for purging (n, k) fork-join queues. Simulation experiments show that this linear transformation approach is practiced well for moderate n and relatively large k. | There are exact quantitative analyses @cite_15 @cite_13 of purging @math fork-join queues, since such a queue is equivalent to a single queue with @math times the service rate. @cite_15 presented a comprehensive study of purging @math fork-join queues, considering multi-class jobs, interference from un-forked jobs, and heterogeneous service time distributions. @cite_13 takes the purging overhead into account, since cancelling running jobs typically incurs non-negligible delays in practice. | {
"cite_N": [
"@cite_15",
"@cite_13"
],
"mid": [
"1983964956",
"2558228109"
],
"abstract": [
"Recent computer systems research has proposed using redundant requests to reduce latency. The idea is to run a request on multiple servers and wait for the first completion (discarding all remaining copies of the request). However there is no exact analysis of systems with redundancy. This paper presents the first exact analysis of systems with redundancy. We allow for any number of classes of redundant requests, any number of classes of non-redundant requests, any degree of redundancy, and any number of heterogeneous servers. In all cases we derive the limiting distribution on the state of the system. In small (two or three server) systems, we derive simple forms for the distribution of response time of both the redundant classes and non-redundant classes, and we quantify the \"gain\" to redundant classes and \"pain\" to non-redundant classes caused by redundancy. We find some surprising results. First, the response time of a fully redundant class follows a simple Exponential distribution and that of the non-redundant class follows a Generalized Hyperexponential. Second, fully redundant classes are \"immune\" to any pain caused by other classes becoming redundant. We also compare redundancy with other approaches for reducing latency, such as optimal probabilistic splitting of a class among servers (Opt-Split) and Join-the-Shortest-Queue (JSQ) routing of a class. We find that, in many cases, redundancy outperforms JSQ and Opt-Split with respect to overall response time, making it an attractive solution.",
"Reducing latency in distributed computing and data storage systems is gaining increasing importance. Several empirical works have reported on the efficacy of scheduling redundant requests in such systems. That is, one may reduce job latency by: 1) scheduling the same job at more than one server and 2) waiting only until the fastest of them responds. Several theoretical models have been proposed to explain the power of using redundant requests, and all of the existing results rely heavily on a common assumption: all redundant requests of a job can be immediately cancelled as soon as one of them is completed. We study how one should schedule redundant requests when such assumption does not hold. This is of great importance in practice, since cancellation of running jobs typically incurs non-negligible delays. In order to bridge the gap between the existing models and practice, we propose a new queueing model that captures such cancellation delays. We then find how one can schedule redundant requests to achieve the optimal average job latency under the new model. Our results show that even with a small cancellation overhead, the actual optimal scheduling policy differs significantly from the optimal scheduling policy when the overhead is zero. Furthermore, we study optimal dynamic scheduling policies, which appropriately schedule redundant requests based on the number of jobs in the system. Our analysis reveals that for the two-server case, the optimal dynamic scheduler can achieve 7 –16 lower average job latency, compared with the optimal static scheduler."
]
} |
1707.08860 | 2741452876 | (n, k) fork-join queues are prevalent in popular distributed systems, erasure coding based cloud storages, and modern network protocols like multipath routing, estimating the sojourn time of such queues is thus critical for the performance measurement and resource plan of computer clusters. However, the estimating keeps to be a well-known open challenge for years, and only rough bounds for a limited range of load factors have been given. This paper developed a closed-form linear transformation technique for jointly-identical random variables: An order statistic can be represented by a linear combination of maxima. This brand-new technique is then used to transform the sojourn time of non-purging (n, k) fork-join queues into a linear combination of the sojourn times of basic (k, k), (k + 1, k + 1), ..., (n, n) fork-join queues. Consequently, existing approximations for basic fork-join queues can be bridged to the approximations for non-purging (n, k) fork-join queues. The uncovered approximations are then used to improve the upper bounds for purging (n, k) fork-join queues. Simulation experiments show that this linear transformation approach is practiced well for moderate n and relatively large k. | For purging @math fork-join queues, there are currently no applicable approximations. @cite_5 extended the staging analysis to exponential @math fork-join queues to obtain lower bounds. Bounds for queues with general service time distributions are given by @cite_5 and @cite_1 by reducing the fork-join queue to the split-merge queue model, in which all empty sub-queues are blocked until any @math sub-tasks of the current job are completed. As depicted in Fig. (a), the resulting upper bounds become very loose as @math or the load factor @math increases. | {
"cite_N": [
"@cite_5",
"@cite_1"
],
"mid": [
"2962941774",
"2963531643"
],
"abstract": [
"We study the fundamental trade-off between storage and content download time. We show that the download time can be significantly reduced by dividing the content into chunks, encoding it to add redundancy and then distributing it across multiple disks. We determine the download time for two content access models — the fountain and fork-join models that involve simultaneous content access, and individual access from enqueued user requests respectively. For the fountain model we explicitly characterize the download time, while in the fork-join model we derive the upper and lower bounds. Our results show that coding reduces download time, through the diversity of distributing the data across more disks, even for the total storage used.",
"In cloud computing systems, assigning a task to multiple servers and waiting for the earliest copy to finish is an effective method to combat the variability in response time of individual servers and reduce latency. But adding redundancy may result in higher cost of computing resources, as well as an increase in queueing delay due to higher traffic load. This work helps in understanding when and how redundancy gives a cost-efficient reduction in latency. For a general task service time distribution, we compare different redundancy strategies in terms of the number of redundant tasks and the time when they are issued and canceled. We get the insight that the log-concavity of the task service time creates a dichotomy of when adding redundancy helps. If the service time distribution is log-convex (i.e., log of the tail probability is convex), then adding maximum redundancy reduces both latency and cost. And if it is log-concave (i.e., log of the tail probability is concave), then less redundancy, and early cancellation of redundant tasks is more effective. Using these insights, we design a general redundancy strategy that achieves a good latency-cost trade-off for an arbitrary service time distribution. This work also generalizes and extends some results in the analysis of fork-join queues."
]
} |
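The abstracts in the row above discuss replicating a task on multiple servers and keeping the fastest copy. A toy Monte-Carlo sketch of that effect (iid exponential service assumed, queueing and resource cost ignored; the function name is illustrative):

```python
import random
from statistics import mean

def replicated_latency(r, trials=100_000, seed=1):
    """Mean latency when one task is sent to r servers with iid Exp(1)
    service times and the fastest copy wins: E[min of r] = 1/r."""
    rng = random.Random(seed)
    return mean(min(rng.expovariate(1.0) for _ in range(r))
                for _ in range(trials))
```

Replication helps here purely through diversity: the minimum of r exponentials is r times faster on average, at the price of r times the work.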
1707.08860 | 2741452876 | (n, k) fork-join queues are prevalent in popular distributed systems, erasure-coding-based cloud storage, and modern network protocols such as multipath routing; estimating the sojourn time of such queues is thus critical for the performance measurement and resource planning of computer clusters. However, this estimation has remained a well-known open challenge for years, and only rough bounds for a limited range of load factors have been given. This paper develops a closed-form linear transformation technique for jointly-identical random variables: an order statistic can be represented by a linear combination of maxima. This new technique is then used to transform the sojourn time of non-purging (n, k) fork-join queues into a linear combination of the sojourn times of basic (k, k), (k + 1, k + 1), ..., (n, n) fork-join queues. Consequently, existing approximations for basic fork-join queues can be bridged to approximations for non-purging (n, k) fork-join queues. The resulting approximations are then used to improve the upper bounds for purging (n, k) fork-join queues. Simulation experiments show that this linear transformation approach performs well for moderate n and relatively large k. | A typical use case of non-purging @math fork-join queues is the write process in Cassandra @cite_0 . @cite_9 gave non-asymptotic statistical bounds on the sojourn time of non-purging @math fork-join queues. By contrast, we give approximations rather than bounds. | {
"cite_N": [
"@cite_0",
"@cite_9"
],
"mid": [
"2066799377",
"2963083228"
],
"abstract": [
"Inherent replica inconsistency refers to the difference among the replicas of the same logical data item in the write propagation process of a normally running distributed storage system. In this paper, we formalize the write propagation process model of Cassandra, a widely used NoSQL storage system. In the write propagation process we explore two queueing systems, sending task queues and mutation queues, which locate at each replica node and are determinants of the replica inconsistency. The departure time difference from the mutation queue is used as the measure of inconsistency between two replicas. Furthermore, Request Per Second (RPS) and Mutation Threads Number (MTN), which affect the inherent inconsistency, are discussed and the MTN adaptation algorithm is proposed. Finally, A Cassandra inconsistency measurement framework is implemented using the source instrumentation approach. The empirical results conform well with our proposed inconsistency measurement model.",
"Parallel systems have received increasing attention with numerous recent applications such as fork-join systems, load-balancing, and l-out-of-fc redundancy. Common to these systems is a join or resequencing stage, where tasks that have finished service may have to wait for the completion of other tasks so that they leave the system in a predefined order. These synchronization constraints make the analysis of parallel systems challenging and few explicit results are known. In this work, we model parallel systems using a max-plus approach that enables us to derive statistical bounds of waiting and sojourn times. Taking advantage of max-plus system theory, we also show end-to-end delay bounds for multi-stage fork-join networks. We contribute solutions for basic G|G|1 fork-join systems, parallel systems with load-balancing, as well as general (k, l) fork-join systems with redundancy. Our results provide insights into the respective advantages of l-out-of-k redundancy vs. load-balancing."
]
} |
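In the simplest no-queueing case, the (n, k) fork-join completion time discussed in the row above is the k-th order statistic of n service times; for iid Exp(mu) service it equals (H_n − H_{n−k})/mu. A hedged simulation sketch of this baseline (function names are illustrative; real fork-join queues add queueing effects that this ignores):

```python
import random
from statistics import mean

def fork_join_time(n, k, mu, rng):
    """Fountain-style (n, k) request: done when k of n parallel iid
    Exp(mu) tasks finish, i.e. the k-th order statistic."""
    return sorted(rng.expovariate(mu) for _ in range(n))[k - 1]

def simulate(n, k, mu=1.0, trials=100_000, seed=0):
    rng = random.Random(seed)
    return mean(fork_join_time(n, k, mu, rng) for _ in range(trials))

def exact(n, k, mu=1.0):
    """E[X_(k:n)] for iid Exp(mu): (H_n - H_{n-k}) / mu."""
    return sum(1.0 / i for i in range(n - k + 1, n + 1)) / mu
```

The gap between k = n and k < n shows why waiting for only k of n responses (as in Cassandra-style quorum writes) cuts latency.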
1707.08776 | 2740724058 | We introduce an evolutionary stochastic-local-search (SLS) algorithm for addressing a generalized version of the so-called 1/V/D/R cutting-stock problem. Cutting-stock problems are often encountered in industrial environments, and the ability to address them efficiently usually results in large economic benefits. Traditionally, linear-programming-based techniques have been utilized to address such problems; however, their flexibility might be limited when nonlinear constraints and objective functions are introduced. To this end, this paper proposes an evolutionary SLS algorithm for addressing one-dimensional cutting-stock problems. The contribution lies in the introduction of a flexible structural framework of the optimization that may accommodate a large family of diversification strategies including a novel parallel pattern appropriate for SLS algorithms (not necessarily restricted to cutting-stock problems). We finally demonstrate through experiments in a real-world manufacturing problem the benefit in cost reduction of the considered diversification strategies. | Because many different types of cutting-stock problems appear in the literature, e.g. with respect to dimensionality, characteristics of large and small objects, shape of figures and so forth, it is highly desirable that the scientific community use the same language and terminology when describing problems and the corresponding solution approaches. In @cite_36 , a typology was introduced that covered the four most important characteristics for classifying optimization problems such as cutting-stock problems. This typology was improved by @cite_25 . An overview of the characteristics used to classify these problems in the literature can be found in @cite_24 . Problems can further be distinguished by whether they are off-line (full knowledge of the input) or on-line (no knowledge of the next items) @cite_0 . | {
"cite_N": [
"@cite_36",
"@cite_0",
"@cite_25",
"@cite_24"
],
"mid": [
"1979415050",
"",
"2105175235",
"2049815971"
],
"abstract": [
"Abstract Cutting and packing problems appear under various names in literature, e.g. cutting stock or trim loss problem, bin or strip packing problem, vehicle, pallet or container loading problem, nesting problem, knapsack problem etc. The paper develops a consistent and systematic approach for a comprehensive typology integrating the various kinds of problems. The typology is founded on the basic logical structure of cutting and packing problems. The purpose is to unify the different use of notions in the literature and to concentrate further research on special types of problems.",
"",
"The number of publications in the area of Cutting and Packing (C&P) has increased considerably over the last two decades. The typology of C&P problems introduced by Dyckhoff [Dyckhoff, H., 1990. A typology of cutting and packing problems. European Journal of Operational Research 44, 145–159] initially provided an excellent instrument for the organisation and categorisation of existing and new literature. However, over the years also some deficiencies of this typology became evident, which created problems in dealing with recent developments and prevented it from being accepted more generally. In this paper, the authors present an improved typology, which is partially based on Dyckhoff’s original ideas, but introduces new categorisation criteria, which define problem categories different from those of Dyckhoff. Furthermore, a new, consistent system of names is suggested for these problem categories. Finally, the practicability of the new scheme is demonstrated by using it as a basis for a categorisation of the C&P literature from the years between 1995 and 2004. � 2006 Elsevier B.V. All rights reserved.",
"The assortment or catalog problem involves determining which of the possible set of sizes or qualities of some product should be stocked when it is not possible or desirable to stock all of them and substitution in one direction (larger for smaller or higher-quality for lower-quality) is possible at some cost. We review the work published on this topic over the last 50 years."
]
} |
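For a concrete sense of the one-dimensional cutting-stock problem surveyed in the row above, here is the classic first-fit-decreasing heuristic — a generic baseline sketch, not the paper's evolutionary SLS algorithm:

```python
def first_fit_decreasing(items, stock_length):
    """Place each piece (longest first) into the first opened stock bar
    with enough remaining length; open a new bar when none fits."""
    remaining = []   # leftover length of each opened bar
    patterns = []    # pieces cut from each bar
    for piece in sorted(items, reverse=True):
        if piece > stock_length:
            raise ValueError("piece longer than stock")
        for i, rem in enumerate(remaining):
            if piece <= rem:
                remaining[i] -= piece
                patterns[i].append(piece)
                break
        else:
            remaining.append(stock_length - piece)
            patterns.append([piece])
    return patterns
```

Heuristics like this handle only the linear waste objective; the nonlinear constraints the paper targets are where LP-style formulations (and simple greedy rules) lose flexibility.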
1707.08817 | 2741122588 | We propose a general and model-free approach for Reinforcement Learning (RL) on real robotics with sparse rewards. We build upon the Deep Deterministic Policy Gradient (DDPG) algorithm to use demonstrations. Both demonstrations and actual interactions are used to fill a replay buffer and the sampling ratio between demonstrations and transitions is automatically tuned via a prioritized replay mechanism. Typically, carefully engineered shaping rewards are required to enable the agents to efficiently explore on high dimensional control problems such as robotics. They are also required for model-based acceleration methods relying on local solvers such as iLQG (e.g. Guided Policy Search and Normalized Advantage Function). The demonstrations replace the need for carefully engineered rewards, and reduce the exploration problem encountered by classical RL approaches in these domains. Demonstrations are collected by a robot kinesthetically force-controlled by a human demonstrator. Results on four simulated insertion tasks show that DDPG from demonstrations out-performs DDPG, and does not require engineered rewards. Finally, we demonstrate the method on a real robotics task consisting of inserting a clip (flexible object) into a rigid object. | Imitation learning is primarily concerned with matching expert demonstrations. Our work combines imitation learning with learning from task rewards, so that the agent is able to improve upon the demonstrations it has seen. Imitation learning can be cast into a supervised learning problem (like classification) @cite_4 @cite_14 . One popular imitation learning algorithm is DAGGER @cite_12 which iteratively produces new policies based on polling the expert policy outside its original state space. This leads to no-regret over validation data in the online learning sense. DAGGER requires the expert to be available during training to provide additional feedback to the agent. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_12"
],
"mid": [
"2544683879",
"2167224731",
"1931877416"
],
"abstract": [
"Decision making in robotics often involves computing an optimal action for a given state, where the space of actions under consideration can potentially be large and state dependent. Many of these decision making problems can be naturally formalized in the multiclass classification framework, where actions are regarded as labels for states. One powerful approach to multiclass classification relies on learning a function that scores each action; action selection is done by returning the action with maximum score. In this work, we focus on two imitation learning problems in particular that arise in robotics. The first problem is footstep prediction for quadruped locomotion, in which the system predicts next footstep locations greedily given the current four-foot configuration of the robot over a terrain height map. The second problem is grasp prediction, in which the system must predict good grasps of complex free-form objects given an approach direction for a robotic hand. We present experimental results of applying a recently developed functional gradient technique for optimizing a structured margin formulation of the corresponding large non-linear multiclass classification problems.",
"ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand.",
"Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem."
]
} |
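The DAGGER loop described in the row above — roll out the learner, query the expert on the visited states, aggregate, retrain — can be sketched on a toy one-dimensional task. Everything below (the threshold expert, the tabular learner, the walk dynamics) is an illustrative assumption:

```python
def expert(s):
    """Expert policy we are allowed to query during training."""
    return 1 if s >= 5 else 0

def fit(dataset):
    """Trivial tabular learner: remember the expert label per state."""
    table = {}
    for s, a in dataset:
        table[s] = a          # expert labels are consistent, last write wins
    return lambda s: table.get(s, 0)

def rollout(policy, start, horizon=6):
    """Run the *learner's* policy and record the states it induces."""
    s, visited = start, []
    for _ in range(horizon):
        visited.append(s)
        s = min(9, s + 1) if policy(s) else max(0, s - 1)
    return visited

def dagger(iterations=3):
    dataset = [(s, expert(s)) for s in range(0, 10, 3)]  # a few initial demos
    policy = fit(dataset)
    for _ in range(iterations):
        for start in range(10):
            for s in rollout(policy, start):
                dataset.append((s, expert(s)))  # expert relabels visited states
        policy = fit(dataset)                   # retrain on the aggregate
    return policy
```

The key point from the row above is visible in the loop: the expert must stay available, since every iteration polls it on states the learner reaches outside the demonstration distribution.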
1707.08817 | 2741122588 | We propose a general and model-free approach for Reinforcement Learning (RL) on real robotics with sparse rewards. We build upon the Deep Deterministic Policy Gradient (DDPG) algorithm to use demonstrations. Both demonstrations and actual interactions are used to fill a replay buffer and the sampling ratio between demonstrations and transitions is automatically tuned via a prioritized replay mechanism. Typically, carefully engineered shaping rewards are required to enable the agents to efficiently explore on high dimensional control problems such as robotics. They are also required for model-based acceleration methods relying on local solvers such as iLQG (e.g. Guided Policy Search and Normalized Advantage Function). The demonstrations replace the need for carefully engineered rewards, and reduce the exploration problem encountered by classical RL approaches in these domains. Demonstrations are collected by a robot kinesthetically force-controlled by a human demonstrator. Results on four simulated insertion tasks show that DDPG from demonstrations out-performs DDPG, and does not require engineered rewards. Finally, we demonstrate the method on a real robotics task consisting of inserting a clip (flexible object) into a rigid object. | Imitation can also been achieved through or . The main principle is to learn a cost or a reward function under which the demonstration data is optimal. For instance, in @cite_17 @cite_2 the inverse RL problem is cast into a two-player zero-sum game where one player chooses policies and the other chooses reward functions. However, it doesn't scale to continuous state-action spaces and requires knowledge of the dynamics. To address continuous state spaces and unknown dynamics, @cite_6 solve inverse RL by combining classification and regression. Yet it is restricted to discrete action spaces. 
Demonstrations have also been used for inverse optimal control in high-dimensional, continuous robotic control problems @cite_11 . However, these approaches only do imitation learning and do not allow for learning from task rewards. | {
"cite_N": [
"@cite_11",
"@cite_2",
"@cite_6",
"@cite_17"
],
"mid": [
"2290104316",
"2102847492",
"2116774898",
"2113023245"
],
"abstract": [
"Reinforcement learning can acquire complex behaviors from high-level specifications. However, defining a cost function that can be optimized effectively and encodes the correct task is challenging in practice. We explore how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to torque control of high-dimensional robotic systems. Our method addresses two key challenges in inverse optimal control: first, the need for informative features and effective regularization to impose structure on the cost, and second, the difficulty of learning the cost function under unknown dynamics for high-dimensional continuous systems. To address the former challenge, we present an algorithm capable of learning arbitrary nonlinear cost functions, such as neural networks, without meticulous feature engineering. To address the latter challenge, we formulate an efficient sample-based approximation for MaxEnt IOC. We evaluate our method on a series of simulated tasks and real-world robotic manipulation problems, demonstrating substantial improvement over prior methods both in terms of task complexity and sample efficiency.",
"In apprenticeship learning, the goal is to learn a policy in a Markov decision process that is at least as good as a policy demonstrated by an expert. The difficulty arises in that the MDP's true reward function is assumed to be unknown. We show how to frame apprenticeship learning as a linear programming problem, and show that using an off-the-shelf LP solver to solve this problem results in a substantial improvement in running time over existing methods---up to two orders of magnitude faster in our experiments. Additionally, our approach produces stationary policies, while all existing methods for apprenticeship learning output policies that are \"mixed\", i.e. randomized combinations of stationary policies. The technique used is general enough to convert any mixed policy to a stationary policy.",
"This paper considers the Inverse Reinforcement Learning (IRL) problem, that is inferring a reward function for which a demonstrated expert policy is optimal. We propose to break the IRL problem down into two generic Supervised Learning steps: this is the Cascaded Supervised IRL (CSI) approach. A classification step that defines a score function is followed by a regression step providing a reward function. A theoretical analysis shows that the demonstrated expert policy is near-optimal for the computed reward function. Not needing to repeatedly solve a Markov Decision Process (MDP) and the ability to leverage existing techniques for classification and regression are two important advantages of the CSI approach. It is furthermore empirically demonstrated to compare positively to state-of-the-art approaches when using only transitions sampled according to the expert policy, up to the use of some heuristics. This is exemplified on two classical benchmarks (the mountain car problem and a highway driving simulator).",
"We study the problem of an apprentice learning to behave in an environment with an unknown reward function by observing the behavior of an expert. We follow on the work of Abbeel and Ng [1] who considered a framework in which the true reward function is assumed to be a linear combination of a set of known and observable features. We give a new algorithm that, like theirs, is guaranteed to learn a policy that is nearly as good as the expert's, given enough examples. However, unlike their algorithm, we show that ours may produce a policy that is substantially better than the expert's. Moreover, our algorithm is computationally faster, is easier to implement, and can be applied even in the absence of an expert. The method is based on a game-theoretic view of the problem, which leads naturally to a direct application of the multiplicative-weights algorithm of Freund and Schapire [2] for playing repeated matrix games. In addition to our formal presentation and analysis of the new algorithm, we sketch how the method can be applied when the transition function itself is unknown, and we provide an experimental demonstration of the algorithm on a toy video-game environment."
]
} |
1707.08817 | 2741122588 | We propose a general and model-free approach for Reinforcement Learning (RL) on real robotics with sparse rewards. We build upon the Deep Deterministic Policy Gradient (DDPG) algorithm to use demonstrations. Both demonstrations and actual interactions are used to fill a replay buffer and the sampling ratio between demonstrations and transitions is automatically tuned via a prioritized replay mechanism. Typically, carefully engineered shaping rewards are required to enable the agents to efficiently explore on high dimensional control problems such as robotics. They are also required for model-based acceleration methods relying on local solvers such as iLQG (e.g. Guided Policy Search and Normalized Advantage Function). The demonstrations replace the need for carefully engineered rewards, and reduce the exploration problem encountered by classical RL approaches in these domains. Demonstrations are collected by a robot kinesthetically force-controlled by a human demonstrator. Results on four simulated insertion tasks show that DDPG from demonstrations out-performs DDPG, and does not require engineered rewards. Finally, we demonstrate the method on a real robotics task consisting of inserting a clip (flexible object) into a rigid object. | Guided Cost Learning (GCL) @cite_11 and Generative Adversarial Imitation Learning (GAIL) @cite_7 are the first efficient imitation learning algorithms to learn from high-dimensional inputs without knowledge of the dynamics and hand-crafted features. They have a very similar algorithmic structure which consists of matching the distribution of the expert trajectories. To do so, they simultaneously learn the reward and the policy that imitates the expert demonstrations. At each step, sampled trajectories of the current policy and the expert policy are used to produce a reward function. Then, this reward is (partially) optimized to produce an updated policy and so on. 
In GAIL, the reward is obtained from a network trained to discriminate between expert trajectories and (partial) trajectories sampled from a generator (the policy), which is itself trained by TRPO @cite_19 . In GCL, the reward is obtained by minimizing the Maximum Entropy IRL cost @cite_0 , and any RL procedure (DDPG, TRPO, etc.) could be used to optimize this reward. | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_7",
"@cite_11"
],
"mid": [
"2098774185",
"2949608212",
"2434014514",
"2290104316"
],
"abstract": [
"Recent research has shown the benefit of framing problems of imitation learning as solutions to Markov Decision Problems. This approach reduces learning to the problem of recovering a utility function that makes the behavior induced by a near-optimal policy closely mimic demonstrated behavior. In this work, we develop a probabilistic approach based on the principle of maximum entropy. Our approach provides a well-defined, globally normalized distribution over decision sequences, while providing the same performance guarantees as existing methods. We develop our technique in the context of modeling real-world navigation and driving behaviors where collected data is inherently noisy and imperfect. Our probabilistic approach enables modeling of route preferences as well as a powerful new approach to inferring destinations and routes based on partial trajectories.",
"We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.",
"Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments.",
"Reinforcement learning can acquire complex behaviors from high-level specifications. However, defining a cost function that can be optimized effectively and encodes the correct task is challenging in practice. We explore how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to torque control of high-dimensional robotic systems. Our method addresses two key challenges in inverse optimal control: first, the need for informative features and effective regularization to impose structure on the cost, and second, the difficulty of learning the cost function under unknown dynamics for high-dimensional continuous systems. To address the former challenge, we present an algorithm capable of learning arbitrary nonlinear cost functions, such as neural networks, without meticulous feature engineering. To address the latter challenge, we formulate an efficient sample-based approximation for MaxEnt IOC. We evaluate our method on a series of simulated tasks and real-world robotic manipulation problems, demonstrating substantial improvement over prior methods both in terms of task complexity and sample efficiency."
]
} |
1707.08817 | 2741122588 | We propose a general and model-free approach for Reinforcement Learning (RL) on real robotics with sparse rewards. We build upon the Deep Deterministic Policy Gradient (DDPG) algorithm to use demonstrations. Both demonstrations and actual interactions are used to fill a replay buffer and the sampling ratio between demonstrations and transitions is automatically tuned via a prioritized replay mechanism. Typically, carefully engineered shaping rewards are required to enable the agents to efficiently explore on high dimensional control problems such as robotics. They are also required for model-based acceleration methods relying on local solvers such as iLQG (e.g. Guided Policy Search and Normalized Advantage Function). The demonstrations replace the need for carefully engineered rewards, and reduce the exploration problem encountered by classical RL approaches in these domains. Demonstrations are collected by a robot kinesthetically force-controlled by a human demonstrator. Results on four simulated insertion tasks show that DDPG from demonstrations out-performs DDPG, and does not require engineered rewards. Finally, we demonstrate the method on a real robotics task consisting of inserting a clip (flexible object) into a rigid object. | Control in continuous state-action domains typically uses smooth shaped rewards that are designed to be amenable to classical analysis yielding closed-form solutions. Such requirements might be difficult to meet in real world applications. For instance, iterative Linear Quadratic Gaussian (iLQG) @cite_16 is a method for nonlinear stochastic systems where the dynamics is known and the reward has to be quadratic (and thus entails hand-crafted task designs). It uses iterative linearization of the dynamics around the current trajectory in order to obtain a noisy linear system (where the noise is a centered Gaussian) and where the reward constraints are quadratic. 
Then the algorithm uses the Riccati family of equations to obtain locally optimal linear trajectories that improve on the current trajectory. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2167856595"
],
"abstract": [
"We present an iterative linear-quadratic-Gaussian method for locally-optimal feedback control of nonlinear stochastic systems subject to control constraints. Previously, similar methods have been restricted to deterministic unconstrained problems with quadratic costs. The new method constructs an affine feedback control law, obtained by minimizing a novel quadratic approximation to the optimal cost-to-go function. Global convergence is guaranteed through a Levenberg-Marquardt method; convergence in the vicinity of a local minimum is quadratic. Performance is illustrated on a limited-torque inverted pendulum problem, as well as a complex biomechanical control problem involving a stochastic model of the human arm, with 10 state dimensions and 6 muscle actuators. A Matlab implementation of the new algorithm is availabe at www.cogsci.ucsd.edu spl sim todorov."
]
} |
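The Riccati recursion that iLQG applies to its locally linearized system can be illustrated on a scalar finite-horizon LQR — a generic sketch of the backward pass, not the constrained, noisy iLQG algorithm itself:

```python
def lqr_gains(a, b, q, r, horizon):
    """Backward Riccati recursion for the scalar system x' = a*x + b*u
    with stage cost q*x**2 + r*u**2; returns per-step feedback gains
    (policy u = -k*x) and the converged cost-to-go coefficient."""
    p, gains = q, []
    for _ in range(horizon):
        k = (b * p * a) / (r + b * p * b)   # optimal gain at this step
        p = q + a * p * (a - b * k)         # Riccati difference equation
        gains.append(k)
    return list(reversed(gains)), p        # gains ordered from t = 0
```

This is exactly why the reward must be (locally) quadratic: the recursion above has a closed form only for quadratic costs and linear dynamics.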
1707.08817 | 2741122588 | We propose a general and model-free approach for Reinforcement Learning (RL) on real robotics with sparse rewards. We build upon the Deep Deterministic Policy Gradient (DDPG) algorithm to use demonstrations. Both demonstrations and actual interactions are used to fill a replay buffer and the sampling ratio between demonstrations and transitions is automatically tuned via a prioritized replay mechanism. Typically, carefully engineered shaping rewards are required to enable the agents to efficiently explore on high dimensional control problems such as robotics. They are also required for model-based acceleration methods relying on local solvers such as iLQG (e.g. Guided Policy Search and Normalized Advantage Function). The demonstrations replace the need for carefully engineered rewards, and reduce the exploration problem encountered by classical RL approaches in these domains. Demonstrations are collected by a robot kinesthetically force-controlled by a human demonstrator. Results on four simulated insertion tasks show that DDPG from demonstrations out-performs DDPG, and does not require engineered rewards. Finally, we demonstrate the method on a real robotics task consisting of inserting a clip (flexible object) into a rigid object. | Guided Policy Search @cite_5 aims at finding an optimal policy by decomposing the problem into three steps. First, it uses nominal or expert trajectories, obtained by previous interactions with the environment to learn locally linear approximations of its dynamics. Then, it uses optimal control algorithms such as iLQG or DDP to find the locally linear optimal policies corresponding to these dynamics. Finally, via supervised learning, a neural network is trained to fit the trajectories generated by these policies. Here again, there is a quadratic constraint on the reward that must be purposely shaped. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2104733512"
],
"abstract": [
"Direct policy search can effectively scale to high-dimensional systems, but complex policies with hundreds of parameters often present a challenge for such methods, requiring numerous samples and often falling into poor local optima. We present a guided policy search algorithm that uses trajectory optimization to direct policy learning and avoid poor local optima. We show how differential dynamic programming can be used to generate suitable guiding samples, and describe a regularized importance sampled policy optimization that incorporates these samples into the policy search. We evaluate the method by learning neural network controllers for planar swimming, hopping, and walking, as well as simulated 3D humanoid running."
]
} |
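The DDPGfD mechanism summarized in the abstract repeated in these rows — demonstrations and agent transitions in one replay buffer, with the sampling ratio tuned automatically by prioritization — can be sketched as follows. The priority constants `eps` and `eps_d` are illustrative assumptions, not the paper's reported values:

```python
import random

class MixedReplay:
    """Replay buffer holding demonstration and agent transitions.
    Sampling probability is proportional to a priority built from the
    last TD error, with a bonus eps_d that keeps demonstrations likely."""
    def __init__(self, eps=0.01, eps_d=0.1, seed=0):
        self.data, self.prio = [], []
        self.eps, self.eps_d = eps, eps_d
        self.rng = random.Random(seed)

    def add(self, transition, demo=False, td_error=1.0):
        self.data.append((transition, demo))
        self.prio.append(abs(td_error) + self.eps
                         + (self.eps_d if demo else 0.0))

    def sample(self, k):
        return self.rng.choices(self.data, weights=self.prio, k=k)
```

With equal TD errors, demonstrations are sampled far more often because of `eps_d`; as the agent's own transitions accumulate larger TD errors, the effective ratio shifts without any hand tuning.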
1707.08817 | 2741122588 | We propose a general and model-free approach for Reinforcement Learning (RL) on real robotics with sparse rewards. We build upon the Deep Deterministic Policy Gradient (DDPG) algorithm to use demonstrations. Both demonstrations and actual interactions are used to fill a replay buffer and the sampling ratio between demonstrations and transitions is automatically tuned via a prioritized replay mechanism. Typically, carefully engineered shaping rewards are required to enable the agents to efficiently explore on high dimensional control problems such as robotics. They are also required for model-based acceleration methods relying on local solvers such as iLQG (e.g. Guided Policy Search and Normalized Advantage Function). The demonstrations replace the need for carefully engineered rewards, and reduce the exploration problem encountered by classical RL approaches in these domains. Demonstrations are collected by a robot kinesthetically force-controlled by a human demonstrator. Results on four simulated insertion tasks show that DDPG from demonstrations out-performs DDPG, and does not require engineered rewards. Finally, we demonstrate the method on a real robotics task consisting of inserting a clip (flexible object) into a rigid object. | Normalized Advantage Functions (NAF) @cite_18 with model-based acceleration is a model-free RL algorithm using imagination rollouts coming from a model learned with the previous interactions with the environment or via expert demonstrations. NAF is the natural extension of Q-Learning in the continuous case where the advantage function is parameterized as a quadratic function of non-linear state features. The uni-modal nature of this function allows the maximizing action for the Q-function to be obtained directly as the mean policy. This formulation makes the greedy step of Q-Learning tractable for continuous action domains. 
Then, similarly to GPS, locally linear approximations of the dynamics of the environment are learned and iLQG is used to produce model-guided rollouts to accelerate learning. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2950471160"
],
"abstract": [
"Model-free reinforcement learning has been successfully applied to a range of challenging problems, and has recently been extended to handle large neural network policies and value functions. However, the sample complexity of model-free algorithms, particularly when using high-dimensional function approximators, tends to limit their applicability to physical systems. In this paper, we explore algorithms and representations to reduce the sample complexity of deep reinforcement learning for continuous control tasks. We propose two complementary techniques for improving the efficiency of such algorithms. First, we derive a continuous variant of the Q-learning algorithm, which we call normalized advantage functions (NAF), as an alternative to the more commonly used policy gradient and actor-critic methods. NAF representation allows us to apply Q-learning with experience replay to continuous tasks, and substantially improves performance on a set of simulated robotic control tasks. To further improve the efficiency of our approach, we explore the use of learned models for accelerating model-free reinforcement learning. We show that iteratively refitted local linear models are especially effective for this, and demonstrate substantially faster learning on domains where such models are applicable."
]
} |
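The quadratic-advantage construction described in the NAF row above — Q(s, a) = V(s) + A(s, a) with A maximized in closed form at the mean policy — can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's implementation: the networks producing mu(s), L(s), and V(s) are replaced by hypothetical linear maps (`mu_w`, `l_w`, `v_w`).

```python
import numpy as np

def naf_q(s, a, mu_w, l_w, v_w, act_dim):
    """Q(s, a) = V(s) + A(s, a) with NAF's quadratic advantage.

    A(s, a) = -0.5 * (a - mu(s))^T P(s) (a - mu(s)), P(s) = L(s) L(s)^T
    positive semi-definite, so argmax_a Q(s, a) = mu(s) in closed form.
    """
    mu = mu_w @ s                                      # greedy action mu(s)
    L = np.tril((l_w @ s).reshape(act_dim, act_dim))   # lower-triangular factor
    P = L @ L.T                                        # PSD matrix
    diff = a - mu
    return float(v_w @ s - 0.5 * diff @ P @ diff)

rng = np.random.default_rng(0)
s_dim, a_dim = 4, 2
s = rng.normal(size=s_dim)
mu_w = rng.normal(size=(a_dim, s_dim))
l_w = rng.normal(size=(a_dim * a_dim, s_dim))
v_w = rng.normal(size=s_dim)

mu = mu_w @ s
# No perturbed action can beat the mean policy: the greedy step of
# Q-learning is solved analytically, which is NAF's key point.
for _ in range(100):
    a = mu + rng.normal(scale=0.5, size=a_dim)
    assert naf_q(s, a, mu_w, l_w, v_w, a_dim) <= naf_q(s, mu, mu_w, l_w, v_w, a_dim)
```

The uni-modality of the quadratic advantage is exactly what makes the maximizing action tractable in continuous action spaces.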
1707.08817 | 2741122588 | We propose a general and model-free approach for Reinforcement Learning (RL) on real robotics with sparse rewards. We build upon the Deep Deterministic Policy Gradient (DDPG) algorithm to use demonstrations. Both demonstrations and actual interactions are used to fill a replay buffer and the sampling ratio between demonstrations and transitions is automatically tuned via a prioritized replay mechanism. Typically, carefully engineered shaping rewards are required to enable the agents to efficiently explore on high dimensional control problems such as robotics. They are also required for model-based acceleration methods relying on local solvers such as iLQG (e.g. Guided Policy Search and Normalized Advantage Function). The demonstrations replace the need for carefully engineered rewards, and reduce the exploration problem encountered by classical RL approaches in these domains. Demonstrations are collected by a robot kinesthetically force-controlled by a human demonstrator. Results on four simulated insertion tasks show that DDPG from demonstrations out-performs DDPG, and does not require engineered rewards. Finally, we demonstrate the method on a real robotics task consisting of inserting a clip (flexible object) into a rigid object. | The most similar work to ours is DQfD @cite_3 , which combines Deep Q Networks (DQN) @cite_13 with learning from demonstrations in a similar way to DDPGfD. It additionally adds a supervised loss to keep the agent close to the policy from the demonstrations. However, DQfD is restricted to domains with discrete action spaces and is not applicable to robotics. | {
"cite_N": [
"@cite_13",
"@cite_3"
],
"mid": [
"2145339207",
"2788862220"
],
"abstract": [
"An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.",
"Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN."
]
} |
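The supervised loss DQfD adds on demonstration transitions (described in the abstract above) is a large-margin classification term: the expert action must beat every other action's Q-value by a margin. A minimal numpy sketch — the function name and the margin value are illustrative assumptions, not the paper's code:

```python
import numpy as np

def large_margin_loss(q_values, expert_action, margin=0.8):
    """DQfD-style supervised loss on one demonstration state (sketch).

    J_E(Q) = max_a [Q(s, a) + l(a_E, a)] - Q(s, a_E), where l(a_E, a) is a
    positive margin for every non-expert action and 0 for the expert's.
    The loss vanishes only when the expert action's Q-value exceeds all
    others by at least `margin`; otherwise it pushes Q(s, a_E) upward.
    """
    penalties = np.full(len(q_values), margin)
    penalties[expert_action] = 0.0
    return float(np.max(q_values + penalties) - q_values[expert_action])

# Expert action already dominates by more than the margin: zero loss.
print(large_margin_loss(np.array([5.0, 1.0, 0.5]), expert_action=0))  # 0.0
# Greedy action disagrees with the expert: positive corrective loss.
print(large_margin_loss(np.array([1.0, 2.0, 0.5]), expert_action=0))
```

In DQfD this term is combined with the usual temporal-difference losses, which is what keeps the approach grounded in environment reward rather than pure imitation.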
1707.08945 | 2759471388 | Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier. | Different methods have been proposed to generate adversarial examples in the white-box setting, where the adversary has full access to the classifier @cite_38 @cite_24 @cite_40 @cite_20 @cite_9 @cite_21 @cite_33 . We focus on the white-box setting as well for two reasons: (1) In our chosen autonomous vehicle domain, an attacker can obtain a close approximation of the model by reverse engineering the vehicle's systems using model extraction attacks @cite_11 .
(2) To develop a foundation for future defenses, we must assess the abilities of powerful adversaries, and this can be done in a white-box setting. Given that recent work has examined the black-box transferability of digital adversarial examples @cite_25 , physical black-box attacks may also be possible. | {
"cite_N": [
"@cite_38",
"@cite_33",
"@cite_9",
"@cite_21",
"@cite_24",
"@cite_40",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"1673923490",
"2460937040",
"2949152835",
"9657784",
"1945616565",
"2516574342",
"2408141691",
"2950159395",
"2461943168"
],
"abstract": [
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",
"Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work has assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.",
"Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.",
"In security-sensitive applications, the success of machine learning depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.",
"Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.",
"Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input @math and any target classification @math , it is possible to find a new input @math that is similar to @math but classified as @math . This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from @math to @math . In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with @math probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"Many machine learning models are vulnerable to adversarial examples: inputs that are specially crafted to cause a machine learning model to produce an incorrect output. Adversarial examples that affect one model often affect another model, even if the two models have different architectures or were trained on different training sets, so long as both models were trained to perform the same task. An attacker may therefore train their own substitute model, craft adversarial examples against the substitute, and transfer them to a victim model, with very little information about the victim. Recent work has further developed a technique that uses the victim model as an oracle to label a synthetic training set for the substitute, so the attacker need not even collect a training set to mount the attack. We extend these recent techniques using reservoir sampling to greatly enhance the efficiency of the training procedure for the substitute model. We introduce new transferability attacks between previously unexplored (substitute, victim) pairs of machine learning model classes, most notably SVMs and decision trees. We demonstrate our attacks on two commercial machine learning classification systems from Amazon (96.19% misclassification rate) and Google (88.94%) using only 800 queries of the victim model, thereby showing that existing machine learning approaches are in general vulnerable to systematic black-box attacks regardless of their structure.",
"State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.",
"Machine learning (ML) models may be deemed confidential due to their sensitive training data, commercial value, or use in security applications. Increasingly often, confidential ML models are being deployed with publicly accessible query interfaces. ML-as-a-service (\"predictive analytics\") systems are an example: Some allow users to train models on potentially sensitive data and charge others for access on a pay-per-query basis. The tension between model confidentiality and public access motivates our investigation of model extraction attacks. In such attacks, an adversary with black-box access, but no prior knowledge of an ML model's parameters or training data, aims to duplicate the functionality of (i.e., \"steal\") the model. Unlike in classical learning theory settings, ML-as-a-service offerings may accept partial feature vectors as inputs and include confidence values with predictions. Given these practices, we show simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees. We demonstrate these attacks against the online services of BigML and Amazon Machine Learning. We further show that the natural countermeasure of omitting confidence values from model outputs still admits potentially harmful model extraction attacks. Our results highlight the need for careful ML model deployment and new model extraction countermeasures."
]
} |
1707.08945 | 2759471388 | Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier. | Goodfellow et al. proposed the fast gradient method that applies a first-order approximation of the loss function to construct adversarial samples @cite_24 . Optimization-based methods have also been proposed to create adversarial perturbations for targeted attacks @cite_40 @cite_32 . These methods contribute to understanding digital adversarial examples. By contrast, our work examines physical perturbations on real objects under varying environmental conditions.
| {
"cite_N": [
"@cite_24",
"@cite_40",
"@cite_32"
],
"mid": [
"1945616565",
"2516574342",
"2950864148"
],
"abstract": [
"Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.",
"Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input @math and any target classification @math , it is possible to find a new input @math that is similar to @math but classified as @math . This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from @math to @math . In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with @math probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understanding the transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack this http URL, which is a black-box image classification system."
]
} |
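The fast gradient method referenced in the row above perturbs the input by a small step in the direction of the sign of the loss gradient. A hedged numpy sketch, where `grad_x` stands in for the gradient a real attack would obtain by backpropagating the classifier's loss to the input:

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps=0.1):
    """Fast gradient method sketch: x_adv = x + eps * sign(dLoss/dx)."""
    # grad_x is a placeholder for the loss gradient w.r.t. the input,
    # which in practice comes from backpropagation through the model.
    x_adv = x + eps * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixels in the valid range

x = np.array([0.2, 0.5, 0.95])
g = np.array([-1.3, 0.7, 2.0])
# Each pixel moves by exactly eps in the gradient's sign direction,
# up to clipping at the edges of the valid range.
print(fgsm_perturb(x, g))
```

The first-order approximation makes this a single-step attack, which is why it is fast but typically weaker than iterative optimization-based attacks.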
1707.08945 | 2759471388 | Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier. | Concurrently to our work, Athalye et al. (whose work appeared on arXiv on 30 Oct 2017) improved upon their original attack and created 3D-printed replicas of perturbed objects @cite_6 . The main intellectual differences include: (1) Athalye et al. only use a set of synthetic transformations during optimization, which can miss subtle physical effects, while our work samples from a distribution modeling both physical and synthetic transformations.
(2) Our work modifies true-sized objects. Athalye et al. 3D-print small-scale replicas. (3) Our work simulates realistic testing conditions appropriate to the use-case at hand. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2736899637"
],
"abstract": [
"Standard methods for generating adversarial examples for neural networks do not consistently fool neural network classifiers in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations, limiting their relevance to real-world systems. We demonstrate the existence of robust 3D adversarial objects, and we present the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations. We synthesize two-dimensional adversarial images that are robust to noise, distortion, and affine transformation. We apply our algorithm to complex three-dimensional objects, using 3D-printing to manufacture the first physical adversarial objects. Our results demonstrate the existence of 3D adversarial objects in the physical world."
]
} |
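Both RP2 and Athalye et al.'s attack share one core idea: optimize the perturbation in expectation over a sampled distribution of transformations, so it survives changes in viewing conditions. A toy numpy sketch of that shared idea — the quadratic loss and brightness-scaling transform here are illustrative assumptions, not either paper's actual classifier or transformation set:

```python
import numpy as np

rng = np.random.default_rng(1)

def transform(x, t):
    """Toy stand-in for a physical condition (brightness scaling by t)."""
    return t * x

def loss_grad(x, target):
    """Gradient of a toy quadratic loss 0.5 * ||x - target||^2 w.r.t. x."""
    return x - target

def eot_step(x, delta, target, lr=0.5, n_samples=64):
    """One gradient step on E_t[loss(transform(x + delta, t), target)]."""
    grad = np.zeros_like(delta)
    for _ in range(n_samples):
        t = rng.uniform(0.5, 1.5)  # sample a viewing condition
        # chain rule through transform(x + delta, t) = t * (x + delta)
        grad += t * loss_grad(transform(x + delta, t), target)
    return delta - lr * grad / n_samples

x = np.array([1.0, -0.5])
target = np.zeros_like(x)
delta = np.zeros_like(x)
for _ in range(200):
    delta = eot_step(x, delta, target)

losses = [0.5 * np.sum((transform(x + delta, t) - target) ** 2)
          for t in np.linspace(0.5, 1.5, 11)]
# The optimized perturbation keeps the loss small across the whole
# range of transformations, not just at one fixed condition.
print(max(losses))
```

Optimizing against a single fixed transform instead would leave the loss large at the other sampled conditions, which is exactly the brittleness both papers set out to avoid.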
1707.08945 | 2759471388 | Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier. | Sharif et al. attacked face recognition systems by printing adversarial perturbations on the frames of eyeglasses @cite_39 . Their work demonstrated successful physical attacks in relatively stable physical conditions with little variation in pose, distance/angle from the camera, and lighting. This contributes an interesting understanding of physical examples in stable environments.
However, environmental conditions can vary widely in general and can contribute to reducing the effectiveness of perturbations. Therefore, we choose the inherently unconstrained environment of road-sign classification. In our work, we explicitly design our perturbations to be effective in the presence of diverse physical-world conditions (specifically, large distances/angles and resolution changes). | {
"cite_N": [
"@cite_39"
],
"mid": [
"2535873859"
],
"abstract": [
"Machine learning is enabling a myriad innovations, including new algorithms for cancer diagnosis and self-driving cars. The broad use of machine learning makes it important to understand the extent to which machine-learning algorithms are subject to attack, particularly when used in applications where physical security or safety is at risk. In this paper, we focus on facial biometric systems, which are widely used in surveillance and access control. We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and allow an attacker to evade recognition or impersonate another individual. We develop a systematic method to automatically generate such attacks, which are realized through printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual. Our investigation focuses on white-box face-recognition systems, but we also demonstrate how similar techniques can be used in black-box scenarios, as well as to avoid face detection."
]
} |
1707.08945 | 2759471388 | Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier. | Finally, Lu et al. performed experiments with physical adversarial examples of road sign images against detectors and show current detectors cannot be attacked @cite_23 . In this work, we focus on classifiers to demonstrate the physical attack effectiveness and to highlight their security vulnerability in the real world.
Attacking detectors is out of the scope of this paper, though recent work has generated digital adversarial examples against detection/segmentation algorithms @cite_7 @cite_28 @cite_17 , and our recent work has extended RP2 to attack the YOLO detector @cite_15 . | {
"cite_N": [
"@cite_7",
"@cite_28",
"@cite_23",
"@cite_15",
"@cite_17"
],
"mid": [
"2950774971",
"2738841453",
"2734506812",
"2778115935",
"2609920186"
],
"abstract": [
"It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, generally exist for deep networks to fail on image classification. In this paper, we extend adversarial examples to semantic segmentation and object detection which are much more difficult. Our observation is that both segmentation and detection are based on classifying multiple targets on an image (e.g., the basic target is a pixel or a receptive field in segmentation, and an object proposal in detection), which inspires us to optimize a loss function over a set of pixels/proposals for generating adversarial perturbations. Based on this idea, we propose a novel algorithm named Dense Adversary Generation (DAG), which generates a large family of adversarial examples, and applies to a wide range of state-of-the-art deep networks for segmentation and detection. We also find that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks. In particular, the transferability across networks with the same architecture is more significant than in other cases. Besides, summing up heterogeneous perturbations often leads to better transfer performance, which provides an effective method of black-box adversarial attack.",
"Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to alter the true performance measure of the problem at hand. We introduce a novel flexible approach named Houdini for generating adversarial examples specifically tailored for the final performance measure of the task considered, be it combinatorial and non-decomposable. We successfully apply Houdini to a range of applications such as speech recognition, pose estimation and semantic segmentation. In all cases, the attacks based on Houdini achieve higher success rate than those based on the traditional surrogates used to train the models while using a less perceptible adversarial perturbation.",
"It has been shown that most machine learning algorithms are susceptible to adversarial perturbations. Slightly perturbing an image in a carefully chosen direction in the image space may cause a trained neural network model to misclassify it. Recently, it was shown that physical adversarial examples exist: printing perturbed images then taking pictures of them would still result in misclassification. This raises security and safety concerns. However, these experiments ignore a crucial property of physical objects: the camera can view objects from different distances and at different angles. In this paper, we show experiments that suggest that current constructions of physical adversarial examples do not disrupt object detection from a moving platform. Instead, a trained neural network classifies most of the pictures taken from different distances and angles of a perturbed image correctly. We believe this is because the adversarial property of the perturbation is sensitive to the scale at which the perturbed picture is viewed, so (for example) an autonomous car will misclassify a stop sign only from a small range of distances. Our work raises an important question: can one construct examples that are adversarial for many or most viewing conditions? If so, the construction should offer very significant insights into the internal representation of patterns by deep networks. If not, there is a good prospect that adversarial examples can be reduced to a curiosity with little practical impact.",
"Deep learning has proven to be a powerful tool for computer vision and has seen widespread adoption for numerous tasks. However, deep learning algorithms are known to be vulnerable to adversarial examples. These adversarial inputs are created such that, when provided to a deep learning algorithm, they are very likely to be mislabeled. This can be problematic when deep learning is used to assist in safety critical decisions. Recent research has shown that classifiers can be attacked by physical adversarial examples under various physical conditions. Given the fact that state-of-the-art object detection algorithms are harder to be fooled by the same set of adversarial examples, here we show that these detectors can also be attacked by physical adversarial examples. In this note, we briefly show both static and dynamic test results. We design an algorithm that produces physical adversarial inputs, which can fool the YOLO object detector and can also attack Faster-RCNN with relatively high success rate based on transferability. Furthermore, our algorithm can compress the size of the adversarial inputs to stickers that, when attached to the targeted object, result in the detector either mislabeling or not detecting the object a high percentage of the time. This note provides a small set of results. Our upcoming paper will contain a thorough evaluation on other object detectors, and will present the algorithm.",
"While deep learning is remarkably successful on perceptual tasks, it was also shown to be vulnerable to adversarial perturbations of the input. These perturbations denote noise added to the input that was generated specifically to fool the system while being quasi-imperceptible for humans. More severely, there even exist universal perturbations that are input-agnostic but fool the network on the majority of inputs. While recent work has focused on image classification, this work proposes attacks against semantic image segmentation: we present an approach for generating (universal) adversarial perturbations that make the network yield a desired target segmentation as output. We show empirically that there exist barely perceptible universal noise patterns which result in nearly the same predicted segmentation for arbitrary inputs. Furthermore, we also show the existence of universal noise which removes a target class (e.g., all pedestrians) from the segmentation while leaving the segmentation mostly unchanged otherwise."
]
} |
1707.08783 | 2740643899 | In this work we analyze the performance of two of the most used word embedding algorithms, skip-gram and continuous bag of words, on the Italian language. These algorithms have many hyper-parameters that have to be carefully tuned in order to obtain accurate word representations in vectorial space. We provide an accurate analysis and evaluation, showing which configurations of parameters are best for specific tasks. | However, these optimistic results are not confirmed by more recent studies. Indeed, the performance of word embeddings in the accuracy test is not directly comparable to that obtained in the English language. For example, combining word embeddings in a dependency parser did not yield improvements over a baseline system not using such features. found a $ 47 Similarly, recent work that trained word embeddings on tweets has highlighted some criticalities. One of these aspects is how the morphology of a word is opaque to word embeddings. Indeed, the relatedness of the meaning of a lemma's different word forms, its different string representations, is not systematically encoded. This means that in morphologically rich languages with long-tailed frequency distributions, even some word embedding representations for word forms of common lemmata may become very poor @cite_2 . | {
"cite_N": [
"@cite_2"
],
"mid": [
"1938755728"
],
"abstract": [
"We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60% fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level/morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information."
]
} |
1707.08616 | 2739490129 | In this work we present a technique to use natural language to help reinforcement learning generalize to unseen environments. This technique uses neural machine translation, specifically the use of encoder-decoder networks, to learn associations between natural language behavior descriptions and state-action information. We then use this learned model to guide agent exploration using a modified version of policy shaping to make it more effective at learning in unseen environments. We evaluate this technique using the popular arcade game, Frogger, under ideal and non-ideal conditions. This evaluation shows that our modified policy shaping algorithm improves over a Q-learning agent as well as a baseline version of policy shaping. | This work is primarily related to two bodies of artificial intelligence research: interactive machine learning and knowledge transfer in reinforcement learning. Interactive machine learning (IML) @cite_5 algorithms use knowledge provided by human teachers to help train machine learning models. This allows human experts to help train intelligent agents, thus enabling these agents to learn faster than they would if left to learn on their own. Typically, human teachers interact with the agent either by providing demonstrations of correct behavior @cite_2 or by directly critiquing the agent's behavior @cite_6 @cite_12 @cite_14 . Our work seeks to improve upon these methods by enabling these algorithms to learn from natural language in addition to demonstrations or critique. | {
"cite_N": [
"@cite_14",
"@cite_6",
"@cite_2",
"@cite_5",
"@cite_12"
],
"mid": [
"2290053245",
"1977124739",
"",
"2053910308",
"2098441518"
],
"abstract": [
"In this work we evaluate the performance of a policy shaping algorithm using 26 human teachers. We examine if the algorithm is suitable for human-generated data on two different boards in a pac-man domain, comparing performance to an oracle that provides critique based on one known winning policy. Perhaps surprisingly, we show that the data generated by our 26 participants yields even better performance for the agent than data generated by the oracle. This might be because humans do not discourage exploring multiple winning policies. Additionally, we evaluate the impact of different verbal instructions, and different interpretations of silence, finding that the usefulness of data is affected both by what instructions are given to teachers, and how the data is interpreted.",
"In order to be useful in real-world situations, it is critical to allow non-technical users to train robots. Existing work has considered the problem of a robot or virtual agent learning behaviors from evaluative feedback provided by a human trainer. That work, however, has treated feedback as a numeric reward that the agent seeks to maximize, and has assumed that all trainers will provide feedback in the same way when teaching the same behavior. We report the results of a series of user studies that indicate human trainers use a variety of approaches to providing feedback in practice, which we describe as different “training strategies.” For example, users may not always give explicit feedback in response to an action, and may be more likely to provide explicit reward than explicit punishment, or vice versa. If the trainer is consistent in their strategy, then it may be possible to infer knowledge about the desired behavior from cases where no explicit feedback is provided. We discuss a probabilistic model of human-provided feedback that can be used to classify these different training strategies based on when the trainer chooses to provide explicit reward and or explicit punishment, and when they choose to provide no feedback. Additionally, we investigate how training strategies may change in response to the appearance of the learning agent. Ultimately, based on this work, we argue that learning agents designed to understand and adapt to different users' training strategies will allow more efficient and intuitive learning experiences.",
"",
"Learning from Demonstration (LfD) explores techniques for learning a task policy from examples provided by a human teacher. The field of LfD has grown into an extensive body of literature over the past 30 years, with a wide variety of approaches for encoding human demonstrations and modeling skills and tasks. Additionally, we have recently seen a focus on gathering data from non-expert human teachers (i.e., domain experts but not robotics experts). In this book, we provide an introduction to the field with a focus on the unique technical challenges associated with designing robots that learn from naive human teachers. We begin, in the introduction, with a unification of the various terminology seen in the literature as well as an outline of the design choices one has in designing an LfD system. Chapter 2 gives a brief survey of the psychology literature that provides insights from human social learning that are relevant to designing robotic social learners. Chapter 3 walks through an LfD interaction, surveying the design choices one makes and state of the art approaches in prior work. First, is the choice of input, how the human teacher interacts with the robot to provide demonstrations. Next, is the choice of modeling technique. Currently, there is a dichotomy in the field between approaches that model low-level motor skills and those that model high-level tasks composed of primitive actions. We devote a chapter to each of these. Chapter 7 is devoted to interactive and active learning approaches that allow the robot to refine an existing task model. And finally, Chapter 8 provides best practices for evaluation of LfD systems, with a focus on how to approach experiments with human subjects in this domain.",
"A long term goal of Interactive Reinforcement Learning is to incorporate nonexpert human feedback to solve complex tasks. Some state-of-the-art methods have approached this problem by mapping human information to rewards and values and iterating over them to compute better control policies. In this paper we argue for an alternate, more effective characterization of human feedback: Policy Shaping. We introduce Advise, a Bayesian approach that attempts to maximize the information gained from human feedback by utilizing it as direct policy labels. We compare Advise to state-of-the-art approaches and show that it can outperform them and is robust to infrequent and inconsistent human feedback."
]
} |
1707.08616 | 2739490129 | In this work we present a technique to use natural language to help reinforcement learning generalize to unseen environments. This technique uses neural machine translation, specifically the use of encoder-decoder networks, to learn associations between natural language behavior descriptions and state-action information. We then use this learned model to guide agent exploration using a modified version of policy shaping to make it more effective at learning in unseen environments. We evaluate this technique using the popular arcade game, Frogger, under ideal and non-ideal conditions. This evaluation shows that our modified policy shaping algorithm improves over a Q-learning agent as well as a baseline version of policy shaping. | There has been other work on using natural language to augment machine learning algorithms. There has been much work done on using natural language instructions to help reinforcement learning agents complete tasks more efficiently. Early works in this area focused on learning mappings between these instructions and specific control sequences in learning environments @cite_10 @cite_3 @cite_7 . In this previous work, language is mainly used to instruct how to complete a specific task in a specific environment. In other words, language and state are tightly coupled. The main way that our work differs from this work is that we are seeking to use language as an abstraction tool. Our work focuses on exploring how language can be used to help reinforcement learning agents transfer knowledge to unseen environments. | {
"cite_N": [
"@cite_10",
"@cite_7",
"@cite_3"
],
"mid": [
"2122223050",
"46490633",
"2170329722"
],
"abstract": [
"In this paper, we present a reinforcement learning approach for mapping natural language instructions to sequences of executable actions. We assume access to a reward function that defines the quality of the executed actions. During training, the learner repeatedly constructs action sequences for a set of documents, executes those actions, and observes the resulting reward. We use a policy gradient algorithm to estimate the parameters of a log-linear model for action selection. We apply our method to interpret instructions in two domains --- Windows troubleshooting guides and game tutorials. Our results demonstrate that this method can rival supervised learning techniques while requiring few or no annotated training examples.",
"As robots become more ubiquitous and capable of performing complex tasks, the importance of enabling untrained users to interact with them has increased. In response, unconstrained natural-language interaction with robots has emerged as a significant research area. We discuss the problem of parsing natural language commands to actions and control structures that can be readily implemented in a robot execution system. Our approach learns a parser based on example pairs of English commands and corresponding control language expressions. We evaluate this approach in the context of following route instructions through an indoor environment, and demonstrate that our system can learn to translate English commands into sequences of desired actions, while correctly capturing the semantic intent of statements involving complex control structures. The procedural nature of our formal representation allows a robot to interpret route instructions online while moving through a previously unknown environment.",
"In this paper, we address the task of mapping high-level instructions to sequences of commands in an external environment. Processing these instructions is challenging---they posit goals to be achieved without specifying the steps required to complete them. We describe a method that fills in missing information using an automatically derived environment model that encodes states, transitions, and commands that cause these transitions to happen. We present an efficient approximate approach for learning this environment model as part of a policy-gradient reinforcement learning algorithm for text interpretation. This design enables learning for mapping high-level instructions, which previous statistical methods cannot handle."
]
} |
1707.08616 | 2739490129 | In this work we present a technique to use natural language to help reinforcement learning generalize to unseen environments. This technique uses neural machine translation, specifically the use of encoder-decoder networks, to learn associations between natural language behavior descriptions and state-action information. We then use this learned model to guide agent exploration using a modified version of policy shaping to make it more effective at learning in unseen environments. We evaluate this technique using the popular arcade game, Frogger, under ideal and non-ideal conditions. This evaluation shows that our modified policy shaping algorithm improves over a Q-learning agent as well as a baseline version of policy shaping. | More recent work has examined how language can help reinforcement learning agents in a more environment-agnostic way. For example, work has been done on using high-level task specifications to engineer environment-agnostic reward functions to improve learning @cite_15 . Also, techniques such as sentiment analysis have been used to bias agent exploration to improve learning in unseen environments @cite_11 . Most of these techniques, however, require additional information about the environment, such as descriptions of object types in the environment, that may not always be readily available. Our technique relaxes this requirement by using neural machine translation to learn relationships between natural language action/state descriptions and parts of the state space. | {
"cite_N": [
"@cite_15",
"@cite_11"
],
"mid": [
"2277684984",
"2549514639"
],
"abstract": [
"As intelligent robots become more prevalent, methods to make interaction with the robots more accessible are increasingly important. Communicating the tasks that a person wants the robot to carry out via natural language, and training the robot to ground the natural language through demonstration, are especially appealing approaches for interaction, since they do not require a technical background. However, existing approaches map natural language commands to robot command languages that directly express the sequence of actions the robot should execute. This sequence is often specific to a particular situation and does not generalize to new situations. To address this problem, we present a system that grounds natural language commands into reward functions using demonstrations of different natural language commands being carried out in the environment. Because language is grounded to reward functions, rather than explicit actions that the robot can perform, commands can be high-level, carried out in novel environments autonomously, and even transferred to other robots with different action spaces. We demonstrate that our learned model can be both generalized to novel environments and transferred to a robot with a different action space than the action space used during training.",
"In order for robots to learn from people with no machine learning expertise, robots should learn from natural human instruction. Most machine learning techniques that incorporate explanations require people to use a limited vocabulary and provide state information, even if it is not intuitive. This paper discusses a software agent that learned to play the Mario Bros. game using explanations. Our goals to improve learning from explanations were twofold: 1) to filter explanations into advice and warnings and 2) to learn policies from sentences without state information. We used sentiment analysis to filter explanations into advice of what to do and warnings of what to avoid. We developed object-focused advice to represent what actions the agent should take when dealing with objects. A reinforcement learning agent used object-focused advice to learn policies that maximized its reward. After mitigating false negatives, using sentiment as a filter was approximately 85% accurate. Object-focused advice performed better than when no advice was given, the agent learned where to apply the advice, and the agent could recover from adversarial advice. We also found the method of interaction should be designed to ease the cognitive load of the human teacher or the advice may be of poor quality."
]
} |