aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1708.06720 | 2749425057 | Imagery texts are usually organized as a hierarchy of several visual elements, i.e., characters, words, text lines and text blocks. Among these elements, the character is the most basic one for various languages such as Western languages, Chinese, Japanese and mathematical expressions. It is natural and convenient to construct a common text detection engine based on character detectors. However, training character detectors requires a vast number of location-annotated characters, which are expensive to obtain. In practice, existing real text datasets are mostly annotated at the word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either tight quadrangles or looser bounding boxes, for character detector training. When applied to scene text detection, we are thus able to train a robust character detector by exploiting word annotations in rich large-scale real scene text datasets, e.g., ICDAR15 and COCO-Text. The character detector plays a key role in the pipeline of our text detection engine. It achieves state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline in various scenarios, including deformed text detection and math expression recognition. | Text line based methods directly estimate line models. These methods are widely adopted in the field of document analysis @cite_24 , where article layout provides strong priors. However, they are hard to apply to non-document scenarios. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2220930568"
],
"abstract": [
"The baselines of a document page are a set of virtual horizontal and parallel lines, to which the printed contents of a document, e.g., text lines, tables or inserted photos, are aligned. Accurate baseline extraction is of great importance in the geometric correction of curved document images. In this paper, we propose an efficient method for accurate extraction of these virtual visual cues from a curved document image. Our method comes from two basic observations: that the baselines of documents do not intersect with each other, and that within a narrow strip, the baselines can be well approximated by linear segments. Based upon these observations, we propose a curvilinear projection based method and model the estimation of curved baselines as a constrained sequential optimization problem. A dynamic programming algorithm is then developed to efficiently solve the problem. The proposed method can extract the complete baselines through each pixel of document images with high accuracy. It is also script-insensitive and highly robust to image noise, non-textual objects, varying image resolutions and image quality degradation such as blurring and non-uniform illumination. Extensive experiments on a number of captured document images demonstrate the effectiveness of the proposed method."
]
} |
1708.06720 | 2749425057 | Imagery texts are usually organized as a hierarchy of several visual elements, i.e., characters, words, text lines and text blocks. Among these elements, the character is the most basic one for various languages such as Western languages, Chinese, Japanese and mathematical expressions. It is natural and convenient to construct a common text detection engine based on character detectors. However, training character detectors requires a vast number of location-annotated characters, which are expensive to obtain. In practice, existing real text datasets are mostly annotated at the word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either tight quadrangles or looser bounding boxes, for character detector training. When applied to scene text detection, we are thus able to train a robust character detector by exploiting word annotations in rich large-scale real scene text datasets, e.g., ICDAR15 and COCO-Text. The character detector plays a key role in the pipeline of our text detection engine. It achieves state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline in various scenarios, including deformed text detection and math expression recognition. | Early component or word fragment based methods @cite_4 @cite_21 @cite_42 @cite_31 @cite_32 @cite_20 @cite_35 @cite_13 extract candidate text fragments using manually designed features, e.g., MSER @cite_17 and SWT @cite_33 , and then determine whether the fragments are real text. These methods once led popular competitions on well-focused texts, e.g., ICDAR13 @cite_11 . However, their performance degrades heavily when applied to more challenging scenarios such as ICDAR15 @cite_26 , where texts are captured incidentally. 
Moreover, any texts missed by the manually designed features can never be recalled in the subsequent steps. | {
"cite_N": [
"@cite_35",
"@cite_11",
"@cite_4",
"@cite_33",
"@cite_26",
"@cite_21",
"@cite_42",
"@cite_32",
"@cite_31",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"2065613686",
"",
"2128854450",
"2142159465",
"",
"2148214126",
"117491841",
"1972065312",
"1999284580",
"1935817682",
"2019478948",
"2078997308"
],
"abstract": [
"In this paper, higher-order correlation clustering (HOCC) is used for text line detection in natural images. We treat text line detection as a graph partitioning problem, where each vertex is represented by a Maximally Stable Extremal Region (MSER). First, weak hypotheses are proposed by coarsely grouping MSERs based on their spatial alignment and appearance consistency. Then, higher-order correlation clustering (HOCC) is used to partition the MSERs into text line candidates, using the hypotheses as soft constraints to enforce long range interactions. We further propose a regularization method to solve the Semidefinite Programming problem in the inference. Finally, we use a simple texton-based texture classifier to filter out the non-text areas. This framework allows us to naturally handle multiple orientations, languages and fonts. Experiments show that our approach achieves competitive performance compared to the state of the art.",
"",
"In this paper, we present a new approach for text localization in natural images, by discriminating text and non-text regions at three levels: pixel, component and text line levels. Firstly, a powerful low-level filter called the Stroke Feature Transform (SFT) is proposed, which extends the widely-used Stroke Width Transform (SWT) by incorporating color cues of text pixels, leading to significantly enhanced performance on inter-component separation and intra-component connection. Secondly, based on the output of SFT, we apply two classifiers, a text component classifier and a text-line classifier, sequentially to extract text regions, eliminating the heuristic procedures that are commonly used in previous approaches. The two classifiers are built upon two novel Text Covariance Descriptors (TCDs) that encode both the heuristic properties and the statistical characteristics of text strokes. Finally, text regions are located by simply thresholding the text-line confidence map. Our method was evaluated on two benchmark datasets: ICDAR 2005 and ICDAR 2011, and the corresponding F-measure values are 0.72 and 0.73, respectively, surpassing previous methods in accuracy by a large margin.",
"We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages.",
"",
"Text detection in natural scene images is an important prerequisite for many content-based image analysis tasks. In this paper, we propose an accurate and robust method for detecting texts in natural scene images. A fast and effective pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSERs) as character candidates using the strategy of minimizing regularized variations. Character candidates are grouped into text candidates by the single-link clustering algorithm, where distance weights and clustering threshold are learned automatically by a novel self-training distance metric learning algorithm. The posterior probabilities of text candidates corresponding to non-text are estimated with a character classifier; text candidates with high non-text probabilities are eliminated and texts are identified with a text classifier. The proposed system is evaluated on the ICDAR 2011 Robust Reading Competition database; the f-measure is over 76%, much better than the state-of-the-art performance of 71%. Experiments on multilingual, street view, multi-orientation and even born-digital databases also demonstrate the effectiveness of the proposed method.",
"Maximally Stable Extremal Regions (MSERs) have achieved great success in scene text detection. However, this low-level pixel operation inherently limits its capability for handling complex text information efficiently (e.g. connections between text or background components), leading to the difficulty in distinguishing texts from background components. In this paper, we propose a novel framework to tackle this problem by leveraging the high capability of convolutional neural network (CNN). In contrast to recent methods using a set of low-level heuristic features, the CNN network is capable of learning high-level features to robustly identify text components from text-like outliers (e.g. bikes, windows, or leaves). Our approach takes advantages of both MSERs and sliding-window based methods. The MSERs operator dramatically reduces the number of windows scanned and enhances detection of the low-quality texts. While the sliding-window with CNN is applied to correctly separate the connections of multiple characters in components. The proposed system achieved strong robustness against a number of extreme text variations and serious real-world problems. It was evaluated on the ICDAR 2011 benchmark dataset, and achieved over 78% in F-measure, which is significantly higher than previous methods.",
"With the increasing popularity of practical vision systems and smart phones, text detection in natural scenes becomes a critical yet challenging task. Most existing methods have focused on detecting horizontal or near-horizontal texts. In this paper, we propose a system which detects texts of arbitrary orientations in natural images. Our algorithm is equipped with a two-level classification scheme and two sets of features specially designed for capturing both the intrinsic characteristics of texts. To better evaluate our algorithm and compare it with other competing algorithms, we generate a new dataset, which includes various texts in diverse real-world scenarios; we also propose a protocol for performance evaluation. Experiments on benchmark datasets and the proposed dataset demonstrate that our algorithm compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of arbitrary orientations in complex natural scenes.",
"Text in an image provides vital information for interpreting its contents, and text in a scene can aid a variety of tasks from navigation to obstacle avoidance and odometry. Despite its value, however, detecting general text in images remains a challenging research problem. Motivated by the need to consider the widely varying forms of natural text, we propose a bottom-up approach to the problem, which reflects the characterness of an image region. In this sense, our approach mirrors the move from saliency detection methods to measures of objectness. In order to measure the characterness, we develop three novel cues that are tailored for character detection and a Bayesian method for their integration. Because text is made up of sets of characters, we then design a Markov random field model so as to exploit the inherent dependencies between characters. We experimentally demonstrate the effectiveness of our characterness cues as well as the advantage of Bayesian multicue integration. The proposed text detector outperforms state-of-the-art methods on a few benchmark scene text detection data sets. We also show that our measurement of characterness is superior to state-of-the-art saliency detection models when applied to the same task.",
"Recently, a variety of real-world applications have triggered huge demand for techniques that can extract textual information from natural scenes. Therefore, scene text detection and recognition have become active research topics in computer vision. In this work, we investigate the problem of scene text detection from an alternative perspective and propose a novel algorithm for it. Different from traditional methods, which mainly make use of the properties of single characters or strokes, the proposed algorithm exploits the symmetry property of character groups and allows for direct extraction of text lines from natural images. The experiments on the latest ICDAR benchmarks demonstrate that the proposed algorithm achieves state-of-the-art performance. Moreover, compared to conventional approaches, the proposed algorithm shows stronger adaptability to texts in challenging scenarios.",
"Text detection in natural scene images is an important prerequisite for many content-based image analysis tasks, while most current research efforts only focus on horizontal or near horizontal scene text. In this paper, first we present a unified distance metric learning framework for adaptive hierarchical clustering, which can simultaneously learn similarity weights (to adaptively combine different feature similarities) and the clustering threshold (to automatically determine the number of clusters). Then, we propose an effective multi-orientation scene text detection system, which constructs text candidates by grouping characters based on this adaptive clustering. Our text candidates construction method consists of several sequential coarse-to-fine grouping steps: morphology-based grouping via single-link clustering, orientation-based grouping via divisive hierarchical clustering, and projection-based grouping also via divisive clustering. The effectiveness of our proposed system is evaluated on several public scene text databases, e.g., ICDAR Robust Reading Competition data sets (2011 and 2013), MSRA-TD500 and NEOCR. Specifically, on the multi-orientation text data set MSRA-TD500, the @math measure of our system is @math percent, much better than the state-of-the-art performance. We also construct and release a practical challenging multi-orientation scene text data set (USTB-SV1K), which is available at http://prir.ustb.edu.cn/TexStar/MOMV-text-detection .",
"Detecting text in natural images is an important prerequisite. In this paper, we propose a novel text detection algorithm, which employs edge-enhanced Maximally Stable Extremal Regions as basic letter candidates. These candidates are then filtered using geometric and stroke width information to exclude non-text objects. Letters are paired to identify text lines, which are subsequently separated into words. We evaluate our system using the ICDAR competition dataset and our mobile document database. The experimental results demonstrate the excellent performance of the proposed method."
]
} |
1708.06720 | 2749425057 | Imagery texts are usually organized as a hierarchy of several visual elements, i.e., characters, words, text lines and text blocks. Among these elements, the character is the most basic one for various languages such as Western languages, Chinese, Japanese and mathematical expressions. It is natural and convenient to construct a common text detection engine based on character detectors. However, training character detectors requires a vast number of location-annotated characters, which are expensive to obtain. In practice, existing real text datasets are mostly annotated at the word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either tight quadrangles or looser bounding boxes, for character detector training. When applied to scene text detection, we are thus able to train a robust character detector by exploiting word annotations in rich large-scale real scene text datasets, e.g., ICDAR15 and COCO-Text. The character detector plays a key role in the pipeline of our text detection engine. It achieves state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline in various scenarios, including deformed text detection and math expression recognition. | Recently, some component based methods @cite_50 @cite_45 @cite_23 @cite_0 @cite_51 attempt to learn text components via CNN feature learning. The components are either representative pixels @cite_50 @cite_45 @cite_23 or segment boxes @cite_0 @cite_51 . These methods can learn from word annotations. In addition, a text component is also a basic visual element, which may likewise benefit a common text detection engine. 
Nevertheless, our method has advantages over these methods in the following aspects: first, characters provide stronger cues, e.g., character scales and center locations, for the subsequent text structure analysis module; second, a character is a semantic element, while a component is not, so our method is applicable to problems where direct character recognition is needed, e.g., math expression recognition; third, our method can utilize loose word annotations for training, e.g., the bounding box annotations in the COCO-Text dataset @cite_27 . This is because our method can refine character center labels during training, whereas the above component based methods rely on fixed noisy labels, which may harm training. | {
"cite_N": [
"@cite_0",
"@cite_27",
"@cite_45",
"@cite_23",
"@cite_50",
"@cite_51"
],
"mid": [
"2519818067",
"2253806798",
"2464918637",
"2333563142",
"2952365771",
"2950143680"
],
"abstract": [
"We propose a novel Connectionist Text Proposal Network (CTPN) that accurately localizes text lines in natural images. The CTPN detects a text line in a sequence of fine-scale text proposals directly in convolutional feature maps. We develop a vertical anchor mechanism that jointly predicts the location and text/non-text score of each fixed-width proposal, considerably improving localization accuracy. The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model. This allows the CTPN to explore rich context information of the image, making it powerful to detect extremely ambiguous text. The CTPN works reliably on multi-scale and multi-language text without further post-processing, departing from previous bottom-up methods requiring multi-step post filtering. It achieves 0.88 and 0.61 F-measure on the ICDAR 2013 and 2015 benchmarks, surpassing recent results [8, 35] by a large margin. The CTPN is computationally efficient with 0.14 s/image, by using the very deep VGG16 model [27]. Online demo is available at http://textdet.com .",
"This paper describes the COCO-Text dataset. In recent years large-scale datasets like SUN and Imagenet drove the advancement of scene understanding and object recognition. The goal of COCO-Text is to advance state-of-the-art in text detection and recognition in natural images. The dataset is based on the MS COCO dataset, which contains images of complex everyday scenes. The images were not collected with text in mind and thus contain a broad variety of text instances. To reflect the diversity of text in natural scenes, we annotate text with (a) location in terms of a bounding box, (b) fine-grained classification into machine printed text and handwritten text, (c) classification into legible and illegible text, (d) script of the text and (e) transcriptions of legible text. The dataset contains over 173k text annotations in over 63k images. We provide a statistical analysis of the accuracy of our annotations. In addition, we present an analysis of three leading state-of-the-art photo Optical Character Recognition (OCR) approaches on our dataset. While scene text detection and recognition enjoy strong advances in recent years, we identify significant shortcomings motivating future work.",
"Recently, scene text detection has become an active research topic in computer vision and document analysis, because of its great importance and significant challenge. However, the vast majority of the existing methods detect text within local regions, typically through extracting character, word or line level candidates followed by candidate aggregation and false positive elimination, which potentially exclude the effect of wide-scope and long-range contextual cues in the scene. To take full advantage of the rich information available in the whole natural image, we propose to localize text in a holistic manner, by casting scene text detection as a semantic segmentation problem. The proposed algorithm directly runs on full images and produces global, pixel-wise prediction maps, in which detections are subsequently formed. To better make use of the properties of text, three types of information regarding text region, individual characters and their relationship are estimated, with a single Fully Convolutional Network (FCN) model. With such predictions of text properties, the proposed algorithm can simultaneously handle horizontal, multi-oriented and curved text in real-world natural images. The experiments on standard benchmarks, including ICDAR 2013, ICDAR 2015 and MSRA-TD500, demonstrate that the proposed algorithm substantially outperforms previous state-of-the-art approaches. Moreover, we report the first baseline result on the recently-released, large-scale dataset COCO-Text.",
"We introduce a new top-down pipeline for scene text detection. We propose a novel Cascaded Convolutional Text Network (CCTN) that joins two customized convolutional networks for coarse-to-fine text localization. The CCTN quickly detects text regions roughly from a low-resolution image, and then accurately localizes text lines from each enlarged region. We cast previous character based detection into direct text region estimation, avoiding multiple bottom-up post-processing steps. It exhibits surprising robustness and discriminative power by considering the whole text region as the detection object, which provides strong semantic information. We customize the convolutional network by developing rectangle convolutions and multiple in-network fusions. This enables it to handle multi-shape and multi-scale text efficiently. Furthermore, the CCTN is computationally efficient by sharing convolutional computations, and its high-level property allows it to be invariant to various languages and multiple orientations. It achieves 0.84 and 0.86 F-measures on the ICDAR 2011 and ICDAR 2013, delivering substantial improvements over state-of-the-art results [23, 1].",
"In this paper, we propose a novel approach for text detection in natural images. Both local and global cues are taken into account for localizing text lines in a coarse-to-fine procedure. First, a Fully Convolutional Network (FCN) model is trained to predict the salient map of text regions in a holistic manner. Then, text line hypotheses are estimated by combining the salient map and character components. Finally, another FCN classifier is used to predict the centroid of each character, in order to remove the false hypotheses. The framework is general for handling text in multiple orientations, languages and fonts. The proposed method consistently achieves the state-of-the-art performance on three text detection benchmarks: MSRA-TD500, ICDAR2015 and ICDAR2013.",
"Most state-of-the-art text detection methods are specific to horizontal Latin text and are not fast enough for real-time applications. We introduce Segment Linking (SegLink), an oriented text detection method. The main idea is to decompose text into two locally detectable elements, namely segments and links. A segment is an oriented box covering a part of a word or text line; A link connects two adjacent segments, indicating that they belong to the same word or text line. Both elements are detected densely at multiple scales by an end-to-end trained, fully-convolutional neural network. Final detections are produced by combining segments connected by links. Compared with previous methods, SegLink improves along the dimensions of accuracy, speed, and ease of training. It achieves an f-measure of 75.0 on the standard ICDAR 2015 Incidental (Challenge 4) benchmark, outperforming the previous best by a large margin. It runs at over 20 FPS on 512x512 images. Moreover, without modification, SegLink is able to detect long lines of non-Latin text, such as Chinese."
]
} |
1708.06720 | 2749425057 | Imagery texts are usually organized as a hierarchy of several visual elements, i.e., characters, words, text lines and text blocks. Among these elements, the character is the most basic one for various languages such as Western languages, Chinese, Japanese and mathematical expressions. It is natural and convenient to construct a common text detection engine based on character detectors. However, training character detectors requires a vast number of location-annotated characters, which are expensive to obtain. In practice, existing real text datasets are mostly annotated at the word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either tight quadrangles or looser bounding boxes, for character detector training. When applied to scene text detection, we are thus able to train a robust character detector by exploiting word annotations in rich large-scale real scene text datasets, e.g., ICDAR15 and COCO-Text. The character detector plays a key role in the pipeline of our text detection engine. It achieves state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline in various scenarios, including deformed text detection and math expression recognition. | A fully convolutional neural network is adopted, which has shown good performance on general object detection, e.g., SSD @cite_52 and DenseBox @cite_5 . Nevertheless, to apply it to characters, several factors need to be taken into account. First, characters may vary greatly in size across images; some characters may be very small, e.g., @math pixels in a 1-megapixel image. Second, texts may appear in very different scenarios, such as captured documents, street scenes and advertising posters, so the backgrounds span a large distribution. | {
"cite_N": [
"@cite_5",
"@cite_52"
],
"mid": [
"2129987527",
"2193145675"
],
"abstract": [
"How can a single fully convolutional neural network (FCN) perform on object detection? We introduce DenseBox, a unified end-to-end FCN framework that directly predicts bounding boxes and object class confidences through all locations and scales of an image. Our contribution is two-fold. First, we show that a single FCN, if designed and optimized carefully, can detect multiple different objects extremely accurately and efficiently. Second, we show that when incorporating landmark localization during multi-task learning, DenseBox further improves object detection accuracy. We present experimental results on public benchmark datasets including MALF face detection and KITTI car detection, that indicate our DenseBox is the state-of-the-art system for detecting challenging objects such as faces and cars.",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on an Nvidia Titan X, and for 512×512 input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd ."
]
} |
1708.06720 | 2749425057 | Imagery texts are usually organized as a hierarchy of several visual elements, i.e., characters, words, text lines and text blocks. Among these elements, the character is the most basic one for various languages such as Western languages, Chinese, Japanese and mathematical expressions. It is natural and convenient to construct a common text detection engine based on character detectors. However, training character detectors requires a vast number of location-annotated characters, which are expensive to obtain. In practice, existing real text datasets are mostly annotated at the word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either tight quadrangles or looser bounding boxes, for character detector training. When applied to scene text detection, we are thus able to train a robust character detector by exploiting word annotations in rich large-scale real scene text datasets, e.g., ICDAR15 and COCO-Text. The character detector plays a key role in the pipeline of our text detection engine. It achieves state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline in various scenarios, including deformed text detection and math expression recognition. | To ease the background variation problem, we adopt a two-level hard negative example mining approach for training. The first level is online hard negative mining @cite_52 : all positives are used for loss computation, while among the negatives only the top-scoring ones are used, such that the ratio of negatives to positives is at most @math . The second level is hard patch mining: during training, we test all training images every @math iterations with the current character model to find false positives, which are then more likely to be sampled in the subsequent mini-batch sampling procedure. | {
"cite_N": [
"@cite_52"
],
"mid": [
"2193145675"
],
"abstract": [
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on an Nvidia Titan X, and for 512×512 input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd ."
]
} |
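The two-level mining procedure described in this row's related-work column can be sketched in a few lines. Below is a minimal NumPy sketch of the first level (online hard negative mining with a capped negative:positive ratio); the function name, arguments, and default ratio are my own choices, not the authors' implementation.

```python
import numpy as np

def hard_negative_mining(scores, labels, neg_pos_ratio=3):
    """Select all positives and the highest-scoring negatives so that the
    negative:positive ratio is at most `neg_pos_ratio`.
    Returns a boolean mask over the candidates to use in the loss."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos_mask = labels == 1
    num_pos = int(pos_mask.sum())
    num_neg = min(int((labels == 0).sum()), neg_pos_ratio * max(num_pos, 1))
    # Rank negatives by confidence score, hardest (highest-scored) first.
    neg_scores = np.where(labels == 0, scores, -np.inf)
    hardest = np.argsort(-neg_scores)[:num_neg]
    mask = pos_mask.copy()
    mask[hardest] = True
    return mask
```

The second level (hard patch mining) would then periodically rerun the detector on the training set and upweight patches containing the surviving false positives in subsequent mini-batch sampling.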
1708.06499 | 2749629079 | The efficiency of a game is typically quantified by the price of anarchy (PoA), defined as the worst ratio of the objective function value of an equilibrium --- solution of the game --- and that of an optimal outcome. Given the tremendous impact of tools from mathematical programming in the design of algorithms and the similarity of the price of anarchy and different measures such as the approximation and competitive ratios, it is intriguing to develop a duality-based method to characterize the efficiency of games. In the paper, we present an approach based on linear programming duality to study the efficiency of games. We show that the approach provides a general recipe to analyze the efficiency of games and also to derive concepts leading to improvements. The approach is particularly appropriate to bound the PoA. Specifically, in our approach the dual programs naturally lead to competitive PoA bounds that are (almost) optimal for several classes of games. The approach indeed captures the smoothness framework and also some current non-smooth techniques and concepts. We show the applicability to a wide variety of games and environments, from congestion games to Bayesian welfare, from full-information settings to incomplete-information ones. | For the problems studied in the paper, we systematically strengthen natural LPs by constructing the new configuration LPs presented in @cite_11 . propose a scheme that consists of solving the new LPs (with an exponential number of variables) and rounding the fractional solutions to integer ones using decoupling inequalities for optimization problems. Instead of rounding techniques, we consider primal-dual approaches, which are well suited to studying game efficiency. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2009636784"
],
"abstract": [
"We present a new framework for solving optimization problems with a diseconomy of scale. In such problems, our goal is to minimize the cost of resources used to perform a certain task. The cost of resources grows superlinearly, as x^q, q ≥ 1, with the amount x of resources used. We define a novel linear programming relaxation for such problems, and then show that the integrality gap of the relaxation is A_q, where A_q is the q-th moment of the Poisson random variable with parameter 1. Using our framework, we obtain approximation algorithms for the Minimum Energy Efficient Routing, Minimum Degree Balanced Spanning Tree, Load Balancing on Unrelated Parallel Machines, and Unrelated Parallel Machine Scheduling with Nonlinear Functions of Completion Times problems. Our analysis relies on the decoupling inequality for nonnegative random variables. The inequality states that ||Σ_{i=1}^n X_i||_q ≤ C_q ||Σ_{i=1}^n Y_i||_q, where X_i are independent nonnegative random variables, Y_i are possibly dependent nonnegative random variables, and each Y_i has the same distribution as X_i. The inequality was proved by de la Peña in 1990. However, the optimal constant C_q was not known. We show that the optimal constant is C_q = A_q^{1/q}."
]
} |
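The constant A_q appearing in the cited abstract, the q-th moment of a Poisson(1) random variable, equals the q-th Bell number by Dobinski's formula and is easy to evaluate numerically. A small sketch (the truncation length `terms` is an arbitrary choice of mine):

```python
import math

def poisson1_moment(q, terms=60):
    """q-th moment of a Poisson random variable with parameter 1:
    E[X^q] = sum_{k>=0} k^q * e^{-1} / k!
    By Dobinski's formula, this equals the q-th Bell number."""
    return sum(k**q * math.exp(-1) / math.factorial(k) for k in range(terms))
```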
1708.06499 | 2749629079 | The efficiency of a game is typically quantified by the price of anarchy (PoA), defined as the worst ratio of the objective function value of an equilibrium --- solution of the game --- and that of an optimal outcome. Given the tremendous impact of tools from mathematical programming in the design of algorithms and the similarity of the price of anarchy and different measures such as the approximation and competitive ratios, it is intriguing to develop a duality-based method to characterize the efficiency of games. In the paper, we present an approach based on linear programming duality to study the efficiency of games. We show that the approach provides a general recipe to analyze the efficiency of games and also to derive concepts leading to improvements. The approach is particularly appropriate to bound the PoA. Specifically, in our approach the dual programs naturally lead to competitive PoA bounds that are (almost) optimal for several classes of games. The approach indeed captures the smoothness framework and also some current non-smooth techniques and concepts. We show the applicability to a wide variety of games and environments, from congestion games to Bayesian welfare, from full-information settings to incomplete-information ones. | The smoothness framework has been introduced by . This simple, elegant framework gives tight bounds for many classes of games in complete-information settings, including the celebrated atomic congestion games (and others in @cite_19 @cite_26 ). A similar notion, local smoothness @cite_14 , inspired by the smoothness argument, has been used to study the PoA of splittable games, in which players can split their flow into arbitrarily small amounts and route these amounts in different manners. Local smoothness is also powerful. It has been used to settle the PoA for a large class of cost functions in splittable games @cite_14 and in opinion formation games @cite_6 . | {
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_26",
"@cite_6"
],
"mid": [
"2294025081",
"2148042342",
"2019543382",
""
],
"abstract": [
"The price of anarchy, defined as the ratio of the worst-case objective function value of a Nash equilibrium of a game and that of an optimal outcome, quantifies the inefficiency of selfish behavior. Remarkably good bounds on this measure are known for a wide range of application domains. However, such bounds are meaningful only if a game's participants successfully reach a Nash equilibrium. This drawback motivates inefficiency bounds that apply more generally to weaker notions of equilibria, such as mixed Nash equilibria and correlated equilibria, and to sequences of outcomes generated by natural experimentation strategies, such as successive best responses and simultaneous regret-minimization. We establish a general and fundamental connection between the price of anarchy and its seemingly more general relatives. First, we identify a “canonical sufficient condition” for an upper bound on the price of anarchy of pure Nash equilibria, which we call a smoothness argument. Second, we prove an “extension theorem”: every bound on the price of anarchy that is derived via a smoothness argument extends automatically, with no quantitative degradation in the bound, to mixed Nash equilibria, correlated equilibria, and the average objective function value of every outcome sequence generated by no-regret learners. Smoothness arguments also have automatic implications for the inefficiency of approximate equilibria, for bicriteria bounds, and, under additional assumptions, for polynomial-length best-response sequences. Third, we prove that in congestion games, smoothness arguments are “complete” in a proof-theoretic sense: despite their automatic generality, they are guaranteed to produce optimal worst-case upper bounds on the price of anarchy.",
"Congestion games are multi-player games in which players' costs are additive over a set of resources that have anonymous cost functions, with pure strategies corresponding to certain subsets of resources. In a splittable congestion game, each player can choose a convex combination of subsets of resources. We characterize the worst-case price of anarchy — a quantitative measure of the inefficiency of equilibria — in splittable congestion games. Our approximation guarantee is parameterized by the set of allowable resource cost functions, and degrades with the “degree of nonlinearity” of these cost functions. We prove that our guarantee is the best possible for every set of cost functions that satisfies mild technical conditions. We prove our guarantee using a novel “local smoothness” proof framework, and as a consequence the guarantee applies not only to the Nash equilibria of splittable congestion games, but also to all correlated equilibria.",
"We characterize the Price of Anarchy (POA) in weighted congestion games, as a function of the allowable resource cost functions. Our results provide as thorough an understanding of this quantity as is already known for nonatomic and unweighted congestion games, and take the form of universal (cost function-independent) worst-case examples. One noteworthy by-product of our proofs is the fact that weighted congestion games are “tight,” which implies that the worst-case price of anarchy with respect to pure Nash equilibria, mixed Nash equilibria, correlated equilibria, and coarse correlated equilibria are always equal (under mild conditions on the allowable cost functions). Another is the fact that, like nonatomic but unlike atomic (unweighted) congestion games, weighted congestion games with trivial structure already realize the worst-case POA, at least for polynomial cost functions. We also prove a new result about unweighted congestion games: the worst-case price of anarchy in symmetric games is as large as in their more general asymmetric counterparts.",
""
]
} |
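The smoothness framework discussed in this row yields PoA bounds mechanically: a (λ, μ)-smooth game has robust price of anarchy at most λ/(1−μ). A minimal sketch of that formula; the test values λ=5/3, μ=1/3 correspond to the well-known smoothness parameters of affine atomic congestion games, giving the bound 5/2.

```python
def smooth_poa_bound(lam, mu):
    """Robust price-of-anarchy bound for a (lam, mu)-smooth game:
    PoA <= lam / (1 - mu), valid whenever mu < 1."""
    if mu >= 1:
        raise ValueError("smoothness requires mu < 1")
    return lam / (1.0 - mu)
```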
1708.06499 | 2749629079 | The efficiency of a game is typically quantified by the price of anarchy (PoA), defined as the worst ratio of the objective function value of an equilibrium --- solution of the game --- and that of an optimal outcome. Given the tremendous impact of tools from mathematical programming in the design of algorithms and the similarity of the price of anarchy and different measures such as the approximation and competitive ratios, it is intriguing to develop a duality-based method to characterize the efficiency of games. In the paper, we present an approach based on linear programming duality to study the efficiency of games. We show that the approach provides a general recipe to analyze the efficiency of games and also to derive concepts leading to improvements. The approach is particularly appropriate to bound the PoA. Specifically, in our approach the dual programs naturally lead to competitive PoA bounds that are (almost) optimal for several classes of games. The approach indeed captures the smoothness framework and also some current non-smooth techniques and concepts. We show the applicability to a wide variety of games and environments, from congestion games to Bayesian welfare, from full-information settings to incomplete-information ones. | Linear programming (and mathematical programming in general) has been a powerful tool in the development of game theory. There is a vast literature on this subject and we can only mention the works most closely related to this paper. One of the most interesting recent treatments of the role of linear programming in game theory is the book @cite_9 . revisited fundamental results in mechanism design in an elegant manner by means of linear programming and its duality. It is surprising to see that many results have been shaped nicely by LPs. | {
"cite_N": [
"@cite_9"
],
"mid": [
"655257519"
],
"abstract": [
"1. Introduction 2. Arrow's theorem and its consequences 3. Network flow problem 4. Incentive compatibility 5. Efficiency 6. Revenue maximization 7. Rationalizability."
]
} |
1708.06678 | 2750471929 | We develop a new approach to learn the parameters of regression models with hidden variables. In a nutshell, we estimate the gradient of the regression function at a set of random points, and cluster the estimated gradients. The centers of the clusters are used as estimates for the parameters of hidden units. We justify this approach by studying a toy model, whereby the regression function is a linear combination of sigmoids. We prove that indeed the estimated gradients concentrate around the parameter vectors of the hidden units, and provide non-asymptotic bounds on the number of required samples. To the best of our knowledge, no comparable guarantees have been proven for linear combinations of sigmoids. | Typical approaches to learning mixtures, like EM, come with no guarantees and suffer from convergence to local minima. Providing guarantees for even the idealized case of learning mixtures of Gaussians is non-trivial, and has been the subject of several recent studies . There are relatively few rigorous results that guarantee learning for regression models with latent variables. @cite_3 consider mixtures of linear regressions. In this setting, they show that regressing the response from second- and third-order tensors of the covariates yields coefficients, themselves higher-order tensors, whose decomposition reveals the model parameters. A different approach, relying only on the second-order tensor (i.e., the covariance) and alternating minimization, is followed by @cite_7 , for a mixture composed of two linear models in the absence of noise; the same setting, in the presence of noise, is studied by @cite_10 . None of these approaches can be applied to our model: our components are non-linear (sigmoids), while the above works focus on linear components; moreover, both @cite_7 and @cite_10 limit their analysis to @math hidden units. | {
"cite_N": [
"@cite_10",
"@cite_7",
"@cite_3"
],
"mid": [
"2964207716",
"2949960673",
"2953030775"
],
"abstract": [
"We consider the mixed regression problem with two components, under adversarial and stochastic noise. We give a convex optimization formulation that provably recovers the true solution, and provide upper bounds on the recovery errors for both arbitrary noise and stochastic noise settings. We also give matching minimax lower bounds (up to log factors), showing that under certain assumptions, our algorithm is information-theoretically optimal. Our results represent the first tractable algorithm guaranteeing successful recovery with tight bounds on recovery errors and sample complexity.",
"Mixed linear regression involves the recovery of two (or more) unknown vectors from unlabeled linear measurements; that is, where each sample comes from exactly one of the vectors, but we do not know which one. It is a classic problem, and the natural and empirically most popular approach to its solution has been the EM algorithm. As in other settings, this is prone to bad local minima; however, each iteration is very fast (alternating between guessing labels, and solving with those labels). In this paper we provide a new initialization procedure for EM, based on finding the leading two eigenvectors of an appropriate matrix. We then show that with this, a re-sampled version of the EM algorithm provably converges to the correct vectors, under natural assumptions on the sampling distribution, and with nearly optimal (unimprovable) sample complexity. This provides not only the first characterization of EM's performance, but also much lower sample complexity as compared to both standard (randomly initialized) EM, and other methods for this problem.",
"Discriminative latent-variable models are typically learned using EM or gradient-based optimization, which suffer from local optima. In this paper, we develop a new computationally efficient and provably consistent estimator for a mixture of linear regressions, a simple instance of a discriminative latent-variable model. Our approach relies on a low-rank linear regression to recover a symmetric tensor, which can be factorized into the parameters using a tensor power method. We prove rates of convergence for our estimator and provide an empirical evaluation illustrating its strengths relative to local optimization (EM)."
]
} |
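The toy model in this row, a linear combination of sigmoids, has gradients with the simple structure ∇f(x) = Σ_i σ'(w_i·x) w_i that the proposed gradient-clustering method exploits: each estimated gradient is a nonnegative combination of the hidden-unit vectors w_i. The sketch below (weights and test point are my own illustrative choices) just verifies this analytic gradient against finite differences.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def f(x, W):
    """Regression function of the toy model: f(x) = sum_i sigmoid(w_i . x),
    where the rows of W are the hidden-unit parameter vectors w_i."""
    return sigmoid(W @ x).sum()

def grad_f(x, W):
    """Analytic gradient: sum_i sigmoid'(w_i . x) * w_i, with sigmoid' = s(1 - s).
    Gradients of this form, estimated at random points, are what get clustered
    to recover (scaled versions of) the w_i."""
    s = sigmoid(W @ x)
    return (s * (1 - s)) @ W
```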
1708.06520 | 2750125311 | The amount of content on online music streaming platforms is immense, and most users only access a tiny fraction of this content. Recommender systems are the application of choice to open up the collection to these users. Collaborative filtering has the disadvantage that it relies on explicit ratings, which are often unavailable, and generally disregards the temporal nature of music consumption. On the other hand, item co-occurrence algorithms, such as the recently introduced word2vec-based recommenders, are typically left without an effective user representation. In this paper, we present a new approach to model users through recurrent neural networks by sequentially processing consumed items, represented by any type of embeddings and other context features. This way we obtain semantically rich user representations, which capture a user’s musical taste over time. Our experimental analysis on large-scale user data shows that our model can be used to predict future songs a user will likely listen to, both in the short and long term. | Very recently there have been research efforts in using RNNs for item recommendation. @cite_11 use RNNs to recommend items by predicting the next item interaction. The authors use one-hot item encodings as input and produce scores for every item in the catalog, on which a ranking loss is defined. The task can thus be compared to a classification problem. For millions of items, this quickly leads to scalability issues, and the authors resort to popularity-based sampling schemes to resolve this. Such models typically take a long time to converge, and special care needs to be taken not to introduce a popularity bias, since popular items will occur more frequently in the training data. The work by @cite_23 is closely related to the previous approach, and they also state that making a prediction for each item in the catalog is slow and intractable for many items. 
Instead, low-dimensional item embeddings can be predicted at the output in a regression task, a notion we will expand on in Section . | {
"cite_N": [
"@cite_23",
"@cite_11"
],
"mid": [
"2953316038",
"2262817822"
],
"abstract": [
"Recurrent neural networks (RNNs) were recently proposed for the session-based recommendation task. The models showed promising improvements over traditional recommendation approaches. In this work, we further study RNN-based models for session-based recommendations. We propose the application of two techniques to improve model performance, namely, data augmentation, and a method to account for shifts in the input data distribution. We also empirically study the use of generalised distillation, and a novel alternative model that directly predicts item embeddings. Experiments on the RecSys Challenge 2015 dataset demonstrate relative improvements of 12.8 and 14.8 over previously reported results on the Recall@20 and Mean Reciprocal Rank@20 metrics respectively.",
"We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem. Experimental results on two data-sets show marked improvements over widely used approaches."
]
} |
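The regression alternative mentioned in this row, predicting a low-dimensional item embedding at the output instead of scoring every item in the catalog, can be sketched with a bare Elman RNN in NumPy. All weight names and shapes here are my own illustrative choices, not those of the cited models.

```python
import numpy as np

def rnn_predict_embedding(item_embs, Wxh, Whh, Why, h0=None):
    """Minimal Elman-RNN forward pass over a user's consumed-item embeddings,
    regressing the *embedding* of the next item (rather than producing a score
    per catalog item, which is intractable for millions of items)."""
    h = np.zeros(Whh.shape[0]) if h0 is None else h0
    for e in item_embs:                  # consume items in temporal order
        h = np.tanh(Wxh @ e + Whh @ h)   # h is the evolving user representation
    return Why @ h                       # predicted next-item embedding

def nearest_items(pred, catalog, k=2):
    """Recommend the k catalog items closest (L2) to the predicted embedding."""
    d = np.linalg.norm(catalog - pred, axis=1)
    return np.argsort(d)[:k]
```

At serving time the predicted embedding is matched against the catalog with a nearest-neighbor search, which scales far better than a softmax over all items.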
1708.06520 | 2750125311 | The amount of content on online music streaming platforms is immense, and most users only access a tiny fraction of this content. Recommender systems are the application of choice to open up the collection to these users. Collaborative filtering has the disadvantage that it relies on explicit ratings, which are often unavailable, and generally disregards the temporal nature of music consumption. On the other hand, item co-occurrence algorithms, such as the recently introduced word2vec-based recommenders, are typically left without an effective user representation. In this paper, we present a new approach to model users through recurrent neural networks by sequentially processing consumed items, represented by any type of embeddings and other context features. This way we obtain semantically rich user representations, which capture a user’s musical taste over time. Our experimental analysis on large-scale user data shows that our model can be used to predict future songs a user will likely listen to, both in the short and long term. | A popular method to learn item embeddings is the word2vec suite by @cite_1 with both Continuous Bag-of-Words and Skip-Gram variants. In this, a corpus of item lists is fed into the model, which learns distributed, low-dimensional vector embeddings for each item in the corpus. Word2vec and variants have already been applied to item recommendation, e.g. @cite_24 formulate a word2vec variant to learn item vectors in a set consumed by a user, Liang @cite_26 devise a word2vec-based CoFactor model that unifies both matrix factorization and item embedding learning, and Ozsoy @cite_13 learns embeddings for places visited by users on Foursquare to recommend new sites to visit. These works show that a word2vec-based recommender system can outperform traditional matrix factorization and collaborative filtering techniques on a variety of tasks. 
In the work by item embeddings are predicted and simultaneously learned by the model itself, a practice that generally degrades the embedding quality: in the limit, the embeddings will all collapse to a degenerate, non-informative solution, since in this case the loss will be minimal. Also, they minimize the cosine distance during training, which we found to substantially decrease performance. | {
"cite_N": [
"@cite_24",
"@cite_26",
"@cite_1",
"@cite_13"
],
"mid": [
"2613693869",
"2508504774",
"2153579005",
"2222911438"
],
"abstract": [
"",
"Matrix factorization (MF) models and their extensions are standard in modern recommender systems. MF models decompose the observed user-item interaction matrix into user and item latent factors. In this paper, we propose a co-factorization model, CoFactor, which jointly decomposes the user-item interaction matrix and the item-item co-occurrence matrix with shared item latent factors. For each pair of items, the co-occurrence matrix encodes the number of users that have consumed both items. CoFactor is inspired by the recent success of word embedding models (e.g., word2vec) which can be interpreted as factorizing the word co-occurrence matrix. We show that this model significantly improves the performance over MF models on several datasets with little additional computational overhead. We provide qualitative results that explain how CoFactor improves the quality of the inferred factors and characterize the circumstances where it provides the most significant improvements.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"Social network platforms can use the data produced by their users to serve them better. One of the services these platforms provide is recommendation service. Recommendation systems can predict the future preferences of users using their past preferences. In the recommendation systems literature there are various techniques, such as neighborhood based methods, machine-learning based methods and matrix-factorization based methods. In this work, a set of well known methods from natural language processing domain, namely Word2Vec, is applied to recommendation systems domain. Unlike previous works that use Word2Vec for recommendation, this work uses non-textual features, the check-ins, and it recommends venues to visit check-in to the target users. For the experiments, a Foursquare check-in dataset is used. The results show that use of continuous vector space representations of items modeled by techniques of Word2Vec is promising for making recommendations."
]
} |
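A word2vec-style item-embedding update (skip-gram with negative sampling) can be written down directly. The following is a minimal single-update SGD sketch with hypothetical array names, not the word2vec implementation itself: `center` is the consumed item, `context` a co-consumed item, and `negatives` the ids of sampled non-interacted items.

```python
import numpy as np

def sgns_step(center, context, negatives, W_in, W_out, lr=0.05):
    """One skip-gram-with-negative-sampling update for item embeddings.
    Updates the input/output embedding tables W_in and W_out in place and
    returns the loss for this (center, context, negatives) example."""
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    v = W_in[center]
    grad_v = np.zeros_like(v)
    loss = 0.0
    for idx, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[idx]
        p = sig(v @ u)
        loss -= np.log(p if label else 1.0 - p)
        g = p - label                 # d(loss)/d(v . u)
        grad_v += g * u
        W_out[idx] -= lr * g * v      # update output embedding
    W_in[center] = v - lr * grad_v    # update input embedding
    return loss
```

Repeating the update on the same example drives the positive pair together and the negative pairs apart, so the loss decreases.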
1708.06703 | 2963451207 | A face image contains geometric cues in the form of configurational information and contours that can be used to estimate 3D face shape. While it is clear that 3D reconstruction from 2D points is highly ambiguous if no further constraints are enforced, one might expect that the face-space constraint solves this problem. We show that this is not the case and that geometric information is an ambiguous cue. There are two sources for this ambiguity. The first is that, within the space of 3D face shapes, there are flexibility modes that remain when some parts of the face are fixed. The second occurs only under perspective projection and is a result of perspective transformation as camera distance varies. Two different faces, when viewed at different distances, can give rise to the same 2D geometry. To demonstrate these ambiguities, we develop new algorithms for fitting a 3D morphable model to 2D landmarks or contours under either orthographic or perspective projection and show how to compute flexibility modes for both cases. We show that both fitting problems can be posed as a separable nonlinear least squares problem and solved efficiently. We demonstrate both quantitatively and qualitatively that the ambiguity is present in reconstructions from geometric information alone but also in reconstructions from a state-of-the-art CNN-based method. | A related problem is to describe the remaining flexibility in a statistical shape model that is partially fixed. If the position of some points, curves or subset of the surface is known, the goal is to characterise the space of shapes that approximately fit these observations. @cite_17 show how to compute the subspace of faces with the same profile. @cite_5 extended this approach into a probabilistic setting. | {
"cite_N": [
"@cite_5",
"@cite_17"
],
"mid": [
"1711351391",
"1504795754"
],
"abstract": [
"Statistical shape models, and in particular morphable models, have gained widespread use in computer vision, computer graphics and medical imaging. Researchers have started to build models of almost any anatomical structure in the human body. While these models provide a useful prior for many image analysis task, relatively little information about the shape represented by the morphable model is exploited. We propose a method for computing and visualizing the remaining flexibility, when a part of the shape is fixed. Our method, which is based on Probabilistic PCA, not only leads to an approach for reconstructing the full shape from partial information, but also allows us to investigate and visualize the uncertainty of a reconstruction. To show the feasibility of our approach we performed experiments on a statistical model of the human face and the femur bone. The visualization of the remaining flexibility allows for greater insight into the statistical properties of the shape.",
"The second edition of this reference provides an update on the best methods for the measurement of the surfaces of the head and neck. The newest techniques are explained and the latest normative data on age-related changes in measurements (i.e., population norms) are completely examined in nearly 20 chapters and six appendices. Topics covered include sources of error in anthropometry and anthroposcopy; age-related changes in selected measurements of the craniofacial complex; anthropometry of minor defects in the craniofacial complex; anthropometry in craniomaxillofacial surgery, genetics, and aesthetic surgery of the nose; and anthropometry of the attractive face. Other chapters discuss the reconstruction of a photograph of a missing child, medical photography in clinical practice, interpolation of growth curves, racial and ethnic morphometric differences in the craniofacial complex, and basic statistical methods in clinical research. Complementing the text throughout are detailed illustrations and highly informative tables."
]
} |
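The "remaining flexibility" idea in this row can be made concrete for a linear shape model s = μ + Pα: coefficient directions that leave a fixed subset of vertices unchanged are exactly the null space of the corresponding rows of the basis P. A small sketch under that assumption (function and variable names are my own):

```python
import numpy as np

def flexibility_modes(P, fixed_idx, tol=1e-10):
    """Directions in coefficient space that leave the fixed entries
    (rows `fixed_idx` of the linear shape basis P) unchanged:
    the null space of P[fixed_idx], computed via SVD."""
    A = P[fixed_idx]
    _, s, Vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return Vt[rank:].T   # columns span the remaining-flexibility subspace
```

Adding any combination of these columns to α moves the shape without disturbing the fixed part, which is precisely the flexibility the cited works compute and visualize.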
1708.06703 | 2963451207 | A face image contains geometric cues in the form of configurational information and contours that can be used to estimate 3D face shape. While it is clear that 3D reconstruction from 2D points is highly ambiguous if no further constraints are enforced, one might expect that the face-space constraint solves this problem. We show that this is not the case and that geometric information is an ambiguous cue. There are two sources for this ambiguity. The first is that, within the space of 3D face shapes, there are flexibility modes that remain when some parts of the face are fixed. The second occurs only under perspective projection and is a result of perspective transformation as camera distance varies. Two different faces, when viewed at different distances, can give rise to the same 2D geometry. To demonstrate these ambiguities, we develop new algorithms for fitting a 3D morphable model to 2D landmarks or contours under either orthographic or perspective projection and show how to compute flexibility modes for both cases. We show that both fitting problems can be posed as a separable nonlinear least squares problem and solved efficiently. We demonstrate both quantitatively and qualitatively that the ambiguity is present in reconstructions from geometric information alone but also in reconstructions from a state-of-the-art CNN-based method. | We emphasise that we study the ambiguities only in a monocular setting and, for the perspective case, assuming no geometric calibration. Multiview constraints would reduce or remove the ambiguity. For example, @cite_34 describe an algorithm for fitting a 3DMM to stereo face images. In this case, the stereo disparity cue used in their objective function conveys depth information which helps to resolve the ambiguity. However, note that even here, their solution is unstable when camera parameters are unknown. 
They introduce an additional heuristic constraint on the focal length, namely they restrict it to be between 1 and 5 times the sensor size. | {
"cite_N": [
"@cite_34"
],
"mid": [
"2156736396"
],
"abstract": [
"We present a novel model based stereo system, which accurately extracts the 3D shape and pose of faces from multiple images taken simultaneously. Extracting the 3D shape from images is important in areas such as pose-invariant face recognition and image manipulation. The method is based on a 3D morphable face model learned from a database of facial scans. The use of a strong face prior allows us to extract high precision surfaces from stereo data of faces, where traditional correlation based stereo methods fail because of the mostly textureless input images. The method uses two or more uncalibrated images of arbitrary baseline, estimating calibration and shape simultaneously. Results using two and three input images are presented. We replace the lighting and albedo estimation of a monocular method with the use of stereo information, making the system more accurate and robust. We evaluate the method using ground truth data and the standard PIE image dataset. A comparison with the state of the art monocular system shows that the new method has a significantly higher accuracy."
]
} |
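The perspective ambiguity discussed in the abstract, two different shapes at different camera distances producing identical 2D geometry, can be demonstrated in a few lines. In the construction below (function names are my own), each point's X, Y coordinates are rescaled by the ratio of its new to old projective depth, so the pinhole projections coincide exactly while the 3D shape changes.

```python
import numpy as np

def project(points, distance, f=1.0):
    """Pinhole projection of 3D points placed `distance` in front of the camera."""
    X, Y, Z = points.T
    z = Z + distance
    return np.stack([f * X / z, f * Y / z], axis=1)

def ambiguous_counterpart(points, d_old, d_new):
    """A different 3D shape that projects identically from distance d_new:
    scale each point's X, Y by the ratio of its new to old projective depth."""
    X, Y, Z = points.T
    scale = (Z + d_new) / (Z + d_old)
    return np.stack([X * scale, Y * scale, Z], axis=1)
```

With geometric calibration or multiview constraints (e.g. stereo disparity, as in the cited work) this family of counterparts collapses, which is why the ambiguity is specific to the uncalibrated monocular setting.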
1708.06703 | 2963451207 | A face image contains geometric cues in the form of configurational information and contours that can be used to estimate 3D face shape. While it is clear that 3D reconstruction from 2D points is highly ambiguous if no further constraints are enforced, one might expect that the face-space constraint solves this problem. We show that this is not the case and that geometric information is an ambiguous cue. There are two sources for this ambiguity. The first is that, within the space of 3D face shapes, there are flexibility modes that remain when some parts of the face are fixed. The second occurs only under perspective projection and is a result of perspective transformation as camera distance varies. Two different faces, when viewed at different distances, can give rise to the same 2D geometry. To demonstrate these ambiguities, we develop new algorithms for fitting a 3D morphable model to 2D landmarks or contours under either orthographic or perspective projection and show how to compute flexibility modes for both cases. We show that both fitting problems can be posed as a separable nonlinear least squares problem and solved efficiently. We demonstrate both quantitatively and qualitatively that the ambiguity is present in reconstructions from geometric information alone but also in reconstructions from a state-of-the-art CNN-based method. | CNNs have also been used to directly estimate correspondence between a 3DMM and a 2D face image, without explicitly estimating 3DMM shape parameters or pose. Unlike landmarks, this correspondence is dense, providing a 2D location for every visible vertex. This was first proposed by @cite_11 who use a fully convolutional network and pose the continuous regression task as a coarse-to-fine classification task. @cite_18 take a similar approach but go further by using the correspondences to estimate 3D face shape by fitting a 3DMM. @cite_24 learn this fitting process as well. 
@cite_26 take a multitask learning approach by training a CNN to predict both correspondence and facial depth. In all cases, this estimated dense correspondence provides an ambiguous shape cue, exactly as we describe in this paper. | {
"cite_N": [
"@cite_24",
"@cite_18",
"@cite_26",
"@cite_11"
],
"mid": [
"2790940563",
"2753596648",
"2962780596",
"2964145484"
],
"abstract": [
"We present a robust method for estimating the facial pose and shape information from a densely annotated facial image. The method relies on Convolutional Point-set Representation (CPR), a carefully designed matrix representation to summarize different layers of information encoded in the set of detected points in the annotated image. The CPR disentangles the dependencies of shape and different pose parameters and enables updating different parameters in a sequential manner via convolutional neural networks and recurrent layers. When updating the pose parameters, we sample reprojection errors along with a predicted direction and update the parameters based on the pattern of reprojection errors. This technique boosts the model's capability in searching a local minimum under challenging scenarios. We also demonstrate that annotation from different sources can be merged under the framework of CPR and contributes to outperforming the current state-of-the-art solutions for 3D face alignment. Experiments indicate the proposed CPRFA (CPR-based Face Alignment) significantly improves 3D alignment accuracy when the densely annotated image contains noise and missing values, which is common under \"in-the-wild\" acquisition scenarios.",
"We present a minimalistic but effective neural network that computes dense facial correspondences in highly unconstrained RGB images. Our network learns a per-pixel flow and a matchability mask between 2D input photographs of a person and the projection of a textured 3D face model. To train such a network, we generate a massive dataset of synthetic faces with dense labels using renderings of a morphable face model with variations in pose, expressions, lighting, and occlusions. We found that a training refinement using real photographs is required to drastically improve the ability to handle real images. When combined with a facial detection and 3D face fitting step, we show that our approach outperforms the state-of-the-art face alignment methods in terms of accuracy and speed. By directly estimating dense correspondences, we do not rely on the full visibility of sparse facial landmarks and are not limited to the model space of regression-based approaches. We also assess our method on video frames and demonstrate successful per-frame processing under extreme pose variations, occlusions, and lighting conditions. Compared to existing 3D facial tracking techniques, our fitting does not rely on previous frames or frontal facial initialization and is robust to imperfect face detections.",
"It has been recently shown that neural networks can recover the geometric structure of a face from a single given image. A common denominator of most existing face geometry reconstruction methods is the restriction of the solution space to some low-dimensional subspace. While such a model significantly simplifies the reconstruction problem, it is inherently limited in its expressiveness. As an alternative, we propose an Image-to-Image translation network that jointly maps the input image to a depth image and a facial correspondence map. This explicit pixel-based mapping can then be utilized to provide high quality reconstructions of diverse faces under extreme expressions, using a purely geometric refinement process. In the spirit of recent approaches, the network is trained only with synthetic data, and is then evaluated on “in-the-wild” facial images. Both qualitative and quantitative analyses demonstrate the accuracy and the robustness of our approach.",
"In this paper we propose to learn a mapping from image pixels into a dense template grid through a fully convolutional network. We formulate this task as a regression problem and train our network by leveraging upon manually annotated facial landmarks in-the-wild. We use such landmarks to establish a dense correspondence field between a three-dimensional object template and the input image, which then serves as the ground-truth for training our regression system. We show that we can combine ideas from semantic segmentation with regression networks, yielding a highly-accurate quantized regression architecture. Our system, called DenseReg, allows us to estimate dense image-to-template correspondences in a fully convolutional manner. As such our network can provide useful correspondence information as a stand-alone system, while when used as an initialization for Statistical Deformable Models we obtain landmark localization results that largely outperform the current state-of-the-art on the challenging 300W benchmark. We thoroughly evaluate our method on a host of facial analysis tasks, and demonstrate its use for other correspondence estimation tasks, such as the human body and the human ear. DenseReg code is made available at http: alpguler.com DenseReg.html along with supplementary materials."
]
} |
1708.06703 | 2963451207 | A face image contains geometric cues in the form of configurational information and contours that can be used to estimate 3D face shape. While it is clear that 3D reconstruction from 2D points is highly ambiguous if no further constraints are enforced, one might expect that the face-space constraint solves this problem. We show that this is not the case and that geometric information is an ambiguous cue. There are two sources for this ambiguity. The first is that, within the space of 3D face shapes, there are flexibility modes that remain when some parts of the face are fixed. The second occurs only under perspective projection and is a result of perspective transformation as camera distance varies. Two different faces, when viewed at different distances, can give rise to the same 2D geometry. To demonstrate these ambiguities, we develop new algorithms for fitting a 3D morphable model to 2D landmarks or contours under either orthographic or perspective projection and show how to compute flexibility modes for both cases. We show that both fitting problems can be posed as a separable nonlinear least squares problem and solved efficiently. We demonstrate both quantitatively and qualitatively that the ambiguity is present in reconstructions from geometric information alone but also in reconstructions from a state-of-the-art CNN-based method. | Faces under perspective projection The effect of perspective transformation on face appearance has been studied from both a computational and psychological perspective previously. In psychology, @cite_15 @cite_8 show that human face recognition performance is degraded by perspective transformation. @cite_1 @cite_20 investigated a different effect, noting that perspective distortion influences social judgements of faces. In art history, @cite_10 discuss how uncertainty regarding subject-artist distance when viewing a painting results in distorted perception. 
They show that perceptions of body weight from face images are influenced by subject-camera distance. | {
"cite_N": [
"@cite_8",
"@cite_1",
"@cite_15",
"@cite_10",
"@cite_20"
],
"mid": [
"1987787919",
"2019196811",
"2016042371",
"2073523523",
"2126889360"
],
"abstract": [
"Recognition of unfamiliar faces is susceptible to image differences caused by angular sizes subtended from the face to the camera. Research on perception of cubes suggests that apparent distortions of a shape due to large camera angle are correctable by placing the observer at the centre of projection, especially when visibility of the picture surface is low (Yang and Kubovy, 1999 Perception & Psychophysics 61 456-467). To explore the implication of this finding for face perception, observers performed recognition and matching tasks where face images with reduced visibility of picture surface were shown with observers either at the centre of projection or at other viewpoints. The results show that, unlike perception of cubes, the effect of perspective transformation on face recognition is largely unaffected by the centre of projection. Furthermore, the use of perspective cues is not affected by textured surfaces. The limitation of perspective in restoring 3-D information of faces suggests a stronger role for image-based, rather than model-based, processes in recognition of unfamiliar faces.",
"",
"The effect of perspective transformation on transfer of face training was investigated in a yes no recognition task using face stimuli with 42°, 10°, or no perspective convergence. A strong dependence of recognition performance on the magnitude of perspective transformation was found, with large perspective changes such as from 42° at learning to orthogonal at test producing the strongest impairment and small perspective changes such as from 10° at learning to orthogonal at test the least. In a second experiment, the internal and external features of a face from different perspective convergence were artificially combined to produce identical local features between this composite image and the original but producing an impossible perspective transformation from either. The results of transfer between the composite and untouched images showed face recognition to be strongly affected by local featural similarities and relatively insensitive to global coherence of perspective transformation.",
"The authors discuss the limitations of photography in producing representations that lead to the accurate perception of shapes. In particular, they consider two situations in which the photographic representation, although an accurate reproduction of the geometry of the two-dimensional image in the eye, does not capture the way human vision changes this geometry to produce a three-dimensionally accurate perception. When looking at a photograph, the viewer’s uncertainty of the camera-to-subject distance and the fact that, unnaturally, a photograph presents almost exactly the same view of an object to the two eyes result in substantially distorted perceptions. These most commonly result in a perceived flattening and fattening of the 3D shape of the object being photographed.",
"The basis on which people make social judgments from the image of a face remains an important open problem in fields ranging from psychology to neuroscience and economics. Multiple cues from facial appearance influence the judgments that viewers make. Here we investigate the contribution of a novel cue: the change in appearance due to the perspective distortion that results from viewing distance. We found that photographs of faces taken from within personal space elicit lower investments in an economic trust game, and lower ratings of social traits (such as trustworthiness, competence, and attractiveness), compared to photographs taken from a greater distance. The effect was replicated across multiple studies that controlled for facial image size, facial expression and lighting, and was not explained by face width-to-height ratio, explicit knowledge of the camera distance, or whether the faces are perceived as typical. These results demonstrate a novel facial cue influencing a range of social judgments as a function of interpersonal distance, an effect that may be processed implicitly."
]
} |
1708.06703 | 2963451207 | A face image contains geometric cues in the form of configurational information and contours that can be used to estimate 3D face shape. While it is clear that 3D reconstruction from 2D points is highly ambiguous if no further constraints are enforced, one might expect that the face-space constraint solves this problem. We show that this is not the case and that geometric information is an ambiguous cue. There are two sources for this ambiguity. The first is that, within the space of 3D face shapes, there are flexibility modes that remain when some parts of the face are fixed. The second occurs only under perspective projection and is a result of perspective transformation as camera distance varies. Two different faces, when viewed at different distances, can give rise to the same 2D geometry. To demonstrate these ambiguities, we develop new algorithms for fitting a 3D morphable model to 2D landmarks or contours under either orthographic or perspective projection and show how to compute flexibility modes for both cases. We show that both fitting problems can be posed as a separable nonlinear least squares problem and solved efficiently. We demonstrate both quantitatively and qualitatively that the ambiguity is present in reconstructions from geometric information alone but also in reconstructions from a state-of-the-art CNN-based method. | There have been two recent attempts to address the problem of estimating subject-camera distance from monocular, perspective views of a face . The idea is that the configuration of projected 2D face features conveys something about the degree of perspective transformation. @cite_14 approach the problem using exemplar 3D face models. They fit the models to 2D landmarks using perspective-n-point and use the mean of the estimated distances as the estimated subject-camera distance. @cite_3 on the other hand work entirely in 2D. 
They present a fully automated process for estimating 2D landmark positions to which they apply a linear normalisation. Their idea is to describe 2D landmarks in terms of their offset from mean positions, with the mean calculated either across views at different distances of the same face, or across multiple identities at the same distance. They can then perform regression to relate offsets to distance. They compare performance to humans and show that humans are relatively bad at judging distance given only a single image. | {
"cite_N": [
"@cite_14",
"@cite_3"
],
"mid": [
"2102069400",
"2159528054"
],
"abstract": [
"We present a method for estimating the distance between a camera and a human head in 2D images from a calibrated camera. Leading head pose estimation algorithms focus mainly on head orientation (yaw, pitch, and roll) and translations perpendicular to the camera principal axis. Our contribution is a system that can estimate head pose under large translations parallel to the camera's principal axis. Our method uses a set of exemplar 3D human heads to estimate the distance between a camera and a previously unseen head. The distance is estimated by solving for the camera pose using Effective Perspective n-Point (EPnP). We present promising experimental results using the Texas 3D Face Recognition Database.",
"We propose the first automated method for estimating distance from frontal pictures of unknown faces. Camera calibration is not necessary, nor is the reconstruction of a 3D representation of the shape of the head. Our method is based on estimating automatically the position of face and head landmarks in the image, and then using a regressor to estimate distance from such measurements. We collected and annotated a dataset of frontal portraits of 53 individuals spanning a number of attributes (sex, age, race, hair), each photographed from seven distances. We find that our proposed method outperforms humans performing the same task. We observe that different physiognomies will bias systematically the estimate of distance, i.e. some people look closer than others. We explore which landmarks are more important for this task."
]
} |
1708.06703 | 2963451207 | A face image contains geometric cues in the form of configurational information and contours that can be used to estimate 3D face shape. While it is clear that 3D reconstruction from 2D points is highly ambiguous if no further constraints are enforced, one might expect that the face-space constraint solves this problem. We show that this is not the case and that geometric information is an ambiguous cue. There are two sources for this ambiguity. The first is that, within the space of 3D face shapes, there are flexibility modes that remain when some parts of the face are fixed. The second occurs only under perspective projection and is a result of perspective transformation as camera distance varies. Two different faces, when viewed at different distances, can give rise to the same 2D geometry. To demonstrate these ambiguities, we develop new algorithms for fitting a 3D morphable model to 2D landmarks or contours under either orthographic or perspective projection and show how to compute flexibility modes for both cases. We show that both fitting problems can be posed as a separable nonlinear least squares problem and solved efficiently. We demonstrate both quantitatively and qualitatively that the ambiguity is present in reconstructions from geometric information alone but also in reconstructions from a state-of-the-art CNN-based method. | Our results highlight the difficulty that both of these approaches face: namely, that many interpretations of 2D facial landmarks are possible, all with varying subject-camera distance. We approach the problem in a different way by showing how to solve for shape parameters when the subject-camera distance is known. We can then show that multiple explanations are possible. The perspective ambiguity is hinted at in the literature, e.g. @cite_12 state ``we found that it is beneficial to keep the focal length constant in most cases, due to its ambiguity with @math '', but never explored in a rigorous manner. 
| {
"cite_N": [
"@cite_12"
],
"mid": [
"2804621595"
],
"abstract": [
"3D Morphable Models (3DMMs) are powerful statistical models of 3D facial shape and texture, and are among the state-of-the-art methods for reconstructing facial shape from single images. With the advent of new 3D sensors, many 3D facial datasets have been collected containing both neutral as well as expressive faces. However, all datasets are captured under controlled conditions. Thus, even though powerful 3D facial shape models can be learnt from such data, it is difficult to build statistical texture models that are sufficient to reconstruct faces captured in unconstrained conditions (“in-the-wild”). In this paper, we propose the first “in-the-wild” 3DMM by combining a statistical model of facial identity and expression shape with an “in-the-wild” texture model. We show that such an approach allows for the development of a greatly simplified fitting procedure for images and videos, as there is no need to optimise with regards to the illumination parameters. We have collected three new benchmarks that combine “in-the-wild” images and video with ground truth 3D facial geometry, the first of their kind, and report extensive quantitative evaluations using them that demonstrate our method is state-of-the-art."
]
} |
1708.06703 | 2963451207 | A face image contains geometric cues in the form of configurational information and contours that can be used to estimate 3D face shape. While it is clear that 3D reconstruction from 2D points is highly ambiguous if no further constraints are enforced, one might expect that the face-space constraint solves this problem. We show that this is not the case and that geometric information is an ambiguous cue. There are two sources for this ambiguity. The first is that, within the space of 3D face shapes, there are flexibility modes that remain when some parts of the face are fixed. The second occurs only under perspective projection and is a result of perspective transformation as camera distance varies. Two different faces, when viewed at different distances, can give rise to the same 2D geometry. To demonstrate these ambiguities, we develop new algorithms for fitting a 3D morphable model to 2D landmarks or contours under either orthographic or perspective projection and show how to compute flexibility modes for both cases. We show that both fitting problems can be posed as a separable nonlinear least squares problem and solved efficiently. We demonstrate both quantitatively and qualitatively that the ambiguity is present in reconstructions from geometric information alone but also in reconstructions from a state-of-the-art CNN-based method. | @cite_33 explore the effect of perspective in a synthesis application. They use a 3D head model to compute a 2D warp to simulate the effect of changing the subject-camera distance, allowing them to approximate appearance at any distance given a single image. @cite_28 also proposed a method to warp a 2D image to compensate for perspective. However, their goal was to improve the performance of face recognition systems that they showed are sensitive to such transformations. | {
"cite_N": [
"@cite_28",
"@cite_33"
],
"mid": [
"1919430093",
"2468014222"
],
"abstract": [
"We describe a method to model perspective distortion as a one-parameter family of warping functions. This can be used to mitigate its effects on face recognition, or synthesis to manipulate the perceived characteristics of a face. The warps are learned from a novel dataset and, by comparing one-parameter families of images, instead of images themselves, we show the effects on face recognition, which are most significant when small focal lengths are used. Additional applications are presented to image editing, videoconference, and multi-view validation of recognition systems.",
"This paper introduces a method to modify the apparent relative pose and distance between camera and subject given a single portrait photo. Our approach fits a full perspective camera and a parametric 3D head model to the portrait, and then builds a 2D warp in the image plane to approximate the effect of a desired change in 3D. We show that this model is capable of correcting objectionable artifacts such as the large noses sometimes seen in \"selfies,\" or to deliberately bring a distant camera closer to the subject. This framework can also be used to re-pose the subject, as well as to create stereo pairs from an input portrait. We show convincing results on both an existing dataset as well as a new dataset we captured to validate our method."
]
} |
1708.06703 | 2963451207 | A face image contains geometric cues in the form of configurational information and contours that can be used to estimate 3D face shape. While it is clear that 3D reconstruction from 2D points is highly ambiguous if no further constraints are enforced, one might expect that the face-space constraint solves this problem. We show that this is not the case and that geometric information is an ambiguous cue. There are two sources for this ambiguity. The first is that, within the space of 3D face shapes, there are flexibility modes that remain when some parts of the face are fixed. The second occurs only under perspective projection and is a result of perspective transformation as camera distance varies. Two different faces, when viewed at different distances, can give rise to the same 2D geometry. To demonstrate these ambiguities, we develop new algorithms for fitting a 3D morphable model to 2D landmarks or contours under either orthographic or perspective projection and show how to compute flexibility modes for both cases. We show that both fitting problems can be posed as a separable nonlinear least squares problem and solved efficiently. We demonstrate both quantitatively and qualitatively that the ambiguity is present in reconstructions from geometric information alone but also in reconstructions from a state-of-the-art CNN-based method. | @cite_29 investigate ambiguities from a perceptual point of view. They explore whether, after seeing a frontal view, participants accept a 3D reconstruction as the correct profile as often as they do for the original profile. It shows that human observers consider the reconstructed shape equally plausible as ground truth, even if it differs significantly from ground truth and even if choices include the original profile of the face. | {
"cite_N": [
"@cite_29"
],
"mid": [
"2141418530"
],
"abstract": [
"Manipulated versions of three-dimensional faces that have different profiles, but almost the same appearance in frontal views, provide a novel way to investigate if and how humans use class-specific knowledge to infer depth from images of faces. After seeing a frontal view, participants have to select the profile that matches that view. The profiles are original (ground truth), average, random other, and two solutions computed with a linear face model (3D Morphable Model). One solution is based on 2D vertex positions, the other on pixel colors in the frontal view. The human responses demonstrate that humans neither guess nor just choose the average profile. The results also indicate that humans actually use the information from the front view, and not just rely on the plausibility of the profiles per se. All our findings are perfectly consistent with a correlation-based inference in a linear face model. The results also verify that the 3D reconstructions from our computational algorithms (stimuli 4 and 5) are similar to what humans expect, because they are chosen to be the true profile equally often as the ground-truth profiles. Our experiments shed new light on the mechanisms of human face perception and present a new quality measure for 3D reconstruction algorithms."
]
} |
1708.06703 | 2963451207 | A face image contains geometric cues in the form of configurational information and contours that can be used to estimate 3D face shape. While it is clear that 3D reconstruction from 2D points is highly ambiguous if no further constraints are enforced, one might expect that the face-space constraint solves this problem. We show that this is not the case and that geometric information is an ambiguous cue. There are two sources for this ambiguity. The first is that, within the space of 3D face shapes, there are flexibility modes that remain when some parts of the face are fixed. The second occurs only under perspective projection and is a result of perspective transformation as camera distance varies. Two different faces, when viewed at different distances, can give rise to the same 2D geometry. To demonstrate these ambiguities, we develop new algorithms for fitting a 3D morphable model to 2D landmarks or contours under either orthographic or perspective projection and show how to compute flexibility modes for both cases. We show that both fitting problems can be posed as a separable nonlinear least squares problem and solved efficiently. We demonstrate both quantitatively and qualitatively that the ambiguity is present in reconstructions from geometric information alone but also in reconstructions from a state-of-the-art CNN-based method. | Other ambiguities There are other known ambiguities in the monocular estimation of 3D shape. The bas relief ambiguity arises in photometric stereo with unknown light source directions. A continuous class of surfaces (differing by a linear transformation) can produce the same set of images when an appropriate transformation is applied to the illumination and albedo. For the particular case of faces, @cite_4 resolve this ambiguity by exploiting the symmetries and similarities in faces. 
Specifically they assume: bilateral symmetry; that the forehead and chin should be at approximately the same depth; and that the range of facial depths is about twice the distance between the eyes. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2123921160"
],
"abstract": [
"We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions."
]
} |
1708.06703 | 2963451207 | A face image contains geometric cues in the form of configurational information and contours that can be used to estimate 3D face shape. While it is clear that 3D reconstruction from 2D points is highly ambiguous if no further constraints are enforced, one might expect that the face-space constraint solves this problem. We show that this is not the case and that geometric information is an ambiguous cue. There are two sources for this ambiguity. The first is that, within the space of 3D face shapes, there are flexibility modes that remain when some parts of the face are fixed. The second occurs only under perspective projection and is a result of perspective transformation as camera distance varies. Two different faces, when viewed at different distances, can give rise to the same 2D geometry. To demonstrate these ambiguities, we develop new algorithms for fitting a 3D morphable model to 2D landmarks or contours under either orthographic or perspective projection and show how to compute flexibility modes for both cases. We show that both fitting problems can be posed as a separable nonlinear least squares problem and solved efficiently. We demonstrate both quantitatively and qualitatively that the ambiguity is present in reconstructions from geometric information alone but also in reconstructions from a state-of-the-art CNN-based method. | More generally, ambiguities in surface reconstruction have been considered in a number of settings. @cite_6 consider the problem of reconstructing a smooth surface from local information that contains a discrete ambiguity. The ambiguities studied here are in the local surface orientation or gradient, a problem that occurs in photometric shape reconstruction. @cite_31 study the ambiguities that arise in monocular nonrigid structure from motion under perspective projection. | {
"cite_N": [
"@cite_31",
"@cite_6"
],
"mid": [
"2104051215",
"1483764956"
],
"abstract": [
"We study from a theoretical standpoint the ambiguities that occur when tracking a generic deformable surface under monocular perspective projection given 3D to 2D correspondences. We show that, additionally to the known scale ambiguity, a set of potential ambiguities can be clearly identified. From this, we deduce a minimal set of constraints required to disambiguate the problem and incorporate them into a working algorithm that runs on real noisy data.",
"We consider the problem of reconstructing a smooth surface under constraints that have discrete ambiguities. These problems arise in areas such as shape from texture, shape from shading, photometric stereo and shape from defocus. While the problem is computationally hard, heuristics based on semidefinite programming may reveal the shape of the surface."
]
} |
1708.06703 | 2963451207 | A face image contains geometric cues in the form of configurational information and contours that can be used to estimate 3D face shape. While it is clear that 3D reconstruction from 2D points is highly ambiguous if no further constraints are enforced, one might expect that the face-space constraint solves this problem. We show that this is not the case and that geometric information is an ambiguous cue. There are two sources for this ambiguity. The first is that, within the space of 3D face shapes, there are flexibility modes that remain when some parts of the face are fixed. The second occurs only under perspective projection and is a result of perspective transformation as camera distance varies. Two different faces, when viewed at different distances, can give rise to the same 2D geometry. To demonstrate these ambiguities, we develop new algorithms for fitting a 3D morphable model to 2D landmarks or contours under either orthographic or perspective projection and show how to compute flexibility modes for both cases. We show that both fitting problems can be posed as a separable nonlinear least squares problem and solved efficiently. We demonstrate both quantitatively and qualitatively that the ambiguity is present in reconstructions from geometric information alone but also in reconstructions from a state-of-the-art CNN-based method. | Like us, @cite_30 also explore ambiguities in shape-from-landmarks in the context of objects represented by a linear basis (in their case, nonrigid deformations of an object rather than the space of faces). However, unlike in this paper, they assume that the intrinsic camera parameters are known. Hence, they do not model the perspective ambiguity that we describe (in which a change in distance is compensated by a change in focal length). Different to our flexibility modes, instead of analytically deriving a subspace, they use stochastic sampling to explore the set of possible solutions. 
They attempt to select from within this space using additional information provided by motion or shading. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2033189232"
],
"abstract": [
"Recovering the 3D shape of deformable surfaces from single images is known to be a highly ambiguous problem because many different shapes may have very similar projections. This is commonly addressed by restricting the set of possible shapes to linear combinations of deformation modes and by imposing additional geometric constraints. Unfortunately, because image measurements are noisy, such constraints do not always guarantee that the correct shape will be recovered. To overcome this limitation, we introduce a stochastic sampling approach to efficiently explore the set of solutions of an objective function based on point correspondences. This allows us to propose a small set of ambiguous candidate 3D shapes and then use additional image information to choose the best one. As a proof of concept, we use either motion or shading cues to this end and show that we can handle a complex objective function without having to solve a difficult nonlinear minimization problem. The advantages of our method are demonstrated on a variety of problems including both real and synthetic data."
]
} |
1708.06118 | 2745693767 | We present an approach for road segmentation that only requires image-level annotations at training time. We leverage distant supervision, which allows us to train our model using images that are different from the target domain. Using large publicly available image databases as distant supervisors, we develop a simple method to automatically generate weak pixel-wise road masks. These are used to iteratively train a fully convolutional neural network, which produces our final segmentation model. We evaluate our method on the Cityscapes dataset, where we compare it with a fully supervised approach. Further, we discuss the trade-off between annotation cost and performance. Overall, our distantly supervised approach achieves 93.8 of the performance of the fully supervised approach, while using orders of magnitude less annotation work. | A related approach to ours is webly supervised semantic segmentation @cite_25 , which can be regarded as a specific type of distantly supervised segmentation. It collects three sets of images: images of objects with white background, images of common background scenes, and images of objects with common background scenes. These three sets are effectively used to train FCNs. It is effective for segmenting foreground objects (e.g. car) but unlike our approach, it is not applicable for background objects like road, because it is not practical to collect road images with white background. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2606129492"
],
"abstract": [
"We propose a weakly supervised semantic segmentation algorithm that uses image tags for supervision. We apply the tags in queries to collect three sets of web images, which encode the clean foregrounds, the common backgrounds, and realistic scenes of the classes. We introduce a novel three-stage training pipeline to progressively learn semantic segmentation models. We first train and refine a class-specific shallow neural network to obtain segmentation masks for each class. The shallow neural networks of all classes are then assembled into one deep convolutional neural network for end-to-end training and testing. Experiments show that our method notably outperforms previous state-of-the-art weakly supervised semantic segmentation approaches on the PASCAL VOC 2012 segmentation benchmark. We further apply the class-specific shallow neural networks to object segmentation and obtain excellent results."
]
} |
1708.06118 | 2745693767 | We present an approach for road segmentation that only requires image-level annotations at training time. We leverage distant supervision, which allows us to train our model using images that are different from the target domain. Using large publicly available image databases as distant supervisors, we develop a simple method to automatically generate weak pixel-wise road masks. These are used to iteratively train a fully convolutional neural network, which produces our final segmentation model. We evaluate our method on the Cityscapes dataset, where we compare it with a fully supervised approach. Further, we discuss the trade-off between annotation cost and performance. Overall, our distantly supervised approach achieves 93.8 of the performance of the fully supervised approach, while using orders of magnitude less annotation work. | Road segmentation is often called free space estimation in autonomous driving. Free space is defined as space where a vehicle can drive safely without collision. Free space estimation is often solved by a geometric modeling approach, using more information such as stereo or consistency between frames @cite_3 @cite_18 . The consistency is also used in traditional monocular vision approaches @cite_30 . Other traditional approaches from monocular vision employ classifiers based on manually defined features @cite_29 @cite_17 . These features are learned by CNNs in modern approaches. The early work with CNNs utilizes a generic segmentation dataset for the road segmentation problem @cite_9 . The CNN at that time was patch-based, and was not as sophisticated as the state-of-the-art FCNs. Oliveira et al. @cite_26 investigated the performance of modern CNNs for road segmentation, and demonstrated its efficiency. While these approaches require pixel-wise annotations, we focus on the problem of training a road segmentation CNN only with image label annotations. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_26",
"@cite_29",
"@cite_9",
"@cite_3",
"@cite_17"
],
"mid": [
"2106337294",
"2112921976",
"",
"2168519618",
"33116912",
"161578567",
""
],
"abstract": [
"Vision-based road detection is important in different areas of computer vision such as autonomous driving, car collision warning and pedestrian crossing detection. However, current vision-based road detection methods are usually based on low-level features and they assume structured roads, road homogeneity, and uniform lighting conditions. Therefore, in this paper, contextual 3D information is used in addition to low-level cues. Low-level photometric invariant cues are derived from the appearance of roads. Contextual cues used include horizon lines, vanishing points, 3D scene layout and 3D road stages. Moreover, temporal road cues are included. All these cues are sensitive to different imaging conditions and hence are considered as weak cues. Therefore, they are combined to improve the overall performance of the algorithm. To this end, the low-level, contextual and temporal cues are combined in a Bayesian framework to classify road sequences. Large scale experiments on road sequences show that the road detection method is robust to varying imaging conditions, road types, and scenarios (tunnels, urban and highway). Further, using the combined cues outperforms all other individual cues. Finally, the proposed method provides highest road detection accuracy when compared to state-of-the-art methods.",
"We propose a general technique for modeling the visible road surface in front of a vehicle. The common assumption of a planar road surface is often violated in reality. A workaround proposed in the literature is the use of a piecewise linear or quadratic function to approximate the road surface. Our approach is based on representing the road surface as a general parametric B-spline curve. The surface parameters are tracked over time using a Kalman filter. The surface parameters are estimated from stereo measurements in the free space. To this end, we adopt a recently proposed road-obstacle segmentation algorithm to include disparity measurements and the B-spline road-surface representation. Experimental results in planar and undulating terrain verify the increase in free-space availability and accuracy using a flexible B-spline for road-surface modeling.",
"",
"By using an onboard camera, it is possible to detect the free road surface ahead of the ego-vehicle. Road detection is of high relevance for autonomous driving, road departure warning, and supporting driver-assistance systems such as vehicle and pedestrian detection. The key for vision-based road detection is the ability to classify image pixels as belonging or not to the road surface. Identifying road pixels is a major challenge due to the intraclass variability caused by lighting conditions. A particularly difficult scenario appears when the road surface has both shadowed and nonshadowed areas. Accordingly, we propose a novel approach to vision-based road detection that is robust to shadows. The novelty of our approach relies on using a shadow-invariant feature space combined with a model-based classifier. The model is built online to improve the adaptability of the algorithm to the current lighting and the presence of other vehicles in the scene. The proposed algorithm works in still images and does not depend on either road shape or temporal restrictions. Quantitative and qualitative experiments on real-world road sequences with heavy traffic and shadows show that the method is robust to shadows and lighting variations. Moreover, the proposed method provides the highest performance when compared with hue-saturation-intensity (HSI)-based algorithms.",
"Road scene segmentation is important in computer vision for different applications such as autonomous driving and pedestrian detection. Recovering the 3D structure of road scenes provides relevant contextual information to improve their understanding. In this paper, we use a convolutional neural network based algorithm to learn features from noisy labels to recover the 3D scene layout of a road image. The novelty of the algorithm relies on generating training labels by applying an algorithm trained on a general image dataset to classify on-board images. Further, we propose a novel texture descriptor based on a learned color plane fusion to obtain maximal uniformity in road areas. Finally, acquired (off-line) and current (on-line) information are combined to detect road areas in single images. From quantitative and qualitative experiments, conducted on publicly available datasets, it is concluded that convolutional neural networks are suitable for learning 3D scene layout from noisy labels and provides a relative improvement of 7% compared to the baseline. Furthermore, combining color planes provides a statistical description of road areas that exhibits maximal uniformity and provides a relative improvement of 8% compared to the baseline. Finally, the improvement is even bigger when acquired and current information from a single image are combined.",
"The computation of free space available in an environment is an essential task for many intelligent automotive and robotic applications. This paper proposes a new approach, which builds a stochastic occupancy grid to address the free space problem as a dynamic programming task. Stereo measurements are integrated over time reducing disparity uncertainty. These integrated measurements are entered into an occupancy grid, taking into account the noise properties of the measurements. In order to cope with real-time requirements of the application, three occupancy grid types are proposed. Their applicabilities and implementations are also discussed. Experimental results with real stereo sequences show the robustness and accuracy of the method. The current implementation of the method runs on off-the-shelf hardware at 20 Hz.",
""
]
} |
1708.06145 | 2745646453 | Aggregate location data is often used to support smart services and applications, e.g., generating live traffic maps or predicting visits to businesses. In this paper, we present the first study on the feasibility of membership inference attacks on aggregate location time-series. We introduce a game-based definition of the adversarial task, and cast it as a classification problem where machine learning can be used to distinguish whether or not a target user is part of the aggregates. We empirically evaluate the power of these attacks on both raw and differentially private aggregates using two mobility datasets. We find that membership inference is a serious privacy threat, and show how its effectiveness depends on the adversary's prior knowledge, the characteristics of the underlying location data, as well as the number of users and the timeframe on which aggregation is performed. Although differentially private mechanisms can indeed reduce the extent of the attacks, they also yield a significant loss in utility. Moreover, a strategic adversary mimicking the behavior of the defense mechanism can greatly limit the protection they provide. Overall, our work presents a novel methodology geared to evaluate membership inference on aggregate location data in real-world settings and can be used by providers to assess the quality of privacy protection before data release or by regulators to detect violations. | Membership inference attacks. Such attacks aim to determine the presence of target individuals within a dataset. This is relevant in many settings, e.g., in the context of genomic research , where data inherently tied to sensitive information, such as health stats or physical traits, is commonly released in aggregate form for Genome Wide Association Studies (GWAS) @cite_36 . 
@cite_9 show that one can learn whether a target individual was part of a case-study group associated with a certain disease by comparing the target's profile against the aggregates of the case study and those of a reference population obtained from public sources. This attack was later extended by @cite_0 to use correlations within the human genome, reducing the need for prior knowledge about the target. Also, @cite_20 show that membership inference can be mounted against individuals contributing their microRNA expressions to scientific studies. | {
"cite_N": [
"@cite_36",
"@cite_9",
"@cite_0",
"@cite_20"
],
"mid": [
"2116868464",
"2040228409",
"2141481372",
"2532520288"
],
"abstract": [
"The National Human Genome Research Institute (NHGRI) Catalog of Published Genome-Wide Association Studies (GWAS) Catalog provides a publicly available manually curated collection of published GWAS assaying at least 100000 single-nucleotide polymorphisms (SNPs) and all SNP-trait associations with P < 1×10^-5. The Catalog includes 1751 curated publications of 11912 SNPs. In addition to the SNP-trait association data, the Catalog also publishes a quarterly diagram of all SNP-trait associations mapped to the SNPs’ chromosomal locations. The Catalog can be accessed via a tabular web interface, via a dynamic visualization on the human karyotype, as a downloadable tab-delimited file and as an OWL knowledge base. This article presents a number of recent improvements to the Catalog, including novel ways for users to interact with the Catalog and changes to the curation infrastructure.",
"We use high-density single nucleotide polymorphism (SNP) genotyping microarrays to demonstrate the ability to accurately and robustly determine whether individuals are in a complex genomic DNA mixture. We first develop a theoretical framework for detecting an individual's presence within a mixture, then show, through simulations, the limits associated with our method, and finally demonstrate experimentally the identification of the presence of genomic DNA of specific individuals within a series of highly complex genomic mixtures, including mixtures where an individual contributes less than 0.1% of the total genomic DNA. These findings shift the perceived utility of SNPs for identifying individual trace contributors within a forensics mixture, and suggest future research efforts into assessing the viability of previously sub-optimal DNA sources due to sample contamination. These findings also suggest that composite statistics across cohorts, such as allele frequency or genotype counts, do not mask identity within genome-wide association studies. The implications of these findings are discussed.",
"Genome-wide association studies (GWAS) aim at discovering the association between genetic variations, particularly single-nucleotide polymorphism (SNP), and common diseases, which is well recognized to be one of the most important and active areas in biomedical research. Also renowned is the privacy implication of such studies, which has been brought into the limelight by the recent attack proposed by Homer et al. Homer's attack demonstrates that it is possible to identify a GWAS participant from the allele frequencies of a large number of SNPs. Such a threat, unfortunately, was found in our research to be significantly understated. In this paper, we show that individuals can actually be identified from even a relatively small set of statistics, as those routinely published in GWAS papers. We present two attacks. The first one extends Homer's attack with a much more powerful test statistic, based on the correlations among different SNPs described by coefficient of determination (r2). This attack can determine the presence of an individual from the statistics related to a couple of hundred SNPs. The second attack can lead to complete disclosure of hundreds of participants' SNPs, through analyzing the information derived from published statistics. We also found that those attacks can succeed even when the precisions of the statistics are low and part of data is missing. We evaluated our attacks on the real human genomes and concluded that such threats are completely realistic.",
"The continuous decrease in cost of molecular profiling tests is revolutionizing medical research and practice, but it also raises new privacy concerns. One of the first attacks against privacy of biological data, proposed by Homer et al. in 2008, showed that, by knowing parts of the genome of a given individual and summary statistics of a genome-based study, it is possible to detect if this individual participated in the study. Since then, a lot of work has been carried out to further study the theoretical limits and to counter the genome-based membership inference attack. However, genomic data are by no means the only or the most influential biological data threatening personal privacy. For instance, whereas the genome informs us about the risk of developing some diseases in the future, epigenetic biomarkers, such as microRNAs, are directly and deterministically affected by our health condition including most common severe diseases. In this paper, we show that the membership inference attack also threatens the privacy of individuals contributing their microRNA expressions to scientific studies. Our results on real and public microRNA expression data demonstrate that disease-specific datasets are especially prone to membership detection, offering a true-positive rate of up to 77% at a false-negative rate of less than 1%. We present two attacks: one relying on the L_1 distance and the other based on the likelihood-ratio test. We show that the likelihood-ratio test provides the highest adversarial success and we derive a theoretical limit on this success. In order to mitigate the membership inference, we propose and evaluate both a differentially private mechanism and a hiding mechanism. We also consider two types of adversarial prior knowledge for the differentially private mechanism and show that, for relatively large datasets, this mechanism can protect the privacy of participants in miRNA-based studies against strong adversaries without degrading the data utility too much.
Based on our findings and given the current number of miRNAs, we recommend to only release summary statistics of datasets containing at least a couple of hundred individuals."
]
} |
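The aggregate-comparison attacks summarized in the row above (the test of @cite_9 on allele frequencies and the L1-distance variant of @cite_20) can be illustrated on toy binary data; the data, sizes, and names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def l1_membership_score(target, case_freqs, ref_freqs):
    """Homer-style statistic: L1 distance of the target's attribute vector to
    the reference aggregate minus its distance to the case-study aggregate.
    Large positive values suggest the target is in the case group."""
    return np.abs(target - ref_freqs).sum() - np.abs(target - case_freqs).sum()

rng = np.random.default_rng(0)
n_attrs, n_case = 2000, 20
# Toy population of binary attribute profiles (e.g. minor-allele indicators)
population = rng.random((1000, n_attrs)) < 0.3
case_freqs = population[:n_case].mean(axis=0)   # aggregate of the case group
ref_freqs = population[n_case:].mean(axis=0)    # aggregate of a reference population

member_score = l1_membership_score(population[0], case_freqs, ref_freqs)
non_member_score = l1_membership_score(population[-1], case_freqs, ref_freqs)
# A member's profile pulls the case aggregate toward itself, so its score
# is systematically larger than a typical non-member's.
```

Smaller case groups make each member's pull on the aggregate larger, which is why small aggregation sizes leak more, as the smart-metering study in the next row also observes.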
1708.06145 | 2745646453 | Aggregate location data is often used to support smart services and applications, e.g., generating live traffic maps or predicting visits to businesses. In this paper, we present the first study on the feasibility of membership inference attacks on aggregate location time-series. We introduce a game-based definition of the adversarial task, and cast it as a classification problem where machine learning can be used to distinguish whether or not a target user is part of the aggregates. We empirically evaluate the power of these attacks on both raw and differentially private aggregates using two mobility datasets. We find that membership inference is a serious privacy threat, and show how its effectiveness depends on the adversary's prior knowledge, the characteristics of the underlying location data, as well as the number of users and the timeframe on which aggregation is performed. Although differentially private mechanisms can indeed reduce the extent of the attacks, they also yield a significant loss in utility. Moreover, a strategic adversary mimicking the behavior of the defense mechanism can greatly limit the protection they provide. Overall, our work presents a novel methodology geared to evaluate membership inference on aggregate location data in real-world settings and can be used by providers to assess the quality of privacy protection before data release or by regulators to detect violations. | Another line of work focuses on membership inference in machine learning models. @cite_32 show that such models may leak information about data records on which they were trained. @cite_29 present active inference attacks on deep neural networks in collaborative settings, while @cite_5 focus on privacy leakage from generative models in Machine Learning as a Service applications. 
Moreover, @cite_15 recently evaluate membership inference in the context of data aggregation in smart metering, studying how many household readings need to be aggregated in order to protect the privacy of individual profiles in a smart grid. | {
"cite_N": [
"@cite_5",
"@cite_29",
"@cite_32",
"@cite_15"
],
"mid": [
"",
"2951368041",
"2949777041",
"2973273371"
],
"abstract": [
"",
"Deep Learning has recently become hugely popular in machine learning, providing significant improvements in classification accuracy in the presence of highly-structured and large databases. Researchers have also considered privacy implications of deep learning. Models are typically trained in a centralized manner with all the data being processed by the same training algorithm. If the data is a collection of users' private data, including habits, personal pictures, geographical positions, interests, and more, the centralized server will have access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have recently been proposed where parties locally train their deep learning structures and only share a subset of the parameters in the attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy (DP) to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15. Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. The attack we developed exploits the real-time nature of the learning process that allows the adversary to train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private (the samples generated by the GAN are intended to come from the same distribution as the training data). Interestingly, we show that record-level DP applied to the shared parameters of the model, as suggested in previous work, is ineffective (i.e., record-level DP is not designed to address our attack).",
"We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial \"machine learning as a service\" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.",
"The widespread deployment of smart meters that frequently report energy consumption information, is a known threat to consumers’ privacy. Many promising privacy protection mechanisms based on secure aggregation schemes have been proposed. Even though these schemes are cryptographically secure, the energy provider has access to the plaintext aggregated power consumption. A privacy trade-off exists between the size of the aggregation scheme and the personal data that might be leaked, where smaller aggregation sizes leak more personal data. Recently, a UK industrial body has studied this privacy trade-off and identified that two smart meters forming an aggregate, are sufficient to achieve privacy. In this work, we challenge this study and investigate which aggregation sizes are sufficient to achieve privacy in the smart grid. Therefore, we propose a flexible, yet formal privacy metric using a cryptographic game based definition. Studying publicly available, real world energy consumption datasets with various temporal resolutions, ranging from minutes to hourly intervals, we show that a typical household can be identified with very high probability. For example, we observe a 50% advantage over random guessing in identifying households for an aggregation size of 20 households with a 15-minute reporting interval. Furthermore, our results indicate that single appliances can be identified with significant probability in aggregation sizes up to 10 households.",
]
} |
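The distinguishing-game framing used by the 1708.06145 rows (a classifier decides whether a target user is part of released aggregate location time-series) can be sketched on synthetic mobility data; the group sizes, the binary routines, and the simple linear score below are illustrative assumptions, not the paper's actual classifier:

```python
import numpy as np

rng = np.random.default_rng(1)

n_users, n_cells = 100, 240          # e.g. 10 regions x 24 hourly epochs, flattened
group_size = 20

# Persistent binary mobility routines: whether user u is seen in cell c
routines = rng.random((n_users, n_cells)) < 0.2
target, others = routines[0], routines[1:]

def sample_aggregate(include_target):
    """Aggregate location time-series (per-cell counts) over a random group,
    with or without the target user -- the two worlds of the game."""
    idx = rng.choice(len(others), size=group_size - include_target, replace=False)
    group = others[idx]
    if include_target:
        group = np.vstack([group, target])
    return group.sum(axis=0)

# Labelled aggregates playing the role of the adversary's prior knowledge
labels = np.array([1] * 200 + [0] * 200)
aggs = np.array([sample_aggregate(b) for b in labels])

# Minimal linear distinguisher: project each aggregate onto the target's
# routine and learn a threshold from the labelled examples
scores = aggs @ target
thresh = 0.5 * (scores[labels == 1].mean() + scores[labels == 0].mean())
accuracy = ((scores > thresh).astype(int) == labels).mean()
# accuracy well above 0.5 means the aggregates leak the target's membership
```

Any classifier can play the distinguisher role; the linear projection is only the simplest choice that exploits the target's persistent routine.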
1708.06145 | 2745646453 | Aggregate location data is often used to support smart services and applications, e.g., generating live traffic maps or predicting visits to businesses. In this paper, we present the first study on the feasibility of membership inference attacks on aggregate location time-series. We introduce a game-based definition of the adversarial task, and cast it as a classification problem where machine learning can be used to distinguish whether or not a target user is part of the aggregates. We empirically evaluate the power of these attacks on both raw and differentially private aggregates using two mobility datasets. We find that membership inference is a serious privacy threat, and show how its effectiveness depends on the adversary's prior knowledge, the characteristics of the underlying location data, as well as the number of users and the timeframe on which aggregation is performed. Although differentially private mechanisms can indeed reduce the extent of the attacks, they also yield a significant loss in utility. Moreover, a strategic adversary mimicking the behavior of the defense mechanism can greatly limit the protection they provide. Overall, our work presents a novel methodology geared to evaluate membership inference on aggregate location data in real-world settings and can be used by providers to assess the quality of privacy protection before data release or by regulators to detect violations. | Differentially private mechanisms. Differential privacy (DP) @cite_1 can be used to mitigate membership inference, as its indistinguishability-based definition guarantees that the presence or the absence of an individual does not significantly affect the output of the data release. @cite_21 introduce a framework geared to formalize the notion of Positive vs Negative Membership Privacy, considering an adversary parameterized by her prior knowledge. 
However, to the best of our knowledge, no specific technique has been presented to instantiate their framework in our setting. Common mechanisms to achieve DP include adding noise drawn from the Laplace @cite_1 or the Gaussian @cite_22 distribution (see ). | {
"cite_N": [
"@cite_21",
"@cite_1",
"@cite_22"
],
"mid": [
"",
"2109426455",
"2610910029"
],
"abstract": [
"",
"Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.",
"In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [14,4,13]. In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form Σ i f(d i ), that is, the sum over all rows i in the database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution."
]
} |
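The Laplace mechanism cited in the row above can be sketched in a few lines; the toy counts and the sensitivity value here are assumptions, and for aggregate location time-series the sensitivity must bound a single user's total contribution across all released entries:

```python
import numpy as np

def laplace_mechanism(counts, epsilon, sensitivity, rng=None):
    """Release aggregate counts with epsilon-differential privacy by adding
    i.i.d. Laplace(sensitivity / epsilon) noise to every entry. `sensitivity`
    must bound the change one user can cause across all entries; for location
    time-series a user may appear in many cells, so it is often larger than 1."""
    if rng is None:
        rng = np.random.default_rng()
    return counts + rng.laplace(scale=sensitivity / epsilon, size=counts.shape)

# Toy hourly counts of users observed in one region
true_counts = np.array([120.0, 98.0, 87.0, 150.0, 210.0, 175.0])
noisy = laplace_mechanism(true_counts, epsilon=1.0, sensitivity=1.0,
                          rng=np.random.default_rng(42))
# Smaller epsilon -> larger noise scale -> stronger privacy, lower utility
```

The Gaussian mechanism of @cite_22 is analogous, with noise calibrated to the L2 sensitivity and yielding (epsilon, delta) guarantees.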
1708.06145 | 2745646453 | Aggregate location data is often used to support smart services and applications, e.g., generating live traffic maps or predicting visits to businesses. In this paper, we present the first study on the feasibility of membership inference attacks on aggregate location time-series. We introduce a game-based definition of the adversarial task, and cast it as a classification problem where machine learning can be used to distinguish whether or not a target user is part of the aggregates. We empirically evaluate the power of these attacks on both raw and differentially private aggregates using two mobility datasets. We find that membership inference is a serious privacy threat, and show how its effectiveness depends on the adversary's prior knowledge, the characteristics of the underlying location data, as well as the number of users and the timeframe on which aggregation is performed. Although differentially private mechanisms can indeed reduce the extent of the attacks, they also yield a significant loss in utility. Moreover, a strategic adversary mimicking the behavior of the defense mechanism can greatly limit the protection they provide. Overall, our work presents a novel methodology geared to evaluate membership inference on aggregate location data in real-world settings and can be used by providers to assess the quality of privacy protection before data release or by regulators to detect violations. | Specific to the context of spatio-temporal data are the techniques proposed by @cite_33 , who use synthetic data generation to release differentially private mobility patterns of commuters in Minnesota. Also, Rastogi and Nath @cite_6 propose an algorithm based on Discrete Fourier Transform to privately release aggregate time-series, while Acs and Castelluccia @cite_13 improve on @cite_6 and present a differentially private scheme tailored to the spatio-temporal density of Paris. 
Finally, @cite_25 release the entropy of certain locations with DP guarantees, and show how to achieve better utility although with weaker privacy notions. | {
"cite_N": [
"@cite_13",
"@cite_33",
"@cite_6",
"@cite_25"
],
"mid": [
"1993599520",
"2080044359",
"2104803737",
"2566523111"
],
"abstract": [
"With billions of handsets in use worldwide, the quantity of mobility data is gigantic. When aggregated, they can help understand complex processes, such as the spread of viruses, and build better transportation systems that prevent traffic congestion. While the benefits provided by these datasets are indisputable, they unfortunately pose a considerable threat to location privacy. In this paper, we present a new anonymization scheme to release the spatio-temporal density of Paris, France, i.e., the number of individuals in 989 different areas of the city released every hour over a whole week. The density is computed from a call-data-record (CDR) dataset, provided by the French Telecom operator Orange, containing the CDRs of roughly 2 million users over one week. Our scheme is differentially private, and hence provides a provable privacy guarantee to each individual in the dataset. Our main goal with this case study is to show that, even with high-dimensional sensitive data, differential privacy can provide practical utility with a meaningful privacy guarantee, if the anonymization scheme is carefully designed. This work is part of the national project XData (http://xdata.fr) that aims at combining large (anonymized) datasets provided by different service providers (telecom, electricity, water management, postal service, etc.).",
"In this paper, we propose the first formal privacy analysis of a data anonymization process known as the synthetic data generation, a technique becoming popular in the statistics community. The target application for this work is a mapping program that shows the commuting patterns of the population of the United States. The source data for this application were collected by the U.S. Census Bureau, but due to privacy constraints, they cannot be used directly by the mapping program. Instead, we generate synthetic data that statistically mimic the original data while providing privacy guarantees. We use these synthetic data as a surrogate for the original data. We find that while some existing definitions of privacy are inapplicable to our target application, others are too conservative and render the synthetic data useless since they guard against privacy breaches that are very unlikely. Moreover, the data in our target application is sparse, and none of the existing solutions are tailored to anonymize sparse data. In this paper, we propose solutions to address the above issues.",
"We propose the first differentially private aggregation algorithm for distributed time-series data that offers good practical utility without any trusted server. This addresses two important challenges in participatory data-mining applications where (i) individual users collect temporally correlated time-series data (such as location traces, web history, personal health data), and (ii) an untrusted third-party aggregator wishes to run aggregate queries on the data. To ensure differential privacy for time-series data despite the presence of temporal correlation, we propose the Fourier Perturbation Algorithm (FPAk). Standard differential privacy techniques perform poorly for time-series data. To answer n queries, such techniques can result in a noise of Θ(n) to each query answer, making the answers practically useless if n is large. Our FPAk algorithm perturbs the Discrete Fourier Transform of the query answers. For answering n queries, FPAk improves the expected error from Θ(n) to roughly Θ(k) where k is the number of Fourier coefficients that can (approximately) reconstruct all the n query answers. Our experiments show that k ≪ n. To deal with the absence of a trusted central server, we propose the Distributed Laplace Perturbation Algorithm (DLPA) to add noise in a distributed way in order to guarantee differential privacy. To the best of our knowledge, DLPA is the first distributed differentially private algorithm that can scale with a large number of users: DLPA outperforms the only other distributed solution for differential privacy proposed so far, by reducing the computational load per user from O(U) to O(1) where U is the number of users.",
"Location entropy (LE) is a popular metric for measuring the popularity of various locations (e.g., points-of-interest). Unlike other metrics computed from only the number of (unique) visits to a location, namely frequency, LE also captures the diversity of the users' visits, and is thus more accurate than other metrics. Current solutions for computing LE require full access to the past visits of users to locations, which poses privacy threats. This paper discusses, for the first time, the problem of perturbing location entropy for a set of locations according to differential privacy. The problem is challenging because removing a single user from the dataset will impact multiple records of the database; i.e., all the visits made by that user to various locations. Towards this end, we first derive non-trivial, tight bounds for both local and global sensitivity of LE, and show that to satisfy e-differential privacy, a large amount of noise must be introduced, rendering the published results useless. Hence, we propose a thresholding technique to limit the number of users' visits, which significantly reduces the perturbation error but introduces an approximation error. To achieve better utility, we extend the technique by adopting two weaker notions of privacy: smooth sensitivity (slightly weaker) and crowd-blending (strictly weaker). Extensive experiments on synthetic and real-world datasets show that our proposed techniques preserve original data distribution without compromising location privacy."
]
} |
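As a concrete illustration of the output-perturbation mechanisms surveyed in the row above, the following sketch adds Laplace noise with scale sensitivity/ε to each count of an aggregate location time-series. The function names and the example counts are hypothetical, and the Laplace variate is drawn by inverse-CDF sampling:

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a zero-mean Laplace variate.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb_counts(counts, epsilon, sensitivity=1.0, rng=None):
    """Output-perturb an aggregate location time-series: add Laplace
    noise with scale sensitivity/epsilon to each per-interval count.
    With sensitivity 1 (one user contributes at most one visit per
    interval), this yields epsilon-differential privacy per count."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    return [c + laplace_noise(scale, rng) for c in counts]

counts = [120, 98, 143, 87]   # users observed per interval in one region
noisy = perturb_counts(counts, epsilon=1.0, rng=random.Random(0))
print([round(x, 2) for x in noisy])
```

Smaller ε means a larger noise scale, which is exactly the utility/privacy trade-off the surveyed schemes tune.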
1708.06145 | 2745646453 | Aggregate location data is often used to support smart services and applications, e.g., generating live traffic maps or predicting visits to businesses. In this paper, we present the first study on the feasibility of membership inference attacks on aggregate location time-series. We introduce a game-based definition of the adversarial task, and cast it as a classification problem where machine learning can be used to distinguish whether or not a target user is part of the aggregates. We empirically evaluate the power of these attacks on both raw and differentially private aggregates using two mobility datasets. We find that membership inference is a serious privacy threat, and show how its effectiveness depends on the adversary's prior knowledge, the characteristics of the underlying location data, as well as the number of users and the timeframe on which aggregation is performed. Although differentially private mechanisms can indeed reduce the extent of the attacks, they also yield a significant loss in utility. Moreover, a strategic adversary mimicking the behavior of the defense mechanism can greatly limit the protection they provide. Overall, our work presents a novel methodology geared to evaluate membership inference on aggregate location data in real-world settings and can be used by providers to assess the quality of privacy protection before data release or by regulators to detect violations. | Location privacy. Previous location privacy research taking into account traces or profiles of single users @cite_17 @cite_4 @cite_3 @cite_34 @cite_14 @cite_10 does not apply to our problem, which focuses on aggregate location statistics. Closer to our work is our own PETS'17 paper @cite_7 , which shows that aggregate location time-series can be used by an adversary to improve her prior knowledge about users' location profiles. 
Also, @cite_26 present an attack that exploits the uniqueness and the regularity of human mobility, and extracts location trajectories from aggregate mobility data. As opposed to these efforts, which attempt to learn data about individuals (e.g., mobility profiles, trajectories) from the aggregates, we focus on inferring their membership to datasets, which, to the best of our knowledge, has not been studied before. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_26",
"@cite_7",
"@cite_3",
"@cite_34",
"@cite_10",
"@cite_17"
],
"mid": [
"2151908916",
"2126729912",
"2593227599",
"2952005775",
"2045686369",
"1971157729",
"2115240023",
"1536564267"
],
"abstract": [
"In recent years, the rapid spread of smartphones has led to the increasing popularity of Location-Based Social Networks (LBSNs). Although a number of research studies and articles in the press have shown the dangers of exposing personal location data, the inherent nature of LBSNs encourages users to publish information about their current location (i.e., their check-ins). The same is true for the majority of the most popular social networking websites, which offer the possibility of associating the current location of users to their posts and photos. Moreover, some LBSNs, such as Foursquare, let users tag their friends in their check-ins, thus potentially releasing location information of individuals that have no control over the published data. This raises additional privacy concerns for the management of location information in LBSNs. In this paper we propose and evaluate a series of techniques for the identification of users from their check-in data. More specifically, we first present two strategies according to which users are characterized by the spatio-temporal trajectory emerging from their check-ins over time and the frequency of visit to specific locations, respectively. In addition to these approaches, we also propose a hybrid strategy that is able to exploit both types of information. It is worth noting that these techniques can be applied to a more general class of problems where locations and social links of individuals are available in a given dataset. We evaluate our techniques by means of three real-world LBSNs datasets, demonstrating that a very limited amount of data points is sufficient to identify a user with a high degree of accuracy. For instance, we show that in some datasets we are able to classify more than 80% of the users correctly.",
"There is a rich collection of literature that aims at protecting the privacy of users querying location-based services. One of the most popular location privacy techniques consists in cloaking users' locations such that k users appear as potential senders of a query, thus achieving k-anonymity. This paper analyzes the effectiveness of k-anonymity approaches for protecting location privacy in the presence of various types of adversaries. The unraveling of the scheme unfolds the inconsistency between its components, mainly the cloaking mechanism and the k-anonymity metric. We show that constructing cloaking regions based on the users' locations does not reliably relate to location privacy, and argue that this technique may even be detrimental to users' location privacy. The uncovered flaws imply that existing k-anonymity scheme is a tattered cloak for protecting location privacy.",
"Human mobility data has been ubiquitously collected through cellular networks and mobile applications, and publicly released for academic research and commercial purposes for the last decade. Since releasing individual's mobility records usually gives rise to privacy issues, dataset owners tend to only publish aggregated mobility data, such as the number of users covered by a cellular tower at a specific timestamp, which is believed to be sufficient for preserving users' privacy. However, in this paper, we argue and prove that even publishing aggregated mobility data could lead to privacy breach in individuals' trajectories. We develop an attack system that is able to exploit the uniqueness and regularity of human mobility to recover individual's trajectories from the aggregated mobility data without any prior knowledge. By conducting experiments on two real-world datasets collected from both mobile application and cellular network, we reveal that the attack system is able to recover users' trajectories with accuracy of about 73%–91% at the scale of tens of thousands to hundreds of thousands of users, which indicates severe privacy leakage in such datasets. Through the investigation on aggregated mobility data, our work recognizes a novel privacy problem in publishing statistic data, which calls for immediate attention from both academia and industry.",
"Information about people's movements and the locations they visit enables an increasing number of mobility analytics applications, e.g., in the context of urban and transportation planning. In this setting, rather than collecting or sharing raw data, entities often use aggregation as a privacy protection mechanism, aiming to hide individual users' location traces. Furthermore, to bound information leakage from the aggregates, they can perturb the input of the aggregation or its output to ensure that these are differentially private. In this paper, we set out to evaluate the impact of releasing aggregate location time-series on the privacy of individuals contributing to the aggregation. We introduce a framework allowing us to reason about privacy against an adversary attempting to predict users' locations or recover their mobility patterns. We formalize these attacks as inference problems, and discuss a few strategies to model the adversary's prior knowledge based on the information she may have access to. We then use the framework to quantify the privacy loss stemming from aggregate location data, with and without the protection of differential privacy, using two real-world mobility datasets. We find that aggregates do leak information about individuals' punctual locations and mobility profiles. The density of the observations, as well as timing, play important roles, e.g., regular patterns during peak hours are better protected than sporadic movements. Finally, our evaluation shows that both output and input perturbation offer little additional protection, unless they introduce large amounts of noise ultimately destroying the utility of the data.",
"We examine a very large-scale data set of more than 30 billion call records made by 25 million cell phone users across all 50 states of the US and attempt to determine to what extent anonymized location data can reveal private user information. Our approach is to infer, from the call records, the \"top N\" locations for each user and correlate this information with publicly-available side information such as census data. For example, the measured \"top 2\" locations likely correspond to home and work locations, the \"top 3\" to home, work, and shopping school commute path locations. We consider the cases where those \"top N\" locations are measured with different levels of granularity, ranging from a cell sector to whole cell, zip code, city, county and state. We then compute the anonymity set, namely the number of users uniquely identified by a given set of \"top N\" locations at different granularity levels. We find that the \"top 1\" location does not typically yield small anonymity sets. However, the top 2 and top 3 locations do, certainly at the sector or cell-level granularity. We consider a variety of different factors that might impact the size of the anonymity set, for example the distance between the \"top N\" locations or the geographic environment (rural vs urban). We also examine to what extent specific side information, in particular the size of the user's social network, decrease the anonymity set and therefore increase risks to privacy. Our study shows that sharing anonymized location data will likely lead to privacy risks and that, at a minimum, the data needs to be coarse in either the time domain (meaning the data is collected over short periods of time, in which case inferring the top N locations reliably is difficult) or the space domain (meaning the data granularity is strictly higher than the cell level). In both cases, the utility of the anonymized location data will be decreased, potentially by a significant amount.",
"",
"We study fifteen months of human mobility data for one and a half million individuals and find that human mobility traces are highly unique. In fact, in a dataset where the location of an individual is specified hourly, and with a spatial resolution equal to that given by the carrier's antennas, four spatio-temporal points are enough to uniquely identify 95% of the individuals. We coarsen the data spatially and temporally to find a formula for the uniqueness of human mobility traces given their resolution and the available outside information. This formula shows that the uniqueness of mobility traces decays approximately as the 1/10 power of their resolution. Hence, even coarse datasets provide little anonymity. These findings represent fundamental constraints to an individual's privacy and have important implications for the design of frameworks and institutions dedicated to protect the privacy of individuals.",
"Many applications benefit from user location data, but location data raises privacy concerns. Anonymization can protect privacy, but identities can sometimes be inferred from supposedly anonymous data. This paper studies a new attack on the anonymity of location data. We show that if the approximate locations of an individual's home and workplace can both be deduced from a location trace, then the median size of the individual's anonymity set in the U.S. working population is 1, 21 and 34,980, for locations known at the granularity of a census block, census tract and county respectively. The location data of people who live and work in different regions can be re-identified even more easily. Our results show that the threat of re-identification for location data is much greater when the individual's home and work locations can both be deduced from the data. To preserve anonymity, we offer guidance for obfuscating location traces before they are disclosed."
]
} |
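The membership-inference setting described in this row can be illustrated with a deliberately simplified test statistic: correlate the target's known visit pattern with the mean-centered aggregate counts and compare scores for groups with and without the target. This is a hypothetical toy distinguisher, not the machine-learning classifier the paper actually builds:

```python
def membership_score(aggregate, target_trace, expected_aggregate):
    """Toy membership-inference statistic: correlate the target's known
    binary visit pattern with the mean-centered aggregate counts. Groups
    that contain the target should score higher than groups that do not."""
    return sum(t * (a - e) for t, a, e in
               zip(target_trace, aggregate, expected_aggregate))

target = [1, 0, 1, 0, 1]                 # target's visits over 5 time slots
others = [[0, 1, 0, 1, 0], [1, 1, 0, 0, 1], [0, 0, 1, 1, 0]]
agg_in = [t + o1 + o2 for t, o1, o2 in zip(target, others[0], others[1])]
agg_out = [sum(col) for col in zip(*others)]
expected = [1.5] * 5                      # expected counts for 3 average users

print(membership_score(agg_in, target, expected))   # 0.5  (target included)
print(membership_score(agg_out, target, expected))  # -1.5 (target excluded)
```

Even this crude statistic separates IN from OUT groups on the toy data, which conveys why aggregation alone is a weak defense against an adversary with prior knowledge of the target's mobility.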
1708.04352 | 2746141389 | As demand drives systems to generalize to various domains and problems, the study of multitask, transfer and lifelong learning has become an increasingly important pursuit. In discrete domains, performance on the Atari game suite has emerged as the de facto benchmark for assessing multitask learning. However, in continuous domains there is a lack of agreement on standard multitask evaluation environments which makes it difficult to compare different approaches fairly. In this work, we describe a benchmark set of tasks that we have developed in an extendable framework based on OpenAI Gym. We run a simple baseline using Trust Region Policy Optimization and release the framework publicly to be expanded and used for the systematic comparison of multitask, transfer, and lifelong learning in continuous domains. | Several works investigate multitask or transfer learning with MuJoCo tasks. These tasks include: navigating around a wall (where a wall separates an agent from its goal); the OpenAI Gym Reacher environment with an added image state space of the environment; jumping over a wall using a model similar to the OpenAI Half-Cheetah environment @cite_4 ; varying the gravity of various standard OpenAI Gym benchmark environments (Reacher, Hopper, Humanoid, HalfCheetah) and transferring between the modified environments; adding motor noise to the same set of environments @cite_8 ; simulated grasping and stacking using a Jaco arm @cite_6 ; and several custom grasping and manipulation tasks to demonstrate learning invariant feature spaces @cite_13 . | {
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_6",
"@cite_8"
],
"mid": [
"2605368761",
"2558634851",
"2952629144",
"2530944449"
],
"abstract": [
"People can learn a wide range of tasks from their own experience, but can also learn from observing other creatures. This can accelerate acquisition of new skills even when the observed agent differs substantially from the learning agent in terms of morphology. In this paper, we examine how reinforcement learning algorithms can transfer knowledge between morphologically different agents (e.g., different robots). We introduce a problem formulation where two agents are tasked with learning multiple skills by sharing information. Our method uses the skills that were learned by both agents to train invariant feature spaces that can then be used to transfer other skills from one agent to another. The process of learning these invariant feature spaces can be viewed as a kind of \"analogy making,\" or implicit learning of partial correspondences between two distinct domains. We evaluate our transfer learning algorithm in two simulated robotic manipulation skills, and illustrate that we can transfer knowledge between simulated robotic arms with different numbers of links, as well as simulated arms with different actuation mechanisms, where one robot is torque-driven while the other is tendon-driven.",
"Deep reinforcement learning (RL) can acquire complex behaviors from low-level inputs, such as images. However, real-world applications of such methods require generalizing to the vast variability of the real world. Deep networks are known to achieve remarkable generalization when provided with massive amounts of labeled data, but can we provide this breadth of experience to an RL agent, such as a robot? The robot might continuously learn as it explores the world around it, even while deployed. However, this learning requires access to a reward function, which is often hard to measure in real-world domains, where the reward could depend on, for example, unknown positions of objects or the emotional state of the user. Conversely, it is often quite practical to provide the agent with reward functions in a limited set of situations, such as when a human supervisor is present or in a controlled setting. Can we make use of this limited supervision, and still benefit from the breadth of experience an agent might collect on its own? In this paper, we formalize this problem as semisupervised reinforcement learning, where the reward function can only be evaluated in a set of \"labeled\" MDPs, and the agent must generalize its behavior to the wide range of states it might encounter in a set of \"unlabeled\" MDPs, by using experience from both settings. Our proposed method infers the task objective in the unlabeled MDPs through an algorithm that resembles inverse RL, using the agent's own prior experience in the labeled MDPs as a kind of demonstration of optimal behavior. We evaluate our method on challenging tasks that require control directly from images, and show that our approach can improve the generalization of a learned deep neural network policy by using experience for which no reward function is available. We also show that our method outperforms direct supervised learning of the reward.",
"Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.",
"Developing control policies in simulation is often more practical and safer than directly running experiments in the real world. This applies to policies obtained from planning and optimization, and even more so to policies obtained from reinforcement learning, which is often very data demanding. However, a policy that succeeds in simulation often doesn't work when deployed on a real robot. Nevertheless, often the overall gist of what the policy does in simulation remains valid in the real world. In this paper we investigate such settings, where the sequence of states traversed in simulation remains reasonable for the real world, even if the details of the controls are not, as could be the case when the key differences lie in detailed friction, contact, mass and geometry properties. During execution, at each time step our approach computes what the simulation-based control policy would do, but then, rather than executing these controls on the real robot, our approach computes what the simulation expects the resulting next state(s) will be, and then relies on a learned deep inverse dynamics model to decide which real-world action is most suitable to achieve those next states. Deep models are only as good as their training data, and we also propose an approach for data collection to (incrementally) learn the deep inverse dynamics model. Our experiments shows our approach compares favorably with various baselines that have been developed for dealing with simulation to real world model discrepancy, including output error control and Gaussian dynamics adaptation."
]
} |
1708.04352 | 2746141389 | As demand drives systems to generalize to various domains and problems, the study of multitask, transfer and lifelong learning has become an increasingly important pursuit. In discrete domains, performance on the Atari game suite has emerged as the de facto benchmark for assessing multitask learning. However, in continuous domains there is a lack of agreement on standard multitask evaluation environments which makes it difficult to compare different approaches fairly. In this work, we describe a benchmark set of tasks that we have developed in an extendable framework based on OpenAI Gym. We run a simple baseline using Trust Region Policy Optimization and release the framework publicly to be expanded and used for the systematic comparison of multitask, transfer, and lifelong learning in continuous domains. | Other works investigate using classical control systems and robotics simulations with a set of varied hyperparameters for each environment. These include: a simple mass spring damper task, cart-pole with continuous control; a three-link inverted pendulum with continuous control; a quadrotor control task @cite_14 ; a double-linked pendulum task; a modified cartpole balancing task which can transfer to physical system @cite_9 . | {
"cite_N": [
"@cite_9",
"@cite_14"
],
"mid": [
"2737821837",
"2106008664"
],
"abstract": [
"We present an approach to learning control policies for physical robots that achieves high efficiency by adjusting existing policies that have been learned on similar source systems, such as a similar robot with different physical parameters, or an approximate dynamics model simulator. This can be viewed as calibrating a policy learned on a source system, to match a desired behaviour in similar target systems. Our approach assumes that the trajectories described by the source robot are feasible on the target robot. By making this assumption, we only need to learn a mapping from the source robot state and action spaces to the target robot action space, which we call a policy adjustment model. We demonstrate our approach in simulation in the cart-pole balancing task and a two link double pendulum. We also validate our approach with a physical cart-pole system, where we adjust a learned policy under changes to the weight of the pole.",
"Policy gradient algorithms have shown considerable recent success in solving high-dimensional sequential decision making tasks, particularly in robotics. However, these methods often require extensive experience in a domain to achieve high performance. To make agents more sample-efficient, we developed a multi-task policy gradient method to learn decision making tasks consecutively, transferring knowledge between tasks to accelerate learning. Our approach provides robust theoretical guarantees, and we show empirically that it dramatically accelerates learning on a variety of dynamical systems, including an application to quadrotor control."
]
} |
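The varied-hyperparameter task families described in this row (the same dynamics instantiated under different gravity settings, motor noise, etc.) can be mimicked without any physics simulator. The toy environment below is a hypothetical stand-in for the Gym/MuJoCo benchmarks, included only to make the "task family" idea concrete:

```python
class PointMassEnv:
    """Toy 1-D point-mass environment: the state is (height, velocity),
    the action is an upward thrust, and gravity is a task parameter.
    Varying `gravity` yields a family of related tasks, mirroring the
    modified-gravity benchmarks discussed above."""
    def __init__(self, gravity=9.8, dt=0.05):
        self.gravity, self.dt = gravity, dt
        self.reset()

    def reset(self):
        self.height, self.velocity = 1.0, 0.0
        return (self.height, self.velocity)

    def step(self, thrust):
        self.velocity += (thrust - self.gravity) * self.dt
        self.height = max(0.0, self.height + self.velocity * self.dt)
        reward = -abs(self.height - 1.0)   # hover near height 1
        return (self.height, self.velocity), reward

# A task family: identical dynamics, different gravity coefficients.
tasks = [PointMassEnv(gravity=g) for g in (4.9, 9.8, 14.7)]
for env in tasks:
    (_, v), _ = env.step(thrust=0.0)       # free fall for one step
    print(round(v, 3))
```

A multitask or transfer learner would be trained on some members of such a family and evaluated on held-out gravity values.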
1708.04321 | 2745593907 | The K-nearest neighbor (KNN) classifier is one of the simplest and most common classifiers, yet its performance competes with the most complex classifiers in the literature. The core of this classifier depends mainly on measuring the distance or similarity between the tested example and the training examples. This raises a major question about which distance measures to be used for the KNN classifier among a large number of distance and similarity measures? This review attempts to answer the previous question through evaluating the performance (measured by accuracy, precision and recall) of the KNN using a large number of distance measures, tested on a number of real world datasets, with and without adding different levels of noise. The experimental results show that the performance of KNN classifier depends significantly on the distance used, the results showed large gaps between the performances of different distances. We found that a recently proposed non-convex distance performed the best when applied on most datasets comparing to the other tested distances. In addition, the performance of the KNN degraded only about @math while the noise level reaches @math , this is true for all the distances used. This means that the KNN classifier using any of the top @math distances tolerate noise to a certain degree. Moreover, the results show that some distances are less affected by the added noise comparing to other distances. | @cite_17 analyzed the impact of five distance metrics, namely Euclidean, Manhattan, Canberra, Chebychev and Minkowsky in instance-based learning algorithms. Particularly, 1-NN Classifier and the Incremental Hypersphere Classifier (IHC) Classifier, they reported the results of their empirical evaluation on fifteen datasets with different sizes showing that the Euclidean and Manhattan metrics significantly yield good results comparing to the other tested distances. | {
"cite_N": [
"@cite_17"
],
"mid": [
"1146581459"
],
"abstract": [
"In this paper we analyze the impact of distinct distance metrics in instance-based learning algorithms. In particular, we look at the well-known 1-Nearest Neighbor (NN) algorithm and the Incremental Hypersphere Classifier (IHC) algorithm, which proved to be efficient in large-scale recognition problems and online learning. We provide a detailed empirical evaluation on fifteen datasets with several sizes and dimensionality. We then statistically show that the Euclidean and Manhattan metrics significantly yield good results in a wide range of problems. However, grid-search like methods are often desirable to determine the best matching metric depending on the problem and algorithm."
]
} |
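The central role the distance function plays in the KNN classifier discussed in this row is easy to see in a minimal implementation with a pluggable metric. This is an illustrative sketch, not the paper's evaluation code:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_predict(query, train, k=1, dist=euclidean):
    """Classify `query` by majority vote among its k nearest
    training examples under the chosen distance function."""
    neighbors = sorted(train, key=lambda ex: dist(query, ex[0]))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.9), "B")]
print(knn_predict((0.3, 0.1), train, k=3, dist=euclidean))  # -> A
print(knn_predict((4.8, 5.1), train, k=3, dist=manhattan))  # -> B
```

Swapping `dist` is the only change needed to rerun the classifier under a different metric, which is exactly the experimental axis the review varies.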
1708.04321 | 2745593907 | The K-nearest neighbor (KNN) classifier is one of the simplest and most common classifiers, yet its performance competes with the most complex classifiers in the literature. The core of this classifier depends mainly on measuring the distance or similarity between the tested example and the training examples. This raises a major question about which distance measures to be used for the KNN classifier among a large number of distance and similarity measures? This review attempts to answer the previous question through evaluating the performance (measured by accuracy, precision and recall) of the KNN using a large number of distance measures, tested on a number of real world datasets, with and without adding different levels of noise. The experimental results show that the performance of KNN classifier depends significantly on the distance used, the results showed large gaps between the performances of different distances. We found that a recently proposed non-convex distance performed the best when applied on most datasets comparing to the other tested distances. In addition, the performance of the KNN degraded only about @math while the noise level reaches @math , this is true for all the distances used. This means that the KNN classifier using any of the top @math distances tolerate noise to a certain degree. Moreover, the results show that some distances are less affected by the added noise comparing to other distances. | @cite_1 investigated the effect of Euclidean, Manhattan and Hassanat distance metrics on the performance of the KNN classifier, with K ranging from @math to the square root of the size of the training set, considering only the odd K's, in addition to experimenting with other classifiers such as the Ensemble Nearest Neighbor classifier (ENN) and the Inverted Indexes of Neighbors Classifier (IINC).
Their experiments were conducted on @math datasets taken from the UCI machine learning repository, the reported results show that Hassanat distance outperformed both of Manhattan and Euclidean distances in most of the tested datasets using the three investigated classifiers. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2288452560"
],
"abstract": [
"We showed in this work how the Hassanat distance metric enhances the performance of the nearest neighbour classifiers. The results demonstrate the superiority of this distance metric over the traditional and most-used distances, such as the Manhattan distance and the Euclidean distance. Moreover, we proved that the Hassanat distance metric is invariant to data scale, noise and outliers. Throughout this work, it is clearly notable that both ENN and IINC performed very well with the distance investigated, as their accuracy increased significantly by 3.3 and 3.1 respectively, with no significant advantage of the ENN over the IINC in terms of accuracy. Correspondingly, it can be noted from our results that there is no optimal algorithm that can solve all real-life problems perfectly; this is supported by the no-free-lunch theorem."
]
} |
1708.04400 | 2746274177 | Pixel-level annotations are expensive and time-consuming to obtain. Hence, weak supervision using only image tags could have a significant impact in semantic segmentation. Recent years have seen great progress in weakly-supervised semantic segmentation, whether from a single image or from videos. However, most existing methods are designed to handle a single background class. In practical applications, such as autonomous navigation, it is often crucial to reason about multiple background classes. In this paper, we introduce an approach to doing so by making use of classifier heatmaps. We then develop a two-stream deep architecture that jointly leverages appearance and motion, and design a loss based on our heatmaps to train it. Our experiments demonstrate the benefits of our classifier heatmaps and of our two-stream architecture on challenging urban scene datasets and on the YouTube-Objects benchmark, where we obtain state-of-the-art results. | Over the years, many approaches have tackled the problem of video semantic segmentation. In particular, much research has been done in the context of fully-supervised semantic segmentation, including methods based on CNNs @cite_22 @cite_19 @cite_25 and on graphical models @cite_31 @cite_45 @cite_16 . Here, however, we focus the discussion on the methods that do not require fully-annotated training data, which is typically expensive to obtain. | {
"cite_N": [
"@cite_22",
"@cite_19",
"@cite_45",
"@cite_31",
"@cite_16",
"@cite_25"
],
"mid": [
"2963866581",
"2963391479",
"1961270558",
"2461677039",
"",
"2559719760"
],
"abstract": [
"Recent years have seen tremendous progress in still-image segmentation; however the naive application of these state-of-the-art algorithms to every video frame requires considerable computation and ignores the temporal continuity inherent in video. We propose a video recognition framework that relies on two key observations: (1) while pixels may change rapidly from frame to frame, the semantic content of a scene evolves more slowly, and (2) execution can be viewed as an aspect of architecture, yielding purpose-fit computation schedules for networks. We define a novel family of “clockwork” convnets driven by fixed or adaptive clock signals that schedule the processing of different layers at different update rates according to their semantic stability. We design a pipeline schedule to reduce latency for real-time recognition and a fixed-rate schedule to reduce overall computation. Finally, we extend clockwork scheduling to adaptive video processing by incorporating data-driven clocks that can be tuned on unlabeled video. The accuracy and efficiency of clockwork convnets are evaluated on the Youtube-Objects, NYUD, and Cityscapes video datasets.",
"Over the last few years deep learning methods have emerged as one of the most prominent approaches for video analysis. However, so far their most successful applications have been in the area of video classification and detection, i.e., problems involving the prediction of a single class label or a handful of output variables per video. Furthermore, while deep networks are commonly recognized as the best models to use in these domains, there is a widespread perception that in order to yield successful results they often require time-consuming architecture search, manual tweaking of parameters and computationally intensive preprocessing or post-processing methods. In this paper we challenge these views by presenting a deep 3D convolutional architecture trained end to end to perform voxel-level prediction, i.e., to output a variable at every voxel of the video. Most importantly, we show that the same exact architecture can be used to achieve competitive results on three widely different voxel-prediction tasks: video semantic segmentation, optical flow estimation, and video coloring. The three networks learned on these problems are trained from raw video without any form of preprocessing and their outputs do not require post-processing to achieve outstanding performance. Thus, they offer an efficient alternative to traditional and much more computationally expensive methods in these video domains.",
"We address the problem of integrating object reasoning with supervoxel labeling in multiclass semantic video segmentation. To this end, we first propose an object-augmented dense CRF in spatio-temporal domain, which captures long-range dependency between supervoxels, and imposes consistency between object and supervoxel labels. We develop an efficient mean field inference algorithm to jointly infer the supervoxel labels, object activations and their occlusion relations for a moderate number of object hypotheses. To scale up our method, we adopt an active inference strategy to improve the efficiency, which adaptively selects object subgraphs in the object-augmented dense CRF. We formulate the problem as a Markov Decision Process, which learns an approximate optimal policy based on a reward of accuracy improvement and a set of well-designed model and input features. We evaluate our method on three publicly available multiclass video semantic segmentation datasets and demonstrate superior efficiency and accuracy.",
"We present an approach to long-range spatio-temporal regularization in semantic video segmentation. Temporal regularization in video is challenging because both the camera and the scene may be in motion. Thus Euclidean distance in the space-time volume is not a good proxy for correspondence. We optimize the mapping of pixels to a Euclidean feature space so as to minimize distances between corresponding points. Structured prediction is performed by a dense CRF that operates on the optimized features. Experimental results demonstrate that the presented approach increases the accuracy and temporal consistency of semantic video segmentation.",
"",
"In this work, we address the challenging video scene parsing problem by developing effective representation learning methods given limited parsing annotations. In particular, we contribute two novel methods that constitute a unified parsing framework. (1) from nearly unlimited unlabeled video data. Different from existing methods learning features from single frame parsing, we learn spatiotemporal discriminative features by enforcing a parsing network to predict future frames and their parsing maps (if available) given only historical frames. In this way, the network can effectively learn to capture video dynamics and temporal context, which are critical clues for video scene parsing, without requiring extra manual annotations. (2) architecture that effectively adapts the learned spatiotemporal features to scene parsing tasks and provides strong guidance for any off-the-shelf parsing model to achieve better video scene parsing performance. Extensive experiments over two challenging datasets, Cityscapes and Camvid, have demonstrated the effectiveness of our methods by showing significant improvement over well-established baselines."
]
} |
1708.04400 | 2746274177 | Pixel-level annotations are expensive and time-consuming to obtain. Hence, weak supervision using only image tags could have a significant impact in semantic segmentation. Recent years have seen great progress in weakly-supervised semantic segmentation, whether from a single image or from videos. However, most existing methods are designed to handle a single background class. In practical applications, such as autonomous navigation, it is often crucial to reason about multiple background classes. In this paper, we introduce an approach to doing so by making use of classifier heatmaps. We then develop a two-stream deep architecture that jointly leverages appearance and motion, and design a loss based on our heatmaps to train it. Our experiments demonstrate the benefits of our classifier heatmaps and of our two-stream architecture on challenging urban scene datasets and on the YouTube-Objects benchmark, where we obtain state-of-the-art results. | By contrast, weakly-supervised semantic segmentation methods tackle the challenging scenario where only weak annotations, e.g., tags, are given as labels. Much research in this context has been done for still images @cite_37 @cite_38 @cite_24 @cite_59 @cite_1 @cite_62 @cite_4 @cite_57 @cite_34 @cite_12 @cite_36 @cite_54 @cite_13 @cite_40 @cite_20 . In particular, most recent methods build on deep networks by making use of objectness criteria @cite_26 , object proposals @cite_40 @cite_54 @cite_41 , saliency maps @cite_13 @cite_36 @cite_55 @cite_20 , localization cues @cite_29 @cite_12 , convolutional activations @cite_57 , motion cues @cite_46 and constraints related to the objects @cite_34 @cite_4 . Since the basic networks have been pre-trained for object recognition, and thus focus on foreground classes, these methods are inherently unable to differentiate multiple background classes. | {
"cite_N": [
"@cite_36",
"@cite_41",
"@cite_54",
"@cite_29",
"@cite_20",
"@cite_38",
"@cite_4",
"@cite_46",
"@cite_37",
"@cite_26",
"@cite_55",
"@cite_57",
"@cite_40",
"@cite_34",
"@cite_12",
"@cite_62",
"@cite_1",
"@cite_24",
"@cite_59",
"@cite_13"
],
"mid": [
"2133515615",
"2520746254",
"2291422229",
"2560351516",
"2950775481",
"1993433125",
"2221898772",
"2310700882",
"1901229278",
"",
"",
"2950468556",
"1945608308",
"2952004933",
"2951358285",
"2026581312",
"1931270512",
"2203062554",
"2158427031",
"2519610629"
],
"abstract": [
"Recently, significant improvement has been made on semantic object segmentation due to the development of deep convolutional neural networks (DCNNs). Training such a DCNN usually relies on a large number of images with pixel-level segmentation masks, and annotating these images is very costly in terms of both finance and human effort. In this paper, we propose a simple to complex (STC) framework in which only image-level annotations are utilized to learn DCNNs for semantic segmentation. Specifically, we first train an initial segmentation network called Initial-DCNN with the saliency maps of simple images (i.e., those with a single category of major object(s) and clean background). These saliency maps can be automatically obtained by existing bottom-up salient object detection techniques, where no supervision information is needed. Then, a better network called Enhanced-DCNN is learned with supervision from the predicted segmentation masks of simple images based on the Initial-DCNN as well as the image-level annotations. Finally, more pixel-level segmentation masks of complex images (two or more categories of objects with cluttered background), which are inferred by using Enhanced-DCNN and image-level annotations, are utilized as the supervision information to learn the Powerful-DCNN for semantic segmentation. Our method utilizes 40K simple images from Flickr.com and 10K complex images from PASCAL VOC for step-wisely boosting the segmentation network. Extensive experimental results on PASCAL VOC 2012 segmentation benchmark well demonstrate the superiority of the proposed STC framework compared with other state-of-the-arts.",
"Training neural networks for semantic segmentation is data hungry. Meanwhile annotating a large number of pixel-level segmentation masks needs enormous human effort. In this paper, we propose a framework with only image-level supervision. It unifies semantic segmentation and object localization with important proposal aggregation and selection modules. They greatly reduce the notorious error accumulation problem that commonly arises in weakly supervised learning. Our proposed training algorithm progressively improves segmentation performance with augmented feedback in iterations. Our method achieves decent results on the PASCAL VOC 2012 segmentation data, outperforming previous image-level supervised methods by a large margin.",
"Recently, deep convolutional neural networks (DCNNs) have significantly promoted the development of semantic image segmentation. However, previous works on learning the segmentation network often rely on a large number of ground-truths with pixel-level annotations, which usually require considerable human effort. In this paper, we explore a more challenging problem by learning to segment under image-level annotations. Specifically, our framework consists of two components. First, reliable hypotheses based localization maps are generated by incorporating the hypotheses-aware classification and cross-image contextual refinement. Second, the segmentation network can be trained in a supervised manner by these generated localization maps. We explore two network training strategies for achieving good segmentation performance. For the first strategy, a novel multi-label cross-entropy loss is proposed to train the network by directly using multiple localization maps for all classes, where each pixel contributes to each class with different weights. For the second strategy, the rough segmentation mask can be inferred from the localization maps, and then the network is optimized based on the single-label cross-entropy loss with the produced masks. We evaluate our methods on the PASCAL VOC 2012 segmentation benchmark. Extensive experimental results demonstrate the effectiveness of the proposed methods compared with the state-of-the-arts. Highlights: Localization map generation is proposed by using the hypothesis-based classification. A novel multi-label loss is proposed to train the network based on localization maps. An effective method is proposed to predict the rough mask of the given training image. Our methods achieve new state-of-the-art results on the PASCAL VOC 2012 benchmark.",
"We propose an approach for learning category-level semantic segmentation purely from image-level classification tags indicating presence of categories. It exploits localization cues that emerge from training classification-tasked convolutional networks, to drive a \"self-supervision\" process that automatically labels a sparse, diverse training set of points likely to belong to classes of interest. Our approach has almost no hyperparameters, is modular, and allows for very fast training of segmentation in less than 3 minutes. It obtains competitive results on the VOC 2012 segmentation benchmark. More significantly, the modularity and fast training of our framework allow new classes to be efficiently added for inference.",
"There have been remarkable improvements in the semantic labelling task in the recent years. However, the state of the art methods rely on large-scale pixel-level annotations. This paper studies the problem of training a pixel-wise semantic labeller network from image-level annotations of the present object classes. Recently, it has been shown that high quality seeds indicating discriminative object regions can be obtained from image-level labels. Without additional information, obtaining the full extent of the object is an inherently ill-posed problem due to co-occurrences. We propose using a saliency model as additional information and hereby exploit prior knowledge on the object extent and image statistics. We show how to combine both information sources in order to recover 80% of the fully supervised performance - which is the new state of the art in weakly supervised training for pixel-wise semantic labelling. The code is available at this https URL.",
"We tackle the problem of weakly labeled semantic segmentation, where the only source of annotation is image tags encoding which classes are present in the scene. This is an extremely difficult problem as no pixel-wise labelings are available, not even at training time. In this paper, we show that this problem can be formalized as an instance of learning in a latent structured prediction framework, where the graphical model encodes the presence and absence of a class as well as the assignments of semantic labels to superpixels. As a consequence, we are able to leverage standard algorithms with good theoretical properties. We demonstrate the effectiveness of our approach using the challenging SIFT-flow dataset and show average per-class accuracy improvements of 7% over the state-of-the-art.",
"Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at https: bitbucket.org deeplab deeplab-public.",
"Fully convolutional neural networks (FCNNs) trained on a large number of images with strong pixel-level annotations have become the new state of the art for the semantic segmentation task. While there have been recent attempts to learn FCNNs from image-level weak annotations , they need additional constraints, such as the size of an object , to obtain reasonable performance. To address this issue, we present motion-CNN (M-CNN), a novel FCNN framework which incorporates motion cues and is learned from video-level weak annotations. Our learning scheme to train the network uses motion segments as soft constraints, thereby handling noisy motion information. When trained on weakly-annotated videos, our method outperforms the state-of-the-art approach on the PASCAL VOC 2012 image segmentation benchmark. We also demonstrate that the performance of M-CNN learned with 150 weak video annotations is on par with state-of-the-art weakly-supervised methods trained with thousands of images. Finally, M-CNN substantially out-performs recent approaches in a related task of video co-localization on the YouTube-Objects dataset.",
"Image semantic segmentation is the task of partitioning image into several regions based on semantic concepts. In this paper, we learn a weakly supervised semantic segmentation model from social images whose labels are not pixel-level but image-level; furthermore, these labels might be noisy. We present a joint conditional random field model leveraging various contexts to address this issue. More specifically, we extract global and local features in multiple scales by convolutional neural network and topic model. Inter-label correlations are captured by visual contextual cues and label co-occurrence statistics. The label consistency between image-level and pixel-level is finally achieved by iterative refinement. Experimental results on two real-world image datasets PASCAL VOC2007 and SIFT-Flow demonstrate that the proposed approach outperforms state-of-the-art weakly supervised methods and even achieves accuracy comparable with fully supervised methods.",
"",
"",
"Pixel-level annotations are expensive and time consuming to obtain. Hence, weak supervision using only image tags could have a significant impact in semantic segmentation. Recently, CNN-based methods have proposed to fine-tune pre-trained networks using image tags. Without additional information, this leads to poor localization accuracy. This problem, however, was alleviated by making use of objectness priors to generate foreground background masks. Unfortunately these priors either require training pixel-level annotations bounding boxes, or still yield inaccurate object boundaries. Here, we propose a novel method to extract markedly more accurate masks from the pre-trained network itself, forgoing external objectness modules. This is accomplished using the activations of the higher-level convolutional layers, smoothed by a dense CRF. We demonstrate that our method, based on these masks and a weakly-supervised loss, outperforms the state-of-the-art tag-based weakly-supervised semantic segmentation techniques. Furthermore, we introduce a new form of inexpensive weak supervision yielding an additional accuracy boost.",
"We are interested in inferring object segmentation by leveraging only object class information, and by considering only minimal priors on the object segmentation task. This problem could be viewed as a kind of weakly supervised segmentation task, and naturally fits the Multiple Instance Learning (MIL) framework: every training image is known to have (or not) at least one pixel corresponding to the image class label, and the segmentation task can be rewritten as inferring the pixels belonging to the class of the object (given one image, and its object class). We propose a Convolutional Neural Network-based model, which is constrained during training to put more weight on pixels which are important for classifying the image. We show that at test time, the model has learned to discriminate the right pixels well enough, such that it performs very well on an existing segmentation benchmark, by adding only few smoothing priors. Our system is trained using a subset of the Imagenet dataset and the segmentation experiments are performed on the challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model beats the state of the art results in weakly supervised object segmentation task by a large margin. We also compare the performance of our model with state of the art fully-supervised segmentation approaches.",
"We present an approach to learn a dense pixel-wise labeling from image-level tags. Each image-level tag imposes constraints on the output labeling of a Convolutional Neural Network (CNN) classifier. We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e. predicted label distribution) of a CNN. Our loss formulation is easy to optimize and can be incorporated directly into standard stochastic gradient descent optimization. The key idea is to phrase the training objective as a biconvex optimization for linear models, which we then relax to nonlinear deep networks. Extensive experiments demonstrate the generality of our new learning framework. The constrained loss yields state-of-the-art results on weakly supervised semantic image segmentation. We further demonstrate that adding slightly more supervision can greatly improve the performance of the learning algorithm.",
"We introduce a new loss function for the weakly-supervised training of semantic image segmentation models based on three guiding principles: to seed with weak localization cues, to expand objects based on the information about which classes can occur in an image, and to constrain the segmentations to coincide with object boundaries. We show experimentally that training a deep convolutional neural network using the proposed loss function leads to substantially better segmentations than previous state-of-the-art methods on the challenging PASCAL VOC 2012 dataset. We furthermore give insight into the working mechanism of our method by a detailed experimental study that illustrates how the segmentation quality is affected by each term of the proposed loss function as well as their combinations.",
"We address the problem of weakly supervised semantic segmentation. The training images are labeled only by the classes they contain, not by their location in the image. On test images instead, the method must predict a class label for every pixel. Our goal is to enable segmentation algorithms to use multiple visual cues in this weakly supervised setting, analogous to what is achieved by fully supervised methods. However, it is difficult to assess the relative usefulness of different visual cues from weakly supervised training data. We define a parametric family of structured models, were each model weights visual cues in a different way. We propose a Maximum Expected Agreement model selection principle that evaluates the quality of a model from the family without looking at superpixel labels. Searching for the best model is a hard optimization problem, which has no analytic gradient and multiple local optima. We cast it as a Bayesian optimization problem and propose an algorithm based on Gaussian processes to efficiently solve it. Our second contribution is an Extremely Randomized Hashing Forest that represents diverse superpixel features as a sparse binary vector. It enables using appearance models of visual classes that are fast at training and testing and yet accurate. Experiments on the SIFT-flow dataset show a significant improvement over previous weakly supervised methods and even over some fully supervised methods.",
"Multiple instance learning (MIL) can reduce the need for costly annotation in tasks such as semantic segmentation by weakening the required degree of supervision. We propose a novel MIL formulation of multi-class semantic segmentation learning by a fully convolutional network. In this setting, we seek to learn a semantic segmentation model from just weak image-level labels. The model is trained end-to-end to jointly optimize the representation while disambiguating the pixel-image label assignment. Fully convolutional training accepts inputs of any size, does not need object proposal pre-processing, and offers a pixelwise loss map for selecting latent instances. Our multi-class MIL loss exploits the further supervision given by images with multiple labels. We evaluate this approach through preliminary experiments on the PASCAL VOC segmentation challenge.",
"We present a weakly-supervised approach to semantic segmentation. The goal is to assign pixel-level labels given only partial information, for example, image-level labels. This is an important problem in many application scenarios where it is difficult to get accurate segmentation or not feasible to obtain detailed annotations. The proposed approach starts with an initial coarse segmentation, followed by a spectral clustering approach that groups related image parts into communities. A community-driven graph is then constructed that captures spatial and feature relationships between communities while a label graph captures correlations between image labels. Finally, mapping the image level labels to appropriate communities is formulated as a convex optimization problem. The proposed approach does not require location information for image level labels and can be trained using partially labeled datasets. Compared to the state-of-the-art weakly supervised approaches, we achieve a significant performance improvement of 9% on the MSRC-21 dataset and 11% on the LabelMe dataset, while being more than 300 times faster.",
"We propose a novel method for weakly supervised semantic segmentation. Training images are labeled only by the classes they contain, not by their location in the image. On test images instead, the method predicts a class label for every pixel. Our main innovation is a multi-image model (MIM) - a graphical model for recovering the pixel labels of the training images. The model connects superpixels from all training images in a data-driven fashion, based on their appearance similarity. For generalizing to new test images we integrate them into MIM using a learned multiple kernel metric, instead of learning conventional classifiers on the recovered pixel labels. We also introduce an “objectness” potential, that helps separating objects (e.g. car, dog, human) from background classes (e.g. grass, sky, road). In experiments on the MSRC 21 dataset and the LabelMe subset of [18], our technique outperforms previous weakly supervised methods and achieves accuracy comparable with fully supervised methods.",
"In this paper, we deal with a weakly supervised semantic segmentation problem where only training images with image-level labels are available. We propose a weakly supervised semantic segmentation method which is based on CNN-based class-specific saliency maps and fully-connected CRF. To obtain distinct class-specific saliency maps which can be used as unary potentials of CRF, we propose a novel method to estimate class saliency maps which improves the method proposed by (2014) significantly by the following improvements: (1) using CNN derivatives with respect to feature maps of the intermediate convolutional layers with up-sampling instead of an input image; (2) subtracting the saliency maps of the other classes from the saliency maps of the target class to differentiate target objects from other objects; (3) aggregating multiple-scale class saliency maps to compensate lower resolution of the feature maps. After obtaining distinct class saliency maps, we apply fully-connected CRF by using the class maps as unary potentials. By the experiments, we show that the proposed method has outperformed state-of-the-art results with the PASCAL VOC 2012 dataset under the weakly-supervised setting."
]
} |
1708.04400 | 2746274177 | Pixel-level annotations are expensive and time-consuming to obtain. Hence, weak supervision using only image tags could have a significant impact in semantic segmentation. Recent years have seen great progress in weakly-supervised semantic segmentation, whether from a single image or from videos. However, most existing methods are designed to handle a single background class. In practical applications, such as autonomous navigation, it is often crucial to reason about multiple background classes. In this paper, we introduce an approach to doing so by making use of classifier heatmaps. We then develop a two-stream deep architecture that jointly leverages appearance and motion, and design a loss based on our heatmaps to train it. Our experiments demonstrate the benefits of our classifier heatmaps and of our two-stream architecture on challenging urban scene datasets and on the YouTube-Objects benchmark, where we obtain state-of-the-art results. | Similarly, most weakly-supervised video semantic segmentation techniques also focus on modeling a single background class. In this context @cite_23 @cite_28 work in the even more constrained scenario, where only two classes are considered: foreground vs. background. By contrast, to differentiate multiple foreground classes, but still assuming a single background, @cite_42 relied on motion cues and @cite_7 made use of a huge amount of web-crawled data (4606 videos with 960,517 frames). | {
"cite_N": [
"@cite_28",
"@cite_42",
"@cite_7",
"@cite_23"
],
"mid": [
"2105297725",
"2113708607",
"2951746655",
"122025198"
],
"abstract": [
"The ubiquitous availability of Internet video offers the vision community the exciting opportunity to directly learn localized visual concepts from real-world imagery. Unfortunately, most such attempts are doomed because traditional approaches are ill-suited, both in terms of their computational characteristics and their inability to robustly contend with the label noise that plagues uncurated Internet content. We present CRANE, a weakly supervised algorithm that is specifically designed to learn under such conditions. First, we exploit the asymmetric availability of real-world training data, where small numbers of positive videos tagged with the concept are supplemented with large quantities of unreliable negative data. Second, we ensure that CRANE is robust to label noise, both in terms of tagged videos that fail to contain the concept as well as occasional negative videos that do. Finally, CRANE is highly parallelizable, making it practical to deploy at large scale without sacrificing the quality of the learned solution. Although CRANE is general, this paper focuses on segment annotation, where we show state-of-the-art pixel-level segmentation results on two datasets, one of which includes a training set of spatiotemporal segments from more than 20,000 videos.",
"We present a technique for separating foreground objects from the background in a video. Our method is fast, fully automatic, and makes minimal assumptions about the video. This enables handling essentially unconstrained settings, including rapidly moving background, arbitrary object motion and appearance, and non-rigid deformations and articulations. In experiments on two datasets containing over 1400 video shots, our method outperforms a state-of-the-art background subtraction technique [4] as well as methods based on clustering point tracks [6, 18, 19]. Moreover, it performs comparably to recent video object segmentation methods based on object proposals [14, 16, 27], while being orders of magnitude faster.",
"We propose a novel algorithm for weakly supervised semantic segmentation based on image-level class labels only. In weakly supervised setting, it is commonly observed that trained model overly focuses on discriminative parts rather than the entire object area. Our goal is to overcome this limitation with no additional human intervention by retrieving videos relevant to target class labels from web repository, and generating segmentation labels from the retrieved videos to simulate strong supervision for semantic segmentation. During this process, we take advantage of image classification with discriminative localization technique to reject false alarms in retrieved videos and identify relevant spatio-temporal volumes within retrieved videos. Although the entire procedure does not require any additional supervision, the segmentation annotations obtained from videos are sufficiently strong to learn a model for semantic segmentation. The proposed algorithm substantially outperforms existing methods based on the same level of supervision and is even as competitive as the approaches relying on extra annotations.",
"We propose to learn pixel-level segmentations of objects from weakly labeled (tagged) internet videos. Specifically, given a large collection of raw YouTube content, along with potentially noisy tags, our goal is to automatically generate spatiotemporal masks for each object, such as \"dog\", without employing any pre-trained object detectors. We formulate this problem as learning weakly supervised classifiers for a set of independent spatio-temporal segments. The object seeds obtained using segment-level classifiers are further refined using graphcuts to generate high-precision object masks. Our results, obtained by training on a dataset of 20,000 YouTube videos weakly tagged into 15 classes, demonstrate automatic extraction of pixel-level object masks. Evaluated against a ground-truthed subset of 50,000 frames with pixel-level annotations, we confirm that our proposed methods can learn good object masks just by watching YouTube."
]
} |
1708.04400 | 2746274177 | Pixel-level annotations are expensive and time-consuming to obtain. Hence, weak supervision using only image tags could have a significant impact in semantic segmentation. Recent years have seen great progress in weakly-supervised semantic segmentation, whether from a single image or from videos. However, most existing methods are designed to handle a single background class. In practical applications, such as autonomous navigation, it is often crucial to reason about multiple background classes. In this paper, we introduce an approach to doing so by making use of classifier heatmaps. We then develop a two-stream deep architecture that jointly leverages appearance and motion, and design a loss based on our heatmaps to train it. Our experiments demonstrate the benefits of our classifier heatmaps and of our two-stream architecture on challenging urban scene datasets and on the YouTube-Objects benchmark, where we obtain state-of-the-art results. | In the same setting of multiple foreground classes vs. a single background, several methods have been proposed that rely on additional supervision. For instance, @cite_48 relied on the CPMC @cite_56 region detector, which has been trained from pixel-level annotations, to segment foreground from background. In @cite_50 and @cite_60 , object proposal methods trained from pixel-level and bounding box annotations, respectively, were employed. Similarly, @cite_35 relied on an object detector trained from bounding boxes. The method of @cite_21 utilized the FCN trained on PASCAL VOC in a fully-supervised manner to generate initial object segments. | {
"cite_N": [
"@cite_35",
"@cite_60",
"@cite_48",
"@cite_21",
"@cite_56",
"@cite_50"
],
"mid": [
"2517104666",
"2950249112",
"1920142129",
"",
"2046382188",
"2953264111"
],
"abstract": [
"We present an approach for object segmentation in videos that combines frame-level object detection with concepts from object tracking and motion segmentation. The approach extracts temporally consistent object tubes based on an off-the-shelf detector. Besides the class label for each tube, this provides a location prior that is independent of motion. For the final video segmentation, we combine this information with motion cues. The method overcomes the typical problems of weakly supervised unsupervised video segmentation, such as scenes with no motion, dominant camera motion, and objects that move as a unit. In contrast to most tracking methods, it provides an accurate, temporally consistent segmentation of each object. We report results on four video segmentation datasets: YouTube Objects, SegTrackv2, egoMotion, and FBMS.",
"We segment moving objects in videos by ranking spatio-temporal segment proposals according to \"moving objectness\": how likely they are to contain a moving object. In each video frame, we compute segment proposals using multiple figure-ground segmentations on per frame motion boundaries. We rank them with a Moving Objectness Detector trained on image and motion fields to detect moving objects and discard over under segmentations or background parts of the scene. We extend the top ranked segments into spatio-temporal tubes using random walkers on motion affinities of dense point trajectories. Our final tube ranking consistently outperforms previous segmentation methods in the two largest video segmentation benchmarks currently available, for any number of proposals. Further, our per frame moving object proposals increase the detection rate up to 7 over previous state-of-the-art static proposal methods.",
"Semantic object segmentation in video is an important step for large-scale multimedia analysis. In many cases, however, semantic objects are only tagged at video-level, making them difficult to be located and segmented. To address this problem, this paper proposes an approach to segment semantic objects in weakly labeled video via object detection. In our approach, a novel video segmentation-by-detection framework is proposed, which first incorporates object and region detectors pre-trained on still images to generate a set of detection and segmentation proposals. Based on the noisy proposals, several object tracks are then initialized by solving a joint binary optimization problem with min-cost flow. As such tracks actually provide rough configurations of semantic objects, we thus refine the object segmentation while preserving the spatiotemporal consistency by inferring the shape likelihoods of pixels from the statistical information of tracks. Experimental results on Youtube-Objects dataset and SegTrack v2 dataset demonstrate that our method outperforms state-of-the-arts and shows impressive results.",
"",
"We present a novel framework to generate and rank plausible hypotheses for the spatial extent of objects in images using bottom-up computational processes and mid-level selection cues. The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge of the properties of individual object classes, by solving a sequence of Constrained Parametric Min-Cut problems (CPMC) on a regular image grid. In a subsequent step, we learn to rank the corresponding segments by training a continuous model to predict how likely they are to exhibit real-world regularities (expressed as putative overlap with ground truth) based on their mid-level region properties, then diversify the estimated overlap score using maximum marginal relevance measures. We show that this algorithm significantly outperforms the state of the art for low-level segmentation in the VOC 2009 and 2010 data sets. In our companion papers [1], [2], we show that the algorithm can be used, successfully, in a segmentation-based visual object category recognition pipeline. This architecture ranked first in the VOC2009 and VOC2010 image segmentation and labeling challenges.",
"Deep convolutional neural networks (CNNs) have been immensely successful in many high-level computer vision tasks given large labeled datasets. However, for video semantic object segmentation, a domain where labels are scarce, effectively exploiting the representation power of CNN with limited training data remains a challenge. Simply borrowing the existing pretrained CNN image recognition model for video segmentation task can severely hurt performance. We propose a semi-supervised approach to adapting CNN image recognition model trained from labeled image data to the target domain exploiting both semantic evidence learned from CNN, and the intrinsic structures of video data. By explicitly modeling and compensating for the domain shift from the source domain to the target domain, this proposed approach underpins a robust semantic object segmentation method against the changes in appearance, shape and occlusion in natural videos. We present extensive experiments on challenging datasets that demonstrate the superior performance of our approach compared with the state-of-the-art methods."
]
} |
1708.04483 | 2749224985 | Recent years have witnessed the great success of convolutional neural network (CNN) based models in the field of computer vision. CNN is able to learn hierarchically abstracted features from images in an end-to-end training manner. However, most of the existing CNN models only learn features through a feedforward structure and no feedback information from top to bottom layers is exploited to enable the networks to refine themselves. In this paper, we propose a "Learning with Rethinking" algorithm. By adding a feedback layer and producing the emphasis vector, the model is able to recurrently boost the performance based on previous prediction. Particularly, it can be employed to boost any pre-trained models. This algorithm is tested on four object classification benchmark datasets: CIFAR-100, CIFAR-10, MNIST-background-image and ILSVRC-2012 dataset. These results have demonstrated the advantage of training CNN models with the proposed feedback mechanism. | More than twenty years have passed since LeNet was first applied to OCR in 1990 @cite_3 . Many algorithms have been developed to improve the performance of CNNs, although the basic framework of the CNN has not changed much since it was proposed. The large-scale object recognition data set ILSVRC2012, also known as ImageNet @cite_16 , has greatly propelled progress in this area. Some of the most well-known advances in CNN structure have been made alongside the continuous improvement of performance on the ImageNet data set. After AlexNet was proposed on ILSVRC2012, there were several remarkable advances in CNN architecture @cite_17 @cite_24 @cite_20 @cite_7 @cite_10 , as well as task-specific modifications to the CNN structure @cite_21 @cite_6 @cite_12 @cite_0 @cite_34 . For example, in multi-resolution CNNs @cite_21 @cite_6 @cite_12 , combining features from lower layers leads to a more detailed representation of the input image. 
MOP-CNN @cite_28 is another algorithm proposed to extract more powerful features. Combining VLAD with CNN activations, MOP-CNN extracts multi-scale, robust features. This algorithm does not actually change the CNN structure; rather, it utilizes a pre-trained CNN model and modifies the feature extraction procedure. | {
"cite_N": [
"@cite_7",
"@cite_28",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_24",
"@cite_0",
"@cite_34",
"@cite_16",
"@cite_10",
"@cite_12",
"@cite_20",
"@cite_17"
],
"mid": [
"2097117768",
"1524680991",
"",
"2154579312",
"",
"",
"2332352554",
"2078106735",
"2108598243",
"2949650786",
"1998808035",
"1686810756",
"1849277567"
],
"abstract": [
"We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012 2013 classification and INRIA Holidays retrieval datasets.",
"",
"We present an application of back-propagation networks to handwritten digit recognition. Minimal preprocessing of the data was required, but architecture of the network was highly constrained and specifically designed for the task. The input of the network consists of normalized images of isolated digits. The method has 1 error rate and about a 9 reject rate on zipcode digits provided by the U.S. Postal Service.",
"",
"",
"Recent advances in image classification mostly rely on the use of powerful local features combined with an adapted image representation. Although Convolutional Neural Network (CNN) features learned from ImageNet were shown to be generic and very efficient, they still lack of flexibility to take into account variations in the spatial layout of visual elements. In this paper, we investigate the use of structural representations on top of pretrained CNN features to improve image classification. Images are represented as strings of CNN features. Similarities between such representations are computed using two new edit distance variants adapted to the image classification domain. Our algorithms have been implemented and tested on several challenging datasets, 15Scenes, Caltech101, Pascal VOC 2007 and MIT indoor. The results show that our idea of using structural string representations and distances clearly improves the classification performance over standard approaches based on CNN and SVM with linear kernel, as well as other recognized methods of the literature. HighlightsA structural representation of images on top of CNN features is proposed.Images are represented as strings to integrate spatial relationships.We introduce tailored string edit distances to compare images represented as strings.Experiments show that our structural approach is more powerful than existing ones.It also outperforms state-of-the-art CNN-based classification methods.",
"Nowadays crowd surveillance is an active area of research. Crowd surveillance is always affected by various conditions, such as different scenes, weather, or density of crowd, which restricts the real application. This paper proposes a convolutional neural network (CNN) based method to monitor the number of crowd flow, such as the number of entering or leaving people in high density crowd. It uses an indirect strategy of combining classification CNN with regression CNN, which is more robust than the direct way. A large enough database is built with lots of real videos of public gates, and plenty of experiments show that the proposed method performs well under various weather conditions no matter either in daytime or at night. HighlightsA method to estimate the number of crowd flow with CNN models is proposed.A database with 140 thousand samples from real scenes is build.The experiments perform robust under various scenes, weather or crowded condition.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"This paper proposes to learn a set of high-level feature representations through deep learning, referred to as Deep hidden IDentity features (DeepID), for face verification. We argue that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set. Moreover, the generalization capability of DeepID increases as more face classes are to be predicted at training. DeepID features are taken from the last hidden layer neuron activations of deep convolutional networks (ConvNets). When learned as classifiers to recognize about 10, 000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy, these deep ConvNets gradually form compact identity-related features in the top layers with only a small number of hidden neurons. The proposed features are extracted from various face regions to form complementary and over-complete representations. Any state-of-the-art classifiers can be learned based on these high-level representations for face verification. 97:45 verification accuracy on LFW is achieved with only weakly aligned faces.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets."
]
} |
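The VLAD pooling that the row above describes MOP-CNN combining with CNN activations can be sketched in a few lines. The following is an illustrative numpy sketch with toy descriptors and cluster centers (all values are made up); the actual MOP-CNN pipeline additionally extracts CNN activations for local patches at multiple scales before pooling and concatenation:

```python
import numpy as np

def vlad(descriptors, centers):
    """Orderless VLAD pooling: for each cluster center, sum the residuals of the
    descriptors assigned to it, then L2-normalize the concatenated vector."""
    # Assign each descriptor to its nearest cluster center.
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(1)
    k, dim = centers.shape
    v = np.zeros((k, dim))
    for i, c in enumerate(assign):
        v[c] += descriptors[i] - centers[c]   # accumulate residuals per cluster
    v = v.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# Toy example: three 2-D "descriptors", two cluster centers.
descs = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0]])
cents = np.array([[0.0, 0.0], [10.0, 10.0]])
print(vlad(descs, cents))  # residual (1, 1) for the first cluster, (0, 0) for the second
```

Because the pooling is a sum over assigned descriptors, the representation is orderless: permuting the input descriptors leaves the output unchanged, which is what makes it robust to geometric layout.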
1708.04483 | 2749224985 | Recent years have witnessed the great success of convolutional neural network (CNN) based models in the field of computer vision. CNN is able to learn hierarchically abstracted features from images in an end-to-end training manner. However, most of the existing CNN models only learn features through a feedforward structure and no feedback information from top to bottom layers is exploited to enable the networks to refine themselves. In this paper, we propose a "Learning with Rethinking" algorithm. By adding a feedback layer and producing the emphasis vector, the model is able to recurrently boost the performance based on previous prediction. Particularly, it can be employed to boost any pre-trained models. This algorithm is tested on four object classification benchmark datasets: CIFAR-100, CIFAR-10, MNIST-background-image and ILSVRC-2012 dataset. These results have demonstrated the advantage of training CNN models with the proposed feedback mechanism. | Besides exploring the overall structure of the CNN, many works focus on its individual components. Locally connected layers @cite_54 @cite_56 loosen the weight-sharing constraint of the normal convolution layer and are suitable for face-related tasks. Leaky ReLU @cite_49 adds a small negative slope to the normal ReLU to preserve the information that ReLU discards. PReLU @cite_2 further enhances this by making the negative slope learnable. Spatial Pyramid Pooling (SPP) @cite_37 extends max-pooling, enabling the CNN to avoid input warping or resizing while still producing fixed-length features. Inspired by Dropout @cite_55 , DropConnect @cite_19 regularizes the CNN by randomly setting a subset of weights to zero within each layer. Spatial Dropout @cite_35 randomly sets entire feature maps to zero. DropSample @cite_38 randomly selects low-confidence samples during training according to the output of the CNN. 
The commonly used fully-connected layer can be transformed into a convolution layer with kernel size @math , as shown in @cite_24 . With this transformation, the CNN can take input of any size and output classification maps. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_38",
"@cite_54",
"@cite_55",
"@cite_56",
"@cite_24",
"@cite_19",
"@cite_49",
"@cite_2"
],
"mid": [
"",
"2179352600",
"242877468",
"2145287260",
"2095705004",
"",
"",
"4919037",
"189277179",
""
],
"abstract": [
"",
"Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.",
"Abstract Inspired by the theory of Leitner׳s learning box from the field of psychology, we propose DropSample , a new method for training deep convolutional neural networks (DCNNs), and apply it to large-scale online handwritten Chinese character recognition (HCCR). According to the principle of DropSample , each training sample is associated with a quota function that is dynamically adjusted on the basis of the classification confidence given by the DCNN softmax output. After a learning iteration, samples with low confidence will have a higher frequency of being selected as training data; in contrast, well-trained and well-recognized samples with very high confidence will have a lower frequency of being involved in the ongoing training and can be gradually eliminated. As a result, the learning process becomes more efficient as it progresses. Furthermore, we investigate the use of domain-specific knowledge to enhance the performance of DCNN by adding a domain knowledge layer before the traditional CNN. By adopting DropSample together with different types of domain-specific knowledge, the accuracy of HCCR can be improved efficiently. Experiments on the CASIA-OLHDWB 1.0, CASIA-OLHWDB 1.1, and ICDAR 2013 online HCCR competition datasets yield outstanding recognition rates of 97.33 , 97.06 , and 97.51 respectively, all of which are significantly better than the previous best results reported in the literature.",
"In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance.",
"Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.",
"",
"",
"We introduce DropConnect, a generalization of Dropout (, 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models.",
"Convolutional neural networks (CNNs) perform well on problems such as handwriting recognition and image classification. However, the performance of the networks is often limited by budget and time constraints, particularly when trying to train deep networks. Motivated by the problem of online handwriting recognition, we developed a CNN for processing spatially-sparse inputs; a character drawn with a one-pixel wide pen on a high resolution grid looks like a sparse matrix. Taking advantage of the sparsity allowed us more efficiently to train and test large, deep CNNs. On the CASIA-OLHWDB1.1 dataset containing 3755 character classes we get a test error of 3.82 . Although pictures are not sparse, they can be thought of as sparse by adding padding. Applying a deep convolutional network using sparsity has resulted in a substantial reduction in test error on the CIFAR small picture datasets: 6.28 on CIFAR-10 and 24.30 for CIFAR-100.",
""
]
} |
1708.04483 | 2749224985 | Recent years have witnessed the great success of convolutional neural network (CNN) based models in the field of computer vision. CNN is able to learn hierarchically abstracted features from images in an end-to-end training manner. However, most of the existing CNN models only learn features through a feedforward structure and no feedback information from top to bottom layers is exploited to enable the networks to refine themselves. In this paper, we propose a "Learning with Rethinking" algorithm. By adding a feedback layer and producing the emphasis vector, the model is able to recurrently boost the performance based on previous prediction. Particularly, it can be employed to boost any pre-trained models. This algorithm is tested on four object classification benchmark datasets: CIFAR-100, CIFAR-10, MNIST-background-image and ILSVRC-2012 dataset. These results have demonstrated the advantage of training CNN models with the proposed feedback mechanism. | Based on the analysis of response latencies to a newly-presented image, there are two stages of visual processing: a pre-attentive phase and an attentional phase, corresponding to feedforward and recurrent processing respectively @cite_50 . The feedback connections play an important role in the attentional phase @cite_15 @cite_36 . Different from feedforward connections, which directly carry information, the feedback connections primarily play a modulatory role @cite_44 . Experiments have shown that recurrent processing contributes to making object recognition in degraded images more robust @cite_25 . | {
"cite_N": [
"@cite_36",
"@cite_44",
"@cite_50",
"@cite_15",
"@cite_25"
],
"mid": [
"1610511020",
"2095487159",
"2120907531",
"",
"2022638524"
],
"abstract": [
"A single visual stimulus activates neurons in many different cortical areas. A major challenge in cortical physiology is to understand how the neural activity in these numerous active zones leads to a unified percept of the visual scene. The anatomical basis for these interactions is the dense network of connections that link the visual areas. Within this network, feedforward connections transmit signals from lower-order areas such as V1 or V2 to higher-order areas. In addition, there is a dense web of feedback connections which, despite their anatomical prominence1,2,3,4, remain functionally mysterious5,6,7,8. Here we show, using reversible inactivation of a higher-order area (monkey area V5 MT), that feedback connections serve to amplify and focus activity of neurons in lower-order areas, and that they are important in the differentiation of figure from ground, particularly in the case of stimuli of low visibility. More specifically, we show that feedback connections facilitate responses to objects moving within the classical receptive field; enhance suppression evoked by background stimuli in the surrounding region; and have the strongest effects for stimuli of low salience.",
"This paper reports a dynamic causal modeling study of electrocorticographic (ECoG) data that addresses functional asymmetries between forward and backward connections in the visual cortical hierarchy. Specifically, we ask whether forward connections employ gamma-band frequencies, while backward connections preferentially use lower (beta-band) frequencies. We addressed this question by modeling empirical cross spectra using a neural mass model equipped with superficial and deep pyramidal cell populations—that model the source of forward and backward connections, respectively. This enabled us to reconstruct the transfer functions and associated spectra of specific subpopulations within cortical sources. We first established that Bayesian model comparison was able to discriminate between forward and backward connections, defined in terms of their cells of origin. We then confirmed that model selection was able to identify extrastriate (V4) sources as being hierarchically higher than early visual (V1) sources. Finally, an examination of the auto spectra and transfer functions associated with superficial and deep pyramidal cells confirmed that forward connections employed predominantly higher (gamma) frequencies, while backward connections were mediated by lower (alpha beta) frequencies. We discuss these findings in relation to current views about alpha, beta, and gamma oscillations and predictive coding in the brain.",
"An analysis of response latencies shows that when an image is presented to the visual system, neuronal activity is rapidly routed to a large number of visual areas.However,the activity of cortical neurons is not determined by this feedforward sweep alone. Horizontal connections within areas, and higher areas providing feedback, result in dynamic changes in tuning.The differences between feedforward and recurrent processing could prove pivotal in understanding the distinctions between attentive and pre-attentive vision as well as between conscious and unconscious vision. The feedforward sweep rapidly groups feature constellations that are hardwired in the visual brain, yet is probably incapable of yielding visual awareness; in many cases, recurrent processing is necessary before the features of an object are attentively grouped and the stimulus can enter consciousness. Trends Neurosci. (2000) 23, 571-579",
"",
"Everyday vision requires robustness to a myriad of environmental factors that degrade stimuli. Foreground clutter can occlude objects of interest, and complex lighting and shadows can decrease the contrast of items. How does the brain recognize visual objects despite these low-quality inputs? On the basis of predictions from a model of object recognition that contains excitatory feedback, we hypothesized that recurrent processing would promote robust recognition when objects were degraded by strengthening bottom-up signals that were weakened because of occlusion and contrast reduction. To test this hypothesis, we used backward masking to interrupt the processing of partially occluded and contrast reduced images during a categorization experiment. As predicted by the model, we found significant interactions between the mask and occlusion and the mask and contrast, such that the recognition of heavily degraded stimuli was differentially impaired by masking. The model provided a close fit of these results in an isomorphic version of the experiment with identical stimuli. The model also provided an intuitive explanation of the interactions between the mask and degradations, indicating that masking interfered specifically with the extensive recurrent processing necessary to amplify and resolve highly degraded inputs, whereas less degraded inputs did not require much amplification and could be rapidly resolved, making them less susceptible to masking. Together, the results of the experiment and the accompanying model simulations illustrate the limits of feedforward vision and suggest that object recognition is better characterized as a highly interactive, dynamic process that depends on the coordination of multiple brain areas."
]
} |
1708.04483 | 2749224985 | Recent years have witnessed the great success of convolutional neural network (CNN) based models in the field of computer vision. CNN is able to learn hierarchically abstracted features from images in an end-to-end training manner. However, most of the existing CNN models only learn features through a feedforward structure and no feedback information from top to bottom layers is exploited to enable the networks to refine themselves. In this paper, we propose a "Learning with Rethinking" algorithm. By adding a feedback layer and producing the emphasis vector, the model is able to recurrently boost the performance based on previous prediction. Particularly, it can be employed to boost any pre-trained models. This algorithm is tested on four object classification benchmark datasets: CIFAR-100, CIFAR-10, MNIST-background-image and ILSVRC-2012 dataset. These results have demonstrated the advantage of training CNN models with the proposed feedback mechanism. | The idea of refining predictions is similar to cascading, which is a multistage ensemble learning algorithm. The subsequent stages focus on refining the predictions of previous stages @cite_26 @cite_46 @cite_13 @cite_4 . For instance, state-of-the-art object detection algorithms adopt a two-stage pipeline @cite_4 . The region proposal network proposes object candidates in the first stage, and the detection network focuses on classifying proposals in the following stage. Sun @cite_26 proposed three-stage cascaded convolutional neural networks for facial point detection, where each subsequent stage focuses on giving more accurate keypoint estimations. Li @cite_46 @cite_1 proposed three-stage cascaded convolutional neural networks for face detection, where the first two stages quickly reject easy background regions, and the third stage carefully evaluates a small number of challenging candidates. Timofte employed four-stage cascaded models to gradually refine the contents in image super-resolution. They kept the same settings for all stages, but models were trained per stage. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_1",
"@cite_46",
"@cite_13"
],
"mid": [
"1976948919",
"2613718673",
"2473640056",
"1934410531",
"2263468737"
],
"abstract": [
"We propose a new approach for estimation of the positions of facial key points with three-level carefully designed convolutional networks. At each level, the outputs of multiple networks are fused for robust and accurate estimation. Thanks to the deep structures of convolutional networks, global high-level features are extracted over the whole face region at the initialization stage, which help to locate high accuracy key points. There are two folds of advantage for this. First, the texture context information over the entire face is utilized to locate each key point. Second, since the networks are trained to predict all the key points simultaneously, the geometric constraints among key points are implicitly encoded. The method therefore can avoid local minimum caused by ambiguity and data corruption in difficult image samples due to occlusions, large pose variations, and extreme lightings. The networks at the following two levels are trained to locally refine initial predictions and their inputs are limited to small regions around the initial predictions. Several network structures critical for accurate and robust facial point detection are investigated. Extensive experiments show that our approach outperforms state-of-the-art methods in both detection accuracy and reliability.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"Cascade has been widely used in face detection, where classifier with low computation cost can be firstly used to shrink most of the background while keeping the recall. The cascade in detection is popularized by seminal Viola-Jones framework and then widely used in other pipelines, such as DPM and CNN. However, to our best knowledge, most of the previous detection methods use cascade in a greedy manner, where previous stages in cascade are fixed when training a new stage. So optimizations of different CNNs are isolated. In this paper, we propose joint training to achieve end-to-end optimization for CNN cascade. We show that the back propagation algorithm used in training CNN can be naturally used in training CNN cascade. We present how jointly training can be conducted on naive CNN cascade and more sophisticated region proposal network (RPN) and fast R-CNN. Experiments on face detection benchmarks verify the advantages of the joint training.",
"In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.",
"In this paper we present seven techniques that everybody should know to improve example-based single image super resolution (SR): 1) augmentation of data, 2) use of large dictionaries with efficient search structures, 3) cascading, 4) image self-similarities, 5) back projection refinement, 6) enhanced prediction by consistency check, and 7) context reasoning. We validate our seven techniques on standard SR benchmarks (i.e. Set5, Set14, B100) and methods (i.e. A+, SRCNN, ANR, Zeyde, Yang) and achieve substantial improvements. The techniques are widely applicable and require no changes or only minor adjustments of the SR methods. Moreover, our Improved A+ (IA) method sets new stateof-the-art results outperforming A+ by up to 0.9dB on average PSNR whilst maintaining a low time complexity."
]
} |
1708.04483 | 2749224985 | Recent years have witnessed the great success of convolutional neural network (CNN) based models in the field of computer vision. CNN is able to learn hierarchically abstracted features from images in an end-to-end training manner. However, most of the existing CNN models only learn features through a feedforward structure and no feedback information from top to bottom layers is exploited to enable the networks to refine themselves. In this paper, we propose a "Learning with Rethinking" algorithm. By adding a feedback layer and producing the emphasis vector, the model is able to recurrently boost the performance based on previous prediction. Particularly, it can be employed to boost any pre-trained models. This algorithm is tested on four object classification benchmark datasets: CIFAR-100, CIFAR-10, MNIST-background-image and ILSVRC-2012 dataset. These results have demonstrated the advantage of training CNN models with the proposed feedback mechanism. | The most closely related work utilizing a recurrent neural network for object recognition is dasNet @cite_53 . It makes use of a reinforcement learning strategy to iteratively adjust some weights of feature maps, and final classification results are produced after several iterations. Our "Learning with Rethinking" algorithm differs from dasNet in three major aspects. Firstly, we use a neural network to feed back information into lower layers, which is relatively easy to calculate. Secondly, we only use the posterior probability of the previous feedforward pass as the feedback information, which is much more efficient in both time and space. Thirdly, our algorithm can be regarded as a new further-training algorithm that is easy to apply to any pre-trained model and will further boost its performance. In contrast, dasNet needs to be trained from random initialization. | {
"cite_N": [
"@cite_53"
],
"mid": [
"2172010943"
],
"abstract": [
"Traditional convolutional neural networks (CNN) are stationary and feedforward. They neither change their parameters during evaluation nor use feedback from higher to lower layers. Real brains, however, do. So does our Deep Attention Selective Network (dasNet) architecture. DasNets feedback structure can dynamically alter its convolutional filter sensitivities during classification. It harnesses the power of sequential processing to improve classification performance, by allowing the network to iteratively focus its internal attention on some of its convolutional filters. Feedback is trained through direct policy search in a huge million-dimensional parameter space, through scalable natural evolution strategies (SNES). On the CIFAR-10 and CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model."
]
} |
1708.04587 | 2749960204 | Debate summarization is one of the novel and challenging research areas in automatic text summarization which has been largely unexplored. In this paper, we develop a debate summarization pipeline to summarize key topics which are discussed or argued in the two opposing sides of online debates. We view that the generation of debate summaries can be achieved by clustering, cluster labeling, and visualization. In our work, we investigate two different clustering approaches for the generation of the summaries. In the first approach, we generate the summaries by applying purely term-based clustering and cluster labeling. The second approach makes use of X-means for clustering and Mutual Information for labeling the clusters. Both approaches are driven by ontologies. We visualize the results using bar charts. We think that our results are a smooth entry for users aiming to receive the first impression about what is discussed within a debate topic containing a vast number of argumentations. | Debate summarization is one of the novel research areas in automatic text summarization which has been largely unexplored @cite_0 . Examples of related work in debate summarization include , , and . Contrastive Summarization is the study of generating summaries for two entities and finding the difference in sentiments among them @cite_4 . This kind of summarization requires the classification of polarity in order to identify opinions expressed in different sentiments @cite_5 @cite_3 . summarized contrastive pairs of sentences by aligning positive and negative opinions on the same aspect. In this work, contrastive sentence pairs were constructed based on two criteria: 1) the sentences should represent a major sentiment orientation; and 2) the two sentences should have opposite opinions on the same aspect. Similarity functions were used for determining contrastive sentence pairs. These sentence pairs were then used as input for generating the final summary. The summary was aimed at helping readers compare the pros and cons of mixed opinions. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_4",
"@cite_3"
],
"mid": [
"1987971958",
"2590145195",
"1039328728",
"1569041844"
],
"abstract": [
"A new graphical display is proposed for partitioning techniques. Each cluster is represented by a so-called silhouette, which is based on the comparison of its tightness and separation. This silhouette shows which objects lie well within their cluster, and which ones are merely somewhere in between clusters. The entire clustering is displayed by combining the silhouettes into a single plot, allowing an appreciation of the relative quality of the clusters and an overview of the data configuration. The average silhouette width provides an evaluation of clustering validity, and might be used to select an ‘appropriate’ number of clusters.",
"Document clustering is generally the first step for topic identification. Since many clustering methods operate on the similarities between documents, it is important to build representations of these documents which keep their semantics as much as possible and are also suitable for efficient similarity calculation. As we describe in (Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 3 July, 2015. Bogazici University Printhouse. http: www.issi2015.org files downloads all-papers 1042.pdf, 2015), the metadata of articles in the Astro dataset contribute to a semantic matrix, which uses a vector space to capture the semantics of entities derived from these articles and consequently supports the contextual exploration of these entities in LittleAriadne. However, this semantic matrix does not allow to calculate similarities between articles directly. In this paper, we will describe in detail how we build a semantic representation for an article from the entities that are associated with it. Base on such semantic representations of articles, we apply two standard clustering methods, K-Means and the Louvain community detection algorithm, which leads to our two clustering solutions labelled as OCLC-31 (standing for K-Means) and OCLC-Louvain (standing for Louvain). In this paper, we will give the implementation details and a basic comparison with other clustering solutions that are reported in this special issue.",
"Collective awareness about climate change is an ongoing problem because there is such a wealth of information available, which can be confusing, contradictory and difficult to interpret. In order to help citizens understand environmental concerns, and to help organisations better inform and target interested people with campaigns, we have developed an open source toolkit to analyse social media data on the topic of climate change. The toolkit comprises methods for extracting, aggregating, and visualising actionable knowledge, based on automatic analysis of large volumes of text. The key terms, topics and sentiments expressed in online discussions are extracted, along with key indicators of climate change, and are stored in a semantic search tool, which enables complex searches over the huge volumes of data. We describe a scenario using the toolkit to gain insights from a large collection of political tweets, showing how we can analyse this dataset for understanding engagement of the public with respect to the topic of climate change.",
"This paper presents a two-stage approach to summarizing multiple contrastive viewpoints in opinionated text. In the first stage, we use an unsupervised probabilistic approach to model and extract multiple viewpoints in text. We experiment with a variety of lexical and syntactic features, yielding significant performance gains over bag-of-words feature sets. In the second stage, we introduce Comparative LexRank, a novel random walk formulation to score sentences and pairs of sentences from opposite viewpoints based on both their representativeness of the collection as well as their contrastiveness with each other. Experimental results show that the proposed approach can generate informative summaries of viewpoints in opinionated text."
]
} |
1708.04585 | 2817318906 | The maximum capacity of fractal D2D (device-to-device) social networks with both direct and hierarchical communications is studied in this paper. Specifically, the fractal networks are characterized by the direct social connection and the self-similarity. Firstly, for a fractal D2D social network with direct social communications, it is proved that the maximum capacity is @math if a user communicates with one of his/her direct contacts randomly, where @math denotes the total number of users in the network, and it can reach up to @math if any pair of social contacts with distance @math communicate according to the probability in proportion to @math . Secondly, since users might get in touch with others without direct social connections through the inter-connected multiple users, the fractal D2D social network with these hierarchical communications is studied as well, and the related capacity is further derived. Our results show that this capacity is mainly affected by the correlation exponent @math of the fractal structure. The capacity is reduced in proportion to @math if @math , while the reduction coefficient is @math if @math . | As a key component of future 5G cellular networks to improve throughput and spectral efficiency, D2D communication has been investigated in many contexts. In particular, many studies concerning D2D social networks have sprung up. For instance, @cite_5 analyzed the performance of relay-assisted multi-hop D2D communication where the decision to relay was made based on social comparison. In @cite_0 , small-size social communities were exploited for the resource allocation optimization in social-aware D2D communication. In order to alleviate the security issues in D2D social networks, a secure content sharing protocol was proposed in @cite_11 to meet the security requirements. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_11"
],
"mid": [
"2753471900",
"2754798795",
"2771681717"
],
"abstract": [
"Social-aware device-to-device (D2D) resource allocation utilizes social ties in human-formed social networks to allocate spectrum resources between D2D users and cellular users. In this letter, we consider the small size social communities formed by people with similar interests and exploit them to optimize the resource allocation of the communities. This results in an optimal graph matching problem among communities to solve the D2D resource allocation problem. Solutions are derived via bipartite graph matching and an effective small social community resource allocation algorithm corresponding to the cases of small and high D2D user loads.",
"Device-to-device (D2D) communications are recognized as a key enabler of future cellular networks, which will help to drive improvements in spectral efficiency and assist with the offload of network traffic. Relay-assisted D2D communications will be essential when there is an extended distance between the source and the destination or when the transmit power is constrained below a certain level. Although a number of works on relay-assisted D2D communications have been presented in the literature, most of those assume that relay nodes cooperate unequivocally. In reality, this cannot be assumed, since there is little incentive to cooperate without a guarantee of future reciprocal behavior. To incorporate the social behavior of D2D nodes, we consider the decision to relay using the donation game based on social comparison, characterize the probability of cooperation in an evolutionary context and then evaluate the network performance of relay-assisted D2D communications. Through numerical evaluations, we investigate the performance gap between the ideal case of 100 cooperation and practical scenarios with a lower cooperation probability. It shows that practical scenarios achieve lower transmission capacity and higher outage probability than idealistic network views, which assume full cooperation. After a sufficient number of generations, however, the cooperation probability follows the natural rules of evolution and the transmission performance of practical scenarios approach that of the full cooperation case, indicating that all D2D relay nodes adapt the same dominant cooperative strategy based on social comparison, without the need for external enforcement.",
"With the increasing of mobile devices, Device-to-Device(D2D) communication is considered as a promising technology for achieving direct communication of devices. But the security issues still remain to be solved. In this paper, We utilize D2D communication between UEs in proximity to share epidemic media contents. We propose a Secure Content Sharing Protocol(SCSP) that combine with the credibility between users. The credibility involves user's interest, interaction and geographical location. By means of encryption, signature and authentication in SCSP, the data confidentiality, non-repudiation and mutual authentication can be guaranteed. Furthermore, the user's privacy can be protected based on Pallier Homomorphic Encryption. At last, it is shown that the proposed SCSP can meet the security requirements and have lower computation cost in comparison with other related works through security analysis and quantitative evaluation."
]
} |
1708.04439 | 2747264656 | This paper proposes a text summarization approach for factual reports using a deep learning model. This approach consists of three phases: feature extraction, feature enhancement, and summary generation, which work together to assimilate core information and generate a coherent, understandable summary. We are exploring various features to improve the set of sentences selected for the summary, and are using a Restricted Boltzmann Machine to enhance and abstract those features to improve resultant accuracy without losing any important information. The sentences are scored based on those enhanced features and an extractive summary is constructed. Experimentation carried out on several articles demonstrates the effectiveness of the proposed approach. | Most early work on text summarization was focused on technical documents, and early studies on summarization aimed at summarizing pre-given documents without any other requirements, which is usually known as generic summarization @cite_11 . Luhn @cite_8 proposed that the frequency of a particular word in an article provides a useful measure of its significance. A number of key ideas, such as stemming and stop word filtering, were put forward in this paper that have now been understood as universal preprocessing steps in text analysis. Baxendale @cite_10 examined 200 paragraphs and found that in 85% of the paragraphs the topic sentence came as the first one. Later researchers in text summarization have approached the problem from many aspects such as natural language processing @cite_5 , statistical modelling @cite_12 and machine learning. While initially most machine learning systems assumed feature independence and relied on naive-Bayes methods, more recent ones have shifted focus to the selection of appropriate features and learning algorithms that make no independence assumptions. Other significant approaches involved Hidden Markov Models and log-linear models to improve extractive summarization.
More recent papers, in contrast, used neural networks towards this goal. | {
"cite_N": [
"@cite_8",
"@cite_5",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"1974339500",
"199388453",
"2092246763",
"2104483432",
"1998369932"
],
"abstract": [
"Excerpts of technical papers and magazine articles that serve the purposes of conventional abstracts have been created entirely by automatic means. In the exploratory research described, the complete text of an article in machine-readable form is scanned by an IBM 704 data-processing machine and analyzed in accordance with a standard program. Statistical information derived from word frequency and distribution is used by the machine to compute a relative measure of significance, first for individual words and then for sentences. Sentences scoring highest in significance are extracted and printed out to become the \"auto-abstract.\"",
"Multi-document summarization is a fundamental tool for understanding documents. Given a collection of documents, most of existing multidocument summarization methods automatically generate a static summary for all the users using unsupervised learning techniques such as sentence ranking and clustering. However, these methods almost exclude human from the summarization process. They do not allow for user interaction and do not consider users' feedback which delivers valuable information and can be used as the guidance for summarization. Another limitation is that the generated summaries are displayed in textual format without visual representation. To address the above limitations, in this paper, we develop iDVS, a visualization-enabled multi-document summarization system with users' interaction, to improve the summarization performance using users' feedback and to assist users in document understanding using visualization techniques. In particular, iDVS uses a new semi-supervised document summarization method to dynamically select sentences based on users' interaction. To this regard, iDVS tightly integrates semi-supervised learning with interactive visualization for document summarization. Comprehensive experiments on multi-document summarization using benchmark datasets demonstrate the effectiveness of iDVS, and a user study is conducted to evaluate the users' satisfaction.",
"Machine techniques for reducing technical documents to their essential discriminating indices are investigated. Human scanning patterns in selecting \"topic sentences\" and phrases composed of nouns and modifiers were simulated by computer program. The amount of condensation resulting from each method and the relative uniformity in indices are examined. It is shown that the coordinated index provided by the phrase is the more meaningful and discriminating.",
"Statistical approaches to automatic text summarization based on term frequency continue to perform on par with more complex summarization methods. To compute useful frequency statistics, however, the semantically important words must be separated from the low-content function words. The standard approach of using an a priori stopword list tends to result in both undercoverage, where syntactical words are seen as semantically relevant, and overcoverage, where words related to content are ignored. We present a generative probabilistic modeling approach to building content distributions for use with statistical multi-document summarization where the syntax words are learned directly from the data with a Hidden Markov Model and are thereby deemphasized in the term frequency statistics. This approach is compared to both a stopword-list and POS-tagging approach and our method demonstrates improved coverage on the DUC 2006 and TAC 2010 datasets using the ROUGE metric.",
"This paper introduces a statistical model for query-relevant summarization: succinctly characterizing the relevance of a document to a query. Learning parameter values for the proposed model requires a large collection of summarized documents, which we do not have, but as a proxy, we use a collection of FAQ (frequently-asked question) documents. Taking a learning approach enables a principled, quantitative evaluation of the proposed system, and the results of some initial experiments---on a collection of Usenet FAQs and on a FAQ-like set of customer-submitted questions to several large retail companies---suggest the plausibility of learning for summarization."
]
} |
1708.04439 | 2747264656 | This paper proposes a text summarization approach for factual reports using a deep learning model. This approach consists of three phases: feature extraction, feature enhancement, and summary generation, which work together to assimilate core information and generate a coherent, understandable summary. We are exploring various features to improve the set of sentences selected for the summary, and are using a Restricted Boltzmann Machine to enhance and abstract those features to improve resultant accuracy without losing any important information. The sentences are scored based on those enhanced features and an extractive summary is constructed. Experimentation carried out on several articles demonstrates the effectiveness of the proposed approach. | Text Summarization can be done for one document, known as single-document summarization @cite_16 , or for multiple documents, known as multi-document summarization @cite_3 . On the basis of the writing style of the final generated summary, text summarization techniques can be divided into extractive and abstractive methodologies @cite_0 . The objective of the extractive approach is to choose appropriate sentences according to the requirements of a user. Due to the idiosyncrasies of human-invented languages and grammar, extractive approaches, which select a subset of sentences from the input documents to form a summary instead of paraphrasing like a human @cite_14 , are the mainstream in the area. | {
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_16",
"@cite_3"
],
"mid": [
"2102269292",
"2139832588",
"32253530",
"179757531"
],
"abstract": [
"It is difficult to identify sentence importance from a single point of view. In this paper, we propose a learning-based approach to combine various sentence features. They are categorized as surface, content, relevance and event features. Surface features are related to extrinsic aspects of a sentence. Content features measure a sentence based on content-conveying words. Event features represent sentences by events they contained. Relevance features evaluate a sentence from its relatedness with other sentences. Experiments show that the combined features improved summarization performance significantly. Although the evaluation results are encouraging, supervised learning approach requires much labeled data. Therefore we investigate co-training by combining labeled and unlabeled data. Experiments show that this semi-supervised learning approach achieves comparable performance to its supervised counterpart and saves about half of the labeling time cost.",
"We propose a new approach for recognizing object classes which is based on the intuitive idea that human beings are able to perform the task well given only thumbnails (coarse scale version) of images. Unlike previous work which uses local image features at fine scales, our approach uses thumbnails directly, and captures their high-order correlations at coarse scales through deep multi-layer neural networks based on restricted Boltzmann machines. Specifically, the pretraining stage of such networks takes on the role of feature extraction. Experimental results show that the proposed approach is comparable to other state-of-the-art recognition methods in terms of accuracy. The merits of the proposed approach come from the simplicity of the workflow and the parallelizability of the implementation structure.",
"Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.",
"Many methods, including supervised and unsupervised algorithms, have been developed for extractive document summarization. Most supervised methods consider the summarization task as a two-class classification problem and classify each sentence individually without leveraging the relationship among sentences. The unsupervised methods use heuristic rules to select the most informative sentences into a summary directly, which are hard to generalize. In this paper, we present a Conditional Random Fields (CRF) based framework to keep the merits of the above two kinds of approaches while avoiding their disadvantages. What is more, the proposed framework can take the outcomes of previous methods as features and seamlessly integrate them. The key idea of our approach is to treat the summarization task as a sequence labeling problem. In this view, each document is a sequence of sentences and the summarization procedure labels the sentences by 1 and 0. The label of a sentence depends on the assignment of labels of others. We compared our proposed approach with eight existing methods on an open benchmark data set. The results show that our approach can improve the performance by more than 7.1 and 12.1 over the best supervised baseline and unsupervised baseline respectively in terms of two popular metrics F1 and ROUGE-2. Detailed analysis of the improvement is presented as well."
]
} |
1708.04051 | 2749779288 | In physical layer security (PHY-security), the frequently observed high correlation between the main and wiretap channels can cause a significant loss of secrecy. This paper investigates a slow fading scenario, where a transmitter (Alice) sends a confidential message to a legitimate receiver (Bob) while a passive eavesdropper (Eve) attempts to decode the message from its received signal. It is assumed that Alice is equipped with multiple antennas while Bob and Eve each have a single antenna (i.e., a MISOSE system). In a MISOSE system, high correlation results in nearly collinear main and wiretap channel vectors, which help Eve to see and intercept confidential information. Unfortunately, the signal processing techniques at Alice, such as beamforming and artificial noise (AN), are helpless, especially in the extreme case of completely collinear main and wiretap channel vectors. On this background, we first investigate the achievable secrecy outage probability via beamforming and AN at Alice with the optimal power allocation between the information-bearing signal and AN. Then, an ingenious model, in which a cooperative jamming relay (Relay) is introduced, is proposed to effectively mitigate the adverse effects of high correlation. Based on the proposed model, the power allocation between the information-bearing signal at Alice and the AN at Relay is also studied to maximize secrecy. Finally, to validate our proposed schemes, numerical simulations are conducted, and the results show that a significant performance gain with respect to secrecy is achieved. | In PHY-security, beamforming and precoding techniques at Alice can enhance the signal quality at Bob while limiting the signal strength at Eve. In addition, AN inserted into the transmitted signal can degrade the reception at Eve and consequently further increase the signal quality difference at Bob and Eve. @cite_14 briefly classifies these techniques into four categories, namely, covering beamforming, ZF precoding, convex (CVX)-based precoding, and AN precoding. However, when the main and eavesdropper channels are highly correlated, the signal processing techniques appear to be powerless. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2509004749"
],
"abstract": [
"Physical layer security (PHY-security) takes the advantages of channel randomness nature of transmission media to achieve communication confidentiality and authentication. Wiretap coding and signal processing technologies are expected to play vital roles in this new security mechanism. PHY-security has attracted a lot of attention due to its unique features and the fact that our daily life relies heavily on wireless communications for sensitive and private information transmissions. Compared to conventional cryptography that works to ensure all involved entities to load proper and authenticated cryptographic information, PHY-security technologies perform security functions without considering about how those security protocols are executed. In other words, it does not require to implement any extra security schemes or algorithms on other layers above the physical layer. This survey introduces the fundamental theories of PHY-security, covering confidentiality and authentication, and provides an overview on the state-of-the-art works on PHY-security technologies that can provide secure communications in wireless systems, along with the discussions on challenges and their proposed solutions. Furthermore, at the end of this paper, the open issues are identified as our future research directions."
]
} |
1708.04051 | 2749779288 | In physical layer security (PHY-security), the frequently observed high correlation between the main and wiretap channels can cause a significant loss of secrecy. This paper investigates a slow fading scenario, where a transmitter (Alice) sends a confidential message to a legitimate receiver (Bob) while a passive eavesdropper (Eve) attempts to decode the message from its received signal. It is assumed that Alice is equipped with multiple antennas while Bob and Eve each have a single antenna (i.e., a MISOSE system). In a MISOSE system, high correlation results in nearly collinear main and wiretap channel vectors, which help Eve to see and intercept confidential information. Unfortunately, the signal processing techniques at Alice, such as beamforming and artificial noise (AN), are helpless, especially in the extreme case of completely collinear main and wiretap channel vectors. On this background, we first investigate the achievable secrecy outage probability via beamforming and AN at Alice with the optimal power allocation between the information-bearing signal and AN. Then, an ingenious model, in which a cooperative jamming relay (Relay) is introduced, is proposed to effectively mitigate the adverse effects of high correlation. Based on the proposed model, the power allocation between the information-bearing signal at Alice and the AN at Relay is also studied to maximize secrecy. Finally, to validate our proposed schemes, numerical simulations are conducted, and the results show that a significant performance gain with respect to secrecy is achieved. | Another common technique to improve confidential transmission is relay systems, which can provide additional spatial degrees of freedom through the antennas at the relays. In PHY-security, relays are usually employed to forward data to Bob or to emit AN or jamming signals to disrupt reception at Eve @cite_15 @cite_2 @cite_10 @cite_3 @cite_20 @cite_18 @cite_16 @cite_19 . Moreover, relay systems can use full duplex to improve secrecy @cite_8 @cite_9 @cite_13 . However, these schemes focus on independent or weakly correlated wiretap channel models where channel correlation is not considered. | {
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_19",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_20"
],
"mid": [
"2124431228",
"2171006634",
"2143370304",
"2052245931",
"1972785130",
"2104478100",
"2097480027",
"",
"2096151870",
"2032278718",
"2124071994"
],
"abstract": [
"This paper investigates the secrecy performance of full-duplex relay (FDR) networks. The resulting analysis shows that FDR networks have better secrecy performance than half duplex relay networks, if the self-interference can be well suppressed. We also propose a full duplex jamming relay network, in which the relay node transmits jamming signals while receiving the data from the source. While the full duplex jamming scheme has the same data rate as the half duplex scheme, the secrecy performance can be significantly improved, making it an attractive scheme when the network secrecy is a primary concern. A mathematic model is developed to analyze secrecy outage probabilities for the half duplex, the full duplex and full duplex jamming schemes, and the simulation results are also presented to verify the analysis.",
"Physical (PHY) layer security approaches for wireless communications can prevent eavesdropping without upper layer data encryption. However, they are hampered by wireless channel conditions: absent feedback, they are typically feasible only when the source-destination channel is better than the source-eavesdropper channel. Node cooperation is a means to overcome this challenge and improve the performance of secure wireless communications. This paper addresses secure communications of one source-destination pair with the help of multiple cooperating relays in the presence of one or more eavesdroppers. Three cooperative schemes are considered: decode-and-forward (DF), amplify-and-forward (AF), and cooperative jamming (CJ). For these schemes, the relays transmit a weighted version of a reencoded noise-free message signal (for DF), a received noisy source signal (for AF), or a common jamming signal (for CJ). Novel system designs are proposed, consisting of the determination of relay weights and the allocation of transmit power, that maximize the achievable secrecy rate subject to a transmit power constraint, or, minimize the transmit power subject to a secrecy rate constraint. For DF in the presence of one eavesdropper, closed-form optimal solutions are derived for the relay weights. For other problems, since the optimal relay weights are difficult to obtain, several criteria are considered leading to suboptimal but simple solutions, i.e., the complete nulling of the message signals at all eavesdroppers (for DF and AF), or the complete nulling of jamming signal at the destination (for CJ). Based on the designed relay weights, for DF in the presence of multiple eavesdroppers, and for CJ in the presence of one eavesdropper, the optimal power allocation is obtained in closed-form; in all other cases the optimal power allocation is obtained via iterative algorithms. Numerical evaluation of the obtained secrecy rate and transmit power results show that the proposed design can significantly improve the performance of secure wireless communications.",
"We assume a full-duplex (FD) cooperative network subject to hostile attacks and undergoing composite fading channels. We focus on two scenarios: a) the transmitter has full CSI, for which we derive closed-form expressions for the average secrecy rate; and b) the transmitter only knows the CSI of the legitimate nodes, for which we obtain closed-form expressions for the secrecy outage probability. We show that secure FD relaying is feasible, even under strong self-interference and in the presence of sophisticated multiple antenna eavesdropper.",
"In this letter, we consider secure communications in multi-hop relaying systems, where full-duplex relays (FDRs) operate to enhance wireless physical layer security. Each FDR is designed to transmit jamming signals to the eavesdropper when it receives information signals from the previous adjacent node. The achievable secrecy rate with the proposed decode-and-forward (DF) FDRs are analyzed with a total transmit power constraint. The transmit power allocation problem is solved by using the geometric programming (GP) method. Numerical results present that the proposed FDRs significantly enhance the secrecy rate compared to the conventional half-duplex relays (HDRs).",
"In this paper, we consider a secrecy relaying communication scenario where all nodes are equipped with multiple antennas. An eavesdropper has the access to the global channel state information (CSI), and all the other nodes only know the CSI not associated with the eavesdropper. A new secrecy transmission protocol is proposed, where the concept of interference alignment is combined with cooperative jamming to ensure that artificial noise from transmitters can be aligned at the destination, but not at the eavesdropper due to the randomness of wireless channels. Analytical results, such as ergodic secrecy rate and outage probability, are developed, from which more insightful understanding of the proposed protocol, such as multiplexing and diversity gains, can be obtained. A few special cases, where outage probability cannot be decreased to zero regardless of SNR, are also discussed. Simulation results are provided to demonstrate the performance of the proposed secrecy transmission protocol.",
"This correspondence studies cooperative jamming (CJ) to increase the physical layer security of a wiretap fading channel via distributed relays. We first provide the feasible conditions on the positiveness of the secrecy rate and then show that the optimization problem can be solved using a combination of convex optimization and a one-dimensional search. Distributed implementation to realize the CJ solution and extension to deal with per group relays' power constraints are discussed.",
"We consider the communication scenario where a source-destination pair wishes to keep the information secret from a relay node despite wanting to enlist its help. For this scenario, an interesting question is whether the relay node should be deployed at all. That is, whether cooperation with an untrusted relay node can ever be beneficial. We first provide an achievable secrecy rate for the general untrusted relay channel, and proceed to investigate this question for two types of relay networks with orthogonal components. For the first model, there is an orthogonal link from the source to the relay. For the second model, there is an orthogonal link from the relay to the destination. For the first model, we find the equivocation capacity region and show that answer is negative. In contrast, for the second model, we find that the answer is positive. Specifically, we show, by means of the achievable secrecy rate based on compress-and-forward, that by asking the untrusted relay node to relay information, we can achieve a higher secrecy rate than just treating the relay as an eavesdropper. For a special class of the second model, where the relay is not interfering itself, we derive an upper bound for the secrecy rate using an argument whose net effect is to separate the eavesdropper from the relay. The merit of the new upper bound is demonstrated on two channels that belong to this special class. The Gaussian case of the second model mentioned above benefits from this approach in that the new upper bound improves the previously known bounds. For the Cover-Kim deterministic relay channel, the new upper bound finds the secrecy capacity when the source-destination link is not worse than the source-relay link, by matching with achievable rate we present.",
"",
"We consider a cooperative wireless network in the presence of one or more eavesdroppers, and exploit node cooperation for achieving physical (PHY) layer based security. Two different cooperation schemes are considered. In the first scheme, cooperating nodes retransmit a weighted version of the source signal in a decode-and-forward (DF) fashion. In the second scheme, referred to as cooperative jamming (CJ), while the source is transmitting, cooperating nodes transmit weighted noise to confound the eavesdropper. We investigate two objectives: i) maximization of the achievable secrecy rate subject to a total power constraint and ii) minimization of the total power transmit power under a secrecy rate constraint. For the first design objective, we obtain the exact solution for the DF scheme for the case of a single or multiple eavasdroppers, while for the CJ scheme with a single eavesdropper we reduce the multivariate problem to a problem of one variable. For the second design objective, existing work introduces additional constraints in order to reduce the degree of difficulty, thus resulting in suboptimal solutions. Our work raises those constraints, and obtains either an analytical solution for the DF scheme with a single eavesdropper, or reduces the multivariate problem to a problem of one variable for the CJ scheme with a single eavesdropper. Numerical results are presented to illustrate the proposed results and compare them to existing work.",
"An amplify-and-forward (AF) multiple-input multiple-output (MIMO) relay network composed of a source, a relay, and a destination is considered, where transmit beamforming is employed both at the source and at the relay. The relay is a user who is willing to help the communication from the source to the destination. In our paper, however, the relay is untrusted in the sense that it may make a passive security attack; that is, it may decode messages of the source. We consider two ways to transmit confidential information of the source to the destination: noncooperative secure beamforming and cooperative secure beamforming. In the noncooperative scheme, the relay is simply treated as an eavesdropper, and does not participate in communication. In the cooperative scheme, the relay is asked to relay signals from the source to the destination. In this paper, the source and relay beamforming is jointly designed to maximize the secrecy rate in the cooperative scheme. The conditions under which the cooperative scheme achieves a higher secrecy rate than the noncooperative scheme are derived in the low and high signal-to-noise ratio (SNR) regimes of the source-relay and relay-destination links. The performance of the secure beamforming schemes is compared through extensive numerical simulations.",
"Secure communications can be impeded by eavesdroppers in conventional relay systems. This paper proposes cooperative jamming strategies for two-hop relay networks where the eavesdropper can wiretap the relay channels in both hops. In these approaches, the normally inactive nodes in the relay network can be used as cooperative jamming sources to confuse the eavesdropper. Linear precoding schemes are investigated for two scenarios where single or multiple data streams are transmitted via a decode-and-forward (DF) relay, under the assumption that global channel state information (CSI) is available. For the case of single data stream transmission, we derive closed-form jamming beamformers and the corresponding optimal power allocation. Generalized singular value decomposition (GSVD)-based secure relaying schemes are proposed for the transmission of multiple data streams. The optimal power allocation is found for the GSVD relaying scheme via geometric programming. Based on this result, a GSVD-based cooperative jamming scheme is proposed that shows significant improvement in terms of secrecy rate compared to the approach without jamming. Furthermore, the case involving an eavesdropper with unknown CSI is also investigated in this paper. Simulation results show that the secrecy rate is dramatically increased when inactive nodes in the relay network participate in cooperative jamming."
]
} |
1708.04051 | 2749779288 | In physical layer security (PHY-security), the frequently observed high correlation between the main and wiretap channels can cause a significant loss of secrecy. This paper investigates a slow fading scenario, where a transmitter (Alice) sends a confidential message to a legitimate receiver (Bob) while a passive eavesdropper (Eve) attempts to decode the message from its received signal. It is assumed that Alice is equipped with multiple antennas while Bob and Eve each have a single antenna (i.e., a MISOSE system). In a MISOSE system, high correlation results in nearly collinear main and wiretap channel vectors, which help Eve to see and intercept confidential information. Unfortunately, the signal processing techniques at Alice, such as beamforming and artificial noise (AN), are helpless, especially in the extreme case of completely collinear main and wiretap channel vectors. On this background, we first investigate the achievable secrecy outage probability via beamforming and AN at Alice with the optimal power allocation between the information-bearing signal and AN. Then, an ingenious model, in which a cooperative jamming relay (Relay) is introduced, is proposed to effectively mitigate the adverse effects of high correlation. Based on the proposed model, the power allocation between the information-bearing signal at Alice and the AN at Relay is also studied to maximize secrecy. Finally, to validate our proposed schemes, numerical simulations are conducted, and the results show that a significant performance gain with respect to secrecy is achieved. | On the other hand, several works address high correlation. @cite_17 propose that secrecy can be enhanced by opportunistically transmitting messages in time slots instead of using excessively large signal power. In particular, confidential transmission occurs when the main channel has better instantaneous channel gain than that of the eavesdropper channel. To maximize secrecy, power is allocated through a water-filling strategy in the time domain: more power is transmitted in time slots where the channel exhibits high SNR, and less power is sent in time slots with poor SNR. Obviously, @cite_17 only improves the usage efficiency of the transmitted power and does not eliminate the fundamental problem caused by high correlation. Additionally, for delay-limited applications, the encoding over multiple channel states adopted in @cite_17 may not be acceptable since it may incur long delays @cite_16 . | {
"cite_N": [
"@cite_16",
"@cite_17"
],
"mid": [
"2096151870",
"2106938400"
],
"abstract": [
"We consider a cooperative wireless network in the presence of one or more eavesdroppers, and exploit node cooperation for achieving physical (PHY) layer based security. Two different cooperation schemes are considered. In the first scheme, cooperating nodes retransmit a weighted version of the source signal in a decode-and-forward (DF) fashion. In the second scheme, referred to as cooperative jamming (CJ), while the source is transmitting, cooperating nodes transmit weighted noise to confound the eavesdropper. We investigate two objectives: i) maximization of the achievable secrecy rate subject to a total power constraint and ii) minimization of the total power transmit power under a secrecy rate constraint. For the first design objective, we obtain the exact solution for the DF scheme for the case of a single or multiple eavasdroppers, while for the CJ scheme with a single eavesdropper we reduce the multivariate problem to a problem of one variable. For the second design objective, existing work introduces additional constraints in order to reduce the degree of difficulty, thus resulting in suboptimal solutions. Our work raises those constraints, and obtains either an analytical solution for the DF scheme with a single eavesdropper, or reduces the multivariate problem to a problem of one variable for the CJ scheme with a single eavesdropper. Numerical results are presented to illustrate the proposed results and compare them to existing work.",
"We investigate the secrecy capacity of an ergodic fading wiretap channel when the main and eavesdropper channels are correlated. Assuming that the transmitter knows the full channel state information (CSI) (i.e., the channel gains from the transmitter to the legitimate receiver and eavesdropper), we quantify the loss of the secrecy capacity due to the correlation and investigate the asymptotic behavior of the secrecy capacity in the high signal-to-noise ratio (SNR) regime. While the ergodic capacity of fading channels grows logarithmically with SNR in general, we have found that the secrecy capacity converges to an upper-bound (a closed-form expression is derived) that will be shown to be a function of two channel parameters; the correlation coefficient and the ratio of the main to eavesdropper channel gains. From this, we are able to see how the two channel parameters affect the secrecy capacity and conclude that the excessively large signal power does not help to improve the secrecy capacity and the loss due to the correlation could be significant especially when the ratio of the main to eavesdropper channel gains is low."
]
} |
1708.04169 | 2748580836 | As re-ranking is a necessary procedure to boost person re-identification (re-ID) performance on large-scale datasets, the diversity of feature becomes crucial to person reID for its importance both on designing pedestrian descriptions and re-ranking based on feature fusion. However, in many circumstances, only one type of pedestrian feature is available. In this paper, we propose a "Divide and use" re-ranking framework for person re-ID. It exploits the diversity from different parts of a high-dimensional feature vector for fusion-based re-ranking, while no other features are accessible. Specifically, given an image, the extracted feature is divided into sub-features. Then the contextual information of each sub-feature is iteratively encoded into a new feature. Finally, the new features from the same image are fused into one vector for re-ranking. Experimental results on two person re-ID benchmarks demonstrate the effectiveness of the proposed framework. Especially, our method outperforms the state-of-the-art on the Market-1501 dataset. | The re-ranking technique @cite_32 is generally used as a post-processing step in various retrieval problems, where the initial ranking list for a query is sorted based on the pairwise similarities between the query and the instances in the database. The re-ranking procedure refines the initial ranking list by taking account of the neighborhood relations among all the instances. A variety of re-ranking algorithms have been developed for object retrieval. In particular, Sparse Contextual Activation (SCA) @cite_19 encodes the neighborhood set into a sparse vector and measures the sample dissimilarity in generalized Jaccard distance. Bai et al. @cite_3 provide theoretical explanations for the diffusion process, which is a popular branch of re-ranking. | {
"cite_N": [
"@cite_19",
"@cite_32",
"@cite_3"
],
"mid": [
"2242818826",
"2013808584",
""
],
"abstract": [
"In this paper, we propose an extremely efficient algorithm for visual re-ranking. By considering the original pairwise distance in the contextual space, we develop a feature vector called sparse contextual activation (SCA) that encodes the local distribution of an image. Hence, re-ranking task can be simply accomplished by vector comparison under the generalized Jaccard metric, which has its theoretical meaning in the fuzzy set theory. In order to improve the time efficiency of re-ranking procedure, inverted index is successfully introduced to speed up the computation of generalized Jaccard metric. As a result, the average time cost of re-ranking for a certain query can be controlled within 1 ms. Furthermore, inspired by query expansion, we also develop an additional method called local consistency enhancement on the proposed SCA to improve the retrieval performance in an unsupervised manner. On the other hand, the retrieval performance using a single feature may not be satisfactory enough, which inspires us to fuse multiple complementary features for accurate retrieval. Based on SCA, a robust feature fusion algorithm is exploited that also preserves the characteristic of high time efficiency. We assess our proposed method in various visual re-ranking tasks. Experimental results on Princeton shape benchmark (3D object), WM-SRHEC07 (3D competition), YAEL data set B (face), MPEG-7 data set (shape), and Ukbench data set (image) manifest the effectiveness and efficiency of SCA.",
"The explosive growth and widespread accessibility of community-contributed media content on the Internet have led to a surge of research activity in multimedia search. Approaches that apply text search techniques for multimedia search have achieved limited success as they entirely ignore visual content as a ranking signal. Multimedia search reranking, which reorders visual documents based on multimodal cues to improve initial text-only searches, has received increasing attention in recent years. Such a problem is challenging because the initial search results often have a great deal of noise. Discovering knowledge or visual patterns from such a noisy ranked list to guide the reranking process is difficult. Numerous techniques have been developed for visual search re-ranking. The purpose of this paper is to categorize and evaluate these algorithms. We also discuss relevant issues such as data collection, evaluation metrics, and benchmarking. We conclude with several promising directions for future research.",
""
]
} |
1708.03950 | 2747181576 | Given a high-dimensional data matrix @math , Approximate Message Passing (AMP) algorithms construct sequences of vectors @math , @math , indexed by @math by iteratively applying @math or @math , and suitable non-linear functions, which depend on the specific application. Special instances of this approach have been developed --among other applications-- for compressed sensing reconstruction, robust regression, Bayesian estimation, low-rank matrix recovery, phase retrieval, and community detection in graphs. For certain classes of random matrices @math , AMP admits an asymptotically exact description in the high-dimensional limit @math , which goes under the name of state evolution.' Earlier work established state evolution for separable non-linearities (under certain regularity conditions). Nevertheless, empirical work demonstrated several important applications that require non-separable functions. In this paper we generalize state evolution to Lipschitz continuous non-separable nonlinearities, for Gaussian matrices @math . Our proof makes use of Bolthausen's conditioning technique along with several approximation arguments. In particular, we introduce a modified algorithm (called LAMP for Long AMP) which is of independent interest. | Finally, a recent paper by Ma, Rush and Baron @cite_32 states a theorem establishing state evolution for compressed sensing reconstruction via AMP with a non-separable sliding-window denoiser. The result of @cite_32 is not directly comparable with ours, since it concerns a special class of non-separable nonlinearities, but provides non-asymptotic guarantees. | {
"cite_N": [
"@cite_32"
],
"mid": [
"2612457931"
],
"abstract": [
"Approximate message passing (AMP) is a class of efficient algorithms for solving high-dimensional linear regression tasks where one wishes to recover an unknown signal β0 from noisy, linear measurements y = Aβ0 + w. When applying a separable denoiser at each iteration, the performance of AMP (for example, the mean squared error of its estimates) can be accurately tracked by a simple, scalar iteration referred to as state evolution. Although separable denoisers are sufficient if the unknown signal has independent and identically distributed entries, in many real-world applications, like image or audio signal reconstruction, the unknown signal contains dependencies between entries. In these cases, a coordinate-wise independence structure is not a good approximation to the true prior of the unknown signal. In this paper we assume the unknown signal has dependent entries, and using a class of non-separable sliding-window denoisers, we prove that a new form of state evolution still accurately predicts AMP performance. This is an early step in understanding the role of non-separable denoisers within AMP, and will lead to a characterization of more general denoisers in problems including compressive image reconstruction."
]
} |
1708.03978 | 2750552183 | Biometric techniques are often used as an extra security factor in authenticating human users. Numerous biometrics have been proposed and evaluated, each with its own set of benefits and pitfalls. Static biometrics (such as fingerprints) are geared for discrete operation, to identify users, which typically involves some user burden. Meanwhile, behavioral biometrics (such as keystroke dynamics) are well suited for continuous, and sometimes more unobtrusive, operation. One important application domain for biometrics is deauthentication, a means of quickly detecting absence of a previously authenticated user and immediately terminating that user's active secure sessions. Deauthentication is crucial for mitigating so called Lunchtime Attacks, whereby an insider adversary takes over (before any inactivity timeout kicks in) authenticated state of a careless user who walks away from her computer. Motivated primarily by the need for an unobtrusive and continuous biometric to support effective deauthentication, we introduce PoPa, a new hybrid biometric based on a human user's seated posture pattern. PoPa captures a unique combination of physiological and behavioral traits. We describe a low cost fully functioning prototype that involves an office chair instrumented with 16 tiny pressure sensors. We also explore (via user experiments) how PoPa can be used in a typical workplace to provide continuous authentication (and deauthentication) of users. We experimentally assess viability of PoPa in terms of uniqueness by collecting and evaluating posture patterns of a cohort of users. Results show that PoPa exhibits very low false positive, and even lower false negative, rates. In particular, users can be identified with, on average, 91.0% accuracy. Finally, we compare pros and cons of PoPa with those of several prominent biometric based deauthentication techniques.
| @cite_22 propose wearing a bracelet that has a gyroscope and an accelerometer for continuous authentication. When the user interacts with the computer (e.g., typing or scrolling), the bracelet transfers collected sensor data to the computer, which evaluates whether user actions match sensor data. The proposed system, ZEBRA, achieves continuous authentication with 85% accuracy and detects attacks within @math seconds. However, a recent study by @cite_13 presents a set of credible attacks on ZEBRA. | {
"cite_N": [
"@cite_13",
"@cite_22"
],
"mid": [
"1831911546",
"2079024329"
],
"abstract": [
"Deauthentication is an important component of any authentication system. The widespread use of computing devices in daily life has underscored the need for zero-effort deauthentication schemes. However, the quest for eliminating user effort may lead to hidden security flaws in the authentication schemes. As a case in point, we investigate a prominent zero-effort deauthentication scheme, called ZEBRA, which provides an interesting and a useful solution to a difficult problem as demonstrated in the original paper. We identify a subtle incorrect assumption in its adversary model that leads to a fundamental design flaw. We exploit this to break the scheme with a class of attacks that are much easier for a human to perform in a realistic adversary model, compared to the naïve attacks studied in the ZEBRA paper. For example, one of our main attacks, where the human attacker has to opportunistically mimic only the victim's keyboard typing activity at a nearby terminal, is significantly more successful compared to the naïve attack that requires mimicking keyboard and mouse activities as well as keyboard-mouse movements. Further, by understanding the design flaws in ZEBRA as cases of tainted input, we show that we can draw on well-understood design principles to improve ZEBRA's security.",
"Common authentication methods based on passwords, tokens, or fingerprints perform one-time authentication and rely on users to log out from the computer terminal when they leave. Users often do not log out, however, which is a security risk. The most common solution, inactivity timeouts, inevitably fail security (too long a timeout) or usability (too short a timeout) goals. One solution is to authenticate users continuously while they are using the terminal and automatically log them out when they leave. Several solutions are based on user proximity, but these are not sufficient: they only confirm whether the user is nearby but not whether the user is actually using the terminal. Proposed solutions based on behavioral biometric authentication (e.g., keystroke dynamics) may not be reliable, as a recent study suggests. To address this problem we propose Zero-Effort Bilateral Recurring Authentication (ZEBRA). In ZEBRA, a user wears a bracelet (with a built-in accelerometer, gyroscope, and radio) on her dominant wrist. When the user interacts with a computer terminal, the bracelet records the wrist movement, processes it, and sends it to the terminal. The terminal compares the wrist movement with the inputs it receives from the user (via keyboard and mouse), and confirms the continued presence of the user only if they correlate. Because the bracelet is on the same hand that provides inputs to the terminal, the accelerometer and gyroscope data and input events received by the terminal should correlate because their source is the same - the user's hand movement. In our experiments ZEBRA performed continuous authentication with 85% accuracy in verifying the correct user and identified all adversaries within 11s. For a different threshold that trades security for usability, ZEBRA correctly verified 90% of users and identified all adversaries within 50s."
]
} |
1708.03978 | 2750552183 | Biometric techniques are often used as an extra security factor in authenticating human users. Numerous biometrics have been proposed and evaluated, each with its own set of benefits and pitfalls. Static biometrics (such as fingerprints) are geared for discrete operation, to identify users, which typically involves some user burden. Meanwhile, behavioral biometrics (such as keystroke dynamics) are well suited for continuous, and sometimes more unobtrusive, operation. One important application domain for biometrics is deauthentication, a means of quickly detecting absence of a previously authenticated user and immediately terminating that user's active secure sessions. Deauthentication is crucial for mitigating so called Lunchtime Attacks, whereby an insider adversary takes over (before any inactivity timeout kicks in) authenticated state of a careless user who walks away from her computer. Motivated primarily by the need for an unobtrusive and continuous biometric to support effective deauthentication, we introduce PoPa, a new hybrid biometric based on a human user's seated posture pattern. PoPa captures a unique combination of physiological and behavioral traits. We describe a low cost fully functioning prototype that involves an office chair instrumented with 16 tiny pressure sensors. We also explore (via user experiments) how PoPa can be used in a typical workplace to provide continuous authentication (and deauthentication) of users. We experimentally assess viability of PoPa in terms of uniqueness by collecting and evaluating posture patterns of a cohort of users. Results show that PoPa exhibits very low false positive, and even lower false negative, rates. In particular, users can be identified with, on average, 91.0% accuracy. Finally, we compare pros and cons of PoPa with those of several prominent biometric based deauthentication techniques. | Finally, @cite_2 investigated how to use fewer sensors to detect posture.
To determine optimal sensor placement, a classifier is constructed that learns the probabilistic model between the chosen subset of sensor values and feature vectors used for posture classification. With @math sensors, a classification accuracy of 87% is achieved using a near-optimal sensor placement strategy. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2160904077"
],
"abstract": [
"In this paper, we present a methodology for recognizing seated postures using data from pressure sensors installed on a chair. Information about seated postures could be used to help avoid adverse effects of sitting for long periods of time or to predict seated activities for a human-computer interface. Our system design displays accurate near-real-time classification performance on data from subjects on which the posture recognition system was not trained by using a set of carefully designed, subject-invariant signal features. By using a near-optimal sensor placement strategy, we keep the number of required sensors low thereby reducing cost and computational complexity. We evaluated the performance of our technology using a series of empirical methods including (1) cross-validation (classification accuracy of 87% for ten postures using data from 31 sensors), and (2) a physical deployment of our system (78% classification accuracy using data from 19 sensors)."
]
} |
1708.03995 | 2746751949 | Word embeddings are representations of individual words of a text document in a vector space and they are often use- ful for performing natural language pro- cessing tasks. Current state of the art al- gorithms for learning word embeddings learn vector representations from large corpora of text documents in an unsu- pervised fashion. This paper introduces SWESA (Supervised Word Embeddings for Sentiment Analysis), an algorithm for sentiment analysis via word embeddings. SWESA leverages document label infor- mation to learn vector representations of words from a modest corpus of text doc- uments by solving an optimization prob- lem that minimizes a cost function with respect to both word embeddings as well as classification accuracy. Analysis re- veals that SWESA provides an efficient way of estimating the dimension of the word embeddings that are to be learned. Experiments on several real world data sets show that SWESA has superior per- formance when compared to previously suggested approaches to word embeddings and sentiment analysis tasks. | Latent variable probabilistic models @cite_5 @cite_13 and extensions have also been used for word embeddings. All of the above methods learn word embeddings in an unsupervised fashion. However, using labeled data can often help with learning sentiment-aware word embeddings more appropriate to the corpus at hand. Such word embeddings can be used in sentiment analysis tasks. | {
"cite_N": [
"@cite_5",
"@cite_13"
],
"mid": [
"1880262756",
"2138107145"
],
"abstract": [
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.",
"Probabilistic topic modeling provides a suite of tools for the unsupervised analysis of large collections of documents. Topic modeling algorithms can uncover the underlying themes of a collection and decompose its documents according to those themes. This analysis can be used for corpus exploration, document search, and a variety of prediction problems. In this tutorial, I will review the state-of-the-art in probabilistic topic models. I will describe the three components of topic modeling: (1) Topic modeling assumptions (2) Algorithms for computing with topic models (3) Applications of topic models In (1), I will describe latent Dirichlet allocation (LDA), which is one of the simplest topic models, and then describe a variety of ways that we can build on it. These include dynamic topic models, correlated topic models, supervised topic models, author-topic models, bursty topic models, Bayesian nonparametric topic models, and others. I will also discuss some of the fundamental statistical ideas that are used in building topic models, such as distributions on the simplex, hierarchical Bayesian modeling, and models of mixed-membership. In (2), I will review how we compute with topic models. I will describe approximate posterior inference for directed graphical models using both sampling and variational inference, and I will discuss the practical issues and pitfalls in developing these algorithms for topic models. Finally, I will describe some of our most recent work on building algorithms that can scale to millions of documents and documents arriving in a stream. In (3), I will discuss applications of topic models. These include applications to images, music, social networks, and other data in which we hope to uncover hidden patterns. I will describe some of our recent work on adapting topic modeling algorithms to collaborative filtering, legislative modeling, and bibliometrics without citations. 
Finally, I will discuss some future directions and open research problems in topic models."
]
} |
1708.03995 | 2746751949 | Word embeddings are representations of individual words of a text document in a vector space and they are often use- ful for performing natural language pro- cessing tasks. Current state of the art al- gorithms for learning word embeddings learn vector representations from large corpora of text documents in an unsu- pervised fashion. This paper introduces SWESA (Supervised Word Embeddings for Sentiment Analysis), an algorithm for sentiment analysis via word embeddings. SWESA leverages document label infor- mation to learn vector representations of words from a modest corpus of text doc- uments by solving an optimization prob- lem that minimizes a cost function with respect to both word embeddings as well as classification accuracy. Analysis re- veals that SWESA provides an efficient way of estimating the dimension of the word embeddings that are to be learned. Experiments on several real world data sets show that SWESA has superior per- formance when compared to previously suggested approaches to word embeddings and sentiment analysis tasks. | In their work @cite_12 propose a probabilistic model that captures semantic similarities among words across documents. This model leverages document label information to improve word vectors to better capture sentiment of the contexts in which these words occur. The probabilistic model used by is similar to that in Latent Dirichlet Allocation (LDA) @cite_5 in which each document is modeled as a mixture of latent topics. @cite_12 , word probabilities in a document are modeled directly assuming a given topic. | {
"cite_N": [
"@cite_5",
"@cite_12"
],
"mid": [
"1880262756",
"2113459411"
],
"abstract": [
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.",
"Unsupervised vector-based approaches to semantics can model rich lexical meanings, but they largely fail to capture sentiment information that is central to many word meanings and important for a wide range of NLP tasks. We present a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term--document information as well as rich sentiment content. The proposed model can leverage both continuous and multi-dimensional sentiment information as well as non-sentiment annotations. We instantiate the model to utilize the document-level sentiment polarity annotations present in many online documents (e.g. star ratings). We evaluate the model using small, widely used sentiment and subjectivity corpora and find it out-performs several previously introduced methods for sentiment classification. We also introduce a large dataset of movie reviews to serve as a more robust benchmark for work in this area."
]
} |
1708.03995 | 2746751949 | Word embeddings are representations of individual words of a text document in a vector space and they are often use- ful for performing natural language pro- cessing tasks. Current state of the art al- gorithms for learning word embeddings learn vector representations from large corpora of text documents in an unsu- pervised fashion. This paper introduces SWESA (Supervised Word Embeddings for Sentiment Analysis), an algorithm for sentiment analysis via word embeddings. SWESA leverages document label infor- mation to learn vector representations of words from a modest corpus of text doc- uments by solving an optimization prob- lem that minimizes a cost function with respect to both word embeddings as well as classification accuracy. Analysis re- veals that SWESA provides an efficient way of estimating the dimension of the word embeddings that are to be learned. Experiments on several real world data sets show that SWESA has superior per- formance when compared to previously suggested approaches to word embeddings and sentiment analysis tasks. | A supervised neural network based model has been proposed by @cite_21 to classify Twitter data. The proposed algorithm learns sentiment specific word vectors, from tweets making use of emoticons in text to guide sentiment of words used in the text instead of annotated sentiment labels. The Recursive Neural Tensor Network (RNTN) proposed by @cite_18 classifies sentiment of text of varying length. To learn sentiment from long text, this model exploits compositionality in text by converting input text into the Sentiment Treebank format with annotated sentiment labels. The Sentiment Treebank is based on a data set introduced by Pang and Lee @cite_11 . This model performs particularly well on longer texts by exploiting compositionality as opposed to a regular bag of features approach. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_11"
],
"mid": [
"2251939518",
"",
"2952186591"
],
"abstract": [
"Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.",
"",
"We address the rating-inference problem, wherein rather than simply decide whether a review is \"thumbs up\" or \"thumbs down\", as in previous sentiment analysis work, one must determine an author's evaluation with respect to a multi-point scale (e.g., one to five \"stars\"). This task represents an interesting twist on standard multi-class text categorization because there are several different degrees of similarity between class labels; for example, \"three stars\" is intuitively closer to \"four stars\" than to \"one star\". We first evaluate human performance at the task. Then, we apply a meta-algorithm, based on a metric labeling formulation of the problem, that alters a given n-ary classifier's output in an explicit attempt to ensure that similar items receive similar labels. We show that the meta-algorithm can provide significant improvements over both multi-class and regression versions of SVMs when we employ a novel similarity measure appropriate to the problem."
]
} |
1708.03958 | 2746131160 | Human actions captured in video sequences are three-dimensional signals characterizing visual appearance and motion dynamics. To learn action patterns, existing methods adopt Convolutional and or Recurrent Neural Networks (CNNs and RNNs). CNN based methods are effective in learning spatial appearances, but are limited in modeling long-term motion dynamics. RNNs, especially Long Short-Term Memory (LSTM), are able to learn temporal motion dynamics. However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long. In this work, we propose Lattice-LSTM (L2STM), which extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This method effectively enhances the ability to model dynamics across time and addresses the non-stationary issue of long-term motion dynamics without significantly increasing the model complexity. Additionally, we introduce a novel multi-modal training procedure for training our network. Unlike traditional two-stream architectures which use RGB and optical flow information as input, our two-stream model leverages both modalities to jointly train both input gates and both forget gates in the network rather than treating the two streams as separate entities with no information about the other. We apply this end-to-end system to benchmark datasets (UCF-101 and HMDB-51) of human action recognition. Experiments show that on both datasets, our proposed method outperforms all existing ones that are based on LSTM and or CNNs of similar model complexities. | In light of this, Convolutional 3D (C3D) is proposed in @cite_18 to learn 3D convolution kernels in both space and time based on a straightforward extension of the established 2D CNNs. 
However, when filtering the video clips using 3D kernels, C3D only covers a short range of the sequence. @cite_20 incorporate motion information by training another neural network on optical flow @cite_30 . Taking advantage of the appearance and flow features, the accuracy of action recognition is significantly boosted, even by simply fusing probability scores. Since optical flow contains only short-term motion information, adding it does not enable CNNs to learn long-term motion transitions. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_20"
],
"mid": [
"1578985305",
"2952633803",
"2952186347"
],
"abstract": [
"Variational methods are among the most successful approaches to calculate the optical flow between two image frames. A particularly appealing formulation is based on total variation (TV) regularization and the robust L1 norm in the data fidelity term. This formulation can preserve discontinuities in the flow field and offers an increased robustness against illumination changes, occlusions and noise. In this work we present a novel approach to solve the TV-L1 formulation. Our method results in a very efficient numerical scheme, which is based on a dual formulation of the TV energy and employs an efficient point-wise thresholding step. Additionally, our approach can be accelerated by modern graphics processing units. We demonstrate the real-time performance (30 fps) of our approach for video inputs at a resolution of 320 × 240 pixels.",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8% accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification."
]
} |
1708.03958 | 2746131160 | Human actions captured in video sequences are three-dimensional signals characterizing visual appearance and motion dynamics. To learn action patterns, existing methods adopt Convolutional and or Recurrent Neural Networks (CNNs and RNNs). CNN based methods are effective in learning spatial appearances, but are limited in modeling long-term motion dynamics. RNNs, especially Long Short-Term Memory (LSTM), are able to learn temporal motion dynamics. However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long. In this work, we propose Lattice-LSTM (L2STM), which extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This method effectively enhances the ability to model dynamics across time and addresses the non-stationary issue of long-term motion dynamics without significantly increasing the model complexity. Additionally, we introduce a novel multi-modal training procedure for training our network. Unlike traditional two-stream architectures which use RGB and optical flow information as input, our two-stream model leverages both modalities to jointly train both input gates and both forget gates in the network rather than treating the two streams as separate entities with no information about the other. We apply this end-to-end system to benchmark datasets (UCF-101 and HMDB-51) of human action recognition. Experiments show that on both datasets, our proposed method outperforms all existing ones that are based on LSTM and or CNNs of similar model complexities. | Several attempts have been made to obtain a better combination of appearance and motion in order to improve recognition accuracy. 
@cite_5 extract spatio-temporal features using a sequential procedure, namely 2D spatial information extraction followed by 1D temporal information extraction. This end-to-end system considers both short (frame difference) and long (RGB frames with strides) motion patterns and achieves good performance. @cite_4 study a number of ways of fusing CNN towers both spatially and temporally in order to take advantage of the spatio-temporal information from the appearance and optical flow networks. They also propose a novel architecture @cite_0 that generalizes residual networks (ResNets) @cite_14 to the spatio-temporal domain. However, CNN-based methods cannot accurately model the dynamics by simply averaging scores across the time domain, even though the appearance features already achieve remarkable performance on other computer vision tasks. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_14",
"@cite_4"
],
"mid": [
"2952005526",
"2190635018",
"2949650786",
"2342662179"
],
"abstract": [
"Two-stream Convolutional Networks (ConvNets) have shown strong performance for human action recognition in videos. Recently, Residual Networks (ResNets) have arisen as a new technique to train extremely deep architectures. In this paper, we introduce spatiotemporal ResNets as a combination of these two approaches. Our novel architecture generalizes ResNets for the spatiotemporal domain by introducing residual connections in two ways. First, we inject residual connections between the appearance and motion pathways of a two-stream architecture to allow spatiotemporal interaction between the two streams. Second, we transform pretrained image ConvNets into spatiotemporal networks by equipping these with learnable convolutional filters that are initialized as temporal residual connections and operate on adjacent feature maps in time. This approach slowly increases the spatiotemporal receptive field as the depth of the model increases and naturally integrates image ConvNet design principles. The whole model is trained end-to-end to allow hierarchical learning of complex spatiotemporal features. We evaluate our novel spatiotemporal ResNet using two widely used action recognition benchmarks where it exceeds the previous state-of-the-art.",
"Human actions in video sequences are three-dimensional (3D) spatio-temporal signals characterizing both the visual appearance and motion dynamics of the involved humans and objects. Inspired by the success of convolutional neural networks (CNN) for image classification, recent attempts have been made to learn 3D CNNs for recognizing human actions in videos. However, partly due to the high complexity of training 3D convolution kernels and the need for large quantities of training videos, only limited success has been reported. This has triggered us to investigate in this paper a new deep architecture which can handle 3D signals more effectively. Specifically, we propose factorized spatio-temporal convolutional networks (FstCN) that factorize the original 3D convolution kernel learning as a sequential process of learning 2D spatial kernels in the lower layers (called spatial convolutional layers), followed by learning 1D temporal kernels in the upper layers (called temporal convolutional layers). We introduce a novel transformation and permutation operator to make factorization in FstCN possible. Moreover, to address the issue of sequence alignment, we propose an effective training and inference strategy based on sampling multiple video clips from a given action video sequence. We have tested FstCN on two commonly used benchmark datasets (UCF-101 and HMDB-51). Without using auxiliary training videos to boost the performance, FstCN outperforms existing CNN based methods and achieves comparable performance with a recent method that benefits from using auxiliary training videos.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results."
]
} |
1708.03958 | 2746131160 | Human actions captured in video sequences are three-dimensional signals characterizing visual appearance and motion dynamics. To learn action patterns, existing methods adopt Convolutional and or Recurrent Neural Networks (CNNs and RNNs). CNN based methods are effective in learning spatial appearances, but are limited in modeling long-term motion dynamics. RNNs, especially Long Short-Term Memory (LSTM), are able to learn temporal motion dynamics. However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long. In this work, we propose Lattice-LSTM (L2STM), which extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This method effectively enhances the ability to model dynamics across time and addresses the non-stationary issue of long-term motion dynamics without significantly increasing the model complexity. Additionally, we introduce a novel multi-modal training procedure for training our network. Unlike traditional two-stream architectures which use RGB and optical flow information as input, our two-stream model leverages both modalities to jointly train both input gates and both forget gates in the network rather than treating the two streams as separate entities with no information about the other. We apply this end-to-end system to benchmark datasets (UCF-101 and HMDB-51) of human action recognition. Experiments show that on both datasets, our proposed method outperforms all existing ones that are based on LSTM and or CNNs of similar model complexities. | In order to model the dynamics between frames, recurrent neural networks (RNNs), particularly long short-term memory (LSTM), have been considered for video based human action recognition. 
LSTM units, first proposed in @cite_16 , are recurrent modules capable of learning long-term dependencies using a hidden state augmented with nonlinear mechanisms that allow the state to propagate without modification. LSTMs use multiplicative gates to control access to the error signal propagating through the networks, alleviating the short-term memory problem of RNNs @cite_39 . @cite_24 extends LSTM to ConvLSTM, which models the relations among neighboring pixels in the spatial domain. ConvLSTM can thus learn spatial patterns along the temporal domain. | {
"cite_N": [
"@cite_24",
"@cite_16",
"@cite_39"
],
"mid": [
"1485009520",
"2064675550",
"2147568880"
],
"abstract": [
"The goal of precipitation nowcasting is to predict the future rainfall intensity in a local region over a relatively short period of time. Very few previous studies have examined this crucial and challenging weather forecasting problem from the machine learning perspective. In this paper, we formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem in which both the input and the prediction target are spatiotemporal sequences. By extending the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions, we propose the convolutional LSTM (ConvLSTM) and use it to build an end-to-end trainable model for the precipitation nowcasting problem. Experiments show that our ConvLSTM network captures spatiotemporal correlations better and consistently outperforms FC-LSTM and the state-of-the-art operational ROVER algorithm for precipitation nowcasting.",
"Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O. 1. Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.",
"The temporal distance between events conveys information essential for numerous sequential tasks such as motor control and rhythm detection. While Hidden Markov Models tend to ignore this information, recurrent neural networks (RNNs) can in principle learn to make use of it. We focus on Long Short-Term Memory (LSTM) because it has been shown to outperform other RNNs on tasks involving long time lags. We find that LSTM augmented by \"peephole connections\" from its internal cells to its multiplicative gates can learn the fine distinction between sequences of spikes spaced either 50 or 49 time steps apart without the help of any short training exemplars. Without external resets or teacher forcing, our LSTM variant also learns to generate stable streams of precisely timed spikes and other highly nonlinear periodic patterns. This makes LSTM a promising approach for tasks that require the accurate measurement or generation of time intervals."
]
} |
1708.03958 | 2746131160 | Human actions captured in video sequences are three-dimensional signals characterizing visual appearance and motion dynamics. To learn action patterns, existing methods adopt Convolutional and or Recurrent Neural Networks (CNNs and RNNs). CNN based methods are effective in learning spatial appearances, but are limited in modeling long-term motion dynamics. RNNs, especially Long Short-Term Memory (LSTM), are able to learn temporal motion dynamics. However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long. In this work, we propose Lattice-LSTM (L2STM), which extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This method effectively enhances the ability to model dynamics across time and addresses the non-stationary issue of long-term motion dynamics without significantly increasing the model complexity. Additionally, we introduce a novel multi-modal training procedure for training our network. Unlike traditional two-stream architectures which use RGB and optical flow information as input, our two-stream model leverages both modalities to jointly train both input gates and both forget gates in the network rather than treating the two streams as separate entities with no information about the other. We apply this end-to-end system to benchmark datasets (UCF-101 and HMDB-51) of human action recognition. Experiments show that on both datasets, our proposed method outperforms all existing ones that are based on LSTM and or CNNs of similar model complexities. | To address this, in VideoLSTM @cite_1 convolutions are hardwired in the soft-Attention LSTM @cite_12 . By stacking another RNN for motion modeling, an enhanced version of attention model is assembled. 
The input of ConvLSTM, @math , becomes @math by element-wise multiplication with the attention map @math . However, this complex architecture does not bring significant performance improvement. In fact, the performance is highly dependent on the iDT features @cite_8 . In other words, VideoLSTM does not characterize the motion dynamics well, even when several attention models are stacked on the LSTM. | {
"cite_N": [
"@cite_1",
"@cite_12",
"@cite_8"
],
"mid": [
"",
"2172806452",
"2105101328"
],
"abstract": [
"",
"We propose a soft attention based model for the task of action recognition in videos. We use multi-layered Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units which are deep both spatially and temporally. Our model learns to focus selectively on parts of the video frames and classifies videos after taking a few glimpses. The model essentially learns which parts in the frames are relevant for the task at hand and attaches higher importance to them. We evaluate the model on UCF-11 (YouTube Action), HMDB-51 and Hollywood2 datasets and analyze how the model focuses its attention depending on the scene and the action being performed.",
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art."
]
} |
1708.04146 | 2518952733 | The emergence of low-cost personal mobiles devices and wearable cameras and the increasing storage capacity of video-sharing websites have pushed forward a growing interest towards first-person videos. Since most of the recorded videos compose long-running streams with unedited content, they are tedious and unpleasant to watch. The fast-forward state-of-the-art methods are facing challenges of balancing the smoothness of the video and the emphasis in the relevant frames given a speed-up rate. In this work, we present a methodology capable of summarizing and stabilizing egocentric videos by extracting the semantic information from the frames. This paper also describes a dataset collection with several semantically labeled videos and introduces a new smoothness evaluation metric for egocentric videos that is used to test our method. | Video summarization methods capture the essential information of a video and create a shorter version, reducing the time needed to interpret the video content @cite_19 @cite_18 . Summarization methods are basically divided into two approaches: static storyboard or still-image abstract, where the most representative keyframes are selected to represent the video as a whole @cite_12 , @cite_14 ; and dynamic video skimming or moving-image abstract, where a series of video clips compose the output @cite_11 , @cite_4 . Despite the large number of video summarization techniques proposed over the past years, only a few works address summarization of egocentric videos @cite_14 , @cite_7 , @cite_0 , @cite_2 . Although video summarization techniques aim to keep semantic information, they cannot give a temporal perception of the video, because some parts of the input video are completely removed @cite_13 . | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"2106229755",
"2144139464",
"2120645068",
"",
"2098370286",
"2529272619",
"1948812921",
"2012754317",
"2109152179"
],
"abstract": [
"",
"We present a video summarization approach for egocentric or “wearable” camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video — such as the nearness to hands, gaze, and frequency of occurrence — and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results with 17 hours of egocentric data show the method's promise relative to existing techniques for saliency and summarization.",
"The authors propose a novel technique for video summarization based on singular value decomposition (SVD). For the input video sequence, we create a feature-frame matrix A, and perform the SVD on it. From this SVD, we are able, to not only derive the refined feature space to better cluster visually similar frames, but also define a metric to measure the amount of visual content contained in each frame cluster using its degree of visual changes. Then, in the refined feature space, we find the most static frame cluster, define it as the content unit, and use the context value computed from it as the threshold to cluster the rest of the frames. Based on this clustering result, either the optimal set of keyframes, or a summarized motion video with the user specified time length can be generated to support different user requirements for video browsing and content overview. Our approach ensures that the summarized video representation contains little redundancy, and gives equal attention to the same amount of contents.",
"We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video sub shots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between sub shots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subs hot summary. Whereas traditional methods optimize a summary's diversity or representative ness, ours explicitly accounts for how one sub-event \"leads to\" another-which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.",
"",
"In this paper, we present a semantic summarization algorithm that interfaces with the metadata and that works in compressed domain, in particular MPEG-1 and MPEG-2 videos. In enabling a summarization algorithm through high-level semantic content, we try to address two major problems. First, we present the facility provided in the DVA system that allows the semi-automatic creation of this metadata. Second, we address the main point of this system which is the utilization of this metadata to filter out frames, creating an abstract of a video summary quality survey indicates that the proposed method performs satisfactorily.",
"This paper proposes a novel approach and a new benchmark for video summarization. Thereby we focus on user videos, which are raw videos containing a set of interesting events. Our method starts by segmenting the video by using a novel “superframe” segmentation, tailored to raw videos. Then, we estimate visual interestingness per superframe using a set of low-, mid- and high-level features. Based on this scoring, we select an optimal subset of superframes to create an informative and interesting summary. The introduced benchmark comes with multiple human created summaries, which were acquired in a controlled psychological experiment. This data paves the way to evaluate summarization methods objectively and to get new insights in video summarization. When evaluating our method, we find that it generates high-quality results, comparable to manual, human-created summaries.",
"While egocentric cameras like GoPro are gaining popularity, the videos they capture are long, boring, and difficult to watch from start to end. Fast forwarding (i.e. frame sampling) is a natural choice for faster video browsing. However, this accentuates the shake caused by natural head motion, making the fast forwarded video useless.",
"In video summarization, a short video clip is made from lengthy video without losing its semantic content using significant scenes containing important frames, called keyframes. This process finds importance in video content management systems. The proposed method involves automatic summarization of motion picture based on human face. In this method, those frames within which the appearances of an actor or actress, selected by the user, occurs are treated as keyframes. In the first step, the video is segmented into shots by Mutual Information. Then it detects the available faces in the frames of each shot using the local Successive Mean Quantization Transform (SMQT) features and Sparse Network of Winnows (SNoW) classifier. Then the face of an actor of interest is selected to match with different available faces, already extracted, using Eigenfaces method. A shot is taken into consideration, if the method succeeds in finding at least one matched face in the shot. The selected shots are finally combined to create summarized video.",
"We propose a unified approach for summarization based on the analysis of video structures and video highlights. Our approach emphasizes both the content balance and perceptual quality of a summary. Normalized cut algorithm is employed to globally and optimally partition a video into clusters. A motion attention model based on human perception is employed to compute the perceptual quality of shots and clusters. The clusters, together with the computed attention values, form a temporal graph similar to Markov chain that inherently describes the evolution and perceptual importance of video clusters. In our application, the flow of a temporal graph is utilized to group similar clusters into scenes, while the attention values are used as guidelines to select appropriate subshots in scenes for summarization."
]
} |
1708.04146 | 2518952733 | The emergence of low-cost personal mobiles devices and wearable cameras and the increasing storage capacity of video-sharing websites have pushed forward a growing interest towards first-person videos. Since most of the recorded videos compose long-running streams with unedited content, they are tedious and unpleasant to watch. The fast-forward state-of-the-art methods are facing challenges of balancing the smoothness of the video and the emphasis in the relevant frames given a speed-up rate. In this work, we present a methodology capable of summarizing and stabilizing egocentric videos by extracting the semantic information from the frames. This paper also describes a dataset collection with several semantically labeled videos and introduces a new smoothness evaluation metric for egocentric videos that is used to test our method. | More recent methods optimize the frame selection @cite_3 , @cite_13 , @cite_17 . @cite_13 focus on adaptive frame selection based on minimizing an energy function. They model the video as a graph, mapping frames to nodes, with edge weights reflecting the cost of the transition between frames in the final video. The shortest path in the graph then yields the best frame transitions for composing the final video. @cite_3 present a more sophisticated algorithm which optimally selects frames from the input video through a joint optimization of camera motion smoothing and speed-up. They also perform 2D video stabilization to create the hyperlapse result. @cite_17 extended this work by expanding the field of view of the output video, using a mosaicking approach on input frames from single or multiple egocentric videos. | {
"cite_N": [
"@cite_13",
"@cite_3",
"@cite_17"
],
"mid": [
"1948812921",
"2003553461",
"2342525972"
],
"abstract": [
"While egocentric cameras like GoPro are gaining popularity, the videos they capture are long, boring, and difficult to watch from start to end. Fast forwarding (i.e. frame sampling) is a natural choice for faster video browsing. However, this accentuates the shake caused by natural head motion, making the fast forwarded video useless.",
"Long videos can be played much faster than real-time by recording only one frame per second or by dropping all but one frame each second, i.e., by creating a timelapse. Unstable hand-held moving videos can be stabilized with a number of recently described methods. Unfortunately, creating a stabilized timelapse, or hyperlapse, cannot be achieved through a simple combination of these two methods. Two hyperlapse methods have been previously demonstrated: one with high computational complexity and one requiring special sensors. We present an algorithm for creating hyperlapse videos that can handle significant high-frequency camera motion and runs in real-time on HD video. Our approach does not require sensor data, thus can be run on videos captured on any camera. We optimally select frames from the input video that best match a desired target speed-up while also resulting in the smoothest possible camera motion. We evaluate our approach using several input videos from a range of cameras and compare these results to existing methods.",
"The possibility of sharing one's point of view makes use of wearable cameras compelling. These videos are often long, boring and coupled with extreme shake as the camera is worn on a moving person. Fast forwarding (i.e. frame sampling) is a natural choice for faster video browsing. However, this accentuates the shake caused by natural head motion in an egocentric video, making the fast forwarded video useless. We propose EgoSampling, an adaptive frame sampling that gives more stable, fast forwarded, hyperlapse videos. Adaptive frame sampling is formulated as energy minimization, whose optimal solution can be found in polynomial time. We further turn the camera shake from a drawback into a feature, enabling the increase of the field-of-view. This is obtained when each output frame is mosaiced from several input frames. Stitching multiple frames also enables the generation of a single hyperlapse video from multiple egocentric videos, allowing even faster video consumption."
]
} |
1708.04146 | 2518952733 | The emergence of low-cost personal mobiles devices and wearable cameras and the increasing storage capacity of video-sharing websites have pushed forward a growing interest towards first-person videos. Since most of the recorded videos compose long-running streams with unedited content, they are tedious and unpleasant to watch. The fast-forward state-of-the-art methods are facing challenges of balancing the smoothness of the video and the emphasis in the relevant frames given a speed-up rate. In this work, we present a methodology capable of summarizing and stabilizing egocentric videos by extracting the semantic information from the frames. This paper also describes a dataset collection with several semantically labeled videos and introduces a new smoothness evaluation metric for egocentric videos that is used to test our method. | Although the output videos of the aforementioned methods are appreciable, these methods do not consider that scenes may have different relevance to the recorder. In our previous work @cite_8 we addressed this issue by slicing the video into semantic and non-semantic segments and controlling the playback speed of each type of segment based on its length. To decrease the shakiness still present in the output videos of @cite_8 , which is caused by the increased playback speed in non-semantic segments, in this work we propose an egocentric video stabilizer which uses information from the original video. We also improve the slicing strategy of @cite_8 to define the semantic regions more accurately. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2515191187"
],
"abstract": [
"Thanks to the low operational cost and large storage capacity of smartphones and wearable devices, people are recording many hours of daily activities, sport actions and home videos. These videos, also known as egocentric videos, are generally long-running streams with unedited content, which make them boring and visually unpalatable, bringing up the challenge to make egocentric videos more appealing. In this work we propose a novel methodology to compose the new fast-forward video by selecting frames based on semantic information extracted from images. The experiments show that our approach outperforms the state-of-the-art as far as semantic information is concerned and that it is also able to produce videos that are more pleasant to be watched."
]
} |
1708.04146 | 2518952733 | The emergence of low-cost personal mobiles devices and wearable cameras and the increasing storage capacity of video-sharing websites have pushed forward a growing interest towards first-person videos. Since most of the recorded videos compose long-running streams with unedited content, they are tedious and unpleasant to watch. The fast-forward state-of-the-art methods are facing challenges of balancing the smoothness of the video and the emphasis in the relevant frames given a speed-up rate. In this work, we present a methodology capable of summarizing and stabilizing egocentric videos by extracting the semantic information from the frames. This paper also describes a dataset collection with several semantically labeled videos and introduces a new smoothness evaluation metric for egocentric videos that is used to test our method. | Despite the large number of proposed video stabilization methods, they do not yield good results on egocentric videos @cite_10 , @cite_13 . One example is the work of @cite_5 , in which the input video is segmented into patches of length @math and a single homography matrix is applied to all frames belonging to a given patch. The @math value utilized was @math or @math seconds, which represents, for example, around half a minute in a @math fast-forward video. Over such an interval it is unlikely that all frames within the same patch depict the same scene, making it impractical to enforce homography consistency among them. | {
"cite_N": [
"@cite_13",
"@cite_5",
"@cite_10"
],
"mid": [
"1948812921",
"2027648882",
""
],
"abstract": [
"While egocentric cameras like GoPro are gaining popularity, the videos they capture are long, boring, and difficult to watch from start to end. Fast forwarding (i.e. frame sampling) is a natural choice for faster video browsing. However, this accentuates the shake caused by natural head motion, making the fast forwarded video useless.",
"Videos recorded on moving cameras are often known to be shaky due to unstable carrier motion and the video stabilization problem involves inferring the intended smooth motion to keep and the unintended shaky motion to remove. However, conventional methods typically require proper, scenario-specific parameter setting, which does not generalize well across different scenarios. Moreover, we observe that a stable video should satisfy two conditions: a smooth trajectory and consistent inter-frame transition. While conventional methods only target at the former condition, we address these two issues at the same time. In this paper, we propose a homography consistency based algorithm to directly extract the optimal smooth trajectory and evenly distribute the inter-frame transition. By optimizing in the homography domain, our method does not need further matrix decomposition and parameter adjustment, automatically adapting to all possible types of motion (eg. translational or rotational) and video properties (eg. frame rates). We test our algorithm on translational videos recorded from a car and rotational videos from a hovering aerial vehicle, both of high and low frame rates. Results show our method widely applicable to different scenarios without any need of additional parameter adjustment.",
""
]
} |
1708.03699 | 2748650022 | Experimenting with a dataset of approximately 1.6M user comments from a Greek news sports portal, we explore how a state of the art RNN-based moderation method can be improved by adding user embeddings, user type embeddings, user biases, or user type biases. We observe improvements in all cases, with user embeddings leading to the biggest performance gains. | In previous work @cite_6 , we showed that our -based method outperforms @cite_7 , the previous state of the art in user content moderation. uses character or word @math -gram features, no user-specific information, and an or classifier. Other related work on abusive content moderation was reviewed extensively in our previous work @cite_6 . Here we focus on previous work that considered user-specific features and user embeddings. | {
"cite_N": [
"@cite_7",
"@cite_6"
],
"mid": [
"2949089361",
"2616926666"
],
"abstract": [
"The damage personal attacks cause to online discourse motivates many platforms to try to curb the phenomenon. However, understanding the prevalence and impact of personal attacks in online platforms at scale remains surprisingly difficult. The contribution of this paper is to develop and illustrate a method that combines crowdsourcing and machine learning to analyze personal attacks at scale. We show an evaluation method for a classifier in terms of the aggregated number of crowd-workers it can approximate. We apply our methodology to English Wikipedia, generating a corpus of over 100k high quality human-labeled comments and 63M machine-labeled ones from a classifier that is as good as the aggregate of 3 crowd-workers, as measured by the area under the ROC curve and Spearman correlation. Using this corpus of machine-labeled scores, our methodology allows us to explore some of the open questions about the nature of online personal attacks. This reveals that the majority of personal attacks on Wikipedia are not the result of a few malicious users, nor primarily the consequence of allowing anonymous contributions from unregistered users.",
"Experimenting with a new dataset of 1.6M user comments from a Greek news portal and existing datasets of English Wikipedia comments, we show that an RNN outperforms the previous state of the art in moderation. A deep, classification-specific attention mechanism improves further the overall performance of the RNN. We also compare against a CNN and a word-list baseline, considering both fully automatic and semi-automatic moderation."
]
} |
1708.03699 | 2748650022 | Experimenting with a dataset of approximately 1.6M user comments from a Greek news sports portal, we explore how a state of the art RNN-based moderation method can be improved by adding user embeddings, user type embeddings, user biases, or user type biases. We observe improvements in all cases, with user embeddings leading to the biggest performance gains. | detect sarcasm in tweets. Their best system uses a word-based Convolutional Neural Network ( ). The feature vector produced by the (representing the content of the tweet) is concatenated with the user embedding of the author, and passed on to an that classifies the tweet as sarcastic or not. This method outperforms a previous state of the art sarcasm detection method @cite_4 that relies on an classifier with hand-crafted content and user-specific features. We use an instead of a , and we feed the comment and user embeddings to a simpler layer (Eq. ), instead of an . discard unknown users, unlike our experiments, and consider only sarcasm, whereas moderation also involves profanity, hate speech, bullying, threats etc. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2263859238"
],
"abstract": [
"Sarcasm requires some shared knowledge between speaker and audience; it is a profoundly contextual phenomenon. Most computational approaches to sarcasm detection, however, treat it as a purely linguistic matter, using information such as lexical cues and their corresponding sentiment as predictive features. We show that by including extra-linguistic information from the context of an utterance on Twitter — such as properties of the author, the audience and the immediate communicative environment — we are able to achieve gains in accuracy compared to purely linguistic features in the detection of this complex phenomenon, while also shedding light on features of interpersonal interaction that enable sarcasm in conversation."
]
} |
1708.03655 | 2745388327 | Efficient motion intent communication is necessary for safe and collaborative work environments with collocated humans and robots. Humans efficiently communicate their motion intent to other humans through gestures, gaze, and social cues. However, robots often have difficulty efficiently communicating their motion intent to humans via these methods. Many existing methods for robot motion intent communication rely on 2D displays, which require the human to continually pause their work and check a visualization. We propose a mixed reality head-mounted display visualization of the proposed robot motion over the wearer's real-world view of the robot and its environment. To evaluate the effectiveness of this system against a 2D display visualization and against no visualization, we asked 32 participants to label different robot arm motions as either colliding or non-colliding with blocks on a table. We found a 16% increase in accuracy with a 62% decrease in the time it took to complete the task compared to the next best system. This demonstrates that a mixed-reality HMD allows a human to more quickly and accurately tell where the robot is going to move than the compared baselines. | Humans use many non-verbal cues to communicate motion intent. There have been some successes at approximating these cues in humanoid robots, such as with gestures @cite_17 and gaze @cite_21 , including via robot anthropomorphism @cite_19 . However, robots often lack the faculty or subtlety to physically reproduce human non-verbal cues---especially robots that are not of human form. One alternative is to use animation and animated storytelling techniques, such as forming suggestive poses or generating initial movements @cite_9 . This increases legibility: the ability to infer the robot's goal through its directed motion @cite_30 . However, these methods still lack the ability to transparently communicate complex paths and motions. 
Further, tasks involving close proximity teamwork may require more detailed knowledge of how the robot will act both before and during the motion, such as in collaborative furniture assembly @cite_3 and co-located teleoperation @cite_32 . | {
"cite_N": [
"@cite_30",
"@cite_9",
"@cite_21",
"@cite_32",
"@cite_3",
"@cite_19",
"@cite_17"
],
"mid": [
"1992154343",
"2016721335",
"1983423649",
"",
"1978684252",
"1995782858",
"55328052"
],
"abstract": [
"A key requirement for seamless human-robot collaboration is for the robot to make its intentions clear to its human collaborator. A collaborative robot's motion must be legible, or intent-expressive. Legibility is often described in the literature as and effect of predictable, unsurprising, or expected motion. Our central insight is that predictability and legibility are fundamentally different and often contradictory properties of motion. We develop a formalism to mathematically define and distinguish predictability and legibility of motion. We formalize the two based on inferences between trajectories and goals in opposing directions, drawing the analogy to action interpretation in psychology. We then propose mathematical models for these inferences based on optimizing cost, drawing the analogy to the principle of rational action. Our experiments validate our formalism's prediction that predictability and legibility can contradict, and provide support for our models. Our findings indicate that for robots to seamlessly collaborate with humans, they must change the way they plan their motion.",
"The animation techniques of anticipation and reaction can help create robot behaviors that are human readable such that people can figure out what the robot is doing, reasonably predict what the robot will do next, and ultimately interact with the robot in an effective way. By showing forethought before action and expressing a reaction to the task outcome (success or failure), we prototyped a set of human-robot interaction behaviors. In a 2 (forethought vs. none: between) x 2 (reaction to outcome vs. none: between) x 2 (success vs. failure task outcome: within) experiment, we tested the influences of forethought and reaction upon people's perceptions of the robot and the robot's readability. In this online video prototype experiment (N=273), we have found support for the hypothesis that perceptions of robots are influenced by robots showing forethought, the task outcome (success or failure), and showing goal-oriented reactions to those task outcomes. Implications for theory and design are discussed.",
"Human communication involves a number of nonverbal cues that are seemingly unintentional, unconscious, and automatic-both in their production and perception-and convey rich information on the emotional state and intentions of an individual. One family of such cues is called \"nonverbal leakage.\" In this paper, we explore whether people can read nonverbal leakage cues-particularly gaze cues-in humanlike robots and make inferences on robots' intentions, and whether the physical design of the robot affects these inferences. We designed a gaze cue for Geminoid-a highly humanlike android-and Robovie-a robot with stylized, abstract humanlike features-that allowed the robots to \"leak\" information on what they might have in mind. In a controlled laboratory experiment, we asked participants to play a game of guessing with either of the robots and evaluated how the gaze cue affected participants' task performance. We found that the gaze cue did, in fact, lead to better performance, from which we infer that the cue led to attributions of mental states and intentionality. Our results have implications for robot design, particularly for designing expression of intentionality, and for our understanding of how people respond to human social cues when they are enacted by robots.",
"",
"",
"When humans and mobile robots share the same space, one of their challenges is to navigate around each other and manage their mutual navigational intents. While humans have developed excellent skills in inferring their counterpart's intentions via a number of implicit and non-verbal cues, making navigation also in crowds an ease, this kind of effective and efficient communication often falls short in human-robot encounters. In this paper, two alternative approaches to convey navigational intent of a mobile robot to humans in a shared environment are proposed and analysed. The first is utilising anthropomorphic features of the mobile robot to realise an implicit joint attention using gaze to represent the direction of navigational intent. In the second approach, a more technical design adopting the semantics of car's turn indicators, has been implemented. The paper compares both approaches with each other and against a control behaviour without any communication of intent. Both approaches show statistically significant differences in comparison to the control behaviour. However, the second approach using indicators has shown as being more effective in conveying the intent and also has a higher positive impact on the comfort of the humans encountering the robot.",
"A framework and methodology to realize robot-to-human behavioral expression is proposed. Human-robot symbiosis requires to enhance nonverbal communication between humans and robots. The proposed methodology is based on movement analysis theories of dance psychology researchers, namely Laban, Lamb and Kestenberg. Two experiments on robot-to-human behavioral expression are also presented to support the methodology. One is an experiment to produce familiarity with robot-to-human tactile reaction. The other is an experiment to express a robot's emotions by its dances. This methodology will be a key to realize robots that work close to humans cooperatively."
]
} |
1708.03655 | 2745388327 | Efficient motion intent communication is necessary for safe and collaborative work environments with collocated humans and robots. Humans efficiently communicate their motion intent to other humans through gestures, gaze, and social cues. However, robots often have difficulty efficiently communicating their motion intent to humans via these methods. Many existing methods for robot motion intent communication rely on 2D displays, which require the human to continually pause their work and check a visualization. We propose a mixed reality head-mounted display visualization of the proposed robot motion over the wearer's real-world view of the robot and its environment. To evaluate the effectiveness of this system against a 2D display visualization and against no visualization, we asked 32 participants to label different robot arm motions as either colliding or non-colliding with blocks on a table. We found a 16% increase in accuracy with a 62% decrease in the time it took to complete the task compared to the next best system. This demonstrates that a mixed-reality HMD allows a human to more quickly and accurately tell where the robot is going to move than the compared baselines. | We can adapt the real-world environment around the human-robot collaboration to help indicate robot intent. One way is to combine light projectors with object tracking software and virtual graphics to build a general-purpose augmented environment. This has been used to convey a shared work space, robot navigational intention, and safety information @cite_0 @cite_35 @cite_29 . However, building special-purpose environments is time-consuming and expensive, with limitations due to the occlusion of light by objects in the environment, limits on the number of people able to see perspective-correct graphics, and a requirement for controlled lighting conditions. | {
"cite_N": [
"@cite_0",
"@cite_35",
"@cite_29"
],
"mid": [
"",
"2586209707",
"2555647367"
],
"abstract": [
"",
"In this paper, we present SPRinT (Smart Phone Pad and Robot for Tele-operation and Tele-presence), a tele-presence robot for HRI controlled by a smart phone. It uses the projection display as a main communication channel between the local and remote users and we describe the interaction model for such an HRI scenario. We focus on the interaction model and interplay among the robot controlling (local) user, tele-presence robot, and the counterpart person in the remote environment the local user attempting to interact with using the projection display. We outline the technical and interface requirements for understanding the target remote space, and initiating and setting up such an information exchange for the local user (and partly for the remote).",
"Trained human co-workers can often easily predict each other's intentions based on prior experience. When collaborating with a robot coworker, however, intentions are hard or impossible to infer. This difficulty of mental introspection makes human-robot collaboration challenging and can lead to dangerous misunderstandings. In this paper, we present a novel, object-aware projection technique that allows robots to visualize task information and intentions on physical objects in the environment. The approach uses modern object tracking methods in order to display information at specific spatial locations taking into account the pose and shape of surrounding objects. As a result, a human co-worker can be informed in a timely manner about the safety of the workspace, the site of next robot manipulation tasks, and next subtasks to perform. A preliminary usability study compares the approach to collaboration approaches based on monitors and printed text. The study indicates that, on average, the user effectiveness and satisfaction is higher with the projection based approach."
]
} |
1708.03655 | 2745388327 | Efficient motion intent communication is necessary for safe and collaborative work environments with collocated humans and robots. Humans efficiently communicate their motion intent to other humans through gestures, gaze, and social cues. However, robots often have difficulty efficiently communicating their motion intent to humans via these methods. Many existing methods for robot motion intent communication rely on 2D displays, which require the human to continually pause their work and check a visualization. We propose a mixed reality head-mounted display visualization of the proposed robot motion over the wearer's real-world view of the robot and its environment. To evaluate the effectiveness of this system against a 2D display visualization and against no visualization, we asked 32 participants to label different robot arm motions as either colliding or non-colliding with blocks on a table. We found a 16% increase in accuracy with a 62% decrease in the time it took to complete the task compared to the next best system. This demonstrates that a mixed-reality HMD allows a human to more quickly and accurately tell where the robot is going to move than the compared baselines. | As HoloLens and its contemporaries are relatively new as pieces of integrated technology, there is little direct evidence to support the speculation that optical HMDs will provide natural robot intent communication. However, hypotheses may be informed by literature in the parallel technology of virtual reality (VR) which, in a similar way to mixed reality (MR), provides head-tracked stereo display of 3D graphics to create immersion. In VR, 3D spatial reasoning gains have been tested @cite_15 . found that head-tracked displays outperform stationary displays for a visual search task @cite_22 . Ware and Franck find a head-tracked stereo display 3 @math less erroneous than a 2D display for visually assessing graph connectivity @cite_6 . 
measured performance gains in Tri-D chess for first-person perspective VR HMDs over third-person perspective 2D displays (like RViz) @cite_36 . found navigation through a 3D virtual building was faster using HMDs over 2D displays, though with no accuracy increase @cite_39 . | {
"cite_N": [
"@cite_22",
"@cite_36",
"@cite_6",
"@cite_39",
"@cite_15"
],
"mid": [
"2114117853",
"2571941414",
"2158418569",
"2128832650",
"2562199918"
],
"abstract": [
"Head-mounted displays, as popularized by virtual reality systems, offer the opportunity to immerse a user in a synthetically generated environment. While there is much anecdotal evidence that this is a qualitative jump in the user interface, there is little quantitative data to establish that emersion improves task performance. The authors present the results of a user study: users performing a generic search task decrease task performance time by roughly half (42 reduction) when they change from a stationary display to a head-mounted display with identical properties (resolution, field-of-view, etc.). A second result is that users who practice with the head-mounted display reduce task completion time by 23 in later trials with the stationary display, suggesting a transfer effect. >",
"This paper describes an experiment to assess the influence of immersion on performance in immersive virtual environments. The task involved Tri-Dimensional Chess, and required subjects to reproduce on a real chess board the state of board learned from a sequence of moves witnessed in a virtual environment. Twenty four subjects were allocated to a factorial design consisting of two levels of immersion (exocentric screen based, and egocentric HMD based), and two kinds of environment (plain and realistic). The results suggest that egocentric subjects performed better than exocentric, and those in the more realistic environment performed better than those in the less realistic environment. Previous knowledge of chess, and amount of virtual practice were also significant, and may be considered as control variables to equalise these factors amongst the subjects. Other things being equal, males remembered the moves better than females, although female performance improved with higher spatial ability test score. The paper also attempts to clarify the relationship between immersion, presence and performance, and locates the experiment within such a theoretical framework.",
"An experiment is reported which tests whether network information is more effectively displayed in a three dimensional space than in a two dimensional space. The experimental task is to trace a path in a network and the experiment is carried out in 2D, in a 3D stereo view, in a 2D view with head coupled perspective, and in a 3D stereo view with head coupled perspective; this last condition creates a localized virtual reality display. The results show that the motion parallax obtained from the head coupling of perspective is more important than stereopsis in revealing structural information. Overall the results show that three times as much information can be perceived in the head coupled stereo view as in the 2D view. >",
"Participants used a helmet-mounted display (HMD) and a desk-top (monitor) display to learn the layouts of two large-scale virtual environments (VEs) through repeated, direct navigational experience. Both VEs were “virtual buildings” containing more than seventy rooms. Participants using the HMD navigated the buildings significantly more quickly and developed a significantly more accurate sense of relative straight-line distance. There was no significant difference between the two types of display in terms of the distance that participants traveled or the mean accuracy of their direction estimates. Behavioral analyses showed that participants took advantage of the natural, head-tracked interface provided by the HMD in ways that included “looking around” more often while traveling through the VEs, and spending less time stationary in the VEs while choosing a direction in which to travel.",
""
]
} |
1708.03655 | 2745388327 | Efficient motion intent communication is necessary for safe and collaborative work environments with collocated humans and robots. Humans efficiently communicate their motion intent to other humans through gestures, gaze, and social cues. However, robots often have difficulty efficiently communicating their motion intent to humans via these methods. Many existing methods for robot motion intent communication rely on 2D displays, which require the human to continually pause their work and check a visualization. We propose a mixed reality head-mounted display visualization of the proposed robot motion over the wearer's real-world view of the robot and its environment. To evaluate the effectiveness of this system against a 2D display visualization and against no visualization, we asked 32 participants to label different robot arm motions as either colliding or non-colliding with blocks on a table. We found a 16% increase in accuracy with a 62% decrease in the time it took to complete the task compared to the next best system. This demonstrates that a mixed-reality HMD allows a human to more quickly and accurately tell where the robot is going to move than the compared baselines. | Not all experiments in this area favor large-format VR. Many prior works compare immersive head-tracked CAVE displays against desktop and 'fishtank VR' displays, and often smaller, higher-resolution displays yield better performance thanks to faster visual scanning @cite_27 @cite_33 . Sousa reviewed all HMD versus 2D display comparisons in the literature up to 2009, and found their results broadly conflicting. They then conducted their own comparison for 3D navigation: on average, the desktop setup was better than the VR HMDs @cite_18 . | {
"cite_N": [
"@cite_27",
"@cite_18",
"@cite_33"
],
"mid": [
"2123089128",
"2154868891",
""
],
"abstract": [
"This article summarizes a user study of viewing 3D geometry on large-screen display devices. The geometry models the structure of a complex physical object. Our results show that the crispness of a display device (intraframe performance) must be considered along with the speed at which new frames can be computed (interframe performance). It's important to consider crispness from the user's perspective, using values that aren't often published in device specifications. Equally important is the framework for different types of 3D data and the categorization of display technology and techniques.",
"Virtual Reality (VR) has been constantly evolving since its early days, and is now a fundamental technology in different application areas. User evaluation is a crucial step in the design and development of VR systems that do respond to users' needs, as well as for identifying applications that indeed gain from the use of such technology. Yet, there is not much work reported concerning usability evaluation and validation of VR systems, when compared with the traditional desktop setup. The paper presents a user study performed, as a first step, for the evaluation of a low-cost VR system using a Head-Mounted Display (HMD). That system was compared to a traditional desktop setup through an experiment that assessed user performance, when carrying out navigation tasks in a game scenario for a short period. The results show that, although users were generally satisfied with the VR system, and found the HMD interaction intuitive and natural, most performed better with the desktop setup.",
""
]
} |
1708.03878 | 2748079855 | Sensors are present in various forms all around the world such as mobile phones, surveillance cameras, smart televisions, intelligent refrigerators and blood pressure monitors. Usually, most of the sensors are a part of some other system with similar sensors that compose a network. One of such networks is composed of millions of sensors connected to the Internet which is called Internet of Things (IoT). With the advances in wireless communication technologies, multimedia sensors and their networks are expected to be major components in IoT. Many studies have already been done on wireless multimedia sensor networks in diverse domains like fire detection, city surveillance, early warning systems, etc. All those applications position sensor nodes and collect their data for a long time period with real-time data flow, which is considered as big data. Big data may be structured or unstructured and needs to be stored for further processing and analyzing. Analyzing multimedia big data is a challenging task requiring a high-level modeling to efficiently extract valuable information knowledge from data. In this study, we propose a big database model based on graph database model for handling data generated by wireless multimedia sensor networks. We introduce a simulator to generate synthetic data and store and query big data using graph model as a big database. For this purpose, we evaluate the well-known graph-based NoSQL databases, Neo4j and OrientDB, and a relational database, MySQL. We have run a number of query experiments on our implemented simulator to show which database system(s) for surveillance in wireless multimedia sensor networks is efficient and scalable. | Over the years, various methods have been used for wireless sensor network data representation and management ( @cite_3 @cite_22 ). Our approach differs mainly in its focus on big data, its use of graph database storage, and our unique graph data model. | {
"cite_N": [
"@cite_22",
"@cite_3"
],
"mid": [
"2133151613",
"97559023"
],
"abstract": [
"Recent technological advances enable realization of wireless sensor networks suitable for many applications including environmental monitoring, biological and chemical contamination detection, earthquake management. Depending on the application, the properties of sensor network databases include a distributed environment, approximate and long-running queries, large volumes of data, uncertainty and fuzziness in data and in queries, activeness, etc., which bring out new database processing and management problems. In particular, reactive sensor network database applications must be able to detect the occurrences of specific events or changes in the network state, and to respond by automatically executing the appropriate application logic. For such applications, an ECA rule-based fuzzy active database approach fits well. In this study, we propose an active database approach employing a fuzzy Petri net model, which processes uncertain sensor data and handles flexible (i.e., fuzzy and approximate) and continuous queries. The proposed model provides a natural data processing and management approach for sensor and actuator network applications that require reactive behavior.",
"In the recent past, search in sensor systems focused on node hardware constraints and very limited energy resources. But nowadays, that new applications need data processing with temporal constraints in their tasks; then one of the new challenges faced by wireless sensor networks (WSNs) is handling real-time storage and querying the data they process. Two main approaches to storage and querying data are generally considered warehousing and distributed. The warehousing approach stores data in a central database and then queries may be performed to it. In a distributed approach, sensor devices are considered as local databases and data are managed locally. The data collected by sensors must represent the current state of the environment; for this reason they are subject to logic and time constraints. Then, this paper identifies the main specifications of real-time data management and presents the available real-time data management solutions for WSNs, in order to discuss them and identify some open issues and provide guidelines for further contributions."
]
} |
1708.03878 | 2748079855 | Sensors are present in various forms all around the world such as mobile phones, surveillance cameras, smart televisions, intelligent refrigerators and blood pressure monitors. Usually, most of the sensors are a part of some other system with similar sensors that compose a network. One of such networks is composed of millions of sensors connected to the Internet which is called Internet of Things (IoT). With the advances in wireless communication technologies, multimedia sensors and their networks are expected to be major components in IoT. Many studies have already been done on wireless multimedia sensor networks in diverse domains like fire detection, city surveillance, early warning systems, etc. All those applications position sensor nodes and collect their data for a long time period with real-time data flow, which is considered as big data. Big data may be structured or unstructured and needs to be stored for further processing and analyzing. Analyzing multimedia big data is a challenging task requiring a high-level modeling to efficiently extract valuable information knowledge from data. In this study, we propose a big database model based on graph database model for handling data generated by wireless multimedia sensor networks. We introduce a simulator to generate synthetic data and store and query big data using graph model as a big database. For this purpose, we evaluate the well-known graph-based NoSQL databases, Neo4j and OrientDB, and a relational database, MySQL. We have run a number of query experiments on our implemented simulator to show which database system(s) for surveillance in wireless multimedia sensor networks is efficient and scalable. | @cite_27 discuss big data with spatial data received from wireless sensors using real-life scenarios. One of the scenarios is related to smart cities, which is similar to the surveillance domain. 
They propose a scalable solution using Hadoop and the HBase NoSQL database to prototype a platform for storing and processing wireless data. We also design and implement a simulation prototype from the storage layer to the analytics layer and, more importantly, we propose a graph-based data model on top of that architecture. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2100937240"
],
"abstract": [
"In this article we demonstrate that spatial big data can play a key role in many emerging wireless networking applications. We also argue that spatial and spatiotemporal problems have their own very distinct role in the big data context compared to the commonly considered relational problems. We describe three major application scenarios for spatial big data, each imposing specific design and research challenges. We then present our work on developing highly scalable parallel processing frameworks for spatial data in the Hadoop framework using the MapReduce computational model. Our results show that using Hadoop enables highly scalable implementations of algorithms for common spatial data processing problems. However, development of these implementations requires significant specialized knowledge, demonstrating the need for development of more user-friendly alternatives."
]
} |
1708.03878 | 2748079855 | Abstract Sensors are present in various forms all around the world such as mobile phones, surveillance cameras, smart televisions, intelligent refrigerators and blood pressure monitors. Usually, most of the sensors are a part of some other system with similar sensors that compose a network. One of such networks is composed of millions of sensors connected to the Internet which is called Internet of Things (IoT). With the advances in wireless communication technologies, multimedia sensors and their networks are expected to be major components in IoT. Many studies have already been done on wireless multimedia sensor networks in diverse domains like fire detection, city surveillance, early warning systems, etc. All those applications position sensor nodes and collect their data for a long time period with real-time data flow, which is considered as big data. Big data may be structured or unstructured and needs to be stored for further processing and analyzing. Analyzing multimedia big data is a challenging task requiring a high-level modeling to efficiently extract valuable information knowledge from data. In this study, we propose a big database model based on graph database model for handling data generated by wireless multimedia sensor networks. We introduce a simulator to generate synthetic data and store and query big data using graph model as a big database. For this purpose, we evaluate the well-known graph-based NoSQL databases, Neo4j and OrientDB, and a relational database, MySQL. We have run a number of query experiments on our implemented simulator to show that which database system(s) for surveillance in wireless multimedia sensor networks is efficient and scalable. | @cite_11 present a survey paper on graph database models. They compare graph database models with other database models, such as the relational model. In this paper, we also compare well-known graph database models with the relational database model. 
Furthermore, we perform queries to benchmark the performance of the databases. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2114507260"
],
"abstract": [
"Graph database models can be defined as those in which data structures for the schema and instances are modeled as graphs or generalizations of them, and data manipulation is expressed by graph-oriented operations and type constructors. These models took off in the eighties and early nineties alongside object-oriented models. Their influence gradually died out with the emergence of other database models, in particular geographical, spatial, semistructured, and XML. Recently, the need to manage information with graph-like nature has reestablished the relevance of this area. The main objective of this survey is to present the work that has been conducted in the area of graph database modeling, concentrating on data structures, query languages, and integrity constraints."
]
} |
1708.03878 | 2748079855 | Abstract Sensors are present in various forms all around the world such as mobile phones, surveillance cameras, smart televisions, intelligent refrigerators and blood pressure monitors. Usually, most of the sensors are a part of some other system with similar sensors that compose a network. One of such networks is composed of millions of sensors connected to the Internet which is called Internet of Things (IoT). With the advances in wireless communication technologies, multimedia sensors and their networks are expected to be major components in IoT. Many studies have already been done on wireless multimedia sensor networks in diverse domains like fire detection, city surveillance, early warning systems, etc. All those applications position sensor nodes and collect their data for a long time period with real-time data flow, which is considered as big data. Big data may be structured or unstructured and needs to be stored for further processing and analyzing. Analyzing multimedia big data is a challenging task requiring a high-level modeling to efficiently extract valuable information knowledge from data. In this study, we propose a big database model based on graph database model for handling data generated by wireless multimedia sensor networks. We introduce a simulator to generate synthetic data and store and query big data using graph model as a big database. For this purpose, we evaluate the well-known graph-based NoSQL databases, Neo4j and OrientDB, and a relational database, MySQL. We have run a number of query experiments on our implemented simulator to show that which database system(s) for surveillance in wireless multimedia sensor networks is efficient and scalable. | Another survey paper, written by Felemban @cite_21 , is about border surveillance. His research surveys the literature on experimental work in border surveillance and intrusion detection using WSN technology. 
Our research differs from the existing works by employing a graph-based approach for the surveillance domain and by focusing on the simulation of big data. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2064909103"
],
"abstract": [
"Wireless Sensor Network (WSN) has been emerging in the last decade as a powerful tool for connecting physical and digital world. WSN has been used in many applications such habitat monitoring, building monitoring, smart grid and pipeline monitoring. In addition, few researchers have been experimenting with WSN in many mission-critical applications such as military applications. This paper surveys the literature for experimenting work done in border surveillance and intrusion detection using the technology of WSN. The potential benefits of using WSN in border surveillance are huge; however, up to our knowledge very few attempts of solving many critical issues about this application could be found in the literature."
]
} |
1708.03878 | 2748079855 | Abstract Sensors are present in various forms all around the world such as mobile phones, surveillance cameras, smart televisions, intelligent refrigerators and blood pressure monitors. Usually, most of the sensors are a part of some other system with similar sensors that compose a network. One of such networks is composed of millions of sensors connected to the Internet which is called Internet of Things (IoT). With the advances in wireless communication technologies, multimedia sensors and their networks are expected to be major components in IoT. Many studies have already been done on wireless multimedia sensor networks in diverse domains like fire detection, city surveillance, early warning systems, etc. All those applications position sensor nodes and collect their data for a long time period with real-time data flow, which is considered as big data. Big data may be structured or unstructured and needs to be stored for further processing and analyzing. Analyzing multimedia big data is a challenging task requiring a high-level modeling to efficiently extract valuable information knowledge from data. In this study, we propose a big database model based on graph database model for handling data generated by wireless multimedia sensor networks. We introduce a simulator to generate synthetic data and store and query big data using graph model as a big database. For this purpose, we evaluate the well-known graph-based NoSQL databases, Neo4j and OrientDB, and a relational database, MySQL. We have run a number of query experiments on our implemented simulator to show that which database system(s) for surveillance in wireless multimedia sensor networks is efficient and scalable. | PipeNet @cite_17 is a multi-layered wireless sensor network application focused on pipeline monitoring. The system aims to detect leaks and other anomalies in water pipelines. 
They used various types of sensors, such as pressure, pH and ultrasonic sensors, on top of the Intel Mote platform. Our sensor nodes are built on the Raspberry Pi platform and have seismic, acoustic, and PIR sensors as well as a multimedia camera. A camera requires further analysis such as image processing and feature extraction. Their multi-layer architecture is similar to our prototype, but in another domain with different sensors and different analytical approaches. They analyze the collected multi-modal data to detect leaks, whereas we try to identify objects and track their movement. Moreover, we treat our sensor data as big data. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2120117050"
],
"abstract": [
"US water utilities are faced with mounting operational and maintenance costs as a result of aging pipeline infrastructures. Leaks and ruptures in water supply pipelines and blockages and overflow events in sewer collectors cost millions of dollars a year, and monitoring and repairing this underground infrastructure presents a severe challenge. In this paper, we discuss how wireless sensor networks (WSNs) can increase the spatial and temporal resolution of operational data from pipeline infrastructures and thus address the challenge of near real-time monitoring and eventually control. We focus on the use of WSNs for monitoring large diameter bulk-water transmission pipelines. We outline a system, PipeNet, we have been developing for collecting hydraulic and acoustic vibration data at high sampling rates as well as algorithms for analyzing this data to detect and locate leaks. Challenges include sampling at high data rates, maintaining aggressive duty cycles, and ensuring tightly time- synchronized data collection, all under a strict power budget. We have carried out an extensive field trial with Boston Water and Sewer Commission in order to evaluate some of the critical components of PipeNet. Along with the results of this preliminary trial, we describe the results of extensive laboratory experiments which are used to evaluate our analysis and data processing solutions. Our prototype deployment has led to the development of a reusable, field-reprogrammable software infrastructure for distributed high-rate signal processing in wireless sensor networks, which we also describe."
]
} |
1708.03878 | 2748079855 | Abstract Sensors are present in various forms all around the world such as mobile phones, surveillance cameras, smart televisions, intelligent refrigerators and blood pressure monitors. Usually, most of the sensors are a part of some other system with similar sensors that compose a network. One of such networks is composed of millions of sensors connected to the Internet which is called Internet of Things (IoT). With the advances in wireless communication technologies, multimedia sensors and their networks are expected to be major components in IoT. Many studies have already been done on wireless multimedia sensor networks in diverse domains like fire detection, city surveillance, early warning systems, etc. All those applications position sensor nodes and collect their data for a long time period with real-time data flow, which is considered as big data. Big data may be structured or unstructured and needs to be stored for further processing and analyzing. Analyzing multimedia big data is a challenging task requiring a high-level modeling to efficiently extract valuable information knowledge from data. In this study, we propose a big database model based on graph database model for handling data generated by wireless multimedia sensor networks. We introduce a simulator to generate synthetic data and store and query big data using graph model as a big database. For this purpose, we evaluate the well-known graph-based NoSQL databases, Neo4j and OrientDB, and a relational database, MySQL. We have run a number of query experiments on our implemented simulator to show that which database system(s) for surveillance in wireless multimedia sensor networks is efficient and scalable. | Suvendu @cite_25 propose an analytic architecture for big data to detect intruders using camera sensors. Our work differs by using additional scalar sensors, such as acoustic and seismic sensors, and by proposing a graph-based data model. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2301250350"
],
"abstract": [
"Barrier coverage in Wireless Sensor Networks (WSNs) is an important research issue as intruder detection is the main purpose of deploying wireless sensors over a specified monitoring region. In WSNs, excessive volume and variety of sensor data are generated, which need to be analyzed for accurate measurement of the image in terms of width and resolution. In this paper, a three layered big data analytic architecture is designed to analyze the data generated during the construction of the barrier and detection of the intruder using camera sensors. Besides, a cloud layer is designed for storing the analyzed data to study the behavior of the intruder. In order to minimize the number of camera sensors for constructing the barrier, algorithms are designed to construct the single barrier with limited node mobility and the barrier path Quality of Sensing (QoS) is maintained with a minimum number of camera sensors. Simulation results show that our algorithms can construct 100 of the barrier with fewer number of camera sensors and average data processing time can be reduced by using parallel servers even if for larger size of data."
]
} |
1708.03615 | 2745877799 | We present a novel unsupervised method for face identity learning from video sequences. The method exploits the ResNet deep network for face detection and VGGface fc7 face descriptors together with a smart learning mechanism that exploits the temporal coherence of visual data in video streams. We present a novel feature matching solution based on Reverse Nearest Neighbour and a feature forgetting strategy that supports incremental learning with memory size control, while time progresses. It is shown that the proposed learning procedure is asymptotically stable and can be effectively applied to relevant applications like multiple face tracking. | One key point of the method is the exploitation of video temporal coherence as a form of weak supervision. This idea was suggested by @cite_11 , but was essentially applied with some success to predict future frames with unsupervised feature learning, @cite_14 among the most notable experiments. | {
"cite_N": [
"@cite_14",
"@cite_11"
],
"mid": [
"2951242004",
"219040644"
],
"abstract": [
"Anticipating actions and objects before they start or appear is a difficult problem in computer vision with several real-world applications. This task is challenging partly because it requires leveraging extensive knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently learning this knowledge is through readily available unlabeled video. We present a framework that capitalizes on temporal structure in unlabeled video to learn to anticipate human actions and objects. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. Visual representations are a promising prediction target because they encode images at a higher semantic level than pixels yet are automatic to compute. We then apply recognition algorithms on our predicted representation to anticipate objects and actions. We experimentally validate this idea on two datasets, anticipating actions one second in the future and objects five seconds in the future.",
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation."
]
} |
1708.03615 | 2745877799 | We present a novel unsupervised method for face identity learning from video sequences. The method exploits the ResNet deep network for face detection and VGGface fc7 face descriptors together with a smart learning mechanism that exploits the temporal coherence of visual data in video streams. We present a novel feature matching solution based on Reverse Nearest Neighbour and a feature forgetting strategy that supports incremental learning with memory size control, while time progresses. It is shown that the proposed learning procedure is asymptotically stable and can be effectively applied to relevant applications like multiple face tracking. | Inclusion of a memory mechanism in learning is another key feature of our approach. Works on parameter re-learning in domains with some temporal coherence have used reinforcement learning, @cite_6 @cite_1 among the most recent ones. They typically store the past experience in a replay memory with some priority and sample mini-batches for training. This makes it possible to break the temporal correlations by mixing more and less recent experiences. More recently, Neural Turing Machine architectures have been proposed in @cite_7 and @cite_23 that implement an augmented memory to quickly encode and retrieve new information. These architectures have the ability to rapidly bind never-before-seen information after a single presentation via an external memory module. However, in these cases, training data are still provided in a supervised manner, and the methods don't scale with massive video streams. | {
"cite_N": [
"@cite_23",
"@cite_1",
"@cite_7",
"@cite_6"
],
"mid": [
"2950527759",
"2201581102",
"2399033357",
"1757796397"
],
"abstract": [
"We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.",
"Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.",
"Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of \"one-shot learning.\" Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory location-based focusing mechanisms.",
"We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them."
]
} |
1708.03795 | 2746494769 | Object detection is an important yet challenging task in video understanding & analysis, where one major challenge lies in the proper balance between two contradictive factors: detection accuracy and detection speed. In this paper, we propose a new adaptive patch-of-interest composition approach for boosting both the accuracy and speed for object detection. The proposed approach first extracts patches in a video frame which have the potential to include objects-of-interest. Then, an adaptive composition process is introduced to compose the extracted patches into an optimal number of sub-frames for object detection. With this process, we are able to maintain the resolution of the original frame during object detection (for guaranteeing the accuracy), while minimizing the number of inputs in detection (for boosting the speed). Experimental results on various datasets demonstrate the effectiveness of the proposed approach. | Most researchers focus their research on improving detection accuracy. Early works tried to find proper hand-crafted features to improve accuracy, such as DPM @cite_6 , HOG @cite_15 and CENTRIST @cite_3 . The performance of these methods is often restrained since hand-crafted features have limitations in effectively capturing the complex characteristics of objects. With the advances in deep convolutional networks (ConvNets), ConvNet-based detection methods have shown big improvements in detection accuracy and have become the mainstream approaches for object detection @cite_10 - @cite_11 , @cite_4 . However, many ConvNet-based approaches have high computational complexity, which obviously limits their applications. | {
"cite_N": [
"@cite_4",
"@cite_3",
"@cite_6",
"@cite_15",
"@cite_10",
"@cite_11"
],
"mid": [
"1934184906",
"2113855951",
"",
"2008824967",
"2160921898",
"2952677200"
],
"abstract": [
"In recent years, the convolutional neural network (CNN) has achieved great success in many computer vision tasks. Partially inspired by neuroscience, CNN shares many properties with the visual system of the brain. A prominent difference is that CNN is typically a feed-forward architecture while in the visual system recurrent connections are abundant. Inspired by this fact, we propose a recurrent CNN (RCNN) for object recognition by incorporating recurrent connections into each convolutional layer. Though the input is static, the activities of RCNN units evolve over time so that the activity of each unit is modulated by the activities of its neighboring units. This property enhances the ability of the model to integrate the context information, which is important for object recognition. Like other recurrent neural networks, unfolding the RCNN through time can result in an arbitrarily deep network with a fixed number of parameters. Furthermore, the unfolded network has multiple paths, which can facilitate the learning process. The model is tested on four benchmark object recognition datasets: CIFAR-10, CIFAR-100, MNIST and SVHN. With fewer trainable parameters, RCNN outperforms the state-of-the-art models on all of these datasets. Increasing the number of parameters leads to even better performance. These results demonstrate the advantage of the recurrent structure over purely feed-forward structure for object recognition.",
"CENsus TRansform hISTogram (CENTRIST), a new visual descriptor for recognizing topological places or scene categories, is introduced in this paper. We show that place and scene recognition, especially for indoor environments, require its visual descriptor to possess properties that are different from other vision domains (e.g., object recognition). CENTRIST satisfies these properties and suits the place and scene recognition task. It is a holistic representation and has strong generalizability for category recognition. CENTRIST mainly encodes the structural properties within an image and suppresses detailed textural information. Our experiments demonstrate that CENTRIST outperforms the current state of the art in several place and scene recognition data sets, compared with other descriptors such as SIFT and Gist. Besides, it is easy to implement and evaluates extremely fast.",
"",
"In this paper, we propose an effective method to recognize human actions from sequences of depth maps, which provide additional body shape and motion information for action recognition. In our approach, we project depth maps onto three orthogonal planes and accumulate global activities through entire video sequences to generate the Depth Motion Maps (DMM). Histograms of Oriented Gradients (HOG) are then computed from DMM as the representation of an action video. The recognition results on Microsoft Research (MSR) Action3D dataset show that our approach significantly outperforms the state-of-the-art methods, although our representation is much more compact. In addition, we investigate how many frames are required in our framework to recognize actions on the MSR Action3D dataset. We observe that a short sub-sequence of 30-35 frames is sufficient to achieve comparable results to that operating on entire video sequences.",
"In the last two years, convolutional neural networks (CNNs) have achieved an impressive suite of results on standard recognition datasets and tasks. CNN-based features seem poised to quickly replace engineered representations, such as SIFT and HOG. However, compared to SIFT and HOG, we understand much less about the nature of the features learned by large CNNs. In this paper, we experimentally probe several aspects of CNN feature learning in an attempt to help practitioners gain useful, evidence-backed intuitions about how to apply CNNs to computer vision problems.",
"In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures, training strategies, adding and removing some key components in the detection pipeline, a set of models with large diversity are obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean averaged precision obtained by RCNN girshick2014rich , which was the state-of-the-art, from 31 to 50.3 on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1 . Detailed component-wise analysis is also provided through extensive experimental evaluation, which provide a global view for people to understand the deep learning object detection pipeline."
]
} |
1708.03797 | 2745659091 | Matrix factorization has now become a dominant solution for personalized recommendation on the Social Web. To alleviate the cold start problem, previous approaches have incorporated various additional sources of information into traditional matrix factorization models. These upgraded models, however, achieve only "marginal" enhancements on the performance of personalized recommendation. Therefore, inspired by the recent development of deep-semantic modeling, we propose a hybrid deep-semantic matrix factorization (HDMF) model to further improve the performance of tag-aware personalized recommendation by integrating the techniques of deep-semantic modeling, hybrid learning, and matrix factorization. Experimental results show that HDMF significantly outperforms the state-of-the-art baselines in tag-aware personalized recommendation, in terms of all evaluation metrics, e.g., its mean reciprocal rank (resp., mean average precision) is 1.52 (resp., 1.66) times as high as that of the best baseline. | Many systems have been proposed for tag-aware personalized recommendation on the Social Web. Content-based systems @cite_13 @cite_0 aim at recommending items that are similar to those that a user liked previously, where the similarity is usually measured by the cosine similarity between user and item profiles in the tag space. Collaborative systems recommend users with items liked by similar users using machine learning techniques, such as nearest neighbor modeling @cite_12 and matrix factorization @cite_17 . | {
"cite_N": [
"@cite_0",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2064173066",
"2018426558",
"2025605741",
"2033342008"
],
"abstract": [
"Collaborative tagging applications allow Internet users to annotate resources with personalized tags. The complex network created by many annotations, often called a folksonomy, permits users the freedom to explore tags, resources or even other user's profiles unbound from a rigid predefined conceptual hierarchy. However, the freedom afforded users comes at a cost: an uncontrolled vocabulary can result in tag redundancy and ambiguity hindering navigation. Data mining techniques, such as clustering, provide a means to remedy these problems by identifying trends and reducing noise. Tag clusters can also be used as the basis for effective personalized recommendation assisting users in navigation. We present a personalization algorithm for recommendation in folksonomies which relies on hierarchical tag clusters. Our basic recommendation framework is independent of the clustering method, but we use a context-dependent variant of hierarchical agglomerative clustering which takes into account the user's current navigation context in cluster selection. We present extensive experimental results on two real world dataset. While the personalization algorithm is successful in both cases, our results suggest that folksonomies encompassing only one topic domain, rather than many topics, present an easier target for recommendation, perhaps because they are more focused and often less sparse. Furthermore, context dependent cluster selection, an integral step in our personalization algorithm, demonstrates more utility for recommendation in multi-topic folksonomies than in single-topic folksonomies. This observation suggests that topic selection is an important strategy for recommendation in multi-topic folksonomies.",
"We present and evaluate various content-based recommendation models that make use of user and item profiles defined in terms of weighted lists of social tags. The studied approaches are adaptations of the Vector Space and Okapi BM25 information retrieval models. We empirically compare the recommenders using two datasets obtained from Delicious and Last.fm social systems, in order to analyse the performance of the approaches in scenarios with different domains and tagging behaviours.",
"Recommender systems have developed in parallel with the web. They were initially based on demographic, content-based and collaborative filtering. Currently, these systems are incorporating social information. In the future, they will use implicit, local and personal information from the Internet of things. This article provides an overview of recommender systems as well as collaborative filtering methods and algorithms; it also explains their evolution, provides an original classification for these systems, identifies areas of future implementation and develops certain areas selected for past, present or future importance.",
"In this paper, we present a contribution to IR modeling. We propose an approach that computes on the fly, a Personalized Social Document Representation (PSDR) of each document per user based on his social activities. The PSDRs are used to rank documents with respect to a query. This approach has been intensively evaluated on a large public dataset, showing significant benefits for personalized search."
]
} |
1708.03797 | 2745659091 | Matrix factorization has now become a dominant solution for personalized recommendation on the Social Web. To alleviate the cold start problem, previous approaches have incorporated various additional sources of information into traditional matrix factorization models. These upgraded models, however, achieve only "marginal" enhancements on the performance of personalized recommendation. Therefore, inspired by the recent development of deep-semantic modeling, we propose a hybrid deep-semantic matrix factorization (HDMF) model to further improve the performance of tag-aware personalized recommendation by integrating the techniques of deep-semantic modeling, hybrid learning, and matrix factorization. Experimental results show that HDMF significantly outperforms the state-of-the-art baselines in tag-aware personalized recommendation, in terms of all evaluation metrics, e.g., its mean reciprocal rank (resp., mean average precision) is 1.52 (resp., 1.66) times as high as that of the best baseline. | Due to uncontrolled vocabularies, social tags are usually redundant, sparse, and ambiguous. A solution to this problem is to apply clustering in the tag space @cite_0 , such that redundant tags are aggregated; this also reduces ambiguities, since tags in the same cluster share the same meaning. But tag clustering is usually time-consuming in practice, so another solution is to use autoencoders @cite_7 , due to their capability to extract abstract representations @cite_6 . As they are the state-of-the-art solutions for the same problem, these two methods are used as baselines in our evaluation. | {
"cite_N": [
"@cite_0",
"@cite_6",
"@cite_7"
],
"mid": [
"2064173066",
"",
"2315503695"
],
"abstract": [
"Collaborative tagging applications allow Internet users to annotate resources with personalized tags. The complex network created by many annotations, often called a folksonomy, permits users the freedom to explore tags, resources or even other user's profiles unbound from a rigid predefined conceptual hierarchy. However, the freedom afforded users comes at a cost: an uncontrolled vocabulary can result in tag redundancy and ambiguity hindering navigation. Data mining techniques, such as clustering, provide a means to remedy these problems by identifying trends and reducing noise. Tag clusters can also be used as the basis for effective personalized recommendation assisting users in navigation. We present a personalization algorithm for recommendation in folksonomies which relies on hierarchical tag clusters. Our basic recommendation framework is independent of the clustering method, but we use a context-dependent variant of hierarchical agglomerative clustering which takes into account the user's current navigation context in cluster selection. We present extensive experimental results on two real world dataset. While the personalization algorithm is successful in both cases, our results suggest that folksonomies encompassing only one topic domain, rather than many topics, present an easier target for recommendation, perhaps because they are more focused and often less sparse. Furthermore, context dependent cluster selection, an integral step in our personalization algorithm, demonstrates more utility for recommendation in multi-topic folksonomies than in single-topic folksonomies. This observation suggests that topic selection is an important strategy for recommendation in multi-topic folksonomies.",
"",
"Many researchers have introduced tag information to recommender systems to improve the performance of traditional recommendation techniques. However, user-defined tags will usually suffer from many problems, such as sparsity, redundancy, and ambiguity. To address these problems, we propose a new recommendation algorithm based on deep neural networks. In the proposed algorithm, users' profiles are initially represented by tags and then a deep neural network model is used to extract the in-depth features from tag space layer by layer. In this way, representations of the raw data will become more abstract and advanced, and therefore the unique structure of tag space will be revealed automatically. Based on those extracted abstract features, users' profiles are updated and used for making recommendations. The experimental results demonstrate the usefulness of the proposed algorithm and show its superior performance over the clustering based recommendation algorithms. In addition, the impact of network depth on the algorithm performance is also investigated."
]
} |
1708.03669 | 2749422571 | Classifying pages or text lines into font categories aids transcription because single font Optical Character Recognition (OCR) is generally more accurate than omni-font OCR. We present a simple framework based on Convolutional Neural Networks (CNNs), where a CNN is trained to classify small patches of text into predefined font classes. To classify page or line images, we average the CNN predictions over densely extracted patches. We show that this method achieves state-of-the-art performance on a challenging dataset of 40 Arabic computer fonts with 98.8% line-level accuracy. This same method also achieves the highest reported accuracy of 86.6% in predicting paleographic scribal script classes at the page level on medieval Latin manuscripts. Finally, we analyze what features are learned by the CNN on Latin manuscripts and find evidence that the CNN is learning both the defining morphological differences between scribal script classes as well as overfitting to class-correlated nuisance factors. We propose a novel form of data augmentation that improves robustness to text darkness, further increasing classification performance. | Zramdini and Ingold presented a font recognition system based on the statistics of connected components, achieving 97.35% accuracy. Later work posed font recognition as texture identification and used Gabor filters to achieve 99.1% accuracy. Fractal dimension features were introduced in @cite_10 for Arabic font classification and resulted in 98% accuracy, and log-Gabor filter features extracted at multiple scales and orientations have been used to obtain 96.1% accuracy. More recently, deep learning techniques based on Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have been proposed for font classification.
One approach used a combination of CNN and 2D RNN models to classify single Chinese characters into 7 font classes with 97.77% accuracy, and another classified handwritten Chinese characters into 5 calligraphy classes with 95% accuracy. Classifying calligraphy classes is similar to classifying script types in CLaMM, though CLaMM uses Latin script classes with page-level ground truth. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1991904384"
],
"abstract": [
"In this work, a new method is proposed to the widely neglected problem of Arabic font recognition, it uses global texture analysis. This method is based on fractal geometry, and the feature extraction does not depend on the document contents. In our method, we take the document as an image containing some specific textures and regard font recognition as texture identification. We have combined both techniques BCD (box counting dimension) and DCD (dilation counting dimension) to obtain the main features. The first expresses texture distribution in 2-D image. The second makes possible to take on the human vision system aspect, since it makes it possible to differentiate one font from another. Both features are expressed in a parametric form; then four features were kept. Experiments are carried out by using 1000 samples of 10 typefaces (each typeface is combined with four sizes). The average recognition rates are of about 96.2% using KNN (K nearest neighbor) and 98% using RBF (radial basic function). Experimental results are also included in the robustness of the method against written size, skew, image degradation (e.g., Gaussian noise) and resolution, and compared with the existing methods. The main advantages of our method are that (1) the dimension of feature vector is very low; (2) the variation sizes of the studied blocks (which are not standardized) are robust; (3) less samples are needed to train the classifier; (4) finally and the most important, is the first attempt to apply and adapt fractal dimensions to font recognition."
]
} |
1708.03686 | 2748405198 | The tasks of identifying separation structures and clusters in flow data are fundamental to flow visualization. Significant work has been devoted to these tasks in flow represented by vector fields, but there are unique challenges in addressing these tasks for time-varying particle data. The unstructured nature of particle data, nonuniform and sparse sampling, and the inability to access arbitrary particles in space-time make it difficult to define separation and clustering for particle data. We observe that weaker notions of separation and clustering through continuous measures of these structures are meaningful when coupled with user exploration. We achieve this goal by defining a measure of particle similarity between pairs of particles. More specifically, separation occurs when spatially-localized particles are dissimilar, while clustering is characterized by sets of particles that are similar to one another. To be robust to imperfections in sampling we use diffusion geometry to compute particle similarity. Diffusion geometry is parameterized by a scale that allows a user to explore separation and clustering in a continuous manner. We illustrate the benefits of our technique on a variety of 2D and 3D flow datasets, from particles integrated in fluid simulations based on time-varying vector fields, to particle-based simulations in astrophysics. | Our approach employs diffusion geometry, which relies on a notion of scale in constructing similarities between particle trajectories. This is related to scale-space approaches @cite_61 which analyze field-based data at multiple scales, typically to denoise or find optimal spatial scales for filtering. Flow analysis has employed scale-space techniques in various contexts, such as vortex tracking @cite_2 and detection of FTLE ridges @cite_59 . However, the construction of multi-scale distances requires different mathematical tools compared to traditional scale-space methods on fields.
Furthermore, it is nontrivial to extend these techniques to particle data, as they utilize a vector field for their respective scale-space approaches. | {
"cite_N": [
"@cite_61",
"@cite_59",
"@cite_2"
],
"mid": [
"2112328181",
"",
"1585297471"
],
"abstract": [
"A basic problem when deriving information from measured data, such as images, originates from the fact that objects in the world, and hence image structures, exist as meaningful entities only over ...",
"",
"Scale-space techniques have become popular in computer vision for their capability to access the multiscale information inherently contained in images. We show that the field of flow visualization can benefit from these techniques, too, yielding more coherent features and sorting out numerical artifacts as well as irrelevant large-scale features. We describe an implementation of scale-space computation using finite elements and show that performance is sufficient for computing a scale-space of time-dependent CFD data. Feature tracking, if available, allows to process the information provided by scale-space not just visually but also algorithmically. We present a technique for extending a class of feature extraction schemes by an additional dimension, resulting in an efficient solution of the tracking problem."
]
} |
1708.03132 | 2738472810 | Face hallucination is a domain-specific super-resolution problem with the goal to generate high-resolution (HR) faces from low-resolution (LR) input images. In contrast to existing methods that often learn a single patch-to-patch mapping from LR to HR images and are regardless of the contextual interdependency between patches, we propose a novel Attention-aware Face Hallucination (Attention-FH) framework which resorts to deep reinforcement learning for sequentially discovering attended patches and then performing the facial part enhancement by fully exploiting the global interdependency of the image. Specifically, in each time step, the recurrent policy network is proposed to dynamically specify a new attended region by incorporating what happened in the past. The state (i.e., face hallucination result for the whole image) can thus be exploited and updated by the local enhancement network on the selected region. The Attention-FH approach jointly learns the recurrent policy network and local enhancement network through maximizing the long-term reward that reflects the hallucination performance over the whole image. Therefore, our proposed Attention-FH is capable of adaptively personalizing an optimal searching path for each face image according to its own characteristic. Extensive experiments show our approach significantly surpasses the state-of-the-arts on in-the-wild faces with large pose and illumination variations. | Attention mechanisms have recently been applied to, and have benefited, various tasks, such as object proposal @cite_37 , object classification @cite_0 , relationship detection @cite_13 , image captioning @cite_5 and visual question answering @cite_28 . Since contextual information is important for computer vision problems, most of these works attempted to attend to multiple regions by formulating their attention procedure as a sequential decision problem.
Reinforcement learning was introduced to optimize such sequential models with delayed rewards. This technique has been applied to face detection @cite_30 and object localization @cite_26 . These methods learned an agent that actively locates the target regions (objects) instead of exhaustively sliding sub-windows over images. For example, Goodrich @cite_30 defined 32 actions to shift the focal point and rewarded the agent upon finding the goal. Caicedo @cite_26 defined an action set containing several transformations of the bounding box and rewarded the agent if the bounding box moved closer to the ground truth in each step. These two methods both learned an optimal policy to locate the target through Q-learning. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_26",
"@cite_28",
"@cite_0",
"@cite_5",
"@cite_13"
],
"mid": [
"2017836770",
"",
"2179488730",
"2293453011",
"",
"2950178297",
"2963650529"
],
"abstract": [
"Visual attention is the cognitive process of directing our gaze on one aspect of the visual field while ignoring others. The mainstream approach to modeling focal visual attention involves identifying saliencies in the image and applying a search process to the salient regions. However, such inference schemes commonly fail to accurately capture perceptual attractors, require massive computational effort and, generally speaking, are not biologically plausible. This paper introduces a novel approach to the problem of visual search by framing it as an adaptive learning process. In particular, we devise an approximate optimal control framework, based on reinforcement learning, for actively searching a visual field. We apply the method to the problem of face detection and demonstrate that the technique is both accurate and scalable. Moreover, the foundations proposed here pave the way for extending the approach to other large-scale visual perception problems.",
"",
"We present an active detection model for localizing objects in scenes. The model is class-specific and allows an agent to focus attention on candidate regions for identifying the correct location of a target object. This agent learns to deform a bounding box using simple transformation actions, with the goal of determining the most specific location of target objects following top-down reasoning. The proposed localization agent is trained using deep reinforcement learning, and evaluated on the Pascal VOC 2007 dataset. We show that agents guided by the proposed model are able to localize a single instance of an object after analyzing only between 11 and 25 regions in an image, and obtain the best detection results among systems that do not use object proposals for object localization.",
"Neural network architectures with memory and attention mechanisms exhibit certain reasoning capabilities required for question answering. One such architecture, the dynamic memory network (DMN), obtained high accuracy on a variety of language tasks. However, it was not shown whether the architecture achieves strong results for question answering when supporting facts are not marked during training or whether it could be applied to other modalities such as images. Based on an analysis of the DMN, we propose several improvements to its memory and input modules. Together with these changes we introduce a novel input module for images in order to be able to answer visual questions. Our new DMN+ model improves the state of the art on both the Visual Question Answering dataset and the bAbI-10k text question-answering dataset without supporting fact supervision.",
"",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"Computers still struggle to understand the interdependency of objects in the scene as a whole, e.g., relations between objects or their attributes. Existing methods often ignore global context cues capturing the interactions among different object instances, and can only recognize a handful of types by exhaustively training individual detectors for all possible relationships. To capture such global interdependency, we propose a deep Variation-structured Re-inforcement Learning (VRL) framework to sequentially discover object relationships and attributes in the whole image. First, a directed semantic action graph is built using language priors to provide a rich and compact representation of semantic correlations between object categories, predicates, and attributes. Next, we use a variation-structured traversal over the action graph to construct a small, adaptive action set for each step based on the current state and historical actions. In particular, an ambiguity-aware object mining scheme is used to resolve semantic ambiguity among object categories that the object detector fails to distinguish. We then make sequential predictions using a deep RL framework, incorporating global context cues and semantic embeddings of previously extracted phrases in the state vector. Our experiments on the Visual Relationship Detection (VRD) dataset and the large-scale Visual Genome dataset validate the superiority of VRL, which can achieve significantly better detection results on datasets involving thousands of relationship and attribute types. We also demonstrate that VRL is able to predict unseen types embedded in our action graph by learning correlations on shared graph nodes."
]
} |
1708.03383 | 2750282596 | Human pose estimation and semantic part segmentation are two complementary tasks in computer vision. In this paper, we propose to solve the two tasks jointly for natural multi-person images, in which the estimated pose provides object-level shape prior to regularize part segments while the part-level segments constrain the variation of pose locations. Specifically, we first train two fully convolutional neural networks (FCNs), namely Pose FCN and Part FCN, to provide initial estimation of pose joint potential and semantic part potential. Then, to refine pose joint location, the two types of potentials are fused with a fully-connected conditional random field (FCRF), where a novel segment-joint smoothness term is used to encourage semantic and spatial consistency between parts and joints. To refine part segments, the refined pose and the original part potential are integrated through a Part FCN, where the skeleton feature from pose serves as additional regularization cues for part segments. Finally, to reduce the complexity of the FCRF, we induce human detection boxes and infer the graph inside each box, making the inference forty times faster. Since there's no dataset that contains both part segments and pose labels, we extend the PASCAL VOC part dataset with human pose joints and perform extensive experiments to compare our method against several most recent strategies. We show that on this dataset our algorithm surpasses competing methods by a large margin in both tasks. | Yamaguchi et al. perform pose estimation and semantic part segmentation sequentially for clothes parsing, using a CRF with low-level features @cite_14 . Ladicky et al. combine the two tasks in one principled formulation, also using low-level features @cite_18 . Dong et al. combine the two tasks with a manually designed And-Or graph @cite_34 .
These methods demonstrate the complementary properties of the two tasks on relatively simple datasets, but they cannot deal with images with large pose variations or multi-person overlapping, mainly due to the less powerful features they use or the poor quality of their part region proposals. In contrast, our model combines FCNs with graphical models, greatly boosting the representation power of models to handle large pose variation. We also introduce novel part segment consistency terms for pose estimation and novel pose consistency terms for part segmentation, further improving the performance. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_34"
],
"mid": [
"2043217799",
"2074621908",
""
],
"abstract": [
"Our goal is to detect humans and estimate their 2D pose in single images. In particular, handling cases of partial visibility where some limbs may be occluded or one person is partially occluding another. Two standard, but disparate, approaches have developed in the field: the first is the part based approach for layout type problems, involving optimising an articulated pictorial structure, the second is the pixel based approach for image labelling involving optimising a random field graph defined on the image. Our novel contribution is a formulation for pose estimation which combines these two models in a principled way in one optimisation problem and thereby inherits the advantages of both of them. Inference on this joint model finds the set of instances of persons in an image, the location of their joints, and a pixel-wise body part labelling. We achieve near or state of the art results on standard human pose data sets, and demonstrate the correct estimation for cases of self-occlusion, person overlap and image truncation.",
"In this paper we demonstrate an effective method for parsing clothing in fashion photographs, an extremely challenging problem due to the large number of possible garment items, variations in configuration, garment appearance, layering, and occlusion. In addition, we provide a large novel dataset and tools for labeling garment items, to enable future research on clothing estimation. Finally, we present intriguing initial results on using clothing estimates to improve pose identification, and demonstrate a prototype application for pose-independent visual garment retrieval.",
""
]
} |
1708.03446 | 2748806578 | Lack of sufficient labeled data often limits the applicability of advanced machine learning algorithms to real life problems. However efficient use of Transfer Learning (TL) has been shown to be very useful across domains. TL utilizes valuable knowledge learned in one task (source task), where sufficient data is available, to the task of interest (target task). In biomedical and clinical domain, it is quite common that lack of sufficient training data do not allow to fully exploit machine learning models. In this work, we present two unified recurrent neural models leading to three transfer learning frameworks for relation classification tasks. We systematically investigate effectiveness of the proposed frameworks in transferring the knowledge under multiple aspects related to source and target tasks, such as, similarity or relatedness between source and target tasks, and size of training data for source task. Our empirical results show that the proposed frameworks in general improve the model performance, however these improvements do depend on aspects related to source and target tasks. This dependence then finally determine the choice of a particular TL framework. | The proposed TL frameworks are closely related to the works of @cite_31 @cite_11 @cite_29 . @cite_31 introduced a variety of TL frameworks using gated recurrent neural networks (GRUs) and evaluated them on different sequence labeling tasks, such as PoS tagging and chunking. Another study, similar to earlier work on image processing tasks, evaluated CNN- and RNN-based TL frameworks for sentence classification and sentence pair modeling tasks. Others have used window-based neural networks and convolutional neural networks for several sequence labeling tasks in a multi-task learning framework. One further line of work explored transfer learning for neural machine translation and showed significant improvement in many low-resource language translation tasks.
That work repurposes a model trained on a high-resource language translation dataset (the source task) for the target task. | {
"cite_N": [
"@cite_31",
"@cite_29",
"@cite_11"
],
"mid": [
"2950938254",
"2117130368",
"2310102669"
],
"abstract": [
"Recent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks. One appealing property of such systems is their generality, as excellent performance can be achieved with a unified architecture and without task-specific feature engineering. However, it is unclear if such systems can be used for tasks without large amounts of training data. In this paper we explore the problem of transfer learning for neural sequence taggers, where a source task with plentiful annotations (e.g., POS tagging on Penn Treebank) is used to improve performance on a target task with fewer available annotations (e.g., POS tagging for microblogs). We examine the effects of transfer learning for deep hierarchical recurrent networks across domains, applications, and languages, and show that significant improvement can often be obtained. These improvements lead to improvements over the current state-of-the-art on several well-studied tasks.",
"We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.",
"Transfer learning is aimed to make use of valuable knowledge in a source domain to help model performance in a target domain. It is particularly important to neural networks, which are very likely to be overfitting. In some fields like image processing, many studies have shown the effectiveness of neural network-based transfer learning. For neural NLP, however, existing studies have only casually applied transfer learning, and conclusions are inconsistent. In this paper, we conduct systematic case studies and provide an illuminating picture on the transferability of neural networks in NLP."
]
} |
1708.03453 | 2750408590 | Detection of abnormal BGP events is of great importance to preserve the security and robustness of the Internet inter-domain routing system. In this paper, we propose an anomaly detection framework based on machine learning techniques to identify the anomalous events by training a model for normal BGP-updates and measuring the extent of deviation from the normal model during the abnormal occasions. Our preliminary results show that the features generated and selected are capable of improving the classification results to distinguish between anomalies and normal BGP update messages. Furthermore, the clustering results demonstrate the effectiveness of formed models to detect the similar types of BGP anomalies. In a more general context, an interdisciplinary research is performed between network security and data mining to deal with real-world problems and the achieved results are promising. | Several studies have been conducted on network data to discover abnormal BGP events. In an early line of work, @cite_3 proposed an Internet Routing Forensics framework to process BGP routing data and extract rules of abnormal BGP events. They applied data mining techniques to train the framework to learn the rules for different types of BGP anomalies and showed that these rules are effective in detecting occurrences of similar events. They utilized a feature selection method to pick the features with the highest information gain. The selected features in some cases match the selected features of our method. However, they select 9 features out of 35 for all BGP events, whereas our set of selected features varies from dataset to dataset and is augmented with a number of new features based on pairwise correlations of the features.
"cite_N": [
"@cite_3"
],
"mid": [
"2163462010"
],
"abstract": [
"Abnormal BGP events such as attacks, misconfigurations, electricity failures, can cause anomalous or pathological routing behavior at either global level or prefix level, and thus must be detected in their early stages. Instead of using ad hoc methods to analyze BGP data, in this paper we introduce an Internet Routing Forensics framework to systematically process BGP routing data, discover rules of abnormal BGP events, and apply these rules to detect the occurrences of these events. In particular, we leverage data mining techniques to train the framework to learn rules of abnormal BGP events, and our results from two case studies show that these rules are effective. In one case study, rules of worm events discovered from the BGP data during the outbreaks of the CodeRed and Nimda worms were able to successfully detect worm impact on BGP when an independent worm, the Slammer, subsequently occurred. Similarly, in another case study, rules of electricity blackout events obtained using BGP data from the 2003 East Coast blackout were able to detect the BGP impact from the Florida blackout caused by Hurricane Frances in 2004."
]
} |
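The information-gain feature selection mentioned in the related work above can be sketched with a toy example. The per-time-bin feature values, labels, and feature names below are illustrative, not taken from the actual BGP datasets:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; F) = H(Y) - sum_v p(F = v) * H(Y | F = v)."""
    n = len(labels)
    remainder = 0.0
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        remainder += (len(subset) / n) * entropy(subset)
    return entropy(labels) - remainder

# Hypothetical binary features for four time bins, two of them anomalous.
labels = ["anomaly", "anomaly", "normal", "normal"]
high_announcement_volume = [1, 1, 0, 0]  # perfectly predictive -> IG = 1 bit
weekend_bin = [1, 0, 1, 0]               # uninformative        -> IG = 0 bits
```

Ranking all candidate features by this score and keeping the top ones is the selection criterion the cited work describes; the per-dataset variation noted above would show up as different features topping the ranking on different event datasets.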
1708.03453 | 2750408590 | Detection of abnormal BGP events is of great importance to preserve the security and robustness of the Internet inter-domain routing system. In this paper, we propose an anomaly detection framework based on machine learning techniques to identify the anomalous events by training a model for normal BGP-updates and measuring the extent of deviation from the normal model during the abnormal occasions. Our preliminary results show that the features generated and selected are capable of improving the classification results to distinguish between anomalies and normal BGP update messages. Furthermore, the clustering results demonstrate the effectiveness of formed models to detect the similar types of BGP anomalies. In a more general context, an interdisciplinary research is performed between network security and data mining to deal with real-world problems and the achieved results are promising. | In a similar approach to @cite_3 , @cite_8 applied data mining algorithms to learn from labeled abnormal data and distinguish unseen BGP events. They extracted several numerical features for certain time bins and leveraged a number of classification algorithms, among which SVM outperformed the others, to detect similar types of events in well-known BGP event datasets. In another line of work, @cite_2 automatically formed a hierarchy of abnormal BGP events by devising a clustering method and obtained a set of classification rules which enabled them to assign unknown BGP events to the most similar category. However, the proposed methodology is not always able to distinguish the exact category in the lowest level of the hierarchy or even differentiate the normal data from abnormal events. In this study we obtained more precise results in terms of detecting abnormal BGP events compared to the results presented in @cite_2 . 
Furthermore, @cite_0 proposed an instance-learning framework recognizing anomalies based on their deviation from the normal dynamics of BGP updates. They applied wavelet transforms to reveal the temporal structure of update messages and utilized clustering algorithms to distinguish between normal profiles and outliers. | {
"cite_N": [
"@cite_0",
"@cite_2",
"@cite_3",
"@cite_8"
],
"mid": [
"2107509798",
"52812074",
"2163462010",
"2043409701"
],
"abstract": [
"Detecting anomalous BGP-route advertisements is crucial for improving the security and robustness of the Internet's interdomain-routing system. In this paper, we propose an instance-learning framework that identifies anomalies based on deviations from the \"normal\" BGP-update dynamics for a given destination prefix and across prefixes. We employ wavelets for a systematic, multi-scaled analysis that avoids the \"magic numbers\" (e.g., for grouping related update messages) needed in previous approaches to BGP-anomaly detection. Our preliminary results show that the update dynamics are generally consistent across prefixes and time. Only a few prefixes differ from the majority, and most prefixes exhibit similar behavior across time. This small set of abnormal prefixes and time intervals may be further examined to determine the source of anomalous behavior. In particular, we observe that many of the unusual prefixes are unstable prefixes that experience frequent routing changes.",
"Abnormal events, such as security attacks, misconfigurations, or electricity failures, could have severe consequences toward the normal operation of the Border Gateway Protocol (BGP) that is in charge of the delivery of packets between different autonomous domains, a key operation for the Internet to function. Unfortunately, it has been a difficult task for network security researchers and engineers to classify and detect these events. In our previous work, we have shown that with classification (which relies on the labeling with domain knowledge from BGP experts), it is feasible to effectively detect and distinguish some worms and blackouts from normal BGP behaviors. In this paper, we move one important step forward—we show that we can automatically detect and classify between different abnormal BGP events based on a hierarchy discovered by clustering. As a systematic application of data mining, we devise a clustering method based on normalized BGP data that forms a tree-like hierarchy of abnormal BGP event classes. We then obtain a set of classification rules for each class (node) in the hierarchy, thus able to label unknown BGP data to a closest class. Our method works even as the BGP dynamics evolve over time, as shown in our experiments with seven different abnormal events during a four-year period. Our work, in a more general context, shows it is promising to conduct an interdisciplinary research between network security and data mining in solving real-world problems.",
"Abnormal BGP events such as attacks, misconfigurations, electricity failures, can cause anomalous or pathological routing behavior at either global level or prefix level, and thus must be detected in their early stages. Instead of using ad hoc methods to analyze BGP data, in this paper we introduce an Internet Routing Forensics framework to systematically process BGP routing data, discover rules of abnormal BGP events, and apply these rules to detect the occurrences of these events. In particular, we leverage data mining techniques to train the framework to learn rules of abnormal BGP events, and our results from two case studies show that these rules are effective. In one case study, rules of worm events discovered from the BGP data during the outbreaks of the CodeRed and Nimda worms were able to successfully detect worm impact on BGP when an independent worm, the Slammer, subsequently occurred. Similarly, in another case study, rules of electricity blackout events obtained using BGP data from the 2003 East Coast blackout were able to detect the BGP impact from the Florida blackout caused by Hurricane Frances in 2004.",
"Abnormal events such as large scale power outages, misconfigurations, and worm attacks can affect the global routing infrastructure and consequently create regional or global Internet service interruptions. As a result, early detection of abnormal events is of critical importance. In this study we present a framework based on data mining algorithms that are applied to anomaly detection on global routing infrastructure. To show the applicability of our framework, we conduct extensive experiments with a variety of abnormal events and classification algorithms. Our results demonstrate that when we train our system with abnormal events including worm attacks, power supply outages, submarine cable cuts, and misconfigurations, we can detect a similar type of event as it happens."
]
} |
1708.03246 | 2742658812 | In recent years supervised representation learning has provided state of the art or close to the state of the art results in semantic analysis tasks including ranking and information retrieval. The core idea is to learn how to embed items into a latent space such that they optimize a supervised objective in that latent space. The dimensions of the latent space have no clear semantics, and this reduces the interpretability of the system. For example, in personalization models, it is hard to explain why a particular item is ranked high for a given user profile. We propose a novel model of representation learning called Supervised Explicit Semantic Analysis (SESA) that is trained in a supervised fashion to embed items to a set of dimensions with explicit semantics. The model learns to compare two objects by representing them in this explicit space, where each dimension corresponds to a concept from a knowledge base. This work extends Explicit Semantic Analysis (ESA) with a supervised model for ranking problems. We apply this model to the task of Job-Profile relevance in LinkedIn in which a set of skills defines our explicit dimensions of the space. Every profile and job are encoded to this set of skills their similarity is calculated in this space. We use RNNs to embed text input into this space. In addition to interpretability, our model makes use of the web-scale collaborative skills data that is provided by users for each LinkedIn profile. Our model provides state of the art result while it remains interpretable. | Explicit Semantic Analysis (ESA) @cite_15 tries to address this issue. It represents words as vectors in which each dimension corresponds to a knowledge base entity that is usually a Wikipedia article. 
It builds an inverted index of word frequencies in Wikipedia pages; each word is represented as a vector whose dimensionality equals the number of Wikipedia articles, such that the weight of each dimension is the word's frequency in the corresponding article. To obtain a representation of a document, one can average the representations of all the words in that document. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2120779048"
],
"abstract": [
"Computing semantic relatedness of natural language texts requires access to vast amounts of common-sense and domain-specific world knowledge. We propose Explicit Semantic Analysis (ESA), a novel method that represents the meaning of texts in a high-dimensional space of concepts derived from Wikipedia. We use machine learning techniques to explicitly represent the meaning of any text as a weighted vector of Wikipedia-based concepts. Assessing the relatedness of texts in this space amounts to comparing the corresponding vectors using conventional metrics (e.g., cosine). Compared with the previous state of the art, using ESA results in substantial improvements in correlation of computed relatedness scores with human judgments: from r = 0.56 to 0.75 for individual words and from r = 0.60 to 0.72 for texts. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users."
]
} |
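The ESA construction described above can be sketched with a toy corpus in which each "article" is one explicit concept dimension. The articles and words here are illustrative, and raw term counts stand in for the TF-IDF weights used in the original method:

```python
import math
from collections import Counter

# Toy "Wikipedia" corpus: each article defines one explicit concept dimension.
articles = {
    "Music": "guitar piano melody chord guitar",
    "Programming": "python code compiler code loop",
    "Sports": "goal team score match team",
}

def word_vector(word):
    # A word's weight on each concept is its frequency in that article
    # (real ESA uses TF-IDF weights; raw counts keep the sketch minimal).
    return {c: Counter(text.split())[word] for c, text in articles.items()}

def doc_vector(doc):
    # A document is the average of its words' concept vectors.
    words = doc.split()
    vec = {c: 0.0 for c in articles}
    for w in words:
        wv = word_vector(w)
        for c in articles:
            vec[c] += wv[c] / len(words)
    return vec

def cosine(u, v):
    dot = sum(u[c] * v[c] for c in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

d1 = doc_vector("guitar melody")
d2 = doc_vector("piano chord")
d3 = doc_vector("compiler loop")
```

Because every dimension is a named concept, the similarity score is interpretable: one can read off which concepts (here, "Music") drive the relatedness of `d1` and `d2`, which is exactly the property SESA preserves in its supervised extension.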
1708.03186 | 2743897998 | In this paper, we discuss different methods which use meta information and richer context that may accompany source language input to improve machine translation quality. We focus on category information of input text as meta information, but the proposed methods can be extended to all textual and non-textual meta information that might be available for the input text or automatically predicted using the text content. The main novelty of this work is to use state-of-the-art neural network methods to tackle this problem within a statistical machine translation (SMT) framework. We observe translation quality improvements up to 3 in terms of BLEU score in some text categories. | Phrase-based MT models have an intrinsic problem in using large(r) context and long-range dependencies, since these models are bounded by phrase context. Therefore, many research publications target solving these well-known problems of phrase-based MT. Here, we focus on pioneering works that are most comparable to this work. One research line on using larger context in MT is to use sentence context for word sense disambiguation . proposed the idea of employing a discriminative lexical model that uses sentence-level information to predict a target word; this idea has been extended and enhanced in a recent work . @cite_0 proposed extending the discriminative model to also use the target prefix to predict the next target word, and they also enhance the model to calculate target-word probabilities on-line during the search using a fast and efficient classification method based on the Vowpal Wabbit http: hunch.net vw toolkit. proposed a neural joint lexical model that also employs a larger context around a given source word, including previously generated target words, to predict the corresponding target word probability using a feed-forward neural network as the classifier. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2462993019"
],
"abstract": [
"Discriminative translation models utilizing source context have been shown to help statistical machine translation performance. We propose a novel extension of this work using target context information. Surprisingly, we show that this model can be efficiently integrated directly in the decoding process. Our approach scales to large training data sizes and results in consistent improvements in translation quality on four language pairs. We also provide an analysis comparing the strengths of the baseline source-context model with our extended source-context and target-context model and we show that our extension allows us to better capture morphological coherence. Our work is freely available as part of Moses."
]
} |
1708.03513 | 2747715470 | Static malware analysis is well-suited to endpoint anti-virus systems as it can be conducted quickly by examining the features of an executable piece of code and matching it to previously observed malicious code. However, static code analysis can be vulnerable to code obfuscation techniques. Behavioural data collected during file execution is more difficult to obfuscate, but takes a relatively long time to capture - typically up to 5 minutes, meaning the malicious payload has likely already been delivered by the time it is detected. In this paper we investigate the possibility of predicting whether or not an executable is malicious based on a short snapshot of behavioural data. We find that an ensemble of recurrent neural networks are able to predict whether an executable is malicious or benign within the first 5 seconds of execution with 94 accuracy. This is the first time general types of malicious file have been predicted to be malicious during execution rather than using a complete activity log file post-execution, and enables cyber security endpoint protection to be advanced to use behavioural data for blocking malicious payloads rather than detecting them post-execution and having to repair the damage. | Static data, derived directly from code, can be collected quickly. Though signature-based methods fail to detect obfuscated or entirely new malware, researchers have extracted other features for static detection. Saxe and Berlin @cite_22 distinguish malware from benignware using a deep feed-forward neural network with a true-positive rate of 95.2%. Methods using dynamic data assume that malware must enact the behaviours necessary to achieve their aims. Typically, these approaches capture behaviours such as API calls to the operating system kernel. @cite_7 use RNNs to extract features from 5 minutes of API call log sequences which are then fed into a convolutional neural network to obtain a 0.96 AUC score with a dataset of 170 samples. 
@cite_31 compare machine learning algorithms trained on API calls and achieve an accuracy of 96.8%. | {
"cite_N": [
"@cite_31",
"@cite_22",
"@cite_7"
],
"mid": [
"2085807744",
"1893133781",
"2508015754"
],
"abstract": [
"The increase of malware that are exploiting the Internet daily has become a serious threat. The manual heuristic inspection of malware analysis is no longer considered effective and efficient compared against the high spreading rate of malware. Hence, automated behavior-based malware detection using machine learning techniques is considered a profound solution. The behavior of each malware on an emulated (sandbox) environment will be automatically analyzed and will generate behavior reports. These reports will be preprocessed into sparse vector models for further machine learning (classification). The classifiers used in this research are k-Nearest Neighbors (kNN), Naive Bayes, J48 Decision Tree, Support Vector Machine (SVM), and Multilayer Perceptron Neural Network (MlP). Based on the analysis of the tests and experimental results of all the 5 classifiers, the overall best performance was achieved by J48 decision tree with a recall of 95.9 , a false positive rate of 2.4 , a precision of 97.3 , and an accuracy of 96.8 . In summary, it can be concluded that a proof-of-concept based on automatic behavior-based malware analysis and the use of machine learning techniques could detect malware quite effectively and efficiently.",
"In this paper we introduce a deep neural network based malware detection system that Invincea has developed, which achieves a usable detection rate at an extremely low false positive rate and scales to real world training example volumes on commodity hardware. We show that our system achieves a 95 detection rate at 0.1 false positive rate (FPR), based on more than 400,000 software binaries sourced directly from our customers and internal malware databases. In addition, we describe a non-parametric method for adjusting the classifier’s scores to better represent expected precision in the deployment environment. Our results demonstrate that it is now feasible to quickly train and deploy a low resource, highly accurate machine learning classification model, with false positive rates that approach traditional labor intensive expert rule based malware detection, while also detecting previously unseen malware missed by these traditional approaches. Since machine learning models tend to improve with larger datasizes, we foresee deep neural network classification models gaining in importance as part of a layered network defense strategy in coming years.",
"Increase of malware and advanced cyber-attacks are now becoming a serious problem. Unknown malware which has not determined by security vendors is often used in these attacks, and it is becoming difficult to protect terminals from their infection. Therefore, a countermeasure for after infection is required. There are some malware infection detection methods which focus on the traffic data comes from malware. However, it is difficult to perfectly detect infection only using traffic data because it imitates benign traffic. In this paper, we propose malware process detection method based on process behavior in possible infected terminals. In proposal, we investigated stepwise application of Deep Neural Networks to classify malware process. First, we train the Recurrent Neural Network (RNN) to extract features of process behavior. Second, we train the Convolutional Neural Network (CNN) to classify feature images which are generated by the extracted features from the trained RNN. The evaluation result in several image size by comparing the AUC of obtained ROC curves and we obtained AUC= 0:96 in best case."
]
} |
1708.03402 | 2960945709 | In a distributed storage systems (DSS) with @math systematic nodes, robustness against node failure is commonly provided by storing redundancy in a number of other nodes and performing repair mechanism to reproduce the content of the failed nodes. Efficiency is then achieved by minimizing the storage overhead and the amount of data transmission required for data reconstruction and repair, provided by coding solutions such as regenerating codes [1]. Common explicit regenerating code constructions enable efficient repair through accessing a predefined number, @math , of arbitrary chosen available nodes, namely helpers. In practice, however, the state of the system dynamically changes based on the request load, the link traffic, etc., and the parameters which optimize system's performance vary accordingly. It is then desirable to have coding schemes which are able to operate optimally under a range of different parameters simultaneously. Specifically, adaptivity in the number of helper nodes for repair is of interest. While robustness requires capability of performing repair with small number of helpers, it is desirable to use as many helpers as available to reduce the transmission delay and total repair traffic. In this work we focus on the minimum storage regenerating (MSR) codes, where each node is supposed to store @math information units, and the source data of size @math could be recovered from any arbitrary set of @math nodes. We introduce a class MSR codes that realize optimal repair bandwidth simultaneously with a set of different choices for the number of helpers, namely @math . Our coding scheme follows the Product Matrix (PM) framework introduced in [2], and could be considered as a generalization of the PM MSR code presented in [2], such that any @math helpers can perform an optimal repair. ... 
| In this work, as presented in Theorem , we address the exact repair bandwidth adaptive MSR code design problem with small subpacketization level. While both @cite_29 and @cite_46 follow the approach introduced in @cite_49 , which is based on the design of parity check equations, in this work we follow the Product Matrix (PM) framework introduced in @cite_32 . Comparing ) with ), one could see that the presented scheme exponentially reduces the required values of @math (and @math ). However, this scheme works only for @math . As a result, the construction presented in this work only solves the problem at low coding rates. It is worth mentioning that in applications such as general-purpose storage arrays, providing high reliability and fast degraded reads is more important than maximizing the coding rate @cite_31 . However, the design of high-rate bandwidth adaptive MSR codes with small @math and @math remains an important and challenging problem for big data storage systems such as Hadoop. | {
"cite_N": [
"@cite_31",
"@cite_29",
"@cite_32",
"@cite_49",
"@cite_46"
],
"mid": [
"154253821",
"2150676586",
"2150777202",
"2102047288",
"2963754880"
],
"abstract": [
"Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability to store seemingly limitless amounts of data for any duration of time. WAS customers have access to their data from anywhere, at any time, and only pay for what they use and store. To provide durability for that data and to keep the cost of storage low, WAS uses erasure coding. In this paper we introduce a new set of codes for erasure coding called Local Reconstruction Codes (LRC). LRC reduces the number of erasure coding fragments that need to be read when reconstructing data fragments that are offline, while still keeping the storage overhead low. The important benefits of LRC are that it reduces the bandwidth and I Os required for repair reads over prior codes, while still allowing a significant reduction in storage overhead. We describe how LRC is used in WAS to provide low overhead durable storage with consistently low read latencies.",
"The Cooperative File System (CFS) is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems. CFS servers provide a distributed hash table (DHash) for block storage. CFS clients interpret DHash blocks as a file system. DHash distributes and caches blocks at a fine granularity to achieve load balance, uses replication for robustness, and decreases latency with server selection. DHash finds blocks using the Chord location protocol, which operates in time logarithmic in the number of servers.CFS is implemented using the SFS file system toolkit and runs on Linux, OpenBSD, and FreeBSD. Experience on a globally deployed prototype shows that CFS delivers data to clients as fast as FTP. Controlled tests show that CFS is scalable: with 4,096 servers, looking up a block of data involves contacting only seven servers. The tests also demonstrate nearly perfect robustness and unimpaired performance even when as many as half the servers fail.",
"Regenerating codes are a class of distributed storage codes that allow for efficient repair of failed nodes, as compared to traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n=d+1 . In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d ≥ 2k-2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network, to be chosen independent of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n=d+1, k, d ≥ 2k-1].",
"We consider exact repair of failed nodes in maximum distance separable (MDS) code based distributed storage systems. It is well known that an (n, k) MDS code can tolerate failure (erasure) of up to n − k storage disks, when the code is used to store k information elements over n distributed storage disks. The focus of this paper is optimal recovery, in terms of repair bandwidth - the amount of data to be downloaded to repair a failed node - for a single failed node. When a single node fails, it has been previously shown by Dimakis et. al. that the amount of repair bandwidth is at least equation units, when each storage disk stores ℒ units of data. The achievability of this lower bound of equation units, for arbitrary values of (n, k); has been shown previously using asymptotic code constructions based on asymptotic interference alignment. However, the existence of finite codes satisfying this lower bound has been shown only for specific regimes of (n, k) and their existence for arbitrary values of (n, k) remained open. In this paper, we provide the first known construction of a finite code for arbitrary (n, k), which can repair a single failed systematic node by downloading exactly equation units of data. The code that we construct is based on permutation matrices and hence termed the Permutation Code.",
"Maximum distance separable (MDS) codes are optimal error-correcting codes in the sense that they provide the maximum failure tolerance for a given number of parity nodes. Suppose that an MDS code with @math information nodes and @math parity nodes is used to encode data in a distributed storage system. It is known that if @math out of the @math nodes are inaccessible and @math surviving (helper) nodes are used to recover the lost data, then we need to download at least @math fraction of the data stored in each of the helper nodes ( , 2010 and , 2013). If this lower bound is achieved for the repair of any @math erased nodes from any @math helper nodes, we say that the MDS code has the @math -optimal repair property. We study high-rate MDS array codes with the optimal repair property (also known as minimum storage regenerating codes, or MSR codes). Explicit constructions of such codes in the literature are only available for the cases where there are at most three parity nodes, and these existing constructions can only optimally repair a single node failure by accessing all the surviving nodes. In this paper, given any @math and @math , we present two explicit constructions of MDS array codes with the @math -optimal repair property for all @math and @math simultaneously. Codes in the first family can be constructed over any base field @math as long as @math , where @math . The encoding, decoding, repair of failed nodes, and update procedures of these codes all have low complexity. Codes in the second family have the optimal access property and can be constructed over any base field @math as long as @math . Moreover, both code families have the optimal error resilience capability when repairing failed nodes. We also construct several other related families of MDS codes with the optimal repair property."
]
} |
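The benefit of using more helpers, noted in the abstract above, follows from the standard MSR-point repair-bandwidth expression from the regenerating-codes literature, where each of the d helpers sends β = M/(k(d−k+1)) units. A minimal numeric sketch (the parameter values are illustrative):

```python
from fractions import Fraction

def msr_repair_bandwidth(M, k, d):
    """Per-helper download beta and total repair traffic gamma = d * beta
    at the MSR point: beta = M / (k * (d - k + 1))."""
    beta = Fraction(M, k * (d - k + 1))
    return beta, d * beta

# File of M = 12 units, k = 3 data-recovery nodes; vary the helper count d.
for d in (3, 4, 5, 6):
    beta, gamma = msr_repair_bandwidth(12, 3, d)
    # d=3 -> gamma=12, d=4 -> 8, d=5 -> 20/3, d=6 -> 6:
    # more helpers means strictly less total repair traffic.
```

This monotone decrease in total traffic as d grows is what makes bandwidth-adaptive codes attractive: a single code that repairs optimally for every d in {k, ..., n−1} can exploit however many helpers happen to be reachable.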
1708.03416 | 2750326862 | Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as pose guided structured region ensemble network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms. | Generative methods fit a predefined hand model to the input data using optimization algorithms to obtain the optimized hand pose, such as PSO (particle swarm optimization) @cite_73 , ICP (Iterative Closest Point) @cite_69 and their combination (PSO-ICP) @cite_65 . Hand-crafted energy functions that describe the distance between the hand model and the input image are utilized in prior works, such as the golden energy @cite_73 and the silver energy @cite_7 . Several kinds of hand models have been adopted, including the sphere model @cite_65 , the sphere-meshes model @cite_43 , the cylinder model @cite_69 and the mesh model @cite_73 . Generative methods are robust to self-occluded or missing areas and are guaranteed to output plausible hand poses. However, they require a complex and time-consuming optimization procedure and are prone to getting trapped in local optima. | {
"cite_N": [
"@cite_69",
"@cite_7",
"@cite_65",
"@cite_43",
"@cite_73"
],
"mid": [
"1923747199",
"2218414108",
"1990947293",
"2552247836",
"2023633446"
],
"abstract": [
"We present a robust method for capturing articulated hand motions in realtime using a single depth camera. Our system is based on a realtime registration process that accurately reconstructs hand poses by fitting a 3D articulated hand model to depth images. We register the hand model using depth, silhouette, and temporal information. To effectively map low-quality depth maps to realistic hand poses, we regularize the registration with kinematic and temporal priors, as well as a data-driven prior built from a database of realistic hand poses. We present a principled way of integrating such priors into our registration optimization to enable robust tracking without severely restricting the freedom of motion. A core technical contribution is a new method for computing tracking correspondences that directly models occlusions typical of single-camera setups. To ensure reproducibility of our results and facilitate future research, we fully disclose the source code of our implementation.",
"We address the problem of hand pose estimation, formulated as an inverse problem. Typical approaches optimize an energy function over pose parameters using a 'black box' image generation procedure. This procedure knows little about either the relationships between the parameters or the form of the energy function. In this paper, we show that we can significantly improving upon black box optimization by exploiting high-level knowledge of the structure of the parameters and using a local surrogate energy function. Our new framework, hierarchical sampling optimization, consists of a sequence of predictors organized into a kinematic hierarchy. Each predictor is conditioned on its ancestors, and generates a set of samples over a subset of the pose parameters. The highly-efficient surrogate energy is used to select among samples. Having evaluated the full hierarchy, the partial pose samples are concatenated to generate a full-pose hypothesis. Several hypotheses are generated using the same procedure, and finally the original full energy function selects the best result. Experimental evaluation on three publically available datasets show that our method is particularly impressive in low-compute scenarios where it significantly outperforms all other state-of-the-art methods.",
"We present a realtime hand tracking system using a depth sensor. It tracks a fully articulated hand under large viewpoints in realtime (25 FPS on a desktop without using a GPU) and with high accuracy (error below 10 mm). To our knowledge, it is the first system that achieves such robustness, accuracy, and speed simultaneously, as verified on challenging real data. Our system is made of several novel techniques. We model a hand simply using a number of spheres and define a fast cost function. Those are critical for realtime performance. We propose a hybrid method that combines gradient based and stochastic optimization methods to achieve fast convergence and good accuracy. We present new finger detection and hand initialization methods that greatly enhance the robustness of tracking.",
"Modern systems for real-time hand tracking rely on a combination of discriminative and generative approaches to robustly recover hand poses. Generative approaches require the specification of a geometric model. In this paper, we propose the use of sphere-meshes as a novel geometric representation for real-time generative hand tracking. How tightly this model fits a specific user heavily affects tracking precision. We derive an optimization to non-rigidly deform a template model to fit the user data in a number of poses. This optimization jointly captures the user's static and dynamic hand geometry, thus facilitating high-precision registration. At the same time, the limited number of primitives in the tracking template allows us to retain excellent computational performance. We confirm this by embedding our models in an open source real-time registration algorithm to obtain a tracker steadily running at 60Hz. We demonstrate the effectiveness of our solution by qualitatively and quantitatively evaluating tracking precision on a variety of complex motions. We show that the improved tracking accuracy at high frame-rate enables stable tracking of extended and complex motion sequences without the need for per-frame re-initialization. To enable further research in the area of high-precision hand tracking, we publicly release source code and evaluation datasets.",
"We present a new real-time hand tracking system based on a single depth camera. The system can accurately reconstruct complex hand poses across a variety of subjects. It also allows for robust tracking, rapidly recovering from any temporary failures. Most uniquely, our tracker is highly flexible, dramatically improving upon previous approaches which have focused on front-facing close-range scenarios. This flexibility opens up new possibilities for human-computer interaction with examples including tracking at distances from tens of centimeters through to several meters (for controlling the TV at a distance), supporting tracking using a moving depth camera (for mobile scenarios), and arbitrary camera placements (for VR headsets). These features are achieved through a new pipeline that combines a multi-layered discriminative reinitialization strategy for per-frame pose estimation, followed by a generative model-fitting stage. We provide extensive technical details and a detailed qualitative and quantitative analysis."
]
} |
1708.03416 | 2750326862 | Abstract Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as pose guided structured region ensemble network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms. | Although our proposed Pose-REN follows the idea of feature region ensemble as REN @cite_55 , there are several essential differences between Pose-REN and REN @cite_55 : 1) Different from REN, which uses grid region feature extraction, the proposed Pose-REN fully exploits an initially estimated hand pose as the guided information to extract more representative features from the CNN, which is shown to have a large impact on the hand pose estimation problem, as discussed in . 2) Instead of simple feature fusion as adopted in REN, our Pose-REN presents a structured region ensemble strategy that better models the connections and constraints between different joints in the hand. 3) Pose-REN is a common framework that can easily be made compatible with any existing method (for example, Feedback @cite_29 , DeepModel @cite_27 , etc.) by using it to produce initial estimations for Pose-REN. | {
"cite_N": [
"@cite_27",
"@cite_55",
"@cite_29"
],
"mid": [
"2466381304",
"2587626861",
"2210697964"
],
"abstract": [
"Previous learning based hand pose estimation methods do not fully exploit the prior information in hand model geometry. Instead, they usually rely on a separate model fitting step to generate valid hand poses. Such a post processing is inconvenient and sub-optimal. In this work, we propose a model based deep learning approach that adopts a forward kinematics based layer to ensure the geometric validity of estimated poses. For the first time, we show that embedding such a non-linear generative process in deep learning is feasible for hand pose estimation. Our approach is verified on challenging public datasets and achieves state-of-the-art performance.",
"Hand pose estimation from monocular depth images is an important and challenging problem for human-computer interaction. Recently deep convolutional networks (ConvNet) with sophisticated design have been employed to address it, but the improvement over traditional methods is not so apparent. To promote the performance of directly 3D coordinate regression, we propose a tree-structured Region Ensemble Network (REN), which partitions the convolution outputs into regions and integrates the results from multiple regressors on each regions. Compared with multi-model ensemble, our model is completely end-to-end training. The experimental results demonstrate that our approach achieves the best performance among state-of-the-arts on two public datasets.",
"We propose an entirely data-driven approach to estimating the 3D pose of a hand given a depth image. We show that we can correct the mistakes made by a Convolutional Neural Network trained to predict an estimate of the 3D pose by using a feedback loop. The components of this feedback loop are also Deep Networks, optimized using training data. They remove the need for fitting a 3D model to the input data, which requires both a carefully designed fitting function and algorithm. We show that our approach outperforms state-of-the-art methods, and is efficient as our implementation runs at over 400 fps on a single GPU."
]
} |
1708.03416 | 2750326862 | Abstract Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as pose guided structured region ensemble network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms. | @cite_32 proposed a method to iteratively refine the hand pose using hand-crafted 3D pose index features that are invariant to viewpoint transformation. @cite_31 proposed a post-refinement method to refine each joint independently using multiscale input regions centered on the initially estimated hand joints. These works have to train multiple models for refinement and independently predict different parts of hand joints, while our proposed method needs only one model to iteratively improve the estimated hand pose. | {
"cite_N": [
"@cite_31",
"@cite_32"
],
"mid": [
"1702419847",
"1928739709"
],
"abstract": [
"We introduce and evaluate several architectures for Convolutional Neural Networks to predict the 3D joint locations of a hand given a depth map. We first show that a prior on the 3D pose can be easily introduced and significantly improves the accuracy and reliability of the predictions. We also show how to use context efficiently to deal with ambiguities between fingers. These two contributions allow us to significantly outperform the state-of-the-art on several challenging benchmarks, both in terms of accuracy and computation times.",
"We extends the previous 2D cascaded object pose regression work [9] in two aspects so that it works better for 3D articulated objects. Our first contribution is 3D pose-indexed features that generalize the previous 2D parameterized features and achieve better invariance to 3D transformations. Our second contribution is a principled hierarchical regression that is adapted to the articulated object structure. It is therefore more accurate and faster. Comprehensive experiments verify the state-of-the-art accuracy and efficiency of the proposed approach on the challenging 3D hand pose estimation problem, on a public dataset and our new dataset."
]
} |
1708.03416 | 2750326862 | Abstract Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as pose guided structured region ensemble network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms. | @cite_29 presented a feedback loop framework for hand pose estimation. One discriminative network is used to produce an initial hand pose. A depth image is then generated from the initial hand pose using a generative CNN, and an updater network improves the hand pose by comparing the synthetic depth image and the input depth image. However, the depth synthesis network is highly sensitive to annotation errors in the hand poses. | {
"cite_N": [
"@cite_29"
],
"mid": [
"2210697964"
],
"abstract": [
"We propose an entirely data-driven approach to estimating the 3D pose of a hand given a depth image. We show that we can correct the mistakes made by a Convolutional Neural Network trained to predict an estimate of the 3D pose by using a feedback loop. The components of this feedback loop are also Deep Networks, optimized using training data. They remove the need for fitting a 3D model to the input data, which requires both a carefully designed fitting function and algorithm. We show that our approach outperforms state-of-the-art methods, and is efficient as our implementation runs at over 400 fps on a single GPU."
]
} |
1708.03416 | 2750326862 | Abstract Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as pose guided structured region ensemble network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms. | @cite_33 integrated cascaded and hierarchical regression into a CNN framework using a spatial attention mechanism. The partial hand joints are iteratively refined using transformed features generated by the spatial attention module. In their method, the features in the cascaded framework are generated by an initial CNN and remain unchanged in each refinement stage except for the spatial transformation. In our proposed method, feature maps are updated in each cascaded stage using an end-to-end framework, which helps to learn more effective features for hand pose estimation. | {
"cite_N": [
"@cite_33"
],
"mid": [
"2952561223"
],
"abstract": [
"Discriminative methods often generate hand poses kinematically implausible, then generative methods are used to correct (or verify) these results in a hybrid method. Estimating 3D hand pose in a hierarchy, where the high-dimensional output space is decomposed into smaller ones, has been shown effective. Existing hierarchical methods mainly focus on the decomposition of the output space while the input space remains almost the same along the hierarchy. In this paper, a hybrid hand pose estimation method is proposed by applying the kinematic hierarchy strategy to the input space (as well as the output space) of the discriminative method by a spatial attention mechanism and to the optimization of the generative method by hierarchical Particle Swarm Optimization (PSO). The spatial attention mechanism integrates cascaded and hierarchical regression into a CNN framework by transforming both the input(and feature space) and the output space, which greatly reduces the viewpoint and articulation variations. Between the levels in the hierarchy, the hierarchical PSO forces the kinematic constraints to the results of the CNNs. The experimental results show that our method significantly outperforms four state-of-the-art methods and three baselines on three public benchmarks."
]
} |
1708.03416 | 2750326862 | Abstract Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as pose guided structured region ensemble network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms. | @cite_30 proposed a hierarchical recurrent neural network (RNN) for skeleton-based human action recognition. The whole skeleton is divided into five parts and fed into different branches of the RNN. Different parts of the skeleton are hierarchically fused to generate higher-level representations. @cite_22 proposed a tree-shaped CNN structure which regresses local poses at different branches and fuses all features in the last layer. In their structure, features of different partial poses are learned independently except for sharing features in very early layers. In contrast, our method shares features in the convolutional layers for all joints and hierarchically fuses different regions from the feature maps to finally estimate the hand pose. The shared features enable a better representation of the hand pose, and the hierarchical structure of feature fusion can better model the correlation of different hand joints. | {
"cite_N": [
"@cite_30",
"@cite_22"
],
"mid": [
"1950788856",
"2619078928"
],
"abstract": [
"Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.",
"Despite recent advances in 3D pose estimation of human hands, especially thanks to the advent of CNNs and depth cameras, this task is still far from being solved. This is mainly due to the highly non-linear dynamics of fingers, which make hand model training a challenging task. In this paper, we exploit a novel hierarchical tree-like structured CNN, in which branches are trained to become specialized in predefined subsets of hand joints, called local poses. We further fuse local pose features, extracted from hierarchical CNN branches, to learn higher order dependencies among joints in the final pose by end-to-end training. Lastly, the loss function used is also defined to incorporate appearance and physical constraints about doable hand motion and deformation. Finally, we introduce a non-rigid data augmentation approach to increase the amount of training depth data. Experimental results suggest that feeding a tree-shaped CNN, specialized in local poses, into a fusion network for modeling joints correlations and dependencies, helps to increase the precision of final estimations, outperforming state-of-the-art results on NYU and SyntheticHand datasets."
]
} |
1708.03276 | 2744710511 | Binarization of degraded historical manuscript images is an important pre-processing step for many document processing tasks. We formulate binarization as a pixel classification learning task and apply a novel Fully Convolutional Network (FCN) architecture that operates at multiple image scales, including full resolution. The FCN is trained to optimize a continuous version of the Pseudo F-measure metric and an ensemble of FCNs outperform the competition winners on 4 of 7 DIBCO competitions. This same binarization technique can also be applied to different domains such as Palm Leaf Manuscripts with good performance. We analyze the performance of the proposed model w.r.t. the architectural hyperparameters, size and diversity of training data, and the input features chosen. | Pastor- explored the use of Convolutional Neural Networks (CNN) to classify each pixel given its 19x19 neighborhood of intensity values @cite_6 . They report an FM of 87.74 on DIBCO 2013 compared to 92.70 achieved by the competition winner. trained an Extremely Randomized Trees classifier using a wide variety of statistical and heuristic features extracted from various neighborhoods around the pixel of interest. Because the number of background pixels greatly exceeds the number of foreground pixels, they heuristically sampled a training set to balance both classes. In contrast, we directly optimize the Pseudo F-measure instead of determining the precision-recall tradeoff through sampling. | {
"cite_N": [
"@cite_6"
],
"mid": [
"802468761"
],
"abstract": [
"Convolutional Neural Networks have systematically shown good performance in Computer Vision and in Handwritten Text Recognition tasks. This paper proposes the use of these models for document image binarization. The main idea is to classify each pixel of the image into foreground and background from a sliding window centered at the pixel to be classified. An experimental analysis on the effect of sensitive parameters and some working topologies are proposed using two different corpora, of very different properties: DIBCO and Santgall."
]
} |
1708.03276 | 2744710511 | Binarization of degraded historical manuscript images is an important pre-processing step for many document processing tasks. We formulate binarization as a pixel classification learning task and apply a novel Fully Convolutional Network (FCN) architecture that operates at multiple image scales, including full resolution. The FCN is trained to optimize a continuous version of the Pseudo F-measure metric and an ensemble of FCNs outperform the competition winners on 4 of 7 DIBCO competitions. This same binarization technique can also be applied to different domains such as Palm Leaf Manuscripts with good performance. We analyze the performance of the proposed model w.r.t. the architectural hyperparameters, size and diversity of training data, and the input features chosen. | proposed FCNs for the more general semantic segmentation problem in natural images @cite_16 . Both and combined Conditional Random Fields (CRF) with FCNs to improve localization and consistency of predictions @cite_23 @cite_3 . These FCNs heavily downsample inputs, which results in poor localization. Thus prior FCNs are not good models for document image binarization. | {
"cite_N": [
"@cite_16",
"@cite_3",
"@cite_23"
],
"mid": [
"2952632681",
"1923697677",
""
],
"abstract": [
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
""
]
} |