| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1603.08592 | 2334314520 | Tracking many vehicles in wide coverage aerial imagery is crucial for understanding events in a large field of view. Most approaches aim to associate detections from frame differencing into tracks. However, slow or stopped vehicles result in long-term missing detections and further cause tracking discontinuities. Relying merely on appearance cues to recover missing detections is difficult as targets are extremely small and in grayscale. In this paper, we address the limitations of detection association methods by coupling them with a local context tracker (LCT), which does not rely on motion detections. On one hand, our LCT learns neighboring spatial relations and tracks each target in consecutive frames using graph optimization. It takes advantage of context constraints to avoid drifting to nearby targets. We generate hypotheses from sparse and dense flow efficiently to keep solutions tractable. On the other hand, we use a detection association strategy to extract short tracks in batch processing. We explicitly handle merged detections by generating additional hypotheses from them. Our evaluation on wide area aerial imagery sequences shows significant improvement over state-of-the-art methods. | Using context information is an appealing strategy for tracking against distracters and occlusion. This concept has been applied to single-target tracking @cite_20 @cite_11 @cite_14 . Recently, @cite_34 incorporated spatial constraints into tracking-by-detection. Nevertheless, these trackers require accurate annotations for initialization. It is not trivial to directly apply these approaches in WAMI, where the number of targets varies with time and perfect initialization is not available. | {
"cite_N": [
"@cite_34",
"@cite_14",
"@cite_20",
"@cite_11"
],
"mid": [
"2049457635",
"2051985940",
"2023640470",
"1986426688"
],
"abstract": [
"Model-free trackers can track arbitrary objects based on a single (bounding-box) annotation of the object. Whilst the performance of model-free trackers has recently improved significantly, simultaneously tracking multiple objects with similar appearance remains very hard. In this paper, we propose a new multi-object model-free tracker (using a tracking-by-detection framework) that resolves this problem by incorporating spatial constraints between the objects. The spatial constraints are learned along with the object detectors using an online structured SVM algorithm. The experimental evaluation of our structure-preserving object tracker (SPOT) reveals substantial performance improvements in multi-object tracking. We also show that SPOT can improve the performance of single-object trackers by simultaneously tracking different parts of the object. Moreover, we show that SPOT can be used to adapt generic, model-based object detectors during tracking to tailor them towards a specific instance of that object.",
"Enormous uncertainties in unconstrained environments lead to a fundamental dilemma that many tracking algorithms have to face in practice: Tracking has to be computationally efficient, but verifying whether or not the tracker is following the true target tends to be demanding, especially when the background is cluttered and or when occlusion occurs. Due to the lack of a good solution to this problem, many existing methods tend to be either effective but computationally intensive by using sophisticated image observation models or efficient but vulnerable to false alarms. This greatly challenges long-duration robust tracking. This paper presents a novel solution to this dilemma by considering the context of the tracking scene. Specifically, we integrate into the tracking process a set of auxiliary objects that are automatically discovered in the video on the fly by data mining. Auxiliary objects have three properties, at least in a short time interval: 1) persistent co-occurrence with the target, 2) consistent motion correlation to the target, and 3) easy to track. Regarding these auxiliary objects as the context of the target, the collaborative tracking of these auxiliary objects leads to efficient computation as well as strong verification. Our extensive experiments have exhibited exciting performance in very challenging real-world testing cases.",
"Visual tracking in unconstrained environments is very challenging due to the existence of several sources of varieties such as changes in appearance, varying lighting conditions, cluttered background, and frame-cuts. A major factor causing tracking failure is the emergence of regions having similar appearance as the target. It is even more challenging when the target leaves the field of view (FoV) leading the tracker to follow another similar object, and not reacquire the right target when it reappears. This paper presents a method to address this problem by exploiting the context on-the-fly in two terms: Distracters and Supporters. Both of them are automatically explored using a sequential randomized forest, an online template-based appearance model, and local features. Distracters are regions which have similar appearance as the target and consistently co-occur with high confidence score. The tracker must keep tracking these distracters to avoid drifting. Supporters, on the other hand, are local key-points around the target with consistent co-occurrence and motion correlation in a short time span. They play an important role in verifying the genuine target. Extensive experiments on challenging real-world video sequences show the tracking improvement when using this context information. Comparisons with several state-of-the-art approaches are also provided.",
"Objects are usually embedded into context. Visual context has been successfully used in object detection tasks, however, it is often ignored in object tracking. We propose a method to learn supporters which are, be it only temporally, useful for determining the position of the object of interest. Our approach exploits the General Hough Transform strategy. It couples the supporters with the target and naturally distinguishes between strongly and weakly coupled motions. By this, the position of an object can be estimated even when it is not seen directly (e.g., fully occluded or outside of the image region) or when it changes its appearance quickly and significantly. Experiments show substantial improvements in model-free tracking as well as in the tracking of “virtual” points, e.g., in medical applications."
]
} |
1603.08592 | 2334314520 | Tracking many vehicles in wide coverage aerial imagery is crucial for understanding events in a large field of view. Most approaches aim to associate detections from frame differencing into tracks. However, slow or stopped vehicles result in long-term missing detections and further cause tracking discontinuities. Relying merely on appearance cues to recover missing detections is difficult as targets are extremely small and in grayscale. In this paper, we address the limitations of detection association methods by coupling them with a local context tracker (LCT), which does not rely on motion detections. On one hand, our LCT learns neighboring spatial relations and tracks each target in consecutive frames using graph optimization. It takes advantage of context constraints to avoid drifting to nearby targets. We generate hypotheses from sparse and dense flow efficiently to keep solutions tractable. On the other hand, we use a detection association strategy to extract short tracks in batch processing. We explicitly handle merged detections by generating additional hypotheses from them. Our evaluation on wide area aerial imagery sequences shows significant improvement over state-of-the-art methods. | Most WAMI tracking approaches focus on associating noisy motion detections into tracks. Motion detections are typically acquired by applying frame differencing methods to stabilized imagery. Perera et al. @cite_23 propose to first generate short tracks using a nearest-neighbor strategy and then handle split-then-merge situations in track linking. Reilly et al. @cite_47 formulate the data association problem as bipartite matching solved with the Hungarian algorithm (a toy sketch of this formulation follows this row). Prokaj et al. @cite_19 extract tracklets from detections using a Bayesian network. Shi et al. @cite_0 associate motion detections by rank-1 tensor optimization. Keck et al. @cite_35 provide a real-time implementation for tracking based on multiple hypothesis tracking. The object-centric association method is proposed in @cite_21 to relax the one-to-one matching assumption for motion detections; additional context constraints are used to alleviate track intersection. Chen and Medioni @cite_26 extract tracklets by finding the longest path through detection trees. These trackers rely mainly on motion detections and therefore cannot recover long-term missing detections from slow or stopped vehicles. | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_21",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_47"
],
"mid": [
"2072054300",
"1932446064",
"2046752030",
"2018776447",
"2119320643",
"2120079785",
"1491152520"
],
"abstract": [
"Live wide-area persistent surveillance (WAPS) systems must provide effective multi-target tracking on downlinked video streams in real-time. This paper presents the first published aerial tracking system that is documented to process over 100 megapixels per second. The implementation addresses the challenges with the mosaicked, low-resolution, grayscale NITF imagery provided by most currently fielded WAPS platforms and the flexible computation architecture required to provide real-time performance. This paper also provides ground-truth for repeatable evaluation of wide-area persistent surveillance on a 2009 dataset collected by AFRL [1] that is available to the public as well as a quantitative analysis of this real-time implementation. To our knowledge, this is the only publication that (1) provides details of a real-time implementation for detection and tracking in (2) mosaicked, composed imagery from a fielded WAPS sensor, and (3) provides annotation data and quantitative analysis for repeatable WAPS tracking experimentation in the computer vision community.",
"We propose a novel approach to track multiple targets with weak appearance in low frame rate wide area aerial videos. In real world scenarios, non-linear motion such as sharp turns after slowing down or U-shape trajectories occur. Performing accurate matching without introducing undesired trajectories is very challenging. To tackle various motion patterns, we sequentially optimizing an objective function and propagating motion information at each time step in a sliding temporal window. We show how to exploit an optimal short track (tracklet) for each detection in the first frame of each window using dynamic programming. Tracklets obtained in the window are then associated with existing tracks iteratively to form final tracks. We reduce false alarms in background subtraction motion detection with the aid of optical flow. Our system is tested on two challenging datasets. The quantitative evaluation on a long annotated aerial video sequence shows that the proposed approach outperforms state-of-the-art detection and tracking methods in all common axes of evaluation metrics.",
"This paper presents a novel framework for tracking thousands of vehicles in high resolution, low frame rate, multiple camera aerial videos. The proposed algorithm avoids the pitfalls of global minimization of data association costs and instead maintains multiple object-centric associations for each track. Representation of object state in terms of many to many data associations per track is proposed and multiple novel constraints are introduced to make the association problem tractable while allowing sharing of detections among tracks. Weighted hypothetical measurements are introduced to better handle occlusions, mis-detections and split or merged detections. A two-frame differencing method is presented which performs simultaneous moving object detection in both. Two novel contextual constraints of vehicle following model, and discouragement of track intersection and merging are also proposed. Extensive experiments on challenging, ground truthed data sets are performed to show the feasibility and superiority of the proposed approach. Results of quantitative comparison with existing approaches are presented, and the efficacy of newly introduced constraints is experimentally established. The proposed algorithm performs better and faster than global, 1---1 data association methods.",
"In this paper we formulate multi-target tracking (MTT) as a rank-1 tensor approximation problem and propose an l1 norm tensor power iteration solution. In particular, a high order tensor is constructed based on trajectories in the time window, with each tensor element as the affinity of the corresponding trajectory candidate. The local assignment variables are the l1 normalized vectors, which are used to approximate the rank-1 tensor. Our approach provides a flexible and effective formulation where both pairwise and high-order association energies can be used expediently. We also show the close relation between our formulation and the multi-dimensional assignment (MDA) model. To solve the optimization in the rank-1 tensor approximation, we propose an algorithm that iteratively powers the intermediate solution followed by an l1 normalization. Aside from effectively capturing high-order motion information, the proposed solver runs efficiently with proved convergence. The experimental validations are conducted on two challenging datasets and our method demonstrates promising performances on both.",
"Recent work on multi-object tracking has shown the promise of tracklet-based methods. In this work we present a method which infers tracklets then groups them into tracks. It overcomes some of the disadvantages of existing methods, such as the use of heuristics or non-realistic constraints. The main idea is to formulate the data association problem as inference in a set of Bayesian networks. This avoids exhaustive evaluation of data association hypotheses, provides a confidence estimate of the solution, and handles split-merge observations. Consistency of motion and appearance is the driving force behind finding the MAP data association estimate. The computed tracklets are then used in a complete multi-object tracking algorithm, which is evaluated on a vehicle tracking task in an aerial surveillance context. Very good performance is achieved on challenging video sequences. Track fragmentation is nearly non-existent, and false alarm rates are low.",
"A fundamental requirement for effective automated analysis of object behavior and interactions in video is that each object must be consistently identified over time. This is difficult when the objects are often occluded for long periods: nearly all tracking algorithms will terminate a track with loss of identity on a long gap. The problem is further confounded by objects in close proximity, tracking failures due to shadows, etc. Recently, some work has been done to address these issues using higher level reasoning, by linking tracks from multiple objects over long gaps. However, these efforts have assumed a one-to-one correspondence between tracks on either side of the gap. This is often not true in real scenarios of interest, where the objects are closely spaced and dynamically occlude each other, causing trackers to merge objects into single tracks. In this paper, we show how to efficiently handle splitting and merging during track linking. Moreover, we show that we can maintain the identities of objects that merge together and subsequently split. This enables the identity of objects to be maintained throughout long sequences with difficult conditions. We demonstrate our approach on a highly challenging, oblique-view video sequence of dense traffic of a highway interchange. We successfully track the large majority of the hundreds of moving vehicles in the scene, many in close proximity, through long occlusions and shadows.",
"In this paper, we tackle the problem of object detection and tracking in a new and challenging domain of wide area surveillance. This problem poses several challenges: large camera motion, strong parallax, large number of moving objects, small number of pixels on target, single channel data and low framerate of video. We propose a method that overcomes these challenges and evaluate it on CLIF dataset. We use median background modeling which requires few frames to obtain a workable model. We remove false detections due to parallax and registration errors using gradient information of the background image. In order to keep complexity of the tracking problem manageable, we divide the scene into grid cells, solve the tracking problem optimally within each cell using bipartite graph matching and then link tracks across cells. Besides tractability, grid cells allow us to define a set of local scene constraints such as road orientation and object context. We use these constraints as part of cost function to solve the tracking problem which allows us to track fast-moving objects in low framerate videos. In addition to that, we manually generated groundtruth for four sequences and performed quantitative evaluation of the proposed algorithm."
]
} |
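
Several of the association methods surveyed in the row above (notably the Hungarian-algorithm formulation) reduce frame-to-frame data association to optimal bipartite matching between predicted track positions and new detections. The following minimal sketch illustrates only that general idea; it is not a reimplementation of any cited system, and the Euclidean cost and the `gate_dist` threshold are invented for illustration.

```python
# Toy detection-to-track association via optimal bipartite matching.
# Illustrative sketch only; real WAMI trackers use richer costs
# (motion, appearance, context) and handle split/merged detections.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_preds, detections, gate_dist=20.0):
    """Match predicted track positions to detections by Euclidean cost."""
    cost = np.linalg.norm(track_preds[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)        # optimal one-to-one matching
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate_dist]
    unmatched_tracks = set(range(len(track_preds))) - {r for r, _ in matches}
    unmatched_dets = set(range(len(detections))) - {c for _, c in matches}
    return matches, unmatched_tracks, unmatched_dets

tracks = np.array([[10.0, 10.0], [50.0, 40.0]])     # predicted positions
dets = np.array([[12.0, 11.0], [49.0, 38.0], [200.0, 200.0]])
print(associate(tracks, dets))                      # detection 2 stays unmatched
```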
1603.08592 | 2334314520 | Tracking many vehicles in wide coverage aerial imagery is crucial for understanding events in a large field of view. Most approaches aim to associate detections from frame differencing into tracks. However, slow or stopped vehicles result in long-term missing detections and further cause tracking discontinuities. Relying merely on appearance cues to recover missing detections is difficult as targets are extremely small and in grayscale. In this paper, we address the limitations of detection association methods by coupling them with a local context tracker (LCT), which does not rely on motion detections. On one hand, our LCT learns neighboring spatial relations and tracks each target in consecutive frames using graph optimization. It takes advantage of context constraints to avoid drifting to nearby targets. We generate hypotheses from sparse and dense flow efficiently to keep solutions tractable. On the other hand, we use a detection association strategy to extract short tracks in batch processing. We explicitly handle merged detections by generating additional hypotheses from them. Our evaluation on wide area aerial imagery sequences shows significant improvement over state-of-the-art methods. | Xiao et al. @cite_2 propose to use appearance and shape templates to handle missing detections. To avoid drifting, they use road network information and consider pairwise spatial relations in optimization. However, road network information is not always available, and considering spatial relations in Hungarian optimization is costly. Basharat et al. @cite_4 apply an appearance-based tracker whenever detection association fails for a track or the motion of a target is slow. More recently, a hybrid approach that combines detection-based tracking with a regression-based tracker was proposed @cite_46 to handle stop-then-go vehicles. Nevertheless, relying only on weak appearance information makes these trackers prone to drift and limits their ability to recover missing detections. | {
"cite_N": [
"@cite_46",
"@cite_4",
"@cite_2"
],
"mid": [
"2004506571",
"2010593049",
""
],
"abstract": [
"Persistent surveillance of large geographic areas from unmanned aerial vehicles allows us to learn much about the daily activities in the region of interest. Nearly all of the approaches addressing tracking in this imagery are detection-based and rely on background subtraction or frame differencing to provide detections. This, however, makes it difficult to track targets once they slow down or stop, which is not acceptable for persistent tracking, our goal. We present a multiple target tracking approach that does not exclusively rely on background subtraction and is better able to track targets through stops. It accomplishes this by effectively running two trackers in parallel: one based on detections from background subtraction providing target initialization and reacquisition, and one based on a target state regressor providing frame to frame tracking. We evaluated the proposed approach on a long sequence from a wide area aerial imagery dataset, and the results show improved object detection rates and ID-switch rates with limited increases in false alarms compared to the competition.",
"We present a real-time, full-frame, multi-target Wide Area Motion Imagery (WAMI) tracking system that utilizes distributed processing to handle high data rates while maintaining high track quality. The proposed architecture processes the WAMI data as a series of geospatial tiles and implements both process- and thread-level parallelism across multiple compute nodes. Each tile is processed independently, from decoding the image through generating tracks that are finally merged across all tiles by an inter-tile linker (ITL) module. A high performance PostgreSQL database with GIS extensions is used to control the flow of intermediate data between each tracking process. High quality tracks are produced efficiently due to robust, effective algorithmic modules including: multi-frame moving object detection and track initialization; tracking based on the fusion of motion and appearance with a goal of very pure tracks; and online track linking based on multiple features. In addition, we have configured a high-performance compute cluster using high density blade servers, Infiniband networking, and an HPC filesystem. The compute cluster enables full-frame, state-of-the-art tracking of vehicles or dismounts at the WAMI sensor's native 1.25Hz frame-rate, while only taking 7u of rack space and providing 210 megapixels second throughput.",
""
]
} |
1603.08754 | 2951560603 | Scaling up visual category recognition to large numbers of classes remains challenging. A promising research direction is zero-shot learning, which does not require any training data to recognize new classes, but rather relies on some form of auxiliary information describing the new classes. Ultimately, this may make it possible to use the textbook knowledge that humans employ to learn about new classes by transferring knowledge from classes they know well. The most successful zero-shot learning approaches currently require a particular type of auxiliary information -- namely attribute annotations performed by humans -- that is not readily available for most classes. Our goal is to circumvent this bottleneck by substituting such annotations by extracting multiple pieces of information from multiple unstructured text sources readily available on the web. To compensate for the weaker form of auxiliary information, we incorporate stronger supervision in the form of semantic part annotations on the classes from which we transfer knowledge. We achieve our goal by a joint embedding framework that maps multiple text parts as well as multiple semantic parts into a common space. Our results consistently and significantly improve on the state-of-the-art in zero-shot recognition and retrieval. | Similar embedding principles, often combined with recurrent neural networks @cite_42 or a dependency parser @cite_52 , have recently been applied to image-to-text retrieval @cite_21 @cite_31 @cite_3 , language generation @cite_28 @cite_24 @cite_0 , and question answering about images @cite_5 @cite_14 @cite_13 . Our work follows the latest research in joint modeling of language and vision features by formulating an embedding of visual and textual representations in a joint space. In contrast to prior work, our approach accommodates and effectively integrates a wide range of textual representations and uses strong supervision in the form of semantic parts that remain optional at test time. In other words, we combine the advantages of two frameworks, joint image-text embeddings for zero-shot learning @cite_33 @cite_26 and sentence generation through pairwise similarity between visual and textual fragments @cite_31 @cite_24 , within a unified framework (a toy sketch of the pairwise ranking objective that underlies such joint embeddings follows this row). | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_33",
"@cite_28",
"@cite_42",
"@cite_21",
"@cite_52",
"@cite_3",
"@cite_24",
"@cite_0",
"@cite_5",
"@cite_31",
"@cite_13"
],
"mid": [
"1488163396",
"",
"",
"2951912364",
"",
"1811254738",
"1508977358",
"1931795219",
"2951805548",
"2951183276",
"2952246170",
"2953276893",
"2396147015"
],
"abstract": [
"In this paper, we present the mQA model, which is able to answer questions about the content of an image. The answer can be a sentence, a phrase or a single word. Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. The human judges need to distinguish our model from the human. They will also provide a score (i.e. 0, 1, 2, the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7 of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: this http URL",
"",
"",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.",
"",
"In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu junhua.mao m-RNN.html .",
"This paper describes a system for extracting typed dependency parses of English sentences from phrase structure parses. In order to capture inherent relations occurring in corpus texts that can be critical in real-world applications, many NP relations are included in the set of grammatical relations used. We provide a comparison of our system with Minipar and the Link parser. The typed dependency extraction facility described here is integrated in the Stanford Parser, available for download.",
"In the traditional object recognition pipeline, descriptors are densely sampled over an image, pooled into a high dimensional non-linear representation and then passed to a classifier. In recent years, Fisher Vectors have proven empirically to be the leading representation for a large variety of applications. The Fisher Vector is typically taken as the gradients of the log-likelihood of descriptors, with respect to the parameters of a Gaussian Mixture Model (GMM). Motivated by the assumption that different distributions should be applied for different datasets, we present two other Mixture Models and derive their Expectation-Maximization and Fisher Vector expressions. The first is a Laplacian Mixture Model (LMM), which is based on the Laplacian distribution. The second Mixture Model presented is a Hybrid Gaussian-Laplacian Mixture Model (HGLMM) which is based on a weighted geometric mean of the Gaussian and Laplacian distribution. An interesting property of the Expectation-Maximization algorithm for the latter is that in the maximization step, each dimension in each component is chosen to be either a Gaussian or a Laplacian. Finally, by using the new Fisher Vectors derived from HGLMMs, we achieve state-of-the-art results for both the image annotation and the image search by a sentence tasks.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
"We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus.",
"We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit.",
""
]
} |
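
The joint image-text embedding surveyed in the row above is typically trained with a pairwise margin ranking objective: a matching image-text pair should score higher in the common space than a non-matching pair. The sketch below shows only that core loss; the random projections standing in for CNN and text encoders, the dimensions, and the margin are all invented for illustration.

```python
# Toy margin ranking loss for a joint image-text embedding space.
# Illustrative only: real systems learn the projections end to end.
import numpy as np

rng = np.random.default_rng(0)
W_img = rng.normal(size=(64, 128))   # hypothetical image projection
W_txt = rng.normal(size=(64, 300))   # hypothetical text projection

def embed(W, x):
    v = W @ x
    return v / np.linalg.norm(v)     # unit-normalize so dot product = cosine

def ranking_loss(img, pos_txt, neg_txt, margin=0.2):
    """Hinge loss pushing the matching pair above a non-matching pair."""
    a = embed(W_img, img)
    s_pos = a @ embed(W_txt, pos_txt)
    s_neg = a @ embed(W_txt, neg_txt)
    return max(0.0, margin - s_pos + s_neg)

img = rng.normal(size=128)
print(ranking_loss(img, rng.normal(size=300), rng.normal(size=300)))
```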
1603.07924 | 2302673769 | The Extensible Markup Language (XML) is a complex language, and consequently, XML-based protocols are susceptible to entire classes of implicit and explicit security problems. Message formats in XML-based protocols are usually specified in XML Schema, and as a first-line defense, schema validation should reject malformed input. However, extension points in most protocol specifications break validation. Extension points are wildcards and considered best practice for loose composition, but they also enable an attacker to add unchecked content in a document, e.g., for a signature wrapping attack. This paper introduces datatyped XML visibly pushdown automata (dXVPAs) as a language representation for mixed-content XML and presents an incremental learner that infers a dXVPA from example documents. The learner generalizes XML types and datatypes in terms of automaton states and transitions, and an inferred dXVPA converges to a good-enough approximation of the true language. The automaton is free from extension points and capable of stream validation, e.g., as an anomaly detector for XML-based protocols. For dealing with adversarial training data, two scenarios of poisoning are considered: a poisoning attack is either uncovered at a later time or remains hidden. Unlearning can therefore remove an identified poisoning attack from a dXVPA, and sanitization trims low-frequency states and transitions to get rid of hidden attacks. All algorithms have been evaluated in four scenarios, including a web service implemented in Apache Axis2 and Apache Rampart, where attacks have been simulated. In all scenarios, the learned automaton had zero false positives and outperformed traditional schema validation. Keywords: XML, grammatical inference, visibly pushdown automata, stream validation, anomaly detection, experimental evaluation. | This work focuses on XML stream validation because of large documents and open-ended streams (e.g., XMPP). Stream validation was introduced by Segoufin and Vianu @cite_16 using finite-state machines and pushdown automata. @cite_8 consider document event streams as visibly pushdown languages (VPLs), a class of deterministic context-free languages, and the authors propose XVPAs as a better representation (a toy sketch of this style of stream validation follows this row). XVPAs have therefore been extended with datatypes for text contents. | {
"cite_N": [
"@cite_16",
"@cite_8"
],
"mid": [
"2059078591",
"2099205674"
],
"abstract": [
"This paper investigates the on-line validation of streaming XML documents with respect to a DTD, under memory constraints. We first consider validation using constant memory, formalized by a finite-state automaton ( FSA ). We examine two flavors of the problem, depending on whether or not the XML document is assumed to be well-formed. The main results of the paper provide conditions on the DTDs under which validation of either flavor can be done using an FSA . For DTDs that cannot be validated by an FSA , we investigate two alternatives. The first relaxes the constant memory requirement by allowing a stack bounded in the depth of the XML document, while maintaining the deterministic, one-pass requirement. The second approach consists in refining the DTD to provide additional information that allows validation by an FSA .",
"We propose the study of visibly pushdown automata (VPA) for processing XML documents. VPAs are pushdown automata where the input determines the stack operation, and XML documents are naturally visibly pushdown with the VPA pushing onto the stack on open-tags and popping the stack on close-tags. In this paper we demonstrate the power and ease visibly pushdown automata give in the design of streaming algorithms for XML documents. We study the problems of type-checking streaming XML documents against SDTD schemas, and the problem of typing tags in a streaming XML document according to an SDTD schema. For the latter problem, we consider both pre-order typing and post-order typing of a document, which dynamically determines types at open-tags and close-tags respectively as soon as they are met. We also generalize the problems of pre-order and post-order typing to prefix querying. We show that a deterministic VPA yields an algorithm to the problem of answering in one pass the set of all answers to any query that has the property that a node satisfying the query is determined solely by the prefix leading to the node. All the streaming algorithms we develop in this paper are based on the construction of deterministic VPAs, and hence, for any fixed problem, the algorithms process each element of the input in constant time, and use space (d), where d is the depth of the document."
]
} |
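
The defining property of the visibly pushdown automata discussed in the row above is that the input symbol itself dictates the stack operation: push on an open tag, pop on a close tag. A streaming document can therefore be validated in a single pass with memory bounded by its nesting depth. The sketch below is a hand-rolled toy, not an XVPA implementation, and the tiny schema is invented.

```python
# Toy visibly pushdown-style stream validation of an XML tag-event stream.
SCHEMA = {                      # parent tag -> allowed child tags (invented)
    None: {"envelope"},
    "envelope": {"header", "body"},
    "header": set(),
    "body": {"item"},
    "item": set(),
}

def validate(events):
    stack = []
    for kind, tag in events:            # e.g. ("open", "body")
        if kind == "open":
            parent = stack[-1] if stack else None
            if tag not in SCHEMA.get(parent, set()):
                return False            # unexpected child: reject
            stack.append(tag)           # the open tag forces a push
        else:
            if not stack or stack.pop() != tag:
                return False            # the close tag forces a pop
    return not stack                    # accept only well-nested input

stream = [("open", "envelope"), ("open", "header"), ("close", "header"),
          ("open", "body"), ("open", "item"), ("close", "item"),
          ("close", "body"), ("close", "envelope")]
print(validate(stream))  # True
```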
1603.07924 | 2302673769 | The Extensible Markup Language (XML) is a complex language, and consequently, XML-based protocols are susceptible to entire classes of implicit and explicit security problems. Message formats in XML-based protocols are usually specified in XML Schema, and as a first-line defense, schema validation should reject malformed input. However, extension points in most protocol specifications break validation. Extension points are wildcards and considered best practice for loose composition, but they also enable an attacker to add unchecked content in a document, e.g., for a signature wrapping attack. This paper introduces datatyped XML visibly pushdown automata (dXVPAs) as a language representation for mixed-content XML and presents an incremental learner that infers a dXVPA from example documents. The learner generalizes XML types and datatypes in terms of automaton states and transitions, and an inferred dXVPA converges to a good-enough approximation of the true language. The automaton is free from extension points and capable of stream validation, e.g., as an anomaly detector for XML-based protocols. For dealing with adversarial training data, two scenarios of poisoning are considered: a poisoning attack is either uncovered at a later time or remains hidden. Unlearning can therefore remove an identified poisoning attack from a dXVPA, and sanitization trims low-frequency states and transitions to get rid of hidden attacks. All algorithms have been evaluated in four scenarios, including a web service implemented in Apache Axis2 and Apache Rampart, where attacks have been simulated. In all scenarios, the learned automaton had zero false positives and outperformed traditional schema validation. Keywords: XML, grammatical inference, visibly pushdown automata, stream validation, anomaly detection, experimental evaluation. | Schema inference from a set of documents focuses on finding simple regular expressions for schema productions. Going beyond the expressiveness of DTD, Chidlovskii @cite_20 and Mlýnková and Nečaský @cite_36 propose grammar-based approaches, where infoset tree nodes are turned into productions. These productions are then generalized by determinism constraints @cite_20 and heuristics @cite_36 . @cite_19 propose schema inference in terms of tree automata, where up to @math ancestor elements in a document characterize a type. This work has motivated the use of locality as a generalization strategy. Lexical subsumption for datatype inference was first mentioned by Chidlovskii @cite_20 and @cite_2 ; however, not all XSD datatypes have been considered. The proposed approach considers a datatype choice instead of a single datatype, uses all distinguishable XSD datatypes, and refines a choice with a preference heuristic (a toy sketch of subsumption-based datatype inference follows this row). | {
"cite_N": [
"@cite_36",
"@cite_19",
"@cite_20",
"@cite_2"
],
"mid": [
"1940611973",
"2123229843",
"76168887",
"2154475850"
],
"abstract": [
"In this paper we focus on a specific class of XML schema inference approaches so-called heuristic approaches. Contrary to grammar-inferring approaches, their result does not belong to any specific class of grammars and, hence, we cannot say anything about their features from the point of view of theory of languages. However, the heuristic approaches still form a wider and more popular set of approaches due to natural and user-friendly strategies. We describe a general framework of the inference algorithms and we show how its particular phases can be further enhanced and optimized to get more reasonable and realistic output. The aim of the paper is (1) to provide a general overview of the heuristic inference process and existing approaches, (2) to sum up the improvements and optimizations we have proposed so far in our research group, and (3) to discuss possible extensions and open problems which need to be solved. Hence, it enables the reader to get acquainted with the field fast.",
"Although the presence of a schema enables many optimizations for operations on XML documents, recent studies have shown that many XML documents in practice either do not refer to a schema, or refer to a syntactically incorrect one. It is therefore of utmost importance to provide tools and techniques that can automatically generate schemas from sets of sample documents. While previous work in this area has mostly focused on the inference of Document Type Definitions (DTDs for short), we will consider the inference of XML Schema Definitions (XSDs for short) --- the increasingly popular schema formalism that is turning DTDs obsolete. In contrast to DTDs where the content model of an element depends only on the element's name, the content model in an XSD can also depend on the context in which the element is used. Hence, while the inference of DTDs basically reduces to the inference of regular expressions from sets of sample strings, the inference of XSDs also entails identifying from a corpus of sample documents the contexts in which elements bear different content models. Since a seminal result by Gold implies that no inference algorithm can learn the complete class of XSDs from positive examples only, we focus on a class of XSDs that captures most XSDs occurring in practice. For this class, we provide a theoretically complete algorithm that always infers the correct XSD when a sufficiently large corpus of XML documents is available. In addition, we present a variant of this algorithm that works well on real-world (and therefore incomplete) data sets.",
"",
"XML is the de facto standard format for data exchange on the Web. While it is fairly simple to generate XML data, it is a complex task to design a schema and then guarantee that the generated data is valid according to that schema. As a consequence much XML data does not have a schema or is not accompanied by its schema. In order to gain the benefits of having a schema - efficient querying and storage of XML data, semantic verification, data integration, etc.- this schema must be extracted. In this paper we present an automatic technique, XStruct, for XML Schema extraction. Based on ideas of [5], XStruct extracts a schema for XML data by applying several heuristics to deduce regular expressions that are 1-unambiguous and describe each elements contents correctly but generalized to a reasonable degree. Our approach features several advantages over known techniques: XStruct scales to very large documents (beyond 1GB) both in time and memory consumption; it is able to extract a general, complete, correct, minimal, and understandable schema for multiple documents; it detects datatypes and attributes. Experiments confirm these features and properties."
]
} |
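
Datatype inference by lexical subsumption, mentioned in the row above, amounts to trying candidate datatypes from most to least specific and keeping one whose lexical space covers every observed text value. The sketch below is a toy: the candidate list and regular expressions are invented stand-ins for the full set of distinguishable XSD datatypes.

```python
# Toy datatype inference by lexical subsumption over a tiny lattice.
import re

CANDIDATES = [  # ordered from most to least specific (illustrative subset)
    ("xs:boolean", lambda s: s in {"true", "false", "0", "1"}),
    ("xs:integer", lambda s: re.fullmatch(r"[+-]?\d+", s) is not None),
    ("xs:decimal", lambda s: re.fullmatch(r"[+-]?\d+(\.\d+)?", s) is not None),
    ("xs:string",  lambda s: True),   # top of the lattice: accepts anything
]

def infer_datatype(values):
    for name, accepts in CANDIDATES:
        if all(accepts(v) for v in values):
            return name               # first (most specific) covering type
    return "xs:string"

print(infer_datatype(["1", "0", "true"]))   # xs:boolean
print(infer_datatype(["12", "-3", "4.5"]))  # xs:decimal
print(infer_datatype(["12", "hello"]))      # xs:string
```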
1603.08023 | 2328886022 | We investigate evaluation metrics for dialogue response generation systems where supervised labels, such as task completion, are not available. Recent works in response generation have adopted metrics from machine translation to compare a model's generated response to a single target response. We show that these metrics correlate very weakly with human judgements in the non-technical Twitter domain, and not at all in the technical Ubuntu domain. We provide quantitative and qualitative results highlighting specific weaknesses in existing metrics, and provide recommendations for future development of better automatic evaluation metrics for dialogue systems. | We focus on metrics that are model-independent, i.e., where the model generating the response does not also evaluate its quality; thus, we do not consider word perplexity, although it has been used to evaluate unsupervised dialogue models @cite_21 . This is because it is not computed on a per-response basis and cannot be computed for retrieval models. Further, we only consider metrics that can be used to evaluate proposed responses against ground-truth responses, so we do not consider retrieval-based metrics such as recall, which has been used to evaluate dialogue models @cite_26 @cite_5 (a toy Recall@k sketch follows this row). We also do not consider evaluation methods for the supervised setting; these have been well studied, see @cite_27 @cite_24 @cite_39 . | {
"cite_N": [
"@cite_26",
"@cite_21",
"@cite_39",
"@cite_24",
"@cite_27",
"@cite_5"
],
"mid": [
"2603612888",
"",
"2111151330",
"119047706",
"2950483141",
"836999996"
],
"abstract": [
"",
"",
"Abstract Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogu...",
"Proper usability evaluations of spoken dialogue systems are costly and cumbersome to carry out. In this paper, we present a new approach for facilitating usability evaluations which is based on user error simulations. The idea is to replace real users with simulations derived from empirical observations of users’ erroneous behavior. The simulated errors must cover both system-driven errors (e.g., due to poor speech recognition) as well as conceptual errors and slips of the user, because neither alone is predictive of perceived usability. The simulation is integrated into a workbench which produces reports of typical and rare errors, and which allows usability ratings to be predicted. If successful, this workbench will help designers in making choices between system versions and lower testing costs at early phases of development. Challenges to the approach are discussed and solutions proposed.",
"This paper presents PARADISE (PARAdigm for DIalogue System Evaluation), a general framework for evaluating spoken dialogue agents. The framework decouples task requirements from an agent's dialogue behaviors, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity.",
"This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. We also describe two neural learning architectures suitable for analyzing this dataset, and provide benchmark performance on the task of selecting the best next response."
]
} |
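
For readers unfamiliar with the retrieval-based recall metric that the row above explicitly excludes: Recall@k measures the fraction of test cases in which the true response appears among a model's top-k ranked candidates. A minimal sketch with invented data:

```python
# Toy Recall@k for response-selection (retrieval) dialogue models.
def recall_at_k(ranked_lists, true_responses, k):
    hits = sum(1 for ranked, truth in zip(ranked_lists, true_responses)
               if truth in ranked[:k])
    return hits / len(true_responses)

ranked = [["r2", "r1", "r3"], ["r1", "r3", "r2"]]  # model rankings per case
truths = ["r1", "r2"]                              # ground-truth responses
print(recall_at_k(ranked, truths, k=2))            # 0.5
```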
1603.08023 | 2328886022 | We investigate evaluation metrics for dialogue response generation systems where supervised labels, such as task completion, are not available. Recent works in response generation have adopted metrics from machine translation to compare a model's generated response to a single target response. We show that these metrics correlate very weakly with human judgements in the non-technical Twitter domain, and not at all in the technical Ubuntu domain. We provide quantitative and qualitative results highlighting specific weaknesses in existing metrics, and provide recommendations for future development of better automatic evaluation metrics for dialogue systems. | Several recent works on unsupervised dialogue systems adopt the BLEU score for evaluation (a toy single-reference BLEU sketch follows this row). One line of work formulates the unsupervised learning problem as one of translating a context into a candidate response, using a statistical machine translation (SMT) model to generate responses to various contexts from Twitter data and showing that it outperforms information retrieval baselines according to both BLEU and human evaluations. Later work extends this idea using a recurrent language model to generate responses in a context-sensitive manner, again evaluating with BLEU but producing multiple ground truth responses by retrieving 15 responses from elsewhere in the corpus with a simple bag-of-words model. A proposed diversity-promoting objective function for neural network models has also been evaluated using BLEU score with only a single ground truth response. A modified version of BLEU, deltaBLEU @cite_35 , which takes into account several human-evaluated ground truth responses, is shown to have a weak to moderate correlation with human judgements on Twitter dialogues. However, such human annotation is often infeasible to obtain in practice. It is also shown that, even with several ground truth responses available, the standard BLEU metric does not correlate strongly with human judgements. | {
"cite_N": [
"@cite_35"
],
"mid": [
"2951813108"
],
"abstract": [
"We introduce Discriminative BLEU (deltaBLEU), a novel metric for intrinsic evaluation of generated text in tasks that admit a diverse range of possible outputs. Reference strings are scored for quality by human raters on a scale of [-1, +1] to weight multi-reference BLEU. In tasks involving generation of conversational responses, deltaBLEU correlates reasonably with human judgments and outperforms sentence-level and IBM BLEU in terms of both Spearman's rho and Kendall's tau."
]
} |
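
To make concrete what the BLEU variants discussed in the row above actually compute, the sketch below implements a simplified single-reference sentence BLEU: modified n-gram precision combined with a brevity penalty. Real implementations (and deltaBLEU) add smoothing, clipping over multiple references, and rater weights; the tiny epsilon guard here is an ad hoc simplification.

```python
# Toy single-reference sentence-level BLEU (up to bigrams).
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())        # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    bp = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

hyp = "i am fine thanks".split()
ref = "i am fine thank you".split()
print(round(bleu(hyp, ref), 3))   # geometric mean of precisions times BP
```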
1603.08023 | 2328886022 | We investigate evaluation metrics for dialogue response generation systems where supervised labels, such as task completion, are not available. Recent works in response generation have adopted metrics from machine translation to compare a model's generated response to a single target response. We show that these metrics correlate very weakly with human judgements in the non-technical Twitter domain, and not at all in the technical Ubuntu domain. We provide quantitative and qualitative results highlighting specific weaknesses in existing metrics, and provide recommendations for future development of better automatic evaluation metrics for dialogue systems. | There has been significant previous work evaluating how well automatic metrics correlate with human judgements in both machine translation @cite_19 @cite_40 @cite_2 @cite_30 and natural language generation (NLG) @cite_10 @cite_23 @cite_33 @cite_6 (a toy sketch of such a correlation analysis follows this row). There has also been work criticizing the usefulness of BLEU in particular for machine translation @cite_13 . While many of the criticisms in these works apply to dialogue generation, we note that generating dialogue responses conditioned on the conversational context is in fact a more difficult problem. This is because most of the difficulty in automatically evaluating language generation models lies in the large set of correct answers. Dialogue response generation given solely the context intuitively has a higher diversity (or entropy) than translation given text in a source language, or surface realization given some intermediate form @cite_37 . | {
"cite_N": [
"@cite_30",
"@cite_13",
"@cite_37",
"@cite_33",
"@cite_6",
"@cite_19",
"@cite_40",
"@cite_23",
"@cite_2",
"@cite_10"
],
"mid": [
"2294699749",
"1489525520",
"2102672872",
"1982897610",
"",
"2159107349",
"2065596371",
"2123891489",
"2257408573",
"1552182777"
],
"abstract": [
"Evaluation of segment-level machine translation metrics is currently hampered by: (1) low inter-annotator agreement levels in human assessments; (2) lack of an effective mechanism for evaluation of translations of equal quality; and (3) lack of methods of significance testing improvements over a baseline. In this paper, we provide solutions to each of these challenges and outline a new human evaluation methodology aimed specifically at assessment of segment-level metrics. We replicate the human evaluation component of WMT-13 and reveal that the current state-of-the-art performance of segment-level metrics is better than previously believed. Three segment-level metrics — METEOR, NLEPOR and SENTBLEUMOSES — are found to correlate with human assessment at a level not significantly outperformed by any other metric in both the individual language pair assessment for Spanish-toEnglish and the aggregated set of 9 language pairs.",
"We argue that the machine translation community is overly reliant on the Bleu machine translation evaluation metric. We show that an improved Bleu score is neither necessary nor sufficient for achieving an actual improvement in translation quality, and give two significant counterexamples to Bleu’s correlation with human judgments of quality. This offers new potential for research which was previously deemed unpromising by an inability to improve upon Bleu scores.",
"Conversational dialogue systems cannot be evaluated in a fully formal manner, because dialogue is heavily dependent on context and current dialogue theory is not precise enough to specify a target output ahead of time. Instead, we evaluate dialogue systems in a semi-formal manner, using human judges to rate the coherence of a conversational character and correlating these judgments with measures extracted from within the system. We present a series of three evaluations of a single conversational character over the course of a year, demonstrating how this kind of evaluation helps bring about an improvement in overall dialogue coherence.",
"There is growing interest in using automatically computed corpus-based evaluation metrics to evaluate Natural Language Generation (NLG) systems, because these are often considerably cheaper than the human-based evaluations which have traditionally been used in NLG. We review previous work on NLG evaluation and on validation of automatic metrics in NLP, and then present the results of two studies of how well some metrics which are popular in other areas of NLP (notably BLEU and ROUGE) correlate with human judgments in the domain of computer-generated weather forecasts. Our results suggest that, at least in this domain, metrics may provide a useful measure of language quality, although the evidence for this is not as strong as we would ideally like to see; however, they do not provide a useful measure of content quality. We also discuss a number of caveats which must be kept in mind when interpreting this and other validation studies.",
"",
"This paper presents the results of the WMT10 and MetricsMATR10 shared tasks, which included a translation task, a system combination task, and an evaluation task. We conducted a large-scale manual evaluation of 104 machine translation systems and 41 system combination entries. We used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 26 metrics. This year we also investigated increasing the number of human judgments by hiring non-expert annotators through Amazon's Mechanical Turk.",
"",
"We examine correlations between native speaker judgements on automatically generated German text against automatic evaluation metrics. We look at a number of metrics from the MT and Summarisation communities and find that for a relative ranking task, most automatic metrics perform equally well and have fairly strong correlations to the human judgements. In contrast, on a naturalness judgement task, the General Text Matcher (GTM) tool correlates best overall, although in general, correlation between the human judgements and the automatic metrics was quite weak.",
"This paper presents the results of the WMT14 shared tasks, which included a standard news translation task, a separate medical translation task, a task for run-time estimation of machine translation quality, and a metrics task. This year, 143 machine translation systems from 23 institutions were submitted to the ten translation directions in the standard translation task. An additional 6 anonymized systems were included, and were then evaluated both automatically and manually. The quality estimation task had four subtasks, with a total of 10 teams, submitting 57 entries",
"Recent years have seen increasing interest in automatic metrics for the evaluation of generation systems. When a system can generate syntactic variation, automatic evaluation becomes more difficult. In this paper, we compare the performance of several automatic evaluation metrics using a corpus of automatically generated paraphrases. We show that these evaluation metrics can at least partially measure adequacy (similarity in meaning), but are not good measures of fluency (syntactic correctness). We make several proposals for improving the evaluation of generation systems that produce variation."
]
} |
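The row above measures how strongly word-overlap metrics track human quality judgements of generated responses. Purely as an illustrative sketch (none of this code comes from the cited papers; the function name and data layout are invented, and NLTK and SciPy are assumed to be available), such a correlation study might look like this:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from scipy.stats import spearmanr

def metric_human_correlation(responses, references, human_scores):
    """Correlate sentence-level BLEU with human ratings of the same responses."""
    smooth = SmoothingFunction().method1  # avoid zero scores on short responses
    bleu_scores = [
        sentence_bleu([ref.split()], resp.split(), smoothing_function=smooth)
        for resp, ref in zip(responses, references)
    ]
    # Spearman rank correlation between the metric and the human judgements
    rho, p_value = spearmanr(bleu_scores, human_scores)
    return rho, p_value
```

A weak rho here is exactly the kind of negative result the row's abstract reports for the Twitter and Ubuntu domains.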
1603.08191 | 2326292785 | According to physics predictions, the free energy of random factor graph models that satisfy a certain "static replica symmetry" condition can be calculated via the Belief Propagation message passing scheme [, PNAS 2007]. Here we prove this conjecture for two general classes of random factor graph models, namely Poisson random factor graphs and random regular factor graphs. Specifically, we show that the messages constructed just as in the case of acyclic factor graphs asymptotically satisfy the Belief Propagation equations and that the free energy density is given by the Bethe free energy formula. | Furthermore, inspired by the theory of graph limits @cite_27 , Coja-Oghlan, Perkins and Skubch @cite_6 put forward a "limiting theory" for discrete probability measures to go with the regularity concept from @cite_23 . They applied this concept to the Poisson factor graph model from under the assumption that ) holds and that the Gibbs measure converges in probability to a limiting measure (in the topology constructed in @cite_6 ). While these assumptions are stronger and more complicated to state than ), @cite_6 shows that the limiting Gibbs measure induces a "geometric" Gibbs measure on a certain infinite random tree. Moreover, this geometric measure satisfies a certain fixed point relation reminiscent of the Belief Propagation equations. | {
"cite_N": [
"@cite_27",
"@cite_23",
"@cite_6"
],
"mid": [
"",
"904137795",
"2266756552"
],
"abstract": [
"",
"A wide class of problems in combinatorics, computer science and physics can be described along the following lines. There are a large number of variables ranging over a finite domain that interact through constraints that each bind a few variables and either encourage or discourage certain value combinations. Examples include the k-SAT problem or the Ising model. Such models naturally induce a Gibbs measure on the set of assignments, which is characterised by its partition function. The present paper deals with the partition function of problems where the interactions between variables and constraints are induced by a sparse random (hyper)graph. According to physics predictions, a generic recipe called the “replica symmetric cavity method” yields the correct value of the partition function if the underlying model enjoys certain properties [, PNAS (2007) 10318–10323]. Guided by this conjecture, we prove general sufficient conditions for the success of the cavity method. The proofs are based on a “regularity lemma” for probability measures on sets of the form Ωn for a finite Ω and a large n that may be of independent interest. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 2016",
"Abstract Building upon the theory of graph limits and the Aldous–Hoover representation and inspired by Panchenko’s work on asymptotic Gibbs measures [Annals of Probability 2013], we construct continuous embeddings of discrete probability distributions. We show that the theory of graph limits induces a meaningful notion of convergence and derive a corresponding version of the Szemeredi regularity lemma. Moreover, complementing recent work (2015), we apply these results to Gibbs measures induced by sparse random factor graphs and verify the “replica symmetric solution” predicted in the physics literature under the assumption of non-reconstruction."
]
} |
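The row above proves that Belief Propagation computes the free energy of certain random factor graph models. Purely as an illustrative toy (binary variables, dense message loops, invented data structures; not the construction from the paper), the sum-product message updates it refers to can be sketched like this:

```python
import numpy as np

def sum_product(variables, factors, iters=50):
    """Toy sum-product BP; factors maps name -> (variable list, numpy table)."""
    edges = [(f, v) for f, (vs, _) in factors.items() for v in vs]
    msg_f2v = {e: np.full(2, 0.5) for e in edges}
    msg_v2f = {e: np.full(2, 0.5) for e in edges}
    for _ in range(iters):
        for f, v in edges:  # variable-to-factor: product of other incoming messages
            prod = np.ones(2)
            for g, (vs, _) in factors.items():
                if g != f and v in vs:
                    prod *= msg_f2v[(g, v)]
            msg_v2f[(f, v)] = prod / prod.sum()
        for f, v in edges:  # factor-to-variable: marginalize the factor table
            vs, table = factors[f]
            out = np.zeros(2)
            for assign in np.ndindex(*table.shape):
                w = table[assign]
                for u, a in zip(vs, assign):
                    if u != v:
                        w = w * msg_v2f[(f, u)][a]
                out[assign[vs.index(v)]] += w
            msg_f2v[(f, v)] = out / out.sum()
    beliefs = {}  # approximate marginals from products of incoming messages
    for v in variables:
        b = np.ones(2)
        for f, (vs, _) in factors.items():
            if v in vs:
                b *= msg_f2v[(f, v)]
        beliefs[v] = b / b.sum()
    return beliefs
```

On an acyclic factor graph these updates converge to the exact marginals; the row's result is that, asymptotically, the same fixed-point equations hold for the random models studied there.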
1603.08152 | 2951747365 | Data seems cheap to get, and in many ways it is, but the process of creating a high quality labeled dataset from a mass of data is time-consuming and expensive. With the advent of rich 3D repositories, photo-realistic rendering systems offer the opportunity to provide nearly limitless data. Yet, their primary value for visual learning may be the quality of the data they can provide rather than the quantity. Rendering engines offer the promise of perfect labels in addition to the data: what the precise camera pose is; what the precise lighting location, temperature, and distribution is; what the geometry of the object is. In this work we focus on semi-automating dataset creation through use of synthetic data and apply this method to an important task -- object viewpoint estimation. Using state-of-the-art rendering software we generate a large labeled dataset of cars rendered densely in viewpoint space. We investigate the effect of rendering parameters on estimation performance and show realism is important. We show that generalizing from synthetic data is not harder than the domain adaptation required between two real-image datasets and that combining synthetic images with a small amount of real data improves estimation accuracy. | Online labor markets such as Amazon's Mechanical Turk have been used in the computer vision community to crowdsource simple tasks such as image-level labeling or bounding box annotation @cite_24 @cite_14 @cite_7 . However, labor markets often lack expert knowledge, making some classes of tasks impossible to complete. Experts are rare in a population, and when they are not properly identified, their answers are ignored whenever they are inconsistent with those of other workers -- exactly on the instances where their knowledge is most crucial @cite_23 . | {
"cite_N": [
"@cite_24",
"@cite_14",
"@cite_23",
"@cite_7"
],
"mid": [
"2952020226",
"",
"2132962756",
"2141282920"
],
"abstract": [
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.",
"",
"Online labor markets, such as Amazon's Mechanical Turk, have been used to crowdsource simple, short tasks like image labeling and transcription. However, expert knowledge is often lacking in such markets, making it impossible to complete certain classes of tasks. In this work we introduce an alternative mechanism for crowdsourcing tasks that require specialized knowledge or skill: communitysourcing --- the use of physical kiosks to elicit work from specific populations. We investigate the potential of communitysourcing by designing, implementing and evaluating Umati: the communitysourcing vending machine. Umati allows users to earn credits by performing tasks using a touchscreen attached to the machine. Physical rewards (in this case, snacks) are dispensed through traditional vending mechanics. We evaluated whether communitysourcing can accomplish expert work by using Umati to grade Computer Science exams. We placed Umati in a university Computer Science building, targeting students with grading tasks for snacks. Over one week, 328 unique users (302 of whom were students) completed 7771 tasks (7240 by students). 80 of users had never participated in a crowdsourcing market before. We found that Umati was able to grade exams with 2 higher accuracy (at the same price) or at 33 lower cost (at equivalent accuracy) than traditional single-expert grading. Mechanical Turk workers had no success grading the same exams. These results indicate that communitysourcing can successfully elicit high-quality expert work from specific communities.",
"We introduce a new interactive system: a game that is fun and can be used to create valuable output. When people play the game they help determine the contents of images by providing meaningful labels for them. If the game is played as much as popular online games, we estimate that most images on the Web can be labeled in a few months. Having proper labels associated with each image on the Web would allow for more accurate image search, improve the accessibility of sites (by providing descriptions of images to visually impaired individuals), and help users block inappropriate images. Our system makes a significant contribution because of its valuable output and because of the way it addresses the image-labeling problem. Rather than using computer vision techniques, which don't work well enough, we encourage people to do the work by taking advantage of their desire to be entertained."
]
} |
1603.08067 | 2952250128 | Physical fluents, a term originally used by Newton [40], refers to time-varying object states in dynamic scenes. In this paper, we are interested in inferring the fluents of vehicles from video. For example, a door (hood, trunk) is open or closed through various actions, light is blinking to turn. Recognizing these fluents has broad applications, yet has received scant attention in the computer vision literature. Car fluent recognition entails a unified framework for car detection, car part localization and part status recognition, which is made difficult by large structural and appearance variations, low resolutions and occlusions. This paper learns a spatial-temporal And-Or hierarchical model to represent car fluents. The learning of this model is formulated under the latent structural SVM framework. Since there is no publicly available related dataset, we collect and annotate a car fluent dataset consisting of car videos with diverse fluents. In experiments, the proposed method outperforms several highly related baseline methods in terms of car fluent recognition and car part localization. | In computer vision and intelligent transportation, there is a considerable body of work on car detection and view estimation @cite_5 @cite_24 @cite_43 @cite_58 @cite_50 @cite_16 @cite_2 @cite_15 @cite_4 @cite_52 . Though those works have successfully improved the performance on popular benchmarks @cite_41 @cite_70 , the output is usually the car bounding box and a quantized rough view. Some other works @cite_33 @cite_45 @cite_47 aim to get car configurations with detailed or fine-grained output to describe the more meaningful car shape, rather than a bounding box. However, all of those works generally regard cars as static rigid objects, while paying little attention to the functionalities of semantic car parts and the fact that cars present large geometry and appearance transformations as car fluents change. | {
"cite_N": [
"@cite_47",
"@cite_4",
"@cite_33",
"@cite_41",
"@cite_70",
"@cite_52",
"@cite_16",
"@cite_24",
"@cite_43",
"@cite_45",
"@cite_50",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_58"
],
"mid": [
"",
"2158895499",
"2045750950",
"2031489346",
"2150066425",
"2107777165",
"2952186964",
"2153185908",
"1964201035",
"2145283077",
"1946609740",
"2280913812",
"2168356304",
"344254576",
"2085805060"
],
"abstract": [
"",
"In automated surveillance, one is often interested in tracking road vehicles, measuring their shape in 3D world space, and determining vehicle classification. To address these tasks simultaneously, an effective approach is the constrained alignment of a prior model of 3D vehicle shape to images. Previous 3D vehicle models are either generic but overly simple or rigid and overly complex. Rigid models represent exactly one vehicle design, so a large collection is needed. A single generic model can deform to a wide variety of shapes, but those shapes have been far too primitive. This paper uses a generic 3D vehicle model that deforms to match a wide variety of passenger vehicles. It is adjustable in complexity between the two extremes. The model is aligned to images by predicting and matching image intensity edges. Novel algorithms are presented for fitting models to multiple still images and simultaneous tracking while estimating shape in video. Experiments compare the proposed model to simple generic models in accuracy and reliability of 3D shape recovery from images and tracking in video. Standard techniques for classification are also used to compare the models. The proposed model outperforms the existing simple models at each task.",
"Fine-grained categorization of object classes is receiving increased attention, since it promises to automate classification tasks that are difficult even for humans, such as the distinction between different animal species. In this paper, we consider fine-grained categorization for a different reason: following the intuition that fine-grained categories encode metric information, we aim to generate metric constraints from fine-grained category predictions, for the benefit of 3D scene-understanding. To that end, we propose two novel methods for fine-grained classification, both based on part information, as well as a new fine-grained category data set of car types. We demonstrate superior performance of our methods to state-of-the-art classifiers, and show first promising results for estimating the depth of objects from fine-grained category predictions from a monocular camera.",
"The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.",
"Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net datasets kitti",
"To monitor road situation, the source from CCTV is more useful than any other data from GPS or loop detector because it can give the whole picture of the two-dimensional traffic situation. This paper suggests multiple vehicles detection by quad-tree segmentation and tracking method using scale invariant feature transform to improve the performance of tracking for extracting traffic parameter such as vehicle count, speed, class, and so on. The experimental result presents the proposed method is effective and robust on detection and tracking vehicle, especially in cases that a vehicle changes a lane, occlusion of vehicles is occurred, and an affine shape of vehicle is changed due to car movement.",
"This paper studies efficient means for dealing with intra-category diversity in object detection. Strategies for occlusion and orientation handling are explored by learning an ensemble of detection models from visual and geometrical clusters of object instances. An AdaBoost detection scheme is employed with pixel lookup features for fast detection. The analysis provides insight into the design of a robust vehicle detection system, showing promise in terms of detection performance and orientation estimation accuracy.",
"Compositional models provide an elegant formalism for representing the visual appearance of highly variable objects. While such models are appealing from a theoretical point of view, it has been difficult to demonstrate that they lead to performance advantages on challenging datasets. Here we develop a grammar model for person detection and show that it outperforms previous high-performance systems on the PASCAL benchmark. Our model represents people using a hierarchy of deformable parts, variable structure and an explicit model of occlusion for partially visible objects. To train the model, we introduce a new discriminative framework for learning structured prediction models from weakly-labeled data.",
"Current object class recognition systems typically target 2D bounding box localization, encouraged by benchmark data sets, such as Pascal VOC. While this seems suitable for the detection of individual objects, higher-level applications such as 3D scene understanding or 3D object tracking would benefit from more fine-grained object hypotheses incorporating 3D geometric information, such as viewpoints or the locations of individual parts. In this paper, we help narrowing the representational gap between the ideal input of a scene understanding system and object class detector output, by designing a detector particularly tailored towards 3D geometric reasoning. In particular, we extend the successful discriminatively trained deformable part models to include both estimates of viewpoint and 3D parts that are consistent across viewpoints. We experimentally verify that adding 3D geometric information comes at minimal performance loss w.r.t. 2D bounding box localization, but outperforms prior work in 3D viewpoint estimation and ultra-wide baseline matching.",
"Despite the success of current state-of-the-art object class detectors, severe occlusion remains a major challenge. This is particularly true for more geometrically expressive 3D object class representations. While these representations have attracted renewed interest for precise object pose estimation, the focus has mostly been on rather clean datasets, where occlusion is not an issue. In this paper, we tackle the challenge of modeling occlusion in the context of a 3D geometric object class model that is capable of fine-grained, part-level 3D object reconstruction. Following the intuition that 3D modeling should facilitate occlusion reasoning, we design an explicit representation of likely geometric occlusion patterns. Robustness is achieved by pooling image evidence from of a set of fixed part detectors as well as a non-parametric representation of part configurations in the spirit of pose lets. We confirm the potential of our method on cars in a newly collected data set of inner-city street scenes with varying levels of occlusion, and demonstrate superior performance in occlusion estimation and part localization, compared to baselines that are unaware of occlusions.",
"Despite the great progress achieved in recognizing objects as 2D bounding boxes in images, it is still very challenging to detect occluded objects and estimate the 3D properties of multiple objects from a single image. In this paper, we propose a novel object representation, 3D Voxel Pattern (3DVP), that jointly encodes the key properties of objects including appearance, 3D shape, viewpoint, occlusion and truncation. We discover 3DVPs in a data-driven way, and train a bank of specialized detectors for a dictionary of 3DVPs. The 3DVP detectors are capable of detecting objects with specific visibility patterns and transferring the meta-data from the 3DVPs to the detected objects, such as 2D segmentation mask, 3D pose as well as occlusion or truncation boundaries. The transferred meta-data allows us to infer the occlusion relationship among objects, which in turn provides improved object recognition results. Experiments are conducted on the KITTI detection benchmark [17] and the outdoor-scene dataset [41]. We improve state-of-the-art results on car detection and pose estimation with notable margins (6 in difficult data of KITTI). We also verify the ability of our method in accurately segmenting objects from the background and localizing them in 3D.",
"This paper proposes a multiple-object detection and tracking method that explicitly handles dynamic occlusions. A context-based multiple-cue detector is proposed to detect occluded vehicles (occludees). First, we detect and track fully-visible vehicles (occluders). Occludee detection adopts those occluders as priors. Two classifiers for partially-visible vehicles are trained to use appearance cues. Disparity is adopted to further constrain the occludee locations. A detected occludee is then tracked by a Kalman-based tracking-by-detection method. As dynamic occlusions lead to role changes for occluder or occludee, an integrative module is introduced for possibly switching occludee and occluder trackers. The proposed system was tested on overtaking scenarios. It improved an occluder-only tracking system by over 10 regarding the frame-based detection rate, and by over 20 regarding the trajectory detection rate. The occludees are detected and tracked in the proposed method up to 7 seconds before they are picked up by occluder-only method.",
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"Object detection and pose estimation are interdependent problems in computer vision. Many past works decouple these problems, either by discretizing the continuous pose and training pose-specific object detectors, or by building pose estimators on top of detector outputs. In this paper, we propose a structured kernel machine approach to treat object detection and pose estimation jointly in a mutually benificial way. In our formulation, a unified, continuously parameterized, discriminative appearance model is learned over the entire pose space. We propose a cascaded discrete-continuous algorithm for efficient inference, and give effective online constraint generation strategies for learning our model using structural SVMs. On three standard benchmarks, our method performs better than, or on par with, state-of-the-art methods in the combined task of object detection and pose estimation.",
"Despite the success of recent object class recognition systems, the long-standing problem of partial occlusion remains a major challenge, and a principled solution is yet to be found. In this paper we leave the beaten path of methods that treat occlusion as just another source of noise - instead, we include the occluder itself into the modelling, by mining distinctive, reoccurring occlusion patterns from annotated training data. These patterns are then used as training data for dedicated detectors of varying sophistication. In particular, we evaluate and compare models that range from standard object class detectors to hierarchical, part-based representations of occluder occludee pairs. In an extensive evaluation we derive insights that can aid further developments in tackling the occlusion challenge."
]
} |
1603.08067 | 2952250128 | Physical fluents, a term originally used by Newton [40], refers to time-varying object states in dynamic scenes. In this paper, we are interested in inferring the fluents of vehicles from video. For example, a door (hood, trunk) is open or closed through various actions, light is blinking to turn. Recognizing these fluents has broad applications, yet has received scant attention in the computer vision literature. Car fluent recognition entails a unified framework for car detection, car part localization and part status recognition, which is made difficult by large structural and appearance variations, low resolutions and occlusions. This paper learns a spatial-temporal And-Or hierarchical model to represent car fluents. The learning of this model is formulated under the latent structural SVM framework. Since there is no publicly available related dataset, we collect and annotate a car fluent dataset consisting of car videos with diverse fluents. In experiments, the proposed method outperforms several highly related baseline methods in terms of car fluent recognition and car part localization. | In computer vision, a significant effort has been devoted to video-based tasks: event recognition @cite_46 @cite_19 @cite_11 , action detection and recognition @cite_7 @cite_56 @cite_72 @cite_44 and pose estimation @cite_42 @cite_30 @cite_66 . These papers are related to our work; however, most of them are based on human models, while very little work has been done on car fluents and related car part status estimation. In cognitive science, the concept of fluents is mainly used to represent object status changes over time @cite_6 , and it has been used in causality inference @cite_6 @cite_27 @cite_9 @cite_3 . Furthermore, fluents are also related to action planning in AI and Robotics @cite_48 @cite_34 . | {
"cite_N": [
"@cite_30",
"@cite_7",
"@cite_48",
"@cite_9",
"@cite_42",
"@cite_6",
"@cite_56",
"@cite_44",
"@cite_19",
"@cite_27",
"@cite_72",
"@cite_3",
"@cite_46",
"@cite_34",
"@cite_66",
"@cite_11"
],
"mid": [
"2131263044",
"2098339052",
"2025460523",
"204991199",
"",
"",
"",
"1912967058",
"2134266767",
"1641347315",
"2160144863",
"",
"2147625498",
"1613163714",
"2172043283",
"1950136256"
],
"abstract": [
"Non-rigid object detection and articulated pose estimation are two related and challenging problems in computer vision. Numerous models have been proposed over the years and often address different special cases, such as pedestrian detection or upper body pose estimation in TV footage. This paper shows that such specialization may not be necessary, and proposes a generic approach based on the pictorial structures framework. We show that the right selection of components for both appearance and spatial modeling is crucial for general applicability and overall performance of the model. The appearance of body parts is modeled using densely sampled shape context descriptors and discriminatively trained AdaBoost classifiers. Furthermore, we interpret the normalized margin of each classifier as likelihood in a generative model. Non-Gaussian relationships between parts are represented as Gaussians in the coordinate system of the joint between parts. The marginal posterior of each part is inferred using belief propagation. We demonstrate that such a model is equally suitable for both detection and pose estimation tasks, outperforming the state of the art on three recently proposed datasets.",
"Action recognition has become a very important topic in computer vision, with many fundamental applications, in robotics, video surveillance, human-computer interaction, and multimedia retrieval among others and a large variety of approaches have been described. The purpose of this survey is to give an overview and categorization of the approaches used. We concentrate on approaches that aim on classification of full-body motions, such as kicking, punching, and waving, and we categorize them according to how they represent the spatial and temporal structure of actions; how they segment actions from an input stream of visual data; and how they learn a view-invariant representation of actions.",
"Abstract We introduce a new approach to planning in STRIPS-like domains based on constructing and analyzing a compact structure we call a planning graph. We describe a new planner, Graphplan, that uses this paradigm. Graphplan always returns a shortest possible partial-order plan, or states that no valid plan exists. We provide empirical evidence in favor of this approach, showing that Graphplan outperforms the total-order planner, Prodigy and the partial-order planner, UCPOP, on a variety of interesting natural and artificial planning problems. We also give empirical evidence that the plans produced by Graphplan are quite sensible. Since searches made by this approach are fundamentally different from the searches of other common planning methods, they provide a new perspective on the planning problem.",
"We address the problem of visually detecting causal events and fitting them together into a coherent story of the action witnessed by the camera. We show that this can be done by reasoning about the motions and collisions of surfaces, using high-level causal constraints derived from psychological studies of infant visual behavior. These constraints are naive forms of basic physical laws governing substantiality, contiguity, momentum, and acceleration. We describe two implementations. One system parses instructional videos, extracting plans of action and key frames suitable for storyboarding. Since learning will play a role in making such systems robust, we introduce a new framework for higher-order hidden Markov models and demonstrate its use in a second system that segments stereo video into actions in near real-time. Rather than attempt accurate low-level vision, both systems use high-level causal analysis to integrate fast but sloppy pixel-based representations over time. The output is suitable for summary, indexing, and automated editing.",
"",
"",
"",
"Action recognition and pose estimation from video are closely related tasks for understanding human motion, most methods, however, learn separate models and combine them sequentially. In this paper, we propose a framework to integrate training and testing of the two tasks. A spatial-temporal And-Or graph model is introduced to represent action at three scales. Specifically the action is decomposed into poses which are further divided to mid-level ST-parts and then parts. The hierarchical structure of our model captures the geometric and appearance variations of pose at each frame and lateral connections between ST-parts at adjacent frames capture the action-specific motion information. The model parameters for three scales are learned discriminatively, and action labels and poses are efficiently inferred by dynamic programming. Experiments demonstrate that our approach achieves state-of-art accuracy in action recognition while also improving pose estimation.",
"We present a new representation and recognition method for human activities. An activity is considered to be composed of action threads, each thread being executed by a single actor. A single-thread action is represented by a stochastic finite automaton of event states, which are recognized from the characteristics of the trajectory and shape of moving blob of the actor using Bayesian methods. A multi-agent event is composed of several action threads related by temporal constraints. Multi-agent events are recognized by propagating the constraints and likelihood of event threads in a temporal logic network. We present results on real-world data and performance characterization on perturbed data.",
"Whereas acausal Bayesian networks represent probabilistic independence, causal Bayesian networks represent causal relationships. In this paper, we examine Bayesian methods for learning both types of networks. Bayesian methods for learning acausal networks are fairly well developed. These methods often employ assumptions to facilitate the construction of priors, including the assumptions of parameter independence, parameter modularity, and likelihood equivalence. We show that although these assumptions also can be appropriate for learning causal networks, we need additional assumptions in order to learn causal networks. We introduce two sufficient assumptions, called mechanism independence and component independence. We show that these new assumptions, when combined with parameter independence, parameter modularity, and likelihood equivalence, allow us to apply methods for learning acausal networks to learn causal networks.",
"Temporal misalignment and duration variation in video actions largely influence the performance of action recognition, but it is very difficult to specify effective temporal alignment on action sequences. To address this challenge, this paper proposes a novel discriminative learning-based temporal alignment method, called maximum margin temporal warping (MMTW), to align two action sequences and measure their matching score. Based on the latent structure SVM formulation, the proposed MMTW method is able to learn a phantom action template to represent an action class for maximum discrimination against other classes. The recognition of this action class is based on the associated learned alignment of the input action. Extensive experiments on five benchmark datasets have demonstrated that this MMTW model is able to significantly promote the accuracy and robustness of action recognition under temporal misalignment and variations.",
"",
"We propose a first attempt to classify events in static images by integrating scene and object categorizations. We define an event in a static image as a human activity taking place in a specific environment. In this paper, we use a number of sport games such as snow boarding, rock climbing or badminton to demonstrate event classification. Our goal is to classify the event in the image as well as to provide a number of semantic labels to the objects and scene environment within the image. For example, given a rowing scene, our algorithm recognizes the event as rowing by classifying the environment as a lake and recognizing the critical objects in the image as athletes, rowing boat, water, etc. We achieve this integrative and holistic recognition through a generative graphical model. We have assembled a highly challenging database of 8 widely varied sport events. We show that our system is capable of classifying these event classes at 73.4 accuracy. While each component of the model contributes to the final recognition, using scene or objects alone cannot achieve this performance.",
"In order to advance action generation and creation in robots beyond simple learned schemas we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly generate the sequence of atomic actions of seen longer actions in video in order to acquire knowledge for robots. The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a probabilistic manipulation action grammar based parsing module that aims at generating visual sentences for robot manipulation. Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by \"watching\" unconstrained videos with high accuracy.",
"We describe a method for retrieving shots containing a particular 2D human pose from unconstrained movie and TV videos. The method involves first localizing the spatial layout of the head, torso and limbs in individual frames using pictorial structures, and associating these through a shot by tracking. A feature vector describing the pose is then constructed from the pictorial structure. Shots can be retrieved either by querying on a single frame with the desired pose, or through a pose classifier trained from a set of pose examples. Our main contribution is an effective system for retrieving people based on their pose, and in particular we propose and investigate several pose descriptors which are person, clothing, background and lighting independent. As a second contribution, we improve the performance over existing methods for localizing upper body layout on unconstrained video. We compare the spatial layout pose retrieval to a baseline method where poses are retrieved using a HOG descriptor. Performance is assessed on five episodes of the TV series 'Buffy the Vampire Slayer', and pose retrieval is demonstrated also on three Hollywood movies..",
"In this paper, we propose a discriminative video representation for event detection over a large scale video dataset when only limited hardware resources are available. The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame level static descriptors can be extracted by the existing CNN toolkits. This paper makes two contributions to the inference of CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6 to 36.8 for the TRECVID MEDTest 14 dataset and from 34.0 to 44.6 for the TRECVID MEDTest 13 dataset."
]
} |
1603.08016 | 2317109765 | This paper presents a comparison of classification methods for linguistic typology for the purpose of expanding an extensive, but sparse language resource: the World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2013). We experimented with a variety of regression and nearest-neighbor methods for use in classification over a set of 325 languages and six syntactic rules drawn from WALS. To classify each rule, we consider the typological features of the other five rules; linguistic features extracted from a word-aligned Bible in each language; and genealogical features (genus and family) of each language. In general, we find that propagating the majority label among all languages of the same genus achieves the best accuracy in label prediction. Following this, a logistic regression model that combines typological and linguistic features offers the next best performance. Interestingly, this model actually outperforms the majority labels among all languages of the same family. | Typological similarity has previously been shown to correlate with genealogical similarity both in the fields of NLP @cite_4 and historical linguistics @cite_13 . Due to this, WALS has been used to study linguistic typology via computational methods. In order to determine language similarity via genealogical relatedness, it is important to know which rules are more telling of historical relatedness. To do this, @cite_0 measured the variance of linguistic rules within language families, at the genus level, to determine their stability. Rules that change less often within language families, then, are more indicative of historical relationships when they are shared between languages. For this analysis, the authors extracted data for language families from WALS and calculated the probability of features being shared within and across language families. Feature variation was shown to be significantly different across genera. This indicates that language families are more similar internally than across genera, which in turn supports the concept of using typological similarity to predict language similarity. | {
"cite_N": [
"@cite_13",
"@cite_0",
"@cite_4"
],
"mid": [
"2066464964",
"1982597597",
"2251639344"
],
"abstract": [
"The contribution of language history to the study of the early dispersals of modern humans throughout the Old World has been limited by the shallow time depth (about 8000 ± 2000 years) of current linguistic methods. Here it is shown that the application of biological cladistic methods, not to vocabulary (as has been previously tried) but to language structure (sound systems and grammar), may extend the time depths at which language data can be used. The method was tested against well-understood families of Oceanic Austronesian languages, then applied to the Papuan languages of Island Melanesia, a group of hitherto unrelatable isolates. Papuan languages show an archipelago-based phylogenetic signal that is consistent with the current geographical distribution of languages. The most plausible hypothesis to explain this result is the divergence of the Papuan languages from a common ancestral stock, as part of late Pleistocene dispersals.",
"In this paper we propose a method to measure the stability of typological features given a data set such as (2005). We determine the frequency with which one and the same value of a feature occurs in shallow genealogical groupings (‘genus’ level). If the languages within such a group tend to show the same value, there is reason to believe that the feature is inherently more stable than if much variation is found within such groups. The proposed metric may be used to evaluate the utility of linguistic features when employing typological data for inferring genealogical relationships among languages.",
"The recent availability of typological databases such as World Atlas of Language Structures (WALS) has spurred investigations regarding its utility for classifying the world’s languages, the stability of typological features in genetic linguistics and typological universals across the language families of the world. In this paper, we compare typological distances, derived from fifteen vector similarity measures, with family internal classifications and also lexical divergence. These results are only a first step towards the use of WALS database in the projection of NLP resources and bootstrapping NLP tools for typologically or genetically similar, yet resource-poor languages."
]
} |
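The abstract in the row above finds that propagating the genus-level majority label is the strongest baseline. A minimal sketch of that baseline, assuming an invented `wals` mapping from language to its genus and (possibly missing) feature values:

```python
from collections import Counter, defaultdict

def genus_majority(wals, feature):
    """Predict missing WALS values as the majority value within each genus."""
    by_genus = defaultdict(Counter)
    for lang, row in wals.items():
        if row.get(feature) is not None:
            by_genus[row["genus"]][row[feature]] += 1
    predictions = {}
    for lang, row in wals.items():
        if row.get(feature) is None and by_genus[row["genus"]]:
            predictions[lang] = by_genus[row["genus"]].most_common(1)[0][0]
    return predictions
```

In a real evaluation the held-out language's own label must of course be excluded from its genus count before predicting it.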
1603.08016 | 2317109765 | This paper presents a comparison of classification methods for linguistic typology for the purpose of expanding an extensive, but sparse language resource: the World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2013). We experimented with a variety of regression and nearest-neighbor methods for use in classification over a set of 325 languages and six syntactic rules drawn from WALS. To classify each rule, we consider the typological features of the other five rules; linguistic features extracted from a word-aligned Bible in each language; and genealogical features (genus and family) of each language. In general, we find that propagating the majority label among all languages of the same genus achieves the best accuracy in label prediction. Following this, a logistic regression model that combines typological and linguistic features offers the next best performance. Interestingly, this model actually outperforms the majority labels among all languages of the same family. | These syntactic regularities are not only predictable within language genera, however. @cite_7 used WALS as a database to discover implicative associations, similar to Greenberg's universals. Not only did they recover many of Greenberg's universals, but they also uncovered a host of other implicatures. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2950822547"
],
"abstract": [
"A standard form of analysis for linguistic typology is the universal implication. These implications state facts about the range of extant languages, such as if objects come after verbs, then adjectives come after nouns.'' Such implications are typically discovered by painstaking hand analysis over a small sample of languages. We propose a computational model for assisting at this process. Our model is able to discover both well-known implications as well as some novel implications that deserve further study. Moreover, through a careful application of hierarchical analysis, we are able to cope with the well-known sampling problem: languages are not independent."
]
} |
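The implicative associations discussed in the row above (rules of the form "if feature F1 has value x, then F2 tends to have value y") can, in the simplest case, be mined by counting. The thresholds and data layout below are invented for illustration and are far cruder than the cited model:

```python
from itertools import product

def find_implications(rows, f1, f2, min_support=20, min_confidence=0.9):
    """Toy miner for implicative tendencies between two WALS-style features."""
    values1 = {r[f1] for r in rows if f1 in r}
    values2 = {r[f2] for r in rows if f2 in r}
    implications = []
    for x, y in product(values1, values2):
        antecedent = [r for r in rows if r.get(f1) == x and f2 in r]
        if len(antecedent) < min_support:
            continue  # too few languages to support a claim
        confidence = sum(r[f2] == y for r in antecedent) / len(antecedent)
        if confidence >= min_confidence:
            implications.append((f1, x, f2, y, confidence))
    return implications
```

The cited work is considerably more careful, notably about the sampling problem that related languages are not independent, which this sketch ignores.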
1603.08016 | 2317109765 | This paper presents a comparison of classification methods for linguistic typology for the purpose of expanding an extensive, but sparse language resource: the World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2013). We experimented with a variety of regression and nearest-neighbor methods for use in classification over a set of 325 languages and six syntactic rules drawn from WALS. To classify each rule, we consider the typological features of the other five rules; linguistic features extracted from a word-aligned Bible in each language; and genealogical features (genus and family) of each language. In general, we find that propagating the majority label among all languages of the same genus achieves the best accuracy in label prediction. Following this, a logistic regression model that combines typological and linguistic features offers the next best performance. Interestingly, this model actually outperforms the majority labels among all languages of the same family. | Apart from typology, the data from WALS have also been used in natural language processing to advance several core NLP tasks. @cite_15 used language similarity information to improve multilingual parsing by defining a distance metric over WALS rules. The authors considered six WALS rules to create their similarity metric, three of which (85A, 86A, and 88A) are also used in this study. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2115774663"
],
"abstract": [
"We present a novel algorithm for multilingual dependency parsing that uses annotations from a diverse set of source languages to parse a new unannotated language. Our motivation is to broaden the advantages of multilingual learning to languages that exhibit significant differences from existing resource-rich languages. The algorithm learns which aspects of the source languages are relevant for the target language and ties model parameters accordingly. The model factorizes the process of generating a dependency tree into two steps: selection of syntactic dependents and their ordering. Being largely language-universal, the selection component is learned in a supervised fashion from all the training languages. In contrast, the ordering decisions are only influenced by languages with similar properties. We systematically model this cross-lingual sharing using typological features. In our experiments, the model consistently outperforms a state-of-the-art multi-lingual parser. The largest improvement is achieved on the non Indo-European languages yielding a gain of 14.4 ."
]
} |
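The distance metric over WALS rules mentioned above can be approximated as disagreement over the features two languages share. The feature IDs below match those named in the row; the data layout is an assumption made for the example:

```python
def wals_distance(lang_a, lang_b, features=("85A", "86A", "88A")):
    """Fraction of shared WALS features on which two languages disagree."""
    shared = [f for f in features if f in lang_a and f in lang_b]
    if not shared:
        return 1.0  # no shared evidence: treat as maximally distant
    return sum(lang_a[f] != lang_b[f] for f in shared) / len(shared)
```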
1603.08016 | 2317109765 | This paper presents a comparison of classification methods for linguistic typology for the purpose of expanding an extensive, but sparse language resource: the World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2013). We experimented with a variety of regression and nearest-neighbor methods for use in classification over a set of 325 languages and six syntactic rules drawn from WALS. To classify each rule, we consider the typological features of the other five rules; linguistic features extracted from a word-aligned Bible in each language; and genealogical features (genus and family) of each language. In general, we find that propagating the majority label among all languages of the same genus achieves the best accuracy in label prediction. Following this, a logistic regression model that combines typological and linguistic features offers the next best performance. Interestingly, this model actually outperforms the majority labels among all languages of the same family. | @cite_20 attempted a similar task, using interlinear glossed text (IGT) to predict typological features such as major constituent order and case system. @cite_3 also predict typological features from IGT by first learning a context-free grammar for a language and then examining its structure. Both studies use data from WALS to evaluate their performance. We address a similar task, but we utilize projected dependency parses from English text as a source of knowledge for each foreign language, as many do not have extensive resources of their own. This is similar to IGT, but does not contain morphological information to the same extent. This method of projecting dependencies has been used to bootstrap linguistic resources in the past @cite_25 , @cite_11 . | {
"cite_N": [
"@cite_3",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"59306127",
"2125280523",
"2250528015",
"2143954309"
],
"abstract": [
"In this paper we explore the potential for identifying computationally relevant typological features from a multilingual corpus of language data built from readily available language data collected off the Web. Our work builds on previous structural projection work, where we extend the work of projection to building individual CFGs for approximately 100 languages. We then use the CFGs to discover the values of typological parameters such as word order, the presence or absence of definite and indefinite determiners, etc. Our methods have the potential of being extended to many more languages and parameters, and can have significant effects on current research focused on tool and resource development for low-density languages and grammar induction from raw corpora.",
"This paper explores the potential for annotating and enriching data for low-density languages via the alignment and projection of syntactic structure from parsed data for resource-rich languages such as English. We seek to develop enriched resources for a large number of the world’s languages, most of which have no significant digital presence. We do this by tapping the body of Web-based linguistic data, most of which exists in small, analyzed chunks embedded in scholarly papers, journal articles, Web pages, and other online documents. By harvesting and enriching these data, we can provide the means for knowledge discovery across the resulting corpus that can lead to building computational resources such as grammars and transfer rules, which, in turn, can be used as bootstraps for building additional tools and resources for the languages represented. 1",
"We propose to bring together two kinds of linguistic resources—interlinear glossed text (IGT) and a language-independent precision grammar resource—to automatically create precision grammars in the context of language documentation. This paper takes the first steps in that direction by extracting major-constituent word order and case system properties from IGT for a diverse sample of languages.",
"Broad coverage, high quality parsers are available for only a handful of languages. A prerequisite for developing broad coverage parsers for more languages is the annotation of text with the desired linguistic representations (also known as “treebanking”). However, syntactic annotation is a labor intensive and time-consuming process, and it is difficult to find linguistically annotated text in sufficient quantities. In this article, we explore using parallel text to help solving the problem of creating syntactic annotation in more languages. The central idea is to annotate the English side of a parallel corpus, project the analysis to the second language, and then train a stochastic analyzer on the resulting noisy annotations. We discuss our background assumptions, describe an initial study on the “projectability” of syntactic relations, and then present two experiments in which stochastic parsers are developed with minimal human intervention via projection from English."
]
} |
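Projecting syntactic annotation across a parallel corpus, as in the bootstrapping work cited at the end of the row above, amounts to copying dependency arcs through word alignments. A deliberately simplified sketch (one-to-one alignments only; all structures are invented for the example):

```python
def project_dependencies(src_arcs, alignment):
    """Map (head, dependent) index pairs through an English-to-foreign alignment."""
    projected = set()
    for head, dep in src_arcs:
        if head in alignment and dep in alignment:
            projected.add((alignment[head], alignment[dep]))
    return projected
```

Real projection also has to handle unaligned and many-to-many tokens, which is where most of the noise discussed in the cited papers comes from.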
1603.08174 | 2334009191 | In this paper, we analyze the growth patterns of Donald Trump's followers (Trumpists, henceforth) on Twitter. We first construct a random walk model with a time trend to study the growth trend and the effects of public debates. We then analyze the relationship between Trump's activity on Twitter and the growth of his followers. Thirdly, we analyze the effects of such controversial events as calling for a Muslim ban and his 'schlonged' remark. | There is a burgeoning literature in computer science on using social media data to analyze and predict elections. Research by @cite_4 finds that the number of messages mentioning a party reflects the election results. According to @cite_5 , the number of Facebook fans constitutes an indicator of candidate viability. @cite_6 use user profile images to study and compare the social demographics of Trump followers and Clinton followers. Similarly, our work is also motivated by the close parallel between performance in the polls and popularity on Twitter. @cite_3 employ LDA to model tweet topics and use negative binomial regression on the number of tweet "likes" to infer topic preferences of Trump followers. | {
"cite_N": [
"@cite_5",
"@cite_3",
"@cite_4",
"@cite_6"
],
"mid": [
"",
"2296105616",
"1590495275",
"2963817807"
],
"abstract": [
"",
"In this paper, we propose a framework to infer the topic preferences of Donald Trump's followers on Twitter. We first use latent Dirichlet allocation (LDA) to derive the weighted mixture of topics for each Trump tweet. Then we use negative binomial regression to model the \"likes,\" with the weights of each topic serving as explanatory variables. Our study shows that attacking Democrats such as President Obama and former Secretary of State Hillary Clinton earns Trump the most \"likes.\" Our framework of inference is generalizable to the study of other politicians.",
"Twitter is a microblogging website where users read and write millions of short messages on a variety of topics every day. This study uses the context of the German federal election to investigate whether Twitter is used as a forum for political deliberation and whether online messages on Twitter validly mirror offline political sentiment. Using LIWC text analysis software, we conducted a content-analysis of over 100,000 messages containing a reference to either a political party or a politician. Our results show that Twitter is indeed used extensively for political deliberation. We find that the mere number of messages mentioning a party reflects the election result. Moreover, joint mentions of two parties are in line with real world political ties and coalitions. An analysis of the tweets’ political sentiment demonstrates close correspondence to the parties' and politicians’ political positions indicating that the content of Twitter messages plausibly reflects the offline political landscape. We discuss the use of microblogging message content as a valid indicator of political sentiment and derive suggestions for further research.",
"In this paper, we study follower demographics of Donald Trump and Hillary Clinton, the two leading candidates in the 2016 U.S. presidential race. We build a unique dataset US2016, which includes the number of followers for each candidate from September 17, 2015 to December 22, 2015. US2016 also includes the geographical location of these followers, the number of their own followers and, very importantly, the profile image of each follower. We use individuals’ number of followers and profile images to analyze four dimensions of follower demographics: social status, gender, race and age. Our study shows that in terms of social influence, the Trumpists are more polarized than the Clintonists: they tend to have either a lot of influence or little influence. We also find that compared with the Clintonists, the Trumpists are more likely to be either very young or very old. Our study finds no gender affinity effect for Clinton in the Twitter sphere, but we do find that the Clintonists are more racially diverse."
]
} |
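The @cite_3 abstract above describes a two-stage pipeline: LDA topic mixtures per tweet, then a negative binomial regression of "likes" on those mixtures. A minimal sketch of such a pipeline using scikit-learn and statsmodels follows; the tweets, like counts, and the two-topic setting are toy placeholders, and one topic column is dropped since the mixture weights sum to one:

```python
# Hedged reconstruction of the LDA -> negative binomial pipeline; all data
# below is fabricated for illustration.
import numpy as np
import statsmodels.api as sm
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = ["build the wall", "crooked hillary lies", "great rally tonight",
          "obama failed america", "make america great again"]
likes = np.array([120, 340, 90, 410, 150])  # toy "like" counts

counts = CountVectorizer().fit_transform(tweets)
topic_weights = LatentDirichletAllocation(
    n_components=2, random_state=0).fit_transform(counts)

# Negative binomial regression of likes on the per-tweet topic mixture.
X = sm.add_constant(topic_weights[:, :-1])  # drop one topic: weights sum to 1
model = sm.GLM(likes, X, family=sm.families.NegativeBinomial()).fit()
print(model.params)
```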
1603.08119 | 2317398342 | Seamless computing and service sharing in community networks have been gaining momentum due to the emerging technology of community network micro-clouds (CNMCs). However, running services in CNMCs can face enormous challenges such as the dynamic nature of micro-clouds, limited capacity of nodes and links, asymmetric quality of wireless links for services, deployment models based on geographic singularities rather than network QoS, etc. CNMCs have been increasingly used by network-intensive services that exchange significant amounts of data between the nodes on which they run; therefore, the performance heavily relies on the available bandwidth resources in the network. This paper proposes a novel bandwidth-aware service placement algorithm which outperforms the current random placement adopted by Guifi.net. Our preliminary results show that the proposed algorithm consistently outperforms the current random placement adopted in Guifi.net by 35% regarding its bandwidth gain. More importantly, as the number of services increases, the gain tends to increase accordingly. | Choreo @cite_7 is a measurement-based method for placing applications in cloud infrastructures to minimize an objective function such as application completion time. Choreo makes fast measurements of cloud networks using packet trains as well as other methods, profiles application network demands using a machine-learning algorithm, and places applications using a greedy heuristic, which in practice is much more efficient than finding an optimal solution. In @cite_15 the authors proposed an optimal allocation solution for ambient intelligence environments that uses task replication to avoid network performance degradation. Volley @cite_14 is a system that performs automatic data placement across the geographically distributed datacenters of Microsoft. Volley analyzes the logs of requests using an iterative optimization algorithm based on data access patterns and client locations, and outputs migration recommendations back to the cloud service. | {
"cite_N": [
"@cite_15",
"@cite_14",
"@cite_7"
],
"mid": [
"2060705758",
"1502450072",
"2004618474"
],
"abstract": [
"Ambient Intelligence (AmI) is an IT concept by which mobile users shall be seamlessly supported in their everyday activities. This includes interactions with remote resources as well as with their current physical environment. We have developed the so-called Ad hoc Service Grid (ASG) infrastructure that supports the latter form of interactions. It allows operators to cover arbitrary locations with ambient services in a drop-and-deploy fashion. An ambient service may autonomously distribute (replicate and migrate) within an ASG network to optimize its availability, response times, and network usage. In this article, we propose a fully decentralized, dynamic, and adaptive service placement algorithm for AmI environments like the ASG. This algorithm achieves a coordinated global placement pattern that minimizes the communication costs without any central controller. It does not even require additional communication among the replicas. Moreover, placement patterns stabilize if no changes occur in the environment while replicas still retain their ability to adapt. Mechanisms for self-organized placement of services are very important for AmI environments in general since they allow for autonomous adaptations to dynamic changes and, thus, remove the need for manual (re)configuration of a running system. We present a detailed evaluation of the algorithm's performance and compare it with three other algorithms to show its competitiveness. Furthermore, we discuss how the desired self-organizing behavior emerges from the interactions of a few simple, local rules that govern the individual placement decisions. In order to do so, we give an in-depth analysis of a series of emergent effects that are not directly encoded into the placement algorithm but stem from its collective dynamics.",
"As cloud services grow to span more and more globally distributed datacenters, there is an increasingly urgent need for automated mechanisms to place application data across these datacenters. This placement must deal with business constraints such as WAN bandwidth costs and datacenter capacity limits, while also minimizing user-perceived latency. The task of placement is further complicated by the issues of shared data, data inter-dependencies, application changes and user mobility. We document these challenges by analyzing month-long traces from Microsoft's Live Messenger and Live Mesh, two large-scale commercial cloud services. We present Volley, a system that addresses these challenges. Cloud services make use of Volley by submitting logs of datacenter requests. Volley analyzes the logs using an iterative optimization algorithm based on data access patterns and client locations, and outputs migration recommendations back to the cloud service. To scale to the data volumes of cloud service logs, Volley is designed to work in SCOPE [5], a scalable MapReduce-style platform; this allows Volley to perform over 400 machine-hours worth of computation in less than a day. We evaluate Volley on the month-long Live Mesh trace, and we find that, compared to a state-of-the-art heuristic that places data closest to the primary IP address that accesses it, Volley simultaneously reduces datacenter capacity skew by over 2×, reduces inter-datacenter traffic by over 1.8× and reduces 75th percentile user-latency by over 30 .",
"Cloud computing infrastructures are increasingly being used by network-intensive applications that transfer significant amounts of data between the nodes on which they run. This paper shows that tenants can do a better job placing applications by understanding the underlying cloud network as well as the demands of the applications. To do so, tenants must be able to quickly and accurately measure the cloud network and profile their applications, and then use a network-aware placement method to place applications. This paper describes Choreo, a system that solves these problems. Our experiments measure Amazon's EC2 and Rackspace networks and use three weeks of network data from applications running on the HP Cloud network. We find that Choreo reduces application completion time by an average of 8 -14 (max improvement: 61 ) when applications are placed all at once, and 22 -43 (max improvement: 79 ) when they arrive in real-time, compared to alternative placement schemes."
]
} |
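The Choreo abstract above (@cite_7) hinges on a greedy, network-aware placement heuristic. The sketch below illustrates the general idea only — place the most communication-heavy components first, each on the node that minimizes estimated transfer time to its already-placed peers — with invented node names, demands, and bandwidths; it is not the paper's actual algorithm:

```python
import collections

def greedy_place(demand, bw, nodes):
    """demand: {(comp_a, comp_b): bytes exchanged}; bw: {(u, v): bytes/sec}
    keyed by sorted node pairs (including (n, n) for co-location);
    nodes: candidate machines. Returns {component: node}."""
    total = collections.Counter()
    for (a, b), d in demand.items():
        total[a] += d
        total[b] += d
    placement = {}
    # Chattiest components first; each goes where its transfers are cheapest.
    for comp in sorted(total, key=total.get, reverse=True):
        def cost(n):
            t = 0.0
            for (a, b), d in demand.items():
                other = b if a == comp else a if b == comp else None
                if other in placement:
                    t += d / bw[tuple(sorted((n, placement[other])))]
            return t
        placement[comp] = min(nodes, key=cost)
    return placement

demand = {("web", "db"): 5e9, ("web", "cache"): 1e9}
bw = {("n1", "n1"): 1e12, ("n2", "n2"): 1e12, ("n1", "n2"): 1e8}
print(greedy_place(demand, bw, ["n1", "n2"]))  # co-locates all three components
```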
1603.08119 | 2317398342 | Seamless computing and service sharing in community networks have been gaining momentum due to the emerging technology of community network micro-clouds (CNMCs). However, running services in CNMCs can face enormous challenges such as the dynamic nature of micro-clouds, limited capacity of nodes and links, asymmetric quality of wireless links for services, deployment models based on geographic singularities rather than network QoS, etc. CNMCs have been increasingly used by network-intensive services that exchange significant amounts of data between the nodes on which they run; therefore, the performance heavily relies on the available bandwidth resources in the network. This paper proposes a novel bandwidth-aware service placement algorithm which outperforms the current random placement adopted by Guifi.net. Our preliminary results show that the proposed algorithm consistently outperforms the current random placement adopted in Guifi.net by 35% regarding its bandwidth gain. More importantly, as the number of services increases, the gain tends to increase accordingly. | There are a few works that provide network-aware service placement in distributed clouds. The work in @cite_9 proposes efficient algorithms for the placement of services in a distributed cloud environment. Their algorithms need input on the status of the network, computational resources and data resources, which are matched to application requirements. In @cite_16 the authors propose a selection algorithm to allocate resources for service-oriented applications, and the work in @cite_17 focuses on resource allocation in distributed small datacenters. | {
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_17"
],
"mid": [
"2132479554",
"2171688583",
"1974099360"
],
"abstract": [
"We consider a system of compute and storage resources geographically distributed over a large number of locations connected via a wide-area network. By distributing the resources, latency to users can be decreased, bandwidth costs reduced and availablility increased. The challenge is to distribute services with varying characteristics among the data centers optimally. Some services are very latency sensitive, others need vast amounts of storage, and yet others are computationally complex but do not require hard deadlines on execution. We propose efficient algorithms for the placement of services to get the maximum benefit from a distributed cloud systems. The algorithms need input on the status of the network, compute resources and data resources, which are matched to application requirements. This demonstration shows how a network-aware cloud can combine all three resource types - computation, storage, and network connectivity - in distributed cloud environments. Our dynamic service placement algorithm monitors the network and data center resources in real-time. Our prototype uses the information gathered to place or migrate services to provide the best user experience for a service.",
"Service-Oriented Computing (SOC) enables the composition of loosely coupled services provided with varying Quality of Service (QoS) levels. Selecting a (near-)optimal set of services for a composition in terms of QoS is crucial when many functionally equivalent services are available. With the advent of Cloud Computing, both the number of such services and their distribution across the network are rising rapidly, increasing the impact of the network on the QoS of such compositions. Despite this, current approaches do not differentiate between the QoS of services themselves and the QoS of the network. Therefore, the computed latency differs substantially from the actual latency, resulting in suboptimal QoS for service compositions in the cloud. Thus, we propose a network-aware approach that handles the QoS of services and the QoS of the network independently. First, we build a network model in order to estimate the network latency between arbitrary services and potential users. Our selection algorithm then leverages this model to find compositions that will result in a low latency given an employed execution policy. In our evaluation, we show that our approach efficiently computes compositions with much lower latency than current approaches.",
"We consider resource allocation algorithms for distributed cloud systems, which deploy cloud-computing resources that are geographically distributed over a large number of locations in a wide-area network. This distribution of cloud-computing resources over many locations in the network may be done for several reasons, such as to locate resources closer to users, to reduce bandwidth costs, to increase availability, etc. To get the maximum benefit from a distributed cloud system, we need efficient algorithms for resource allocation which minimize communication costs and latency. In this paper, we develop efficient resource allocation algorithms for use in distributed clouds. Our contributions are as follows: Assuming that users specify their resource needs, such as the number of virtual machines needed for a large computational task, we develop an efficient 2-approximation algorithm for the optimal selection of data centers in the distributed cloud. Our objective is to minimize the maximum distance, or latency, between the selected data centers. Next, we consider use of a similar algorithm to select, within each data center, the racks and servers where the requested virtual machines for the task will be located. Since the network inside a data center is structured and typically a tree, we make use of this structure to develop an optimal algorithm for rack and server selection. Finally, we develop a heuristic for partitioning the requested resources for the task amongst the chosen data centers and racks. We use simulations to evaluate the performance of our algorithms over example distributed cloud systems and find that our algorithms provide significant gains over other simpler allocation algorithms."
]
} |
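The @cite_17 abstract above mentions a 2-approximation for selecting data centers so that the maximum pairwise latency is minimized. A classic heuristic with the same guarantee under the triangle inequality — not necessarily the paper's own algorithm — takes, for each candidate node, that node plus its k-1 nearest neighbors and keeps the best group:

```python
# Textbook min-diameter k-subset heuristic: if the optimum has diameter D*,
# every node of the optimal group lies within D* of any of its members, so
# some candidate group below has diameter at most 2*D* (triangle inequality).
import itertools

def pick_centers(dist, k):
    """dist[u][v]: metric latency between data centers u, v."""
    best, best_diam = None, float("inf")
    for v in dist:
        group = sorted(dist, key=lambda u: dist[v][u])[:k]  # v plus nearest
        diam = max((dist[a][b] for a, b in itertools.combinations(group, 2)),
                   default=0.0)
        if diam < best_diam:
            best, best_diam = group, diam
    return best, best_diam

dist = {"a": {"a": 0, "b": 1, "c": 5},
        "b": {"a": 1, "b": 0, "c": 5},
        "c": {"a": 5, "b": 5, "c": 0}}
print(pick_centers(dist, 2))  # (['a', 'b'], 1)
```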
1603.08119 | 2317398342 | Seamless computing and service sharing in community networks have been gaining momentum due to the emerging technology of community network micro-clouds (CNMCs). However, running services in CNMCs can face enormous challenges such as the dynamic nature of micro-clouds, limited capacity of nodes and links, asymmetric quality of wireless links for services, deployment models based on geographic singularities rather than network QoS, etc. CNMCs have been increasingly used by network-intensive services that exchange significant amounts of data between the nodes on which they run; therefore, the performance heavily relies on the available bandwidth resources in the network. This paper proposes a novel bandwidth-aware service placement algorithm which outperforms the current random placement adopted by Guifi.net. Our preliminary results show that the proposed algorithm consistently outperforms the current random placement adopted in Guifi.net by 35% regarding its bandwidth gain. More importantly, as the number of services increases, the gain tends to increase accordingly. | Regarding service migration in distributed clouds, a few works have appeared recently. The authors in @cite_1 and @cite_2 study the dynamic service migration problem in mobile edge-clouds that host cloud-based services at the network edge. They formulate a sequential decision making problem for service migration using the framework of a Markov Decision Process (MDP) and illustrate the effectiveness of their approach by simulation using real-world mobility traces of taxis in San Francisco. The work in @cite_18 studies when services should be migrated in response to user mobility and demand variation. | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_2"
],
"mid": [
"974042900",
"2951655197",
"2963508401"
],
"abstract": [
"Edge-clouds provide a promising new approach to significantly reduce network operational costs by moving computation closer to the edge. A key challenge in such systems is to decide where and when services should be migrated in response to user mobility and demand variation. The objective is to optimize operational costs while providing rigorous performance guarantees. In this paper, we model this as a sequential decision making Markov Decision Problem (MDP). However, departing from traditional solution methods (such as dynamic programming) that require extensive statistical knowledge and are computationally prohibitive, we develop a novel alternate methodology. First, we establish an interesting decoupling property of the MDP that reduces it to two independent MDPs on disjoint state spaces. Then, using the technique of Lyapunov optimization over renewals, we design an online control algorithm for the decoupled problem that is provably cost-optimal. This algorithm does not require any statistical knowledge of the system parameters and can be implemented efficiently. We validate the performance of our algorithm using extensive trace-driven simulations. Our overall approach is general and can be applied to other MDPs that possess a similar decoupling property.",
"We study the dynamic service migration problem in mobile edge-clouds that host cloud-based services at the network edge. This offers the benefits of reduction in network overhead and latency but requires service migrations as user locations change over time. It is challenging to make these decisions in an optimal manner because of the uncertainty in node mobility as well as possible non-linearity of the migration and transmission costs. In this paper, we formulate a sequential decision making problem for service migration using the framework of Markov Decision Process (MDP). Our formulation captures general cost models and provides a mathematical framework to design optimal service migration policies. In order to overcome the complexity associated with computing the optimal policy, we approximate the underlying state space by the distance between the user and service locations. We show that the resulting MDP is exact for uniform one-dimensional mobility while it provides a close approximation for uniform two-dimensional mobility with a constant additive error term. We also propose a new algorithm and a numerical technique for computing the optimal solution which is significantly faster in computation than traditional methods based on value or policy iteration. We illustrate the effectiveness of our approach by simulation using real-world mobility traces of taxis in San Francisco.",
"Seamless computing and data access is enabled by the emerging technology of mobile micro-clouds (MMCs). Different from traditional centralized clouds, an MMC is typically connected directly to a wireless base-station and provides services to a small group of users, which allows users to have instantaneous access to cloud services. Due to the limited coverage area of base-stations and the dynamic nature of mobile users, network background traffic, etc., the question of where to place the services to cope with these dynamics arises. In this paper, we focus on dynamic service placement for MMCs. We consider the case where there is an underlying mechanism to predict the future costs of service hosting and migration, and the prediction error is assumed to be bounded. Our goal is to find the optimal service placement sequence which minimizes the average cost over a given time. To solve this problem, we first propose a method which solves for the optimal placement sequence for a specific look-ahead time-window, based on the predicted costs in this time-window. We show that this problem is equivalent to a shortest-path problem and propose an algorithm with polynomial time-complexity to find its solution. Then, we propose a method to find the optimal look-ahead window size, which minimizes an upper bound of the average cost. Finally, we evaluate the effectiveness of the proposed approach by simulations with realworld user-mobility traces."
]
} |
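The distance-based MDP approximation in the @cite_1 abstract above can be made concrete with a toy value iteration: the state is the user-service distance, and at each step the controller either migrates (resetting the distance at a fixed cost) or stays (paying a distance-dependent transmission cost). All constants and the symmetric random-walk mobility below are illustrative assumptions, not the paper's parameters:

```python
D = 5                    # maximum tracked user-service distance (hops)
C_MIG, C_TX = 3.0, 1.0   # one-off migration cost; per-hop transmission cost
GAMMA = 0.9              # discount factor

V = [0.0] * (D + 1)
for _ in range(500):     # value iteration until (approximate) convergence
    def nxt(d):          # expected next value: user moves +/-1 hop w.p. 1/2
        return 0.5 * V[max(d - 1, 0)] + 0.5 * V[min(d + 1, D)]
    V = [min(C_MIG + GAMMA * nxt(0),        # migrate: distance resets to 0
             C_TX * d + GAMMA * nxt(d))     # stay: pay transmission cost
         for d in range(D + 1)]

def nxt(d):
    return 0.5 * V[max(d - 1, 0)] + 0.5 * V[min(d + 1, D)]

policy = ["migrate" if C_MIG + GAMMA * nxt(0) < C_TX * d + GAMMA * nxt(d)
          else "stay" for d in range(D + 1)]
print(policy)  # typically "stay" at small d, "migrate" beyond a threshold
```

The resulting threshold policy (stay while the user is near, migrate once the distance cost outweighs the migration cost) matches the qualitative behavior those papers analyze.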
1603.08119 | 2317398342 | Seamless computing and service sharing in community networks have been gaining momentum due to the emerging technology of community network micro-clouds (CNMCs). However, running services in CNMCs can face enormous challenges such as the dynamic nature of micro-clouds, limited capacity of nodes and links, asymmetric quality of wireless links for services, deployment models based on geographic singularities rather than network QoS, etc. CNMCs have been increasingly used by network-intensive services that exchange significant amounts of data between the nodes on which they run; therefore, the performance heavily relies on the available bandwidth resources in the network. This paper proposes a novel bandwidth-aware service placement algorithm which outperforms the current random placement adopted by Guifi.net. Our preliminary results show that the proposed algorithm consistently outperforms the current random placement adopted in Guifi.net by 35% regarding its bandwidth gain. More importantly, as the number of services increases, the gain tends to increase accordingly. | While our focus in this paper is to design a low-complexity service placement heuristic for community network clouds to maximise bandwidth, another closely related work is @cite_4 , which proposes several algorithms that minimize the coordination and overlay costs along the network. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2039367411"
],
"abstract": [
"Community Cloud computing is a new trend on cloud computing that aims to build service infrastructures upon Wireless Community Networks taking advantage of underused community physical resources. Service allocation protocols are a key design challenge that all cloud systems must properly address to optimize resource utilization. They are specially important when cloud services require a Quality of Service (QoS) and network stability or performance (delay, jitter, minimum bandwidth) cannot be guaranteed a-priory. This work presents a study that tries to understand how to address cloud service deployments in such scenario. In particular, we start proposing an allocation algorithm to find optimal solutions when there is a central authority that coordinates the process. These solutions optimize the communication cost in two ways: (1) minimizing the service overlay diameter and, (2) minimizing the coordination cost along the network. Based on the study of the algorithm and the experimental simulations, we study the variables that outcome optimal service allocations to the detriment of other solutions. We verify these findings using data mining techniques. Researchers can take advantage of the simulation results and our observations to design more reliable distributed algorithms able to dynamically self-adapt to network changes."
]
} |
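A bandwidth-aware placement like the one proposed in the entry above ultimately needs, for candidate node pairs, the best achievable bottleneck bandwidth through the network. One standard building block — offered here as a hedged sketch with a made-up three-node topology, not Guifi.net data — is the widest-path variant of Dijkstra, where a path's value is its weakest link:

```python
import heapq

def widest_paths(graph, src):
    """graph: {u: {v: link_bandwidth}}; returns {node: best bottleneck bw}."""
    width = {src: float("inf")}
    heap = [(-float("inf"), src)]          # max-heap via negated widths
    while heap:
        w, u = heapq.heappop(heap)
        w = -w
        if w < width.get(u, 0):            # stale heap entry
            continue
        for v, bw in graph[u].items():
            cand = min(w, bw)              # path bandwidth = weakest link
            if cand > width.get(v, 0):
                width[v] = cand
                heapq.heappush(heap, (-cand, v))
    return width

net = {"a": {"b": 10, "c": 4}, "b": {"a": 10, "c": 6}, "c": {"a": 4, "b": 6}}
print(widest_paths(net, "a"))   # {'a': inf, 'b': 10, 'c': 6}
```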
1603.07933 | 2950651751 | Social media platforms provide several social interactional features. Due to the large scale reach of social media, these interactional features help enable various types of political discourse. Constructive and diversified discourse is important for sustaining healthy communities and reducing the impact of echo chambers. In this paper, we empirically examine the role of a newly introduced Twitter feature, 'quote retweets' (or 'quote RTs') in political discourse, specifically whether it has led to improved, civil, and balanced exchange. Quote RTs allow users to quote the tweet they retweet, while adding a short comment. Our analysis using content, network and crowd labeled data indicates that the feature has increased political discourse and its diffusion, compared to existing features. We discuss the implications of our findings in understanding and reducing online polarization. | However, despite these affordances, social media platforms have undergone considerable scrutiny in terms of their ability to allow diversified exchange of thoughts and opinions, especially around politically oriented topics @cite_0 . The increasingly involved role of algorithms in social media content personalization, as well as the varied levels of engagement promoted by different interactional features of the platforms, has been argued to be behind the creation of ideological "echo chambers" @cite_20 . Our work is situated in this body of work wherein we examine how the newly introduced 'quote RT' feature on Twitter is utilized for political discourse and for sharing of political opinions in the larger social network. | {
"cite_N": [
"@cite_0",
"@cite_20"
],
"mid": [
"2125815607",
"2137809006"
],
"abstract": [
"In the last decade, blogs have exploded in number, popularity and scope. However, many commentators and researchers speculate that blogs isolate readers in echo chambers, cutting them off from dissenting opinions. Our empirical paper tests this hypothesis. Using a hand-coded sample of over 1,000 comments from 33 of the worlds top blogs, we (nd that agreement outnumbers disagreement in blog comments by more than 3 to 1. However, this ratio depends heavily on a blogs genre, varying between 2 to 1 and 9 to 1. Using these hand-coded blog comments as input, we also show that natural language processing techniques can identify the linguistic markers of agreement. We conclude by applying our empirical and algorithmic (ndings to practical implications for blogs, and discuss the many questions raised by our work.",
"A review of research suggests that the desire for opinion reinforcement may play a more important role in shaping individuals’ exposure to online political information than an aversion to opinion challenge. The article tests this idea using data collected via a webadministered behavior-tracking study with subjects recruited from the readership of 2 partisan online news sites (N = 727). The results demonstrate that opinion-reinforcing information promotes news story exposure while opinion-challenging information makes exposure only marginally less likely. The influence of both factors is modest, but opinionreinforcing information is a more important predictor. Having decided to view a news story, evidence of an aversion to opinion challenges disappears: There is no evidence that individuals abandon news stories that contain information with which they disagree. Implications and directions for future research are discussed."
]
} |
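Operationally, separating quote RTs from plain retweets and replies is a matter of inspecting tweet payload fields. The sketch below follows Twitter's v1.1-style tweet objects; the sample records are fabricated, and edge cases such as retweets of quote tweets are resolved by checking the retweet field first:

```python
def interaction_type(tweet):
    """Classify the user's action from a v1.1-style tweet dict."""
    if "retweeted_status" in tweet:        # verbatim re-post; check first,
        return "retweet"                   # since a RT of a quote also sets quote flags
    if tweet.get("is_quote_status") and "quoted_status" in tweet:
        return "quote_rt"                  # retweet with added commentary
    if tweet.get("in_reply_to_status_id") is not None:
        return "reply"
    return "original"

sample = [
    {"is_quote_status": True, "quoted_status": {}, "text": "Disagree. https://t.co/x"},
    {"retweeted_status": {}, "text": "RT @user: ..."},
    {"in_reply_to_status_id": 42, "text": "@user no"},
]
print([interaction_type(t) for t in sample])  # ['quote_rt', 'retweet', 'reply']
```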
1603.07933 | 2950651751 | Social media platforms provide several social interactional features. Due to the large scale reach of social media, these interactional features help enable various types of political discourse. Constructive and diversified discourse is important for sustaining healthy communities and reducing the impact of echo chambers. In this paper, we empirically examine the role of a newly introduced Twitter feature, 'quote retweets' (or 'quote RTs') in political discourse, specifically whether it has led to improved, civil, and balanced exchange. Quote RTs allow users to quote the tweet they retweet, while adding a short comment. Our analysis using content, network and crowd labeled data indicates that the feature has increased political discourse and its diffusion, compared to existing features. We discuss the implications of our findings in understanding and reducing online polarization. | Prior work has also explored the ways and practices that drive the usage of different interactional features of social media platforms for discourse. Retweeting behavior has been the most extensively studied. In an early work, @cite_10 performed a user study to understand why users retweet and found that users have a wide range of use cases for retweeting, from personal gains (e.g., increasing followers) to spreading information. For a detailed survey on various studies covering retweet behavior, refer to @cite_15 . | {
"cite_N": [
"@cite_15",
"@cite_10"
],
"mid": [
"2399902206",
"2001653897"
],
"abstract": [
"Arguably one of the most important features of Twitter is the support for “retweets” or messages re-posted verbatim by a user that were originated by someone else. Despite the fact that retweets are routinely studied and reported, many important questions remain about user motivation for their use and their significance. In this paper we answer the question of what users indicate when they retweet. We do so in a comprehensive fashion, by employing a user survey, a study of user profiles, and a meta-review of over 100 research publications from three related major conferences. Our findings indicate that retweeting indicates not only interest in a message, but also trust in the message and the originator, and agreement with the message contents. However, the findings are significantly weaker for journalists, some of whom beg to differ declaring so in their own user profiles. On the other hand, the inclusion of hashtags strengthens the signal of agreement, especially when the hashtags are related to politics. While in the past there have been additional claims in the literature about possible reasons for retweeting, many of them are not supported, especially given the technical changes introduced recently by Twitter.",
"Twitter - a microblogging service that enables users to post messages (\"tweets\") of up to 140 characters - supports a variety of communicative practices; participants use Twitter to converse with individuals, groups, and the public at large, so when conversations emerge, they are often experienced by broader audiences than just the interlocutors. This paper examines the practice of retweeting as a way by which participants can be \"in a conversation.\" While retweeting has become a convention inside Twitter, participants retweet using different styles and for diverse reasons. We highlight how authorship, attribution, and communicative fidelity are negotiated in diverse ways. Using a series of case studies and empirical data, this paper maps out retweeting as a conversational practice."
]
} |
1603.07933 | 2950651751 | Social media platforms provide several social interactional features. Due to the large scale reach of social media, these interactional features help enable various types of political discourse. Constructive and diversified discourse is important for sustaining healthy communities and reducing the impact of echo chambers. In this paper, we empirically examine the role of a newly introduced Twitter feature, 'quote retweets' (or 'quote RTs') in political discourse, specifically whether it has led to improved, civil, and balanced exchange. Quote RTs allow users to quote the tweet they retweet, while adding a short comment. Our analysis using content, network and crowd labeled data indicates that the feature has increased political discourse and its diffusion, compared to existing features. We discuss the implications of our findings in understanding and reducing online polarization. | Finally, although interactional features like retweets in general have enabled rich discourse on a variety of topics, they have also been employed for deviant online activities. The openness of these platforms, combined with the ease of fabricating a personal identity, has led to the rise of trolls on different online platforms, e.g., Wikipedia @cite_16 , forum comments @cite_6 @cite_18 , online games @cite_11 , and Facebook @cite_21 @cite_12 . Building on this line of research, one of the contributions of this paper is characterizing the civility of political discourse on Twitter via the quote RT feature. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_6",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"2037853032",
"2101892263",
"2113439836",
"2137188527",
"2160600179"
],
"abstract": [
"",
"In an effort to clean up user comment sections, news organizations have turned to Facebook, the world's largest social network site, as a way to make users more identifiable and accountable for the content they produce. It is hypothesized that users leaving comments via their Facebook profile will be less likely to engage in uncivil and impolite discussion, even when it comes to discussing politically sensitive and potentially divisive issues. By analysing the content of discussion as it occurs in response to political news content on the Washington Post Facebook, and comparing it to that which occurs on the Washington Post website where users are afforded a relatively high level of anonymity, the present study determines the extent to which Facebook increases the level of civility and impoliteness in an area of political discussion renowned for uncivil and impolite communicative behaviour. In line with earlier theories of social interaction, the paper finds that political discussion on The Washington Pos...",
"This study explores the issue of “civility” in anonymous comments posted to news media websites. A textual analysis of journalistic essays about the issue and more than 900 audience-member response...",
"Research on trolls is scarce, but their activities challenge online communities; one of the main challenges of the Wikipedia community is to fight against vandalism and trolls. This study identifies Wikipedia trollsâ behaviours and motivations, and compares and contrasts hackers with trolls; it extends our knowledge about this type of vandalism and concludes that Wikipedia trolls are one type of hacker. This study reports that boredom, attention seeking, and revenge motivate trolls; they regard Wikipedia as an entertainment venue, and find pleasure from causing damage to the community and other people. Findings also suggest that trollsâ behaviours are characterized as repetitive, intentional, and harmful actions that are undertaken in isolation and under hidden virtual identities, involving violations of Wikipedia policies, and consisting of destructive participation in the community.",
"This study explores use of the social network site Facebook for online political discussion. Online political discussion has been criticized for isolating disagreeing persons from engaging in discussion and for having an atmosphere of uncivil discussion behavior. Analysis reveals the participation of disagreeing parties within the discussion with the large majority of posters (73 percent) expressing support for the stated position of the Facebook group, and a minority of posters (17 percent) expressing opposition to the position of the group. Despite the presence of uncivil discussion posting within the Facebook group, the large majority of discussion participation (75 percent) is devoid of flaming. Results of this study provide important groundwork and raise new questions for study of online political discussion as it occurs in the emergent Internet technologies of social network sites.",
"One problem facing players of competitive games is negative, or toxic, behavior. League of Legends, the largest eSport game, uses a crowdsourcing platform called the Tribunal to judge whether a reported toxic player should be punished or not. The Tribunal is a two stage system requiring reports from those players that directly observe toxic behavior, and human experts that review aggregated reports. While this system has successfully dealt with the vague nature of toxic behavior by majority rules based on many votes, it naturally requires tremendous cost, time, and human efforts. In this paper, we propose a supervised learning approach for predicting crowdsourced decisions on toxic behavior with large-scale labeled data collections; over 10 million user reports involved in 1.46 million toxic players and corresponding crowdsourced decisions. Our result shows good performance in detecting overwhelmingly majority cases and predicting crowdsourced decisions on them. We demonstrate good portability of our classifier across regions. Finally, we estimate the practical implications of our approach, potential cost savings and victim protection."
]
} |
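As a toy stand-in for the kind of supervised (in)civility prediction discussed above — e.g., the crowdsourced-decision classifier of @cite_11, which actually learns from aggregated player reports rather than raw text — a TF-IDF plus logistic regression baseline over fabricated labeled snippets looks like this:

```python
# Illustrative incivility classifier; texts and labels are invented and the
# model is a generic baseline, not any cited paper's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["you are an idiot", "thanks for sharing this", "total moron take",
         "interesting point, well argued", "get lost loser",
         "I respectfully disagree"]
labels = [1, 0, 1, 0, 1, 0]   # 1 = uncivil, 0 = civil

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["what an idiot", "great thread"]))  # likely [1, 0]
```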
1603.07745 | 2309156533 | We formulate a general energy and method for segmentation that is designed to have preference for segmenting the coarse structure over the fine structure of the data, without smoothing across boundaries of regions. The energy is formulated by considering data terms at a continuum of scales from the scale space computed from the Heat Equation within regions, and integrating these terms over all time. We show that the energy may be approximately optimized without solving for the entire scale space, but rather solving time-independent linear equations at the native scale of the image, making the method computationally feasible. We provide a multi-region scheme, and apply our method to motion segmentation. Experiments on a benchmark dataset show that our method is less sensitive to clutter or other undesirable fine-scale structure, and leads to better performance in motion segmentation. | Scale space theory @cite_25 @cite_13 @cite_22 @cite_47 has a long and rich history as a theory for analyzing images, and we only provide brief highlights. The basic idea is that an image consists of structures at various scales (e.g., a leaf of a tree exists at a different scale than a forest), and thus to analyze an image without a priori knowledge, it is necessary to consider the image at multiple scales. This is accomplished by blurring the image at a continuum of kernel sizes. The most common kernel is a Gaussian, which is known to be the only scale space satisfying certain axioms such as not introducing any new features as the image is blurred @cite_9 . Scale space has been used to analyze structures in images (e.g., @cite_14 @cite_15 @cite_9 @cite_39 ). This has had wide-ranging applications in stereo and optical flow @cite_28 , reconstruction @cite_37 @cite_16 , key-point detection in wide-baseline matching @cite_11 , design of descriptors for matching @cite_23 , shape matching @cite_21 , and curve evolution @cite_2 , among others. | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_22",
"@cite_15",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_39",
"@cite_23",
"@cite_2",
"@cite_47",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_11"
],
"mid": [
"2169340692",
"2153645845",
"1752870650",
"",
"2118877769",
"2117612778",
"2007206727",
"",
"2016120301",
"",
"2083209089",
"2210074228",
"",
"2022735534",
""
],
"abstract": [
"In computer vision, the one-parameter family of images obtained from the Laplacian-of-a-Gaussian-filtered version of the image, parameterized by the width of the Gaussian, has proved to be a useful data structure for the extraction of feature data. In particular, the zero crossings of this so-called scale-space data are associated with edges and have been proposed by D. Marr (1982) and others as the basis of a representation of the image data. The question arises as to whether the representation is complete and stable. The authors survey some of the studies and results related to these questions as well as several studies that attempt reconstructions based on this or related representations. They formulate a novel method for reconstruction from zero crossings in scale space that is based on minimizing equation error, and they present results showing that the reconstruction is possible but can be unstable. They further show that the method applies when gradient data along the zero crossings are included in the representation, and they demonstrate empirically that the reconstruction is then stable. >",
"We investigate the “deep structure” of a scale-space image. The emphasis is on topology, i.e. we concentrate on critical points—points with vanishing gradient—and top-points—critical points with degenerate Hessian—and monitor their displacements, respectively generic morsifications in scale-space. Relevant parts of catastrophe theory in the context of the scale-space paradigm are briefly reviewed, and subsequently rewritten into coordinate independent form. This enables one to implement topological descriptors using a conveniently defined coordinate system.",
"For grey-value images, it is well accepted that the neighborhood rather than the pixel carries the geometrical interpretation. Interestingly the spatial configuration of the neighborhood is the basis for the perception of humans. Common practise in color image processing, is to use the color information without considering the spatial structure. We aim at a physical basis for the local interpretation of color images. We propose a framework for spatial color measurement, based on the Gaussian scale-space theory. We consider a Gaussian color model, which inherently uses the spatial and color information in an integrated model. The framework is well-founded in physics as well as in measurement science. The framework delivers sound and robust spatial color invariant features. The usefulness of the proposed measurement framework is illustrated by edge detection, where edges are discriminated as shadow, highlight, or object boundary. Other applications of the framework include color invariant image retrieval and color constant edge detection.",
"",
"Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is taster because it examines far fewer potential matches between the images than existing techniques Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted tor use in a stereo vision system.",
"A basic and extensive treatment of discrete aspects of the scale-space theory is presented. A genuinely discrete scale-space theory is developed and its connection to the continuous scale-space theory is explained. Special attention is given to discretization effects, which occur when results from the continuous scale-space theory are to be implemented computationally. The 1D problem is solved completely in an axiomatic manner. For the 2D problem, the author discusses how the 2D discrete scale space should be constructed. The main results are as follows: the proper way to apply the scale-space theory to discrete signals and discrete images is by discretization of the diffusion equation, not the convolution integral; the discrete scale space obtained in this way can be described by convolution with the kernel, which is the discrete analog of the Gaussian kernel, a scale-space implementation based on the sampled Gaussian kernel might lead to undesirable effects and computational problems, especially at fine levels of scale; the 1D discrete smoothing transformations can be characterized exactly and a complete catalogue is given; all finite support 1D discrete smoothing transformations arise from repeated averaging over two adjacent elements (the limit case of such an averaging process is described); and the symmetric 1D discrete smoothing kernels are nonnegative and unimodal, in both the spatial and the frequency domain. >",
"One of the biggest challenges in non-rigid shape retrieval and comparison is the design of a shape descriptor that would maintain invariance under a wide class of transformations the shape can undergo. Recently, heat kernel signature was introduced as an intrinsic local shape descriptor based on diffusion scale-space analysis. In this paper, we develop a scale-invariant version of the heat kernel descriptor. Our construction is based on a logarithmically sampled scale-space in which shape scaling corresponds, up to a multiplicative constant, to a translation. This translation is undone using the magnitude of the Fourier transform. The proposed scale-invariant local descriptors can be used in the bag-of-features framework for shape retrieval in the presence of transformations such as isometric deformations, missing data, topological noise, and global and local scaling. We get significant performance improvement over state-of-the-art algorithms on recently established non-rigid shape retrieval benchmarks.",
"",
"Scale invariant feature detectors often find stable scales in only a few image pixels. Consequently, methods for feature matching typically choose one of two extreme options: matching a sparse set of scale invariant features, or dense matching using arbitrary scales. In this paper we turn our attention to the overwhelming majority of pixels, those where stable scales are not found by standard techniques. We ask, is scale-selection necessary for these pixels, when dense, scale-invariant matching is required and if so, how can it be achieved? We make the following contributions: (i) We show that features computed over different scales, even in low-contrast areas, can be different; selecting a single scale, arbitrarily or otherwise, may lead to poor matches when the images have different scales. (ii) We show that representing each pixel as a set of SIFTs, extracted at multiple scales, allows for far better matches than single-scale descriptors, but at a computational price. Finally, (iii) we demonstrate that each such set may be accurately represented by a low-dimensional, linear subspace. A subspace-to-point mapping may further be used to produce a novel descriptor representation, the Scale-Less SIFT (SLS), as an alternative to single-scale descriptors. These claims are verified by quantitative and qualitative tests, demonstrating significant improvements over existing methods.",
"",
"In this study, we propose the application of principal components analysis (PCA) to scale-spaces. PCA is a standard method used in computer vision. The translation of an input image into scale-space is a continuous operation, which requires the extension of conventional finite matrix- based PCA to an infinite number of dimensions. In this study, we use spectral decomposition to resolve this infinite eigenproblem by integration and we propose an approximate solution based on polynomial equations. To clarify its eigensolutions, we apply spectral decomposition to the Gaussian scale-space and scale-normalized Laplacian of Gaussian (LoG) space. As an application of this proposed method, we introduce a method for generating Gaussian blur images and scale-normalized LoG images, where we demonstrate that the accuracy of these images can be very high when calculating an arbitrary scale using a simple linear combination. We also propose a new Scale Invariant Feature Transform (SIFT) detector as a more practical example.",
"We present a variational approach for surface reconstruction from a set of oriented points with scale information. We focus particularly on scenarios with non-uniform point densities due to images taken from different distances. In contrast to previous methods, we integrate the scale information in the objective and globally optimize the signed distance function of the surface on a balanced octree grid. We use a finite element discretization on the dual structure of the octree minimizing the number of variables. The tetrahedral mesh is generated efficiently from the dual structure, and also memory efficiency is optimized, such that robust data terms can be used even on very large scenes. The surface normals are explicitly optimized and used for surface extraction to improve the reconstruction at edges and corners.",
"",
"In practice the relevant details of images exist only over a restricted range of scale. Hence it is important to study the dependence of image structure on the level of resolution. It seems clear enough that visual perception treats images on several levels of resolution simultaneously and that this fact must be important for the study of perception. However, no applicable mathematically formulated theory to deal with such problems appers to exist. In this paper it is shown that any image can be embedded in a one-parameter family of derived images (with resolution as the parameter) in essentially only one unique way if the constraint that no spurious detail should be generated when the resolution is diminished, is applied. The structure of this family is governed by the well known diffusion equation (a parabolic, linear, partial differential equation of the second order). As such the structure fits into existing theories that treat the front end of the visual system as a continuous tack of homogeneous layer, characterized by iterated local processing schemes. When resolution is decreased the images becomes less articulated because the extrem (“light and dark blobs”) disappear one after the other. This erosion of structure is a simple process that is similar in every case. As a result any image can be described as a juxtaposed and nested set of light and dark blobs, wherein each blod has a limited range of resolution in which it manifests itself. The structure of the family of derived images permits a derivation of the sampling density required to sample the image at multiple scales of resolution. The natural scale along the resolution axis (leading to an informationally uniform sampling density) is logarithmic, thus the structure is apt for the description of size invariances.",
""
]
} |
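The Gaussian scale space discussed throughout the entry above has a direct computational reading: blurring with Gaussians of growing width is equivalent to running the heat equation, with sigma = sqrt(2t) under the convention u_t = Delta u. A small SciPy sketch (the random image is only a placeholder):

```python
# Build a Gaussian scale space at a few diffusion times t; fine detail is
# progressively suppressed while coarse structure survives.
import numpy as np
from scipy.ndimage import gaussian_filter

img = np.random.rand(64, 64)
times = [0.5, 2.0, 8.0]                     # diffusion times t
scale_space = [gaussian_filter(img, sigma=np.sqrt(2 * t)) for t in times]
for t, u in zip(times, scale_space):
    print(f"t={t}: std of blurred image = {u.std():.3f}")  # contrast shrinks with t
```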
1603.07745 | 2309156533 | We formulate a general energy and method for segmentation that is designed to have preference for segmenting the coarse structure over the fine structure of the data, without smoothing across boundaries of regions. The energy is formulated by considering data terms at a continuum of scales from the scale space computed from the Heat Equation within regions, and integrating these terms over all time. We show that the energy may be approximately optimized without solving for the entire scale space, but rather solving time-independent linear equations at the native scale of the image, making the method computationally feasible. We provide a multi-region scheme, and apply our method to motion segmentation. Experiments on a benchmark dataset show that our method is less sensitive to clutter or other undesirable fine-scale structure, and leads to better performance in motion segmentation. | Gaussian scale spaces have also been used in image segmentation, most notably in texture segmentation @cite_30 @cite_44 @cite_36 , since textures occur frequently in general images @cite_46 @cite_4 , where the need for scale information is cogent. While these methods naturally capture important scale information, they use a scale space defined on the entire image, which blurs across segmentation boundaries. Anisotropic scale spaces @cite_45 @cite_1 have been applied to reduce blurring across boundaries, but this could blur across regions where edges are not salient. Recently, @cite_24 have addressed this issue by using discrete scales computed locally within the evolving regions of the segmentation. However, only a discrete set of scales is used, and the method does not exhibit a coarse-to-fine property, which is the focus of this work. Such methods for segmentation have been numerically implemented with various optimization methods, including level sets @cite_26 , and more recently convex optimization methods @cite_6 . The energy we consider is not convex, and thus we rely on gradient descent on curves. Our energy also involves optimization with partial differential equation (PDE) constraints, and thus we apply optimization techniques from @cite_48 @cite_3 . | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_4",
"@cite_36",
"@cite_48",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_44",
"@cite_24",
"@cite_45",
"@cite_46"
],
"mid": [
"2108976589",
"1991113069",
"1991367009",
"2119943055",
"2104028907",
"2040378863",
"2544610656",
"",
"2622382464",
"1931273051",
"",
"2108944208"
],
"abstract": [
"Texture segmentation is a difficult problem, as is apparent from camouflage pictures. A textured region can contain texture elements of various sizes, each of which can itself be textured. We approach this problem using a bottom-up aggregation framework that combines structural characteristics of texture elements with filter responses. Our process adaptively identifies the shape of texture elements and characterize them by their size, aspect ratio, orientation, brightness, etc., and then uses various statistics of these properties to distinguish between different textures. At the same time our process uses the statistics of filter responses to characterize textures. In our process the shape measures and the filter responses crosstalk extensively. In addition, a top-down cleaning process is applied to avoid mixing the statistics of neighboring segments. We tested our algorithm on real images and demonstrate that it can accurately segment regions that contain challenging textures.",
"We devise new numerical algorithms, called PSC algorithms, for following fronts propagating with curvature-dependent speed. The speed may be an arbitrary function of curvature, and the front also can be passively advected by an underlying flow. These algorithms approximate the equations of motion, which resemble Hamilton-Jacobi equations with parabolic right-hand sides, by using techniques from hyperbolic conservation laws. Non-oscillatory schemes of various orders of accuracy are used to solve the equations, providing methods that accurately capture the formation of sharp gradients and cusps in the moving fronts. The algorithms handle topological merging and breaking naturally, work in any number of space dimensions, and do not require that the moving surface be written as a function. The methods can be also used for more general Hamilton-Jacobi-type problems. We demonstrate our algorithms by computing the solution to a variety of surface motion problems.",
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.",
"In this work we approach the analysis and segmentation of natural textured images by combining ideas from image analysis and probabilistic modeling. We rely on AM-FM texture models and specifically on the Dominant Component Analysis (DCA) paradigm for feature extraction. This method provides a low-dimensional, dense and smooth descriptor, capturing essential aspects of texture, namely scale, orientation, and contrast. Our contributions are at three levels of the texture analysis and segmentation problems: First, at the feature extraction stage we propose a regularized demodulation algorithm that provides more robust texture features and explore the merits of modifying the channel selection criterion of DCA. Second, we propose a probabilistic interpretation of DCA and Gabor filtering in general, in terms of Local Generative Models. Extending this point of view to edge detection facilitates the estimation of posterior probabilities for the edge and texture classes. Third, we propose the weighted curve evolution scheme that enhances the Region Competition Geodesic Active Regions methods by allowing for the locally adaptive fusion of heterogeneous cues. Our segmentation results are evaluated on the Berkeley Segmentation Benchmark, and compare favorably to current state-of-the-art methods.",
"We consider the problem of segmenting an image through the minimization of an energy criterion involving region and boundary functionals. We show that one can go from one class to the otherby solving Poisson's orHelmholtz's equation with well-chosen boundar y conditions. Using this equivalence, we study the case of a large class of region functionals by standard methods of the calculus of variations and derive the corresponding Euler-Lagrange equations. We revisit this problem using the notion of a shape derivative and show that the same equations can be elegantly derived without going through the unnatural step of converting the region integrals into boundary integrals. We also define a larger class of region functionals based on the estimation and comparison to a prototype of the probability density distribution of image features and show how the shape derivative tool allows us to easily compute the corresponding Gateaux derivatives and Euler-Lagrange equations. Finally we apply this new functional to the problem of regions segmentation in sequences of color images. We briefly describe our numerical scheme and show some experimental results.",
"This paper explores various aspects of the image decomposition problem using modern variational techniques. We aim at splitting an original image f into two components u and ?, where u holds the geometrical information and ? holds the textural information. The focus of this paper is to study different energy terms and functional spaces that suit various types of textures. Our modeling uses the total-variation energy for extracting the structural part and one of four of the following norms for the textural part: L2, G, L1 and a new tunable norm, suggested here for the first time, based on Gabor functions. Apart from the broad perspective and our suggestions when each model should be used, the paper contains three specific novelties: first we show that the correlation graph between u and ? may serve as an efficient tool to select the splitting parameter, second we propose a new fast algorithm to solve the TV ? L1 minimization problem, and third we introduce the theory and design tools for the TV-Gabor model.",
"In this work we revisit the Mumford-Shah functional, one of the most studied variational approaches to image segmentation. The contribution of this paper is to propose an algorithm which allows to minimize a convex relaxation of the Mumford-Shah functional obtained by functional lifting. The algorithm is an efficient primal-dual projection algorithm for which we prove convergence. In contrast to existing algorithms for minimizing the full Mumford-Shah this is the first one which is based on a convex relaxation. As a consequence the computed solutions are independent of the initialization. Experimental results confirm that the proposed algorithm determines smooth approximations while preserving discontinuities of the underlying signal.",
"",
"We propose a new multiscale image segmentation model, based on the active contour snake model and the Polyakov action. The concept of scale, general issue in physics and signal processing, is introduced in the active contour model, which is a well-known image segmentation model that consists of evolving a contour in images toward the boundaries of objects. The Polyakov action, introduced in image processing by Sochen-Kimmel-Malladi in (1998), provides an efficient mathematical framework to define a multiscale segmentation model because it generalizes the concept of harmonic maps embedded in higher-dimensional Riemannian manifolds such as multiscale images. Our multiscale segmentation model, unlike classical multiscale segmentations which work scale by scale to speed up the segmentation process, uses all scales simultaneously, i.e. the whole scale space, to introduce the geometry of multiscale images in the segmentation process. The extracted multiscale structures will be useful to efficiently improve the robustness and the performance of standard shape analysis techniques such as shape recognition and shape registration. Another advantage of our method is to use not only the Gaussian scale space but also many other multiscale spaces such as the Perona-Malik scale space, the curvature scale space or the Beltrami scale space. Finally, this multiscale segmentation technique is coupled with a multiscale edge detecting function based on the gradient vector flow model, which is able to extract convex and concave object boundaries independent of the initial condition. We apply our multiscale segmentation model on a synthetic image and a medical image.",
"We propose new dense descriptors for texture segmentation. Given a region of arbitrary shape in an image, these descriptors are formed from shape-dependent scale spaces of oriented gradients. These scale spaces are defined by Poisson-like partial differential equations. A key property of our new descriptors is that they do not aggregate image data across the boundary of the region, in contrast to existing descriptors based on aggregation of oriented gradients. As an example, we show how the descriptor can be incorporated in a Mumford-Shah energy for texture segmentation. We test our method on several challenging datasets for texture segmentation and textured object tracking. Experiments indicate that our descriptors lead to more accurate segmentation than non-shape dependent descriptors and the state-of-the-art in texture segmentation.",
"",
"We reexamine the role of multiscale cues in image segmentation using an architecture that constructs a globally coherent scale-space output representation. This characteristic is in contrast to many existing works on bottom-up segmentation, which prematurely compress information into a single scale. The architecture is a standard extension of Normalized Cuts from an image plane to an image pyramid, with cross-scale constraints enforcing consistency in the solution while allowing emergence of coarse-to-fine detail. We observe that multiscale processing, in addition to improving segmentation quality, offers a route by which to speed computation. We make a significant algorithmic advance in the form of a custom multigrid eigensolver for constrained Angular Embedding problems possessing coarse-to-fine structure. Multiscale Normalized Cuts is a special case. Our solver builds atop recent results on randomized matrix approximation, using a novel interpolation operation to mold its computational strategy according to cross-scale constraints in the problem definition. Applying our solver to multiscale segmentation problems demonstrates speedup by more than an order of magnitude. This speedup is at the algorithmic level and carries over to any implementation target."
]
} |
1603.07745 | 2309156533 | We formulate a general energy and method for segmentation that is designed to have a preference for segmenting the coarse structure over the fine structure of the data, without smoothing across boundaries of regions. The energy is formulated by considering data terms at a continuum of scales from the scale space computed from the Heat Equation within regions, and integrating these terms over all time. We show that the energy may be approximately optimized without solving for the entire scale space, but rather by solving time-independent linear equations at the native scale of the image, making the method computationally feasible. We provide a multi-region scheme, and apply our method to motion segmentation. Experiments on a benchmark dataset show that our method is less sensitive to clutter or other undesirable fine-scale structure, and leads to better performance in motion segmentation. | Using an energy that incorporates a discrete number of scales @cite_24 (middle) captures the desired segmentation in this example; however, it does not exhibit coarse-to-fine behavior. | {
"cite_N": [
"@cite_24"
],
"mid": [
"1931273051"
],
"abstract": [
"We propose new dense descriptors for texture segmentation. Given a region of arbitrary shape in an image, these descriptors are formed from shape-dependent scale spaces of oriented gradients. These scale spaces are defined by Poisson-like partial differential equations. A key property of our new descriptors is that they do not aggregate image data across the boundary of the region, in contrast to existing descriptors based on aggregation of oriented gradients. As an example, we show how the descriptor can be incorporated in a Mumford-Shah energy for texture segmentation. We test our method on several challenging datasets for texture segmentation and textured object tracking. Experiments indicate that our descriptors lead to more accurate segmentation than non-shape dependent descriptors and the state-of-the-art in texture segmentation."
]
} |
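To make the discrete-scales energy discussed in the row above concrete, here is a minimal, hedged sketch of a Chan-Vese-style data term summed over a finite set of Gaussian scales. All names (`multiscale_data_term`, `sigmas`) are illustrative assumptions rather than anything from the cited papers; note that the global Gaussian smoothing used here blurs across the region boundary, which is precisely the limitation the shape-tailored formulation later in this document is designed to avoid.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_data_term(image, mask, sigmas):
    """Chan-Vese-style data term summed over a discrete set of scales.

    At each scale, penalize deviation of the smoothed image from the
    mean inside and outside the boolean region `mask`. Assumes `mask`
    contains both True and False pixels.
    """
    cost = 0.0
    for s in sigmas:
        u = gaussian_filter(image.astype(float), sigma=s)
        c_in, c_out = u[mask].mean(), u[~mask].mean()
        cost += ((u[mask] - c_in) ** 2).sum() + ((u[~mask] - c_out) ** 2).sum()
    return cost
```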
1603.07745 | 2309156533 | We formulate a general energy and method for segmentation that is designed to have a preference for segmenting the coarse structure over the fine structure of the data, without smoothing across boundaries of regions. The energy is formulated by considering data terms at a continuum of scales from the scale space computed from the Heat Equation within regions, and integrating these terms over all time. We show that the energy may be approximately optimized without solving for the entire scale space, but rather by solving time-independent linear equations at the native scale of the image, making the method computationally feasible. We provide a multi-region scheme, and apply our method to motion segmentation. Experiments on a benchmark dataset show that our method is less sensitive to clutter or other undesirable fine-scale structure, and leads to better performance in motion segmentation. | Since we apply our method to the problem of segmenting moving objects in video based on motion, we highlight some aspects of that literature most relevant to this work. Methods for motion segmentation are typically based on optical flow (e.g., @cite_0 ). Piecewise parametric models for the motion of regions are used in, e.g., @cite_40 @cite_18 , and non-parametric warps are used as motion models (e.g., @cite_7 @cite_17 @cite_12 ). Our goal here is not to estimate motion; rather, we use existing techniques for motion estimation and improve the segmentation of regions by replacing the data term with our novel energy. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_0",
"@cite_40",
"@cite_12",
"@cite_17"
],
"mid": [
"2114190460",
"2076756823",
"2033959528",
"2142912032",
"2201368473",
"2157130620"
],
"abstract": [
"We present a novel variational approach for segmenting the image plane into a set of regions of parametric motion on the basis of two consecutive frames from an image sequence. Our model is based on a conditional probability for the spatio-temporal image gradient, given a particular velocity model, and on a geometric prior on the estimated motion field favoring motion boundaries of minimal length. Exploiting the Bayesian framework, we derive a cost functional which depends on parametric motion models for each of a set of regions and on the boundary separating these regions. The resulting functional can be interpreted as an extension of the Mumford-Shah functional from intensity segmentation to motion segmentation. In contrast to most alternative approaches, the problems of segmentation and motion estimation are jointly solved by continuous minimization of a single functional. Minimizing this functional with respect to its dynamic variables results in an eigenvalue problem for the motion parameters and in a gradient descent evolution for the motion discontinuity set. We propose two different representations of this motion boundary: an explicit spline-based implementation which can be applied to the motion-based tracking of a single moving object, and an implicit multiphase level set implementation which allows for the segmentation of an arbitrary number of multiply connected moving objects. Numerical results both for simulated ground truth experiments and for real-world sequences demonstrate the capacity of our approach to segment objects based exclusively on their relative motion.",
"Motion is a strong cue for unsupervised object-level grouping. In this paper, we demonstrate that motion will be exploited most effectively, if it is regarded over larger time windows. Opposed to classical two-frame optical flow, point trajectories that span hundreds of frames are less susceptible to short-term variations that hinder separating different objects. As a positive side effect, the resulting groupings are temporally consistent over a whole video shot, a property that requires tedious post-processing in the vast majority of existing approaches. We suggest working with a paradigm that starts with semi-dense motion cues first and that fills up textureless areas afterwards based on color. This paper also contributes the Freiburg-Berkeley motion segmentation (FBMS) dataset, a large, heterogeneous benchmark with 59 sequences and pixel-accurate ground truth annotation of moving objects.",
"The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that “classical” flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. Moreover, we find that while median filtering of intermediate flow fields during optimization is a key to recent performance gains, it leads to higher energy solutions. To understand the principles behind this phenomenon, we derive a new objective that formalizes the median filtering heuristic. This objective includes a nonlocal term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries we develop a method that ranks at the top of the Middlebury benchmark.",
"We describe a system for representing moving images with sets of overlapping layers. Each layer contains an intensity map that defines the additive values of each pixel, along with an alpha map that serves as a mask indicating the transparency. The layers are ordered in depth and they occlude each other in accord with the rules of compositing. Velocity maps define how the layers are to be warped over time. The layered representation is more flexible than standard image transforms and can capture many important properties of natural image sequences. We describe some methods for decomposing image sequences into layers using motion analysis, and we discuss how the representation may be used for image coding and other applications. >",
"We propose a method to detect disocclusion in video sequences of three-dimensional scenes and to partition the disoccluded regions into objects, defined by coherent deformation corresponding to surfaces in the scene. Our method infers deformation fields that are piecewise smooth by construction without the need for an explicit regularizer and the associated choice of weight. It then partitions the disoccluded region and groups its components with objects by leveraging on the complementarity of motion and appearance cues: Where appearance changes within an object, motion can usually be reliably inferred and used for grouping. Where appearance is close to constant, it can be used for grouping directly. We integrate both cues in an energy minimization framework, incorporate prior assumptions explicitly into the energy, and propose a numerical scheme.",
"Layered models allow scene segmentation and motion estimation to be formulated together and to inform one another. Traditional layered motion methods, however, employ fairly weak models of scene structure, relying on locally connected Ising Potts models which have limited ability to capture long-range correlations in natural scenes. To address this, we formulate a fully-connected layered model that enables global reasoning about the complicated segmentations of real objects. Optimization with fully-connected graphical models is challenging, and our inference algorithm leverages recent work on efficient mean field updates for fully-connected conditional random fields. These methods can be implemented efficiently using high-dimensional Gaussian filtering. We combine these ideas with a layered flow model, and find that the long-range connections greatly improve segmentation into figure-ground layers when compared with locally connected MRF models. Experiments on several benchmark datasets show that the method can recover fine structures and large occlusion regions, with good flow accuracy and much lower computational cost than previous locally-connected layered models."
]
} |
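As a hedged illustration of the piecewise-parametric motion data term referenced in the related work above (e.g., @cite_40 @cite_18 ), the sketch below fits an affine motion model to optical-flow samples by least squares and returns per-pixel residual magnitudes, which a region-based motion segmentation could integrate. It assumes flow vectors are already given; the function and variable names are illustrative, not from any cited implementation.

```python
import numpy as np

def affine_motion_residual(flow, ys, xs):
    """Fit v(x, y) = [x, y, 1] @ A to flow samples and return residuals.

    `flow` is an (N, 2) array of flow vectors observed at integer pixel
    coordinates (xs[i], ys[i]); the returned (N,) residual magnitudes are
    small where the affine model explains the motion well.
    """
    X = np.stack([xs, ys, np.ones_like(xs)], axis=1).astype(float)  # (N, 3)
    A, *_ = np.linalg.lstsq(X, flow, rcond=None)                    # (3, 2)
    residual = flow - X @ A
    return np.linalg.norm(residual, axis=1)
```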
1603.07745 | 2309156533 | We formulate a general energy and method for segmentation that is designed to have a preference for segmenting the coarse structure over the fine structure of the data, without smoothing across boundaries of regions. The energy is formulated by considering data terms at a continuum of scales from the scale space computed from the Heat Equation within regions, and integrating these terms over all time. We show that the energy may be approximately optimized without solving for the entire scale space, but rather by solving time-independent linear equations at the native scale of the image, making the method computationally feasible. We provide a multi-region scheme, and apply our method to motion segmentation. Experiments on a benchmark dataset show that our method is less sensitive to clutter or other undesirable fine-scale structure, and leads to better performance in motion segmentation. | Scale-space theory is developed in @cite_25 , @cite_9 , @cite_22 , and @cite_47 ; scale-space analysis of images is considered in @cite_39 . | {
"cite_N": [
"@cite_22",
"@cite_9",
"@cite_39",
"@cite_47",
"@cite_25"
],
"mid": [
"1752870650",
"2117612778",
"",
"2083209089",
"2022735534"
],
"abstract": [
"For grey-value images, it is well accepted that the neighborhood rather than the pixel carries the geometrical interpretation. Interestingly the spatial configuration of the neighborhood is the basis for the perception of humans. Common practise in color image processing, is to use the color information without considering the spatial structure. We aim at a physical basis for the local interpretation of color images. We propose a framework for spatial color measurement, based on the Gaussian scale-space theory. We consider a Gaussian color model, which inherently uses the spatial and color information in an integrated model. The framework is well-founded in physics as well as in measurement science. The framework delivers sound and robust spatial color invariant features. The usefulness of the proposed measurement framework is illustrated by edge detection, where edges are discriminated as shadow, highlight, or object boundary. Other applications of the framework include color invariant image retrieval and color constant edge detection.",
"A basic and extensive treatment of discrete aspects of the scale-space theory is presented. A genuinely discrete scale-space theory is developed and its connection to the continuous scale-space theory is explained. Special attention is given to discretization effects, which occur when results from the continuous scale-space theory are to be implemented computationally. The 1D problem is solved completely in an axiomatic manner. For the 2D problem, the author discusses how the 2D discrete scale space should be constructed. The main results are as follows: the proper way to apply the scale-space theory to discrete signals and discrete images is by discretization of the diffusion equation, not the convolution integral; the discrete scale space obtained in this way can be described by convolution with the kernel, which is the discrete analog of the Gaussian kernel, a scale-space implementation based on the sampled Gaussian kernel might lead to undesirable effects and computational problems, especially at fine levels of scale; the 1D discrete smoothing transformations can be characterized exactly and a complete catalogue is given; all finite support 1D discrete smoothing transformations arise from repeated averaging over two adjacent elements (the limit case of such an averaging process is described); and the symmetric 1D discrete smoothing kernels are nonnegative and unimodal, in both the spatial and the frequency domain. >",
"",
"In this study, we propose the application of principal components analysis (PCA) to scale-spaces. PCA is a standard method used in computer vision. The translation of an input image into scale-space is a continuous operation, which requires the extension of conventional finite matrix- based PCA to an infinite number of dimensions. In this study, we use spectral decomposition to resolve this infinite eigenproblem by integration and we propose an approximate solution based on polynomial equations. To clarify its eigensolutions, we apply spectral decomposition to the Gaussian scale-space and scale-normalized Laplacian of Gaussian (LoG) space. As an application of this proposed method, we introduce a method for generating Gaussian blur images and scale-normalized LoG images, where we demonstrate that the accuracy of these images can be very high when calculating an arbitrary scale using a simple linear combination. We also propose a new Scale Invariant Feature Transform (SIFT) detector as a more practical example.",
"In practice the relevant details of images exist only over a restricted range of scale. Hence it is important to study the dependence of image structure on the level of resolution. It seems clear enough that visual perception treats images on several levels of resolution simultaneously and that this fact must be important for the study of perception. However, no applicable mathematically formulated theory to deal with such problems appers to exist. In this paper it is shown that any image can be embedded in a one-parameter family of derived images (with resolution as the parameter) in essentially only one unique way if the constraint that no spurious detail should be generated when the resolution is diminished, is applied. The structure of this family is governed by the well known diffusion equation (a parabolic, linear, partial differential equation of the second order). As such the structure fits into existing theories that treat the front end of the visual system as a continuous tack of homogeneous layer, characterized by iterated local processing schemes. When resolution is decreased the images becomes less articulated because the extrem (“light and dark blobs”) disappear one after the other. This erosion of structure is a simple process that is similar in every case. As a result any image can be described as a juxtaposed and nested set of light and dark blobs, wherein each blod has a limited range of resolution in which it manifests itself. The structure of the family of derived images permits a derivation of the sampling density required to sample the image at multiple scales of resolution. The natural scale along the resolution axis (leading to an informationally uniform sampling density) is logarithmic, thus the structure is apt for the description of size invariances."
]
} |
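The scale-space theory cited above rests on the equivalence between Gaussian smoothing and the heat equation: evolving u_t = laplacian(u) for time t equals convolving with a Gaussian of standard deviation sqrt(2t). A minimal sketch follows, assuming scipy is available; the helper name is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image, times):
    """Sample the linear (Gaussian) scale space of an image.

    Running the heat equation u_t = laplacian(u) up to time t is
    equivalent to convolving with a Gaussian of sigma = sqrt(2 * t).
    """
    return [gaussian_filter(image.astype(float), sigma=np.sqrt(2.0 * t))
            for t in times]

# Usage: a coarse-to-fine stack of an image at heat-equation times 1, 4, 16.
# stack = gaussian_scale_space(img, times=[1.0, 4.0, 16.0])
```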
1603.07745 | 2309156533 | We formulate a general energy and method for segmentation that is designed to have a preference for segmenting the coarse structure over the fine structure of the data, without smoothing across boundaries of regions. The energy is formulated by considering data terms at a continuum of scales from the scale space computed from the Heat Equation within regions, and integrating these terms over all time. We show that the energy may be approximately optimized without solving for the entire scale space, but rather by solving time-independent linear equations at the native scale of the image, making the method computationally feasible. We provide a multi-region scheme, and apply our method to motion segmentation. Experiments on a benchmark dataset show that our method is less sensitive to clutter or other undesirable fine-scale structure, and leads to better performance in motion segmentation. | Scale spaces have been used in optical flow, stereo, reconstruction, CNNs, and feature matching with SIFT: scale-less SIFT @cite_23 ; heat kernel signatures, a local shape descriptor for shape matching @cite_21 ; the curvature scale space for matching @cite_35 ; and Sapiro's affine-invariant curve evolution. | {
"cite_N": [
"@cite_35",
"@cite_21",
"@cite_23"
],
"mid": [
"",
"2007206727",
"2016120301"
],
"abstract": [
"",
"One of the biggest challenges in non-rigid shape retrieval and comparison is the design of a shape descriptor that would maintain invariance under a wide class of transformations the shape can undergo. Recently, heat kernel signature was introduced as an intrinsic local shape descriptor based on diffusion scale-space analysis. In this paper, we develop a scale-invariant version of the heat kernel descriptor. Our construction is based on a logarithmically sampled scale-space in which shape scaling corresponds, up to a multiplicative constant, to a translation. This translation is undone using the magnitude of the Fourier transform. The proposed scale-invariant local descriptors can be used in the bag-of-features framework for shape retrieval in the presence of transformations such as isometric deformations, missing data, topological noise, and global and local scaling. We get significant performance improvement over state-of-the-art algorithms on recently established non-rigid shape retrieval benchmarks.",
"Scale invariant feature detectors often find stable scales in only a few image pixels. Consequently, methods for feature matching typically choose one of two extreme options: matching a sparse set of scale invariant features, or dense matching using arbitrary scales. In this paper we turn our attention to the overwhelming majority of pixels, those where stable scales are not found by standard techniques. We ask, is scale-selection necessary for these pixels, when dense, scale-invariant matching is required and if so, how can it be achieved? We make the following contributions: (i) We show that features computed over different scales, even in low-contrast areas, can be different; selecting a single scale, arbitrarily or otherwise, may lead to poor matches when the images have different scales. (ii) We show that representing each pixel as a set of SIFTs, extracted at multiple scales, allows for far better matches than single-scale descriptors, but at a computational price. Finally, (iii) we demonstrate that each such set may be accurately represented by a low-dimensional, linear subspace. A subspace-to-point mapping may further be used to produce a novel descriptor representation, the Scale-Less SIFT (SLS), as an alternative to single-scale descriptors. These claims are verified by quantitative and qualitative tests, demonstrating significant improvements over existing methods."
]
} |
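As a sketch of the heat kernel signature and the scale-invariant variant described in @cite_21 , the code below computes HKS(x, t) from the eigendecomposition of a symmetric graph or mesh Laplacian, log-samples the time axis so that shape scaling becomes a shift, and takes the Fourier magnitude to discard that shift. This is an illustrative reconstruction of the paper's description, not its reference implementation; the function names and parameter defaults are assumptions, and the dense eigendecomposition limits it to small Laplacians.

```python
import numpy as np

def heat_kernel_signature(L, times):
    """HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2 for symmetric L."""
    lam, phi = np.linalg.eigh(L)
    return (phi ** 2) @ np.exp(-np.outer(lam, times))  # (n_points, n_times)

def scale_invariant_hks(L, t0=0.01, alpha=1.5, n_times=32):
    # Log-sampled times: scaling the shape becomes (up to a constant)
    # a translation in log-time, which the Fourier magnitude removes.
    times = t0 * alpha ** np.arange(n_times)
    h = heat_kernel_signature(L, times)
    d = np.diff(np.log(h), axis=1)        # removes the multiplicative constant
    return np.abs(np.fft.fft(d, axis=1))  # translation-invariant magnitude
```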
1603.07745 | 2309156533 | We formulate a general energy and method for segmentation that is designed to have a preference for segmenting the coarse structure over the fine structure of the data, without smoothing across boundaries of regions. The energy is formulated by considering data terms at a continuum of scales from the scale space computed from the Heat Equation within regions, and integrating these terms over all time. We show that the energy may be approximately optimized without solving for the entire scale space, but rather by solving time-independent linear equations at the native scale of the image, making the method computationally feasible. We provide a multi-region scheme, and apply our method to motion segmentation. Experiments on a benchmark dataset show that our method is less sensitive to clutter or other undesirable fine-scale structure, and leads to better performance in motion segmentation. | Scale spaces have also been used in segmentation: @cite_44 and @cite_36 blur across region boundaries; @cite_30 uses a multigrid scheme; and @cite_24 lacks the coarse-to-fine property. | {
"cite_N": [
"@cite_44",
"@cite_30",
"@cite_36",
"@cite_24"
],
"mid": [
"2622382464",
"2108976589",
"2119943055",
"1931273051"
],
"abstract": [
"We propose a new multiscale image segmentation model, based on the active contour snake model and the Polyakov action. The concept of scale, general issue in physics and signal processing, is introduced in the active contour model, which is a well-known image segmentation model that consists of evolving a contour in images toward the boundaries of objects. The Polyakov action, introduced in image processing by Sochen-Kimmel-Malladi in (1998), provides an efficient mathematical framework to define a multiscale segmentation model because it generalizes the concept of harmonic maps embedded in higher-dimensional Riemannian manifolds such as multiscale images. Our multiscale segmentation model, unlike classical multiscale segmentations which work scale by scale to speed up the segmentation process, uses all scales simultaneously, i.e. the whole scale space, to introduce the geometry of multiscale images in the segmentation process. The extracted multiscale structures will be useful to efficiently improve the robustness and the performance of standard shape analysis techniques such as shape recognition and shape registration. Another advantage of our method is to use not only the Gaussian scale space but also many other multiscale spaces such as the Perona-Malik scale space, the curvature scale space or the Beltrami scale space. Finally, this multiscale segmentation technique is coupled with a multiscale edge detecting function based on the gradient vector flow model, which is able to extract convex and concave object boundaries independent of the initial condition. We apply our multiscale segmentation model on a synthetic image and a medical image.",
"Texture segmentation is a difficult problem, as is apparent from camouflage pictures. A textured region can contain texture elements of various sizes, each of which can itself be textured. We approach this problem using a bottom-up aggregation framework that combines structural characteristics of texture elements with filter responses. Our process adaptively identifies the shape of texture elements and characterize them by their size, aspect ratio, orientation, brightness, etc., and then uses various statistics of these properties to distinguish between different textures. At the same time our process uses the statistics of filter responses to characterize textures. In our process the shape measures and the filter responses crosstalk extensively. In addition, a top-down cleaning process is applied to avoid mixing the statistics of neighboring segments. We tested our algorithm on real images and demonstrate that it can accurately segment regions that contain challenging textures.",
"In this work we approach the analysis and segmentation of natural textured images by combining ideas from image analysis and probabilistic modeling. We rely on AM-FM texture models and specifically on the Dominant Component Analysis (DCA) paradigm for feature extraction. This method provides a low-dimensional, dense and smooth descriptor, capturing essential aspects of texture, namely scale, orientation, and contrast. Our contributions are at three levels of the texture analysis and segmentation problems: First, at the feature extraction stage we propose a regularized demodulation algorithm that provides more robust texture features and explore the merits of modifying the channel selection criterion of DCA. Second, we propose a probabilistic interpretation of DCA and Gabor filtering in general, in terms of Local Generative Models. Extending this point of view to edge detection facilitates the estimation of posterior probabilities for the edge and texture classes. Third, we propose the weighted curve evolution scheme that enhances the Region Competition Geodesic Active Regions methods by allowing for the locally adaptive fusion of heterogeneous cues. Our segmentation results are evaluated on the Berkeley Segmentation Benchmark, and compare favorably to current state-of-the-art methods.",
"We propose new dense descriptors for texture segmentation. Given a region of arbitrary shape in an image, these descriptors are formed from shape-dependent scale spaces of oriented gradients. These scale spaces are defined by Poisson-like partial differential equations. A key property of our new descriptors is that they do not aggregate image data across the boundary of the region, in contrast to existing descriptors based on aggregation of oriented gradients. As an example, we show how the descriptor can be incorporated in a Mumford-Shah energy for texture segmentation. We test our method on several challenging datasets for texture segmentation and textured object tracking. Experiments indicate that our descriptors lead to more accurate segmentation than non-shape dependent descriptors and the state-of-the-art in texture segmentation."
]
} |
1603.07745 | 2309156533 | We formulate a general energy and method for segmentation that is designed to have a preference for segmenting the coarse structure over the fine structure of the data, without smoothing across boundaries of regions. The energy is formulated by considering data terms at a continuum of scales from the scale space computed from the Heat Equation within regions, and integrating these terms over all time. We show that the energy may be approximately optimized without solving for the entire scale space, but rather by solving time-independent linear equations at the native scale of the image, making the method computationally feasible. We provide a multi-region scheme, and apply our method to motion segmentation. Experiments on a benchmark dataset show that our method is less sensitive to clutter or other undesirable fine-scale structure, and leads to better performance in motion segmentation. | Pyramids have been used in segmentation: @cite_4 uses a pyramid to reduce computational cost, but it smooths across boundaries and suffers from alignment issues, whereas we provide a general energy; @cite_46 uses a multigrid scheme that blurs across boundaries. | {
"cite_N": [
"@cite_46",
"@cite_4"
],
"mid": [
"2108944208",
"1991367009"
],
"abstract": [
"We reexamine the role of multiscale cues in image segmentation using an architecture that constructs a globally coherent scale-space output representation. This characteristic is in contrast to many existing works on bottom-up segmentation, which prematurely compress information into a single scale. The architecture is a standard extension of Normalized Cuts from an image plane to an image pyramid, with cross-scale constraints enforcing consistency in the solution while allowing emergence of coarse-to-fine detail. We observe that multiscale processing, in addition to improving segmentation quality, offers a route by which to speed computation. We make a significant algorithmic advance in the form of a custom multigrid eigensolver for constrained Angular Embedding problems possessing coarse-to-fine structure. Multiscale Normalized Cuts is a special case. Our solver builds atop recent results on randomized matrix approximation, using a novel interpolation operation to mold its computational strategy according to cross-scale constraints in the problem definition. Applying our solver to multiscale segmentation problems demonstrates speedup by more than an order of magnitude. This speedup is at the algorithmic level and carries over to any implementation target.",
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates."
]
} |
1603.07745 | 2309156533 | We formulate a general energy and method for segmentation that is designed to have a preference for segmenting the coarse structure over the fine structure of the data, without smoothing across boundaries of regions. The energy is formulated by considering data terms at a continuum of scales from the scale space computed from the Heat Equation within regions, and integrating these terms over all time. We show that the energy may be approximately optimized without solving for the entire scale space, but rather by solving time-independent linear equations at the native scale of the image, making the method computationally feasible. We provide a multi-region scheme, and apply our method to motion segmentation. Experiments on a benchmark dataset show that our method is less sensitive to clutter or other undesirable fine-scale structure, and leads to better performance in motion segmentation. | One additional point: when computing the coarse structure, we want to ensure that coarsening the image data does not destroy segmentation boundaries (as in pyramid-based methods @cite_4 @cite_46 ); thus, we use the concept of shape-tailored scale space @cite_24 and extend it to integrate all scales, obtaining the coarse-to-fine property through the heat equation. | {
"cite_N": [
"@cite_24",
"@cite_46",
"@cite_4"
],
"mid": [
"1931273051",
"2108944208",
"1991367009"
],
"abstract": [
"We propose new dense descriptors for texture segmentation. Given a region of arbitrary shape in an image, these descriptors are formed from shape-dependent scale spaces of oriented gradients. These scale spaces are defined by Poisson-like partial differential equations. A key property of our new descriptors is that they do not aggregate image data across the boundary of the region, in contrast to existing descriptors based on aggregation of oriented gradients. As an example, we show how the descriptor can be incorporated in a Mumford-Shah energy for texture segmentation. We test our method on several challenging datasets for texture segmentation and textured object tracking. Experiments indicate that our descriptors lead to more accurate segmentation than non-shape dependent descriptors and the state-of-the-art in texture segmentation.",
"We reexamine the role of multiscale cues in image segmentation using an architecture that constructs a globally coherent scale-space output representation. This characteristic is in contrast to many existing works on bottom-up segmentation, which prematurely compress information into a single scale. The architecture is a standard extension of Normalized Cuts from an image plane to an image pyramid, with cross-scale constraints enforcing consistency in the solution while allowing emergence of coarse-to-fine detail. We observe that multiscale processing, in addition to improving segmentation quality, offers a route by which to speed computation. We make a significant algorithmic advance in the form of a custom multigrid eigensolver for constrained Angular Embedding problems possessing coarse-to-fine structure. Multiscale Normalized Cuts is a special case. Our solver builds atop recent results on randomized matrix approximation, using a novel interpolation operation to mold its computational strategy according to cross-scale constraints in the problem definition. Applying our solver to multiscale segmentation problems demonstrates speedup by more than an order of magnitude. This speedup is at the algorithmic level and carries over to any implementation target.",
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates."
]
} |
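The passage above is the crux of the main paper: a within-region (shape-tailored) scale space that can be integrated over all scales yet computed through time-independent linear solves. Below is a hedged sketch of one such solve: a single implicit heat-equation step (I + tau*L)u = f in which the Laplacian only couples pixels inside the region mask, so no smoothing crosses the region boundary. This is an illustrative reconstruction under a 4-neighborhood discretization, not the authors' code; all names are assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def shape_tailored_smoothing(image, mask, tau):
    """One implicit heat step restricted to a region (Neumann-type boundary)."""
    mask = mask.astype(bool)
    idx = -np.ones(image.shape, dtype=int)
    ys, xs = np.nonzero(mask)
    idx[ys, xs] = np.arange(len(ys))
    n = len(ys)
    rows, cols, vals, diag = [], [], [], np.zeros(n)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        ny, nx = ys + dy, xs + dx
        ok = (ny >= 0) & (ny < image.shape[0]) & (nx >= 0) & (nx < image.shape[1])
        ok[ok] &= mask[ny[ok], nx[ok]]      # only couple in-region neighbors
        rows.append(idx[ys[ok], xs[ok]])
        cols.append(idx[ny[ok], nx[ok]])
        vals.append(-np.ones(ok.sum()))
        diag[idx[ys[ok], xs[ok]]] += 1.0    # degree term of L = D - W
    L = sp.csr_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(n, n)) + sp.diags(diag)
    A = (sp.eye(n) + tau * L).tocsc()
    u = image.astype(float).copy()
    u[ys, xs] = spla.spsolve(A, u[ys, xs])  # solve only for in-region pixels
    return u

# Usage: u = shape_tailored_smoothing(img, mask, tau=10.0)
```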
1603.07745 | 2309156533 | We formulate a general energy and method for segmentation that is designed to have a preference for segmenting the coarse structure over the fine structure of the data, without smoothing across boundaries of regions. The energy is formulated by considering data terms at a continuum of scales from the scale space computed from the Heat Equation within regions, and integrating these terms over all time. We show that the energy may be approximately optimized without solving for the entire scale space, but rather by solving time-independent linear equations at the native scale of the image, making the method computationally feasible. We provide a multi-region scheme, and apply our method to motion segmentation. Experiments on a benchmark dataset show that our method is less sensitive to clutter or other undesirable fine-scale structure, and leads to better performance in motion segmentation. | We demonstrate our method by segmenting objects from video based on motion @cite_18 and show an improvement simply by integrating the scale space of the motion residual; this is an application in which the object is in clutter and it is important to capture the coarse structure. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2114190460"
],
"abstract": [
"We present a novel variational approach for segmenting the image plane into a set of regions of parametric motion on the basis of two consecutive frames from an image sequence. Our model is based on a conditional probability for the spatio-temporal image gradient, given a particular velocity model, and on a geometric prior on the estimated motion field favoring motion boundaries of minimal length. Exploiting the Bayesian framework, we derive a cost functional which depends on parametric motion models for each of a set of regions and on the boundary separating these regions. The resulting functional can be interpreted as an extension of the Mumford-Shah functional from intensity segmentation to motion segmentation. In contrast to most alternative approaches, the problems of segmentation and motion estimation are jointly solved by continuous minimization of a single functional. Minimizing this functional with respect to its dynamic variables results in an eigenvalue problem for the motion parameters and in a gradient descent evolution for the motion discontinuity set. We propose two different representations of this motion boundary: an explicit spline-based implementation which can be applied to the motion-based tracking of a single moving object, and an implicit multiphase level set implementation which allows for the segmentation of an arbitrary number of multiply connected moving objects. Numerical results both for simulated ground truth experiments and for real-world sequences demonstrate the capacity of our approach to segment objects based exclusively on their relative motion."
]
} |
1603.07453 | 2311363289 | This paper argues that a combined treatment of probabilities, time and actions is essential for an appropriate logical account of the notion of probability; and, based on this intuition, describes an expressive probabilistic temporal logic for reasoning about actions with uncertain outcomes. The logic is modal and higher-order: modalities annotated by actions are used to express possibility and necessity of propositions in the next states resulting from the actions, and a higher-order function is needed to express the probability operator. The proposed logic is shown to be an adequate extension of classical mathematical probability theory, and its expressiveness is illustrated through the formalization of the Monty Hall problem. | Many probabilistic logics are surveyed in @cite_2 . Among those logics, most depart from classical logic by adopting a probabilistic notion of validity and entailment. The logic proposed here, on the other hand, remains strictly classical in this respect. The probabilistic modal logics described in Sections 4.1 and 4.2 of @cite_2 are probably the most similar to the logic proposed here. However, they are propositional, lack the probabilistic diamond operator, and are atemporal. | {
"cite_N": [
"@cite_2"
],
"mid": [
"1592601450"
],
"abstract": [
"Logic and probability theory are two of the main tools in the formal study of reasoning, and have been fruitfully applied in areas as diverse as philosophy, artificial intelligence, cognitive science and mathematics. This entry discusses the major proposals to combine logic and probability theory, and attempts to provide a classification of the various approaches in this rapidly developing field."
]
} |
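Since the row above illustrates the proposed logic on the Monty Hall problem, a short brute-force enumeration is a handy sanity check of the classical probabilities; it is independent of the logic itself and only verifies the well-known answer.

```python
from fractions import Fraction
from itertools import product

# Enumerate all equally likely (car position, initial pick) pairs. The host
# then opens a non-chosen goat door, so switching wins exactly when the
# initial pick was wrong.
doors = [0, 1, 2]
stay_wins = switch_wins = total = 0
for car, pick in product(doors, doors):
    total += 1
    stay_wins += (pick == car)
    switch_wins += (pick != car)

print("P(win | stay)   =", Fraction(stay_wins, total))    # 1/3
print("P(win | switch) =", Fraction(switch_wins, total))  # 2/3
```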
1603.07433 | 2952934840 | Rigorously characterizing the statistical properties of cyber attacks is an important problem. In this paper, we propose the first statistical framework for rigorously analyzing honeypot-captured cyber attack data. The framework is built on the novel concept of stochastic cyber attack process, a new kind of mathematical object for describing cyber attacks. To demonstrate use of the framework, we apply it to analyze a low-interaction honeypot dataset, while noting that the framework can be equally applied to analyze high-interaction honeypot data that contains richer information about the attacks. The case study finds, for the first time, that Long-Range Dependence (LRD) is exhibited by honeypot-captured cyber attacks. The case study confirms that by exploiting the statistical properties (LRD in this case), it is feasible to predict cyber attacks (at least in terms of attack rate) with good accuracy. This kind of prediction capability would provide sufficient early-warning time for defenders to adjust their defense configurations or resource allocations. The idea of "gray-box" (rather than "black-box") prediction is central to the utility of the statistical framework, and represents a significant step towards ultimately understanding (the degree of) the predictability of cyber attacks. | In terms of using honeypots to improve defense, we note that honeypots have been used to help detect various attacks, including DoS (denial-of-service) @cite_43 , worms @cite_41 @cite_52 , botnets @cite_33 @cite_15 @cite_24 , and Internet-messaging threats @cite_30 , as well as to generate attack signatures @cite_16 @cite_20 and to detect targeted attacks @cite_53 . These studies are important, but they are orthogonal to the focus of the present paper. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_41",
"@cite_53",
"@cite_52",
"@cite_24",
"@cite_43",
"@cite_15",
"@cite_16",
"@cite_20"
],
"mid": [
"2399654538",
"",
"1540548505",
"1510508184",
"2544088398",
"2070226850",
"",
"1988741337",
"2033811087",
"2047600068"
],
"abstract": [
"The popularity of instant messaging (IM) services has recently attracted the interest of attackers that try to send malicious URLs or files to the contact lists of compromised instant messaging accounts or clients. This work focuses on a systematic characterization of IM threats based on the information collected by HoneyBuddy, a honeypot-like infrastructure for detecting malicious activities in IM networks. HoneyBuddy finds and adds contacts to its honeypot messengers by querying popular search engines for IM contacts or by advertising its accounts on contact finder sites. Our deployment has shown that with over six thousand contacts we can gather between 50 and 110 malicious URLs per day as well as executables. Our experiments show that 21 of our collected executable samples were not gathered by other malware collection infrastructures, while 93 of the identified IM phishing domains were not recorded by popular blacklist mechanisms. Furthermore, our findings show that the malicious domains are hosted by a limited number of hosts that remain practically unchanged throughout time.",
"",
"Worm detection systems have traditionally used global strategies and focused on scan rates. The noise associated with this approach requires statistical techniques and large data sets (e.g., 220 monitored machines) to yield timely alerts and avoid false positives. Worm detection techniques for smaller local networks have not been fully explored.",
"We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network service. Traffic that is considered anomalous is processed by a \"shadow honeypot\" to determine the accuracy of the anomaly prediction. The shadow is an instance of the protected software that shares all internal state with a regular (\"production\") instance of the application, and is instrumented to detect potential attacks. Attacks against the shadow are caught, and any incurred state changes are discarded. Legitimate traffic that was misclassified will be validated by the shadow and will be handled correctly by the system transparently to the end user. The outcome of processing a request by the shadow is used to filter future attack instances and could be used to update the anomaly detector. Our architecture allows system designers to fine-tune systems for performance, since false positives will be filtered by the shadow. Contrary to regular honeypots, our architecture can be used both for server and client applications. We demonstrate the feasibility of our approach in a proof-of-concept implementation of the Shadow Honeypot architecture for the Apache web server and the Mozilla Firefox browser. We show that despite a considerable overhead in the instrumentation of the shadow honeypot (up to 20 for Apache), the overall impact on the system is diminished by the ability to minimize the rate of false-positives.",
"Attack detection in high-speed networks is a hot research topic. While the performance of packet oriented signature-based approaches is questionable, flow-based anomaly detection shows high false positive rates. We tried to combine both techniques. In this paper, we study the applicability of flow-based attack detection. We installed a lab environment consisting of a monitoring infrastructure and a wellcontrolled honeypot. Using correlated honeypot logs and flow signatures, we created a first set of attack pattern. The evaluation of the approach was done within our university network. On the positive side, we were able to prove the successful detection of worm attacks. Problems can occur if incomplete monitoring data is used.",
"In this paper, we propose a method to identify and group together traces left on low interaction honeypots by machines belonging to the same botnet(s) without having any a priori information at our disposal regarding these botnets. In other words, we offer a solution to detect new botnets thanks to very cheap and easily deployable solutions. The approach is validated thanks to several months of data collected with the worldwide distributed Leurre.com system. To distinguish the relevant traces from the other ones, we group them according to either the platforms, i.e. targets hit or the countries of origin of the attackers. We show that the choice of one of these two observation viewpoints dramatically influences the results obtained. Each one reveals unique botnets. We explain why. Last but not the least, we show that these botnets remain active during very long periods of times, up to 700 days, even if the traces they left are only visible from time to time.",
"",
"To date, techniques to counter cyber-attacks have predominantly been reactive; they focus on monitoring network traffic, detecting anomalies and cyber-attack traffic patterns, and, a posteriori, combating the cyber-attacks and mitigating their effects. Contrary to such approaches, we advocate proactively detecting and identifying botnets prior to their being used as part of a cyber-attack (, 2006). In this paper, we present our work on using machine learning-based classification techniques to identify the command and control (C2) traffic of IRC-based botnets - compromised hosts that are collectively commanded using Internet relay chat (IRC). We split this task into two stages: (I) distinguishing between IRC and non-IRC traffic, and (II) distinguishing between botnet and real IRC traffic. For stage I, we compare the performance of J48, naive Bayes, and Bayesian network classifiers, identify the features that achieve good overall classification accuracy, and determine the classification sensitivity to the training set size. While sensitive to the training data and the attributes used to characterize communication flows, machine learning-based classifiers show promise in identifying IRC traffic. Using classification in stage II is trickier, since accurately labeling IRC traffic as botnet and non-botnet is challenging. We are currently exploring labeling flows as suspicious and non-suspicious based on telltales of hosts being compromised",
"This paper describes a system for automated generation of attack signatures for network intrusion detection systems. Our system applies pattern-matching techniques and protocol conformance checks on multiple levels in the protocol hierarchy to network traffic captured a honeypot system. We present results of running the system on an unprotected cable modem connection for 24 hours. The system successfully created precise traffic signatures that otherwise would have required the skills and time of a security officer to inspect the traffic manually.",
"As next-generation computer worms may spread within minutes to millions of hosts, protection via human intervention is no longer an option. We discuss the implementation of SweetBait, an automated protection system that employs low- and high-interaction honeypots to recognise and capture suspicious traffic. After discarding whitelisted patterns, it automatically generates worm signatures. To provide a low response time, the signatures may be immediately distributed to network intrusion detection and prevention systems. At the same time the signatures are continuously refined for increased accuracy and lower false identification rates. By monitoring signature activity and predicting ascending or descending trends in worm virulence, we are able to sort signatures in order of urgency. As a result, the set of signatures to be monitored or filtered is managed in such a way that new and very active worms are always included in the set, while the size of the set is bounded. SweetBait is deployed on medium sized academic networks across the world and is able to react to zero-day worms within minutes. Furthermore, we demonstrate how globally sharing signatures can help immunise parts of the Internet."
]
} |
1603.07433 | 2952934840 | Rigorously characterizing the statistical properties of cyber attacks is an important problem. In this paper, we propose the first statistical framework for rigorously analyzing honeypot-captured cyber attack data. The framework is built on the novel concept of stochastic cyber attack process, a new kind of mathematical object for describing cyber attacks. To demonstrate use of the framework, we apply it to analyze a low-interaction honeypot dataset, while noting that the framework can be equally applied to analyze high-interaction honeypot data that contains richer information about the attacks. The case study finds, for the first time, that Long-Range Dependence (LRD) is exhibited by honeypot-captured cyber attacks. The case study confirms that by exploiting the statistical properties (LRD in this case), it is feasible to predict cyber attacks (at least in terms of attack rate) with good accuracy. This kind of prediction capability would provide sufficient early-warning time for defenders to adjust their defense configurations or resource allocations. The idea of "gray-box" (rather than "black-box") prediction is central to the utility of the statistical framework, and represents a significant step towards ultimately understanding (the degree of) the predictability of cyber attacks. | Putting data-driven analysis of cyber attacks into a broader context, we note that there have been studies on characterizing blackhole-collected traffic data (e.g., @cite_4 @cite_19 ) or one-way traffic in live networks @cite_23 . Still, there is no advanced statistical framework for analyzing such blackhole or one-way traffic data. More specifically, these studies differ from ours in that (i) honeypot-captured cyber attack data includes two-way communications, whereas blackhole-collected data mainly corresponds to one-way communications; and (ii) we rigorously explore statistical properties such as LRD, whereas these studies do not pursue such rigorous statistical analysis. Nevertheless, it is possible that our analysis framework can be adapted to analyze blackhole data. | {
"cite_N": [
"@cite_19",
"@cite_4",
"@cite_23"
],
"mid": [
"2000497232",
"2158060559",
"2032451229"
],
"abstract": [
"The monitoring of packets destined for routeable, yet unused, Internet addresses has proved to be a useful technique for measuring a variety of specific Internet phenomenon (e.g., worms, DDoS). In 2004, stepped beyond these targeted uses and provided one of the first generic characterizations of this non-productive traffic, demonstrating both its significant size and diversity. However, the six years that followed this study have seen tremendous changes in both the types of malicious activity on the Internet and the quantity and quality of unused address space. In this paper, we revisit the state of Internet \"background radiation\" through the lens of two unique data-sets: a five-year collection from a single unused 8 network block, and week-long collections from three recently allocated 8 network blocks. Through the longitudinal study of the long-lived block, comparisons between blocks, and extensive case studies of traffic in these blocks, we characterize the current state of background radiation specifically highlighting those features that remain invariant from previous measurements and those which exhibit significant differences. Of particular interest in this work is the exploration of address space pollution, in which significant non uniform behavior is observed. However, unlike previous observations of differences between unused blocks, we show that increasingly these differences are the result of environmental factors (e.g., misconfiguration, location), rather than algorithmic factors. Where feasible, we offer suggestions for clean up of these polluted blocks and identify those blocks whose allocations should be withheld.",
"Monitoring any portion of the Internet address space reveals incessant activity. This holds even when monitoring traffic sent to unused addresses, which we term \"background radiation. \" Background radiation reflects fundamentally nonproductive traffic, either malicious (flooding backscatter, scans for vulnerabilities, worms) or benign (misconfigurations). While the general presence of background radiation is well known to the network operator community, its nature has yet to be broadly characterized. We develop such a characterization based on data collected from four unused networks in the Internet. Two key elements of our methodology are (i) the use of filtering to reduce load on the measurement system, and (ii) the use of active responders to elicit further activity from scanners in order to differentiate different types of background radiation. We break down the components of background radiation by protocol, application, and often specific exploit; analyze temporal patterns and correlated activity; and assess variations across different networks and over time. While we find a menagerie of activity, probes from worms and autorooters heavily dominate. We conclude with considerations of how to incorporate our characterizations into monitoring and detection activities.",
"In this work we analyze a massive data-set that captures 5.23 petabytes of traffic to shed light into the composition of one-way traffic towards a large network based on a novel one-way traffic classifier. We find that one-way traffic makes a very large fraction of all traffic in terms of flows, it can be primarily attributed to malicious causes, and it has declined since 2004 because of relative decrease of scan traffic. In addition, we show how our classifier is useful for detecting network outages."
]
} |
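The related-work passage above argues that rigorous statistical analysis, such as testing for Long-Range Dependence, is what distinguishes this framework from earlier blackhole-traffic studies. As a rough, generic illustration (not the paper's exact estimator), the following sketch computes a rescaled-range (R/S) estimate of the Hurst exponent of an attack-rate series; values well above 0.5 suggest LRD. The white-noise series is synthetic and purely for demonstration.

```python
# Rescaled-range (R/S) estimate of the Hurst exponent; H > 0.5 suggests LRD.
import numpy as np

def hurst_rs(series):
    """Estimate the Hurst exponent of a 1-D series via R/S analysis."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    log_size, log_rs = [], []
    for size in 2 ** np.arange(3, int(np.log2(n)) + 1):
        rs_vals = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            s = chunk.std()
            if s > 0:
                rs_vals.append((dev.max() - dev.min()) / s)
        if rs_vals:
            log_size.append(np.log(size))
            log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_size, log_rs, 1)[0]       # slope of log R/S vs. log size

rng = np.random.default_rng(0)
print("white noise H ~", round(hurst_rs(rng.normal(size=4096)), 2))  # near 0.5
```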
1603.07445 | 2510318192 | Complex networks have non-trivial characteristics and appear in many real-world systems. Such networks are vitally important, but their full underlying dynamics are not completely understood. Utilizing new data sources, however, can unveil the evolution process of these networks. This study uses the recently published Reddit dataset, containing over 1.65 billion comments, to construct the largest publicly available social network corpus to date. We used this dataset to deeply examine the network evolution process, which resulted in two key observations: First, links are more likely to be created among users who join a network at a similar time. Second, the rate at which new users join a network is a central factor in molding a network's topology; i.e., different user-join patterns create different topological properties. Based on these observations, we developed the random network generation model. This model produces not only scale-free networks that have relatively high clustering coefficients, but also networks that are sensitive to both the rate and the time at which users join the network. This results in a more accurate and flexible model of how complex networks evolve, one which more closely represents real-world data. | Many real-world complex networks have a community structure, in which "the division of network nodes into groups within which the network connections are dense, but between which they are sparser" @cite_0 . In 2004, Newman and Girvan proposed a community detection algorithm and offered a simple method to create networks with community structure @cite_0 . | {
"cite_N": [
"@cite_0"
],
"mid": [
"2095293504"
],
"abstract": [
"We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups. Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible \"betweenness\" measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems."
]
} |
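The cited Newman-Girvan method iteratively removes the edge of highest betweenness and scores the resulting divisions with modularity. A minimal sketch using networkx's built-in implementation, on a standard toy graph rather than the Reddit corpus discussed above:

```python
# Girvan-Newman community detection: remove high-betweenness edges, keep the
# split with the best modularity score.
import networkx as nx
from networkx.algorithms.community import girvan_newman, modularity

G = nx.karate_club_graph()                  # small standard test network
best_partition, best_q = None, float("-inf")
for partition in girvan_newman(G):          # successive splits of G
    q = modularity(G, partition)            # strength of this division
    if q > best_q:
        best_partition, best_q = partition, q
    if len(partition) >= 8:                 # stop once the splits get fine
        break
print(len(best_partition), "communities, modularity", round(best_q, 3))
```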
1603.07351 | 2303936851 | Service replication distributes an application over many processes for tolerating faults, attacks, and misbehavior among a subset of the processes. The established state-machine replication paradigm inherently requires the application to be deterministic. This paper distinguishes three models for dealing with non-determinism in replicated services, where some processes are subject to faults and arbitrary behavior (so-called Byzantine faults): first, a modular approach that does not require any changes to the potentially non-deterministic application (nor access to its internal data); second, a master-slave approach, in which ties are broken by a leader and the other processes validate the choices of the leader; and finally, a treatment of applications that use cryptography and secret keys. Cryptographic operations and secrets must be treated specially because they require strong randomness to satisfy their goals. The paper also introduces two new protocols. The first uses the modular approach for filtering out non-deterministic operations in an application. It ensures that all correct processes produce the same outputs and that their internal states do not diverge. The second protocol implements cryptographically secure randomness generation with a verifiable random function and is appropriate for certain security models. All protocols are described in a generic way and do not assume a particular implementation of the underlying consensus primitive. | Fault-tolerant execution on multi-core servers poses a new challenge, even for deterministic applications, because thread-level parallelism may introduce unpredictable differences between processes. Eve @cite_25 heuristically identifies groups of non-interfering operations and executes each group in parallel. Afterwards it compares the outputs; it may roll back operations that lead to diverging states, or transfer an agreed-on result state to diverging processes. Eve resembles Protocol in this sense, but lacks modularity. | {
"cite_N": [
"@cite_25"
],
"mid": [
"1850597240"
],
"abstract": [
"This paper presents Eve, a new Execute-Verify architecture that allows state machine replication to scale to multi-core servers. Eve departs from the traditional agree-execute architecture of state machine replication: replicas first execute groups of requests concurrently and then verify that they can reach agreement on a state and output produced by a correct replica; if they can not, they roll back and execute the requests sequentially. Eve minimizes divergence using application-specific criteria to organize requests into groups of requests that are unlikely to interfere. Our evaluation suggests that Eve's unique ability to combine execution independence with nondetermistic interleaving of requests enables high-performance replication for multi-core servers while tolerating a wide range of faults, including elusive concurrency bugs."
]
} |
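Eve's execute-verify pattern, as summarized in the cited abstract, lets replicas execute a group of requests first and only then checks for agreement, repairing divergence by rollback or state transfer. A deliberately simplified, single-process sketch of the verify-and-repair step (digest comparison plus state transfer to the minority); the state layout is our own illustrative choice, not Eve's:

```python
# Schematic verify-and-repair step in the spirit of execute-verify: compare
# replica state digests; diverged replicas adopt the majority state.
import copy, hashlib, json
from collections import Counter

def digest(state):
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def verify_and_repair(replica_states):
    digests = [digest(s) for s in replica_states]
    majority, votes = Counter(digests).most_common(1)[0]
    if votes == len(replica_states):
        return "agreed"
    agreed = copy.deepcopy(replica_states[digests.index(majority)])
    for i, d in enumerate(digests):          # state transfer to diverged replicas
        if d != majority:
            replica_states[i] = copy.deepcopy(agreed)
    return "repaired %d replica(s)" % (len(digests) - votes)

states = [{"x": 1}, {"x": 1}, {"x": 2}]      # third replica diverged
print(verify_and_repair(states))             # -> repaired 1 replica(s)
```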
1603.07351 | 2303936851 | Service replication distributes an application over many processes for tolerating faults, attacks, and misbehavior among a subset of the processes. The established state-machine replication paradigm inherently requires the application to be deterministic. This paper distinguishes three models for dealing with non-determinism in replicated services, where some processes are subject to faults and arbitrary behavior (so-called Byzantine faults): first, a modular approach that does not require any changes to the potentially non-deterministic application (nor access to its internal data); second, a master-slave approach, in which ties are broken by a leader and the other processes validate the choices of the leader; and finally, a treatment of applications that use cryptography and secret keys. Cryptographic operations and secrets must be treated specially because they require strong randomness to satisfy their goals. The paper also introduces two new protocols. The first uses the modular approach for filtering out non-deterministic operations in an application. It ensures that all correct processes produce the same outputs and that their internal states do not diverge. The second protocol implements cryptographically secure randomness generation with a verifiable random function and is appropriate for certain security models. All protocols are described in a generic way and do not assume a particular implementation of the underlying consensus primitive. | Fault-tolerant replication involving cryptographic secrets and distributed cryptography has been pioneered by Reiter and Birman @cite_38 . Many other works followed, especially protocols using threshold cryptography; an early overview of solutions in this space was given by Cachin @cite_10 . | {
"cite_N": [
"@cite_38",
"@cite_10"
],
"mid": [
"2023930035",
"2125007280"
],
"abstract": [
"We present a method for constructing replicated services that retain their availability and integrity despite several servers and clients being corrupted by an intruder, in addition to others failing benignly. We also address the issue of maintaining a causal order among client requests. We illustrate a security breach resulting from an intruder's ability to effect a violation of causality in the sequence of requests processed by the service and propose an approach to counter this attack. An important and novel feature of our techniques is that the client need not be able to identify or authenticate even a single server. Instead, the client is required to possess only a single public key for the service. We demonstrate the performance of our techniques with a service we have implemented using one of our protocols.",
"This paper describes an architecture for secure and fault-tolerant service replication in an asynchronous network such as the Internet, where a malicious adversary may corrupt some servers and control the network. It relies on recent protocols for randomized Byzantine agreement and for atomic broadcast, which exploit concepts from threshold cryptography. The model and its assumptions are discussed in detail and compared to related work from the last decade in the first part of this work, and an overview of the broadcast protocols in the architecture is provided. The standard approach in fault-tolerant distributed systems is to assume that at most a certain fraction of servers fails. In the second part, novel general failure patterns and corresponding protocols are introduced. The allow for realistic modeling of real-world trust assumptions, beyond (weighted) threshold models. Finally, the application of our architecture to trusted services is discussed."
]
} |
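Protocols in this space replace each replica's local randomness with a value that all correct replicas can derive consistently, for instance via threshold cryptography or a verifiable random function. The toy sketch below conveys only the determinism aspect using a pre-shared HMAC key; this is explicitly not secure against Byzantine replicas that leak the key, and it is an illustration, not any of the cited protocols:

```python
# Toy: replace local randomness with a shared pseudorandom function so that
# all correct replicas derive the same "random" value per request.
# Real protocols use threshold cryptography or VRFs; a pre-shared key is unsafe
# if any Byzantine replica can leak it.
import hmac, hashlib

def replicated_random(shared_key: bytes, request_id: bytes) -> int:
    tag = hmac.new(shared_key, request_id, hashlib.sha256).digest()
    return int.from_bytes(tag[:8], "big")   # identical on every correct replica

key = b"pre-shared-demo-key"
print(replicated_random(key, b"request-42"))
```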
1603.07502 | 2303915641 | We study one-dimensional very singular parabolic equations with periodic boundary conditions and initial data in @math , which is the energy space. We show existence of solutions in this energy space and then we prove that they are viscosity solutions in the sense of Giga-Giga. | Pioneering work on the very singular diffusion equations, including the one studied here, is given by Giga-Giga in @cite_18 and @cite_16 , which study such equations with a spatially homogeneous external force. The authors introduced a notion of viscosity solutions, and established comparison, existence and stability of solutions in the sense of @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_16"
],
"mid": [
"2108056018",
"2013801171"
],
"abstract": [
"A new notion of solutions is introduced to study degenerate nonlinear parabolic equations in one space dimension whose diffusion effect is so strong at particular slopes of the unknowns that the equation is no longer a partial differential equation. By extending the theory of viscosity solutions, a comparison principle is established. For periodic continuous initial data a unique global continuous solution (periodic in space) is constructed. The theory applies to motion of interfacial curves by crystalline energy or more generally by anisotropic interfacial energy with corners when the curves are the graphs of functions. Even if the driving force term (homogeneous in space) exists, the initial-value problem is solvable for general nonadmissible continuous (periodic) initial data.",
"A general stability and convergence theorem is established for generalized solutions of a family of nonlinear evolution equations with non-local diffusion in one space dimension. As the first application we justify the motion by crystalline energy as a limit of regularized problems. As the sec-ond application we show the convergence of crystalline algorithm for general curvature flow equations. Our general results are also important to explain that geometric evolution of crystals depends continuously on temperature even if facets appear."
]
} |
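The two rows for 1603.07502 never spell out the model equation. For orientation, a representative one-dimensional very singular diffusion equation of the type treated in the Giga-Giga line of work is the total variation flow with a constant driving force; this concrete form is our assumption, not a quotation from the paper:

```latex
% Representative very singular diffusion equation (illustrative assumption):
% one-dimensional total variation flow with a constant driving force C.
\[
  u_t = \partial_x\!\left(\frac{u_x}{|u_x|}\right) + C ,
  \qquad x \in \mathbb{T},\ t > 0 .
\]
% The diffusion is so singular at u_x = 0 that flat parts (facets) of the
% graph move with a nonlocal speed, so the problem is no longer a PDE in the
% classical sense.
```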
1603.07502 | 2303915641 | We study one-dimensional very singular parabolic equations with periodic boundary conditions and initial data in @math , which is the energy space. We show existence of solutions in this energy space and then we prove that they are viscosity solutions in the sense of Giga-Giga. | The definition of solutions in @cite_18 looks different from the one in the present paper. However, these two definitions are equivalent as long as one considers the equation without a spatially inhomogeneous external force. Note that the value of @math is determined explicitly when the external force is independent of the spatial variable. On the other hand, when the external force term depends on the spatial variable, one will encounter an obstacle problem. By considering the obstacle problem carefully, a comparison principle is given in @cite_25 , while a general existence result based on the Perron method is given in @cite_8 . | {
"cite_N": [
"@cite_18",
"@cite_25",
"@cite_8"
],
"mid": [
"2108056018",
"",
"2060222535"
],
"abstract": [
"A new notion of solutions is introduced to study degenerate nonlinear parabolic equations in one space dimension whose diffusion effect is so strong at particular slopes of the unknowns that the equation is no longer a partial differential equation. By extending the theory of viscosity solutions, a comparison principle is established. For periodic continuous initial data a unique global continuous solution (periodic in space) is constructed. The theory applies to motion of interfacial curves by crystalline energy or more generally by anisotropic interfacial energy with corners when the curves are the graphs of functions. Even if the driving force term (homogeneous in space) exists, the initial-value problem is solvable for general nonadmissible continuous (periodic) initial data.",
"",
"A general anisotropic curvature flow equation with singular interfacial energy and spatially inhomogeneous driving force is considered for a curve given by the graph of a periodic function. We prove that the initial value problem admits a unique global-in-time viscosity solution for a general periodic continuous initial datum. The notion of a viscosity solution used here is the same as proposed by Giga, Giga and Rybka, who established a comparison principle. We construct the global-in-time solution by careful adaptation of Perron’s method."
]
} |
1603.07631 | 2306262874 | We propose BanditRepair, a system that systematically explores and assesses a set of possible runtime patches. The system is grounded in so-called bandit algorithms, online machine learning algorithms designed to constantly balance exploitation and exploration. BanditRepair's runtime patches are based on modifying the execution state for repairing null dereferences. BanditRepair constantly trades the ratio of automatically handled failures for the search for new runtime patches and vice versa. We evaluate the system with 16 null dereference field bugs, where BanditRepair identifies a total of 8460 different runtime patches, which are composed of 1 up to 8 decisions (execution modifications) taken in a row. We are the first to finely characterize the search space and the outcomes of runtime repair based on execution modification. | One of the earliest techniques is Ammann and Knight's "data diversity" @cite_25 , which aims to enable a program to complete its computation in the presence of failures. The idea of data diversity is that, when a failure occurs, the input data is changed so that the new, transformed input no longer triggers the failure. The assumption is that the output computed from this artificial input, through an inverse transformation, remains acceptable in the domain under consideration. The input transformations can be seen as a kind of runtime patch model. As such, our algorithm could be used to reason about the associated runtime search space. | {
"cite_N": [
"@cite_25"
],
"mid": [
"1979868167"
],
"abstract": [
"Data diversity is described, and the results of a pilot study are presented. The regions of the input space that cause failure for certain experimental programs are discussed, and data reexpression, the way in which alternate input data sets can be obtained, is examined. A description is given of the retry block which is the data-diverse equivalent of the recovery block, and a model of the retry block, together with some empirical results is presented. N-copy programming which is the data-diverse equivalent of N-version programming is considered, and a simple model and some empirical results are also given. >"
]
} |
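Data diversity, as summarized above, retries a failing computation on a reexpressed input whose output remains acceptable for the original one. A minimal Python sketch of such a retry block; the brittle function and the perturbation-based reexpression are invented for illustration:

```python
# Sketch of a "retry block": on failure, re-run the program on a reexpressed
# input drawn from a list of transformations.
import math

def retry_block(program, x, reexpressions):
    """Run program(x); on failure, retry with reexpressed inputs."""
    for transform in [lambda v: v] + reexpressions:
        try:
            return program(transform(x))
        except ArithmeticError:
            continue
    raise RuntimeError("all reexpressed inputs failed")

# A routine with an exact singularity; a tiny input perturbation yields an
# output that is still acceptable in the surrounding domain.
f = lambda v: 1.0 / math.sin(v)
nudge = lambda v: v + 1e-9
print(retry_block(f, 0.0, [nudge]))   # ~1e9 instead of a ZeroDivisionError
```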
1603.07631 | 2306262874 | We propose BanditRepair, a system that systematically explores and assesses a set of possible runtime patches. The system is grounded in so-called bandit algorithms, online machine learning algorithms designed to constantly balance exploitation and exploration. BanditRepair's runtime patches are based on modifying the execution state for repairing null dereferences. BanditRepair constantly trades the ratio of automatically handled failures for the search for new runtime patches and vice versa. We evaluate the system with 16 null dereference field bugs, where BanditRepair identifies a total of 8460 different runtime patches, which are composed of 1 up to 8 decisions (execution modifications) taken in a row. We are the first to finely characterize the search space and the outcomes of runtime repair based on execution modification. | @cite_1 presents a language for the specification of data structure invariants. The invariant specification is used to verify and repair the consistency of data structure instances at runtime. The key difference between their work and ours is that ours is more generic in scope, only requiring a laps model and a laps oracle, which go beyond data structure errors and invariant restoration alone. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2098010463"
],
"abstract": [
"We present a system that accepts a specification of key data structure consistency constraints, then dynamically detects and repairs violations of these constraints, enabling the program to continue to execute productively even in the face of otherwise crippling errors. Our experience using our system indicates that the specifications are relatively easy to develop once one understands the data structures. Furthermore, for our set of benchmark applications, our system can effectively repair inconsistent data structures and enable the program to continue to operate successfully."
]
} |
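The cited system checks declared data-structure invariants at runtime and repairs instances that violate them. A toy detect-and-repair step over a hypothetical index structure; the invariant and the repair action are our own illustrative choices:

```python
# Minimal detect-and-repair loop: check a consistency constraint at runtime
# and restore it when violated, so execution can continue.
def check_and_repair(index):
    """Invariant (illustrative): index['count'] equals len(index['items'])."""
    expected = len(index["items"])
    if index["count"] != expected:        # consistency violation detected
        index["count"] = expected         # repair: restore the invariant
        return "repaired"
    return "consistent"

index = {"items": ["a", "b", "c"], "count": 7}    # corrupted counter field
print(check_and_repair(index), index["count"])    # -> repaired 3
```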
1603.07631 | 2306262874 | We propose BanditRepair, a system that systematically explores and assesses a set of possible runtime patches. The system is grounded in so-called bandit algorithms, online machine learning algorithms designed to constantly balance exploitation and exploration. BanditRepair's runtime patches are based on modifying the execution state for repairing null dereferences. BanditRepair constantly trades the ratio of automatically handled failures for the search for new runtime patches and vice versa. We evaluate the system with 16 null dereference field bugs, where BanditRepair identifies a total of 8460 different runtime patches, which are composed of 1 up to 8 decisions (execution modifications) taken in a row. We are the first to finely characterize the search space and the outcomes of runtime repair based on execution modification. | @cite_17 presents a technique to avoid illegal memory accesses by adding additional code around each memory operation during the compilation process. For example, the additional code verifies at runtime that the program only uses the allocated memory. If a memory access falls outside the allocated memory, the access is ignored instead of crashing with a segmentation fault. The two differences between this work and ours are: first, ours can apply different decisions to handle a given failure (and not a single behavior hard-coded in the injected code), and second, ours uses an oracle to reason about the viability of the decision. | {
"cite_N": [
"@cite_17"
],
"mid": [
"1525451871"
],
"abstract": [
"We present a new technique, failure-oblivious computing, that enables servers to execute through memory errors without memory corruption. Our safe compiler for C inserts checks that dynamically detect invalid memory accesses. Instead of terminating or throwing an exception, the generated code simply discards invalid writes and manufactures values to return for invalid reads, enabling the server to continue its normal execution path. We have applied failure-oblivious computing to a set of widely-used servers from the Linux-based open-source computing environment. Our results show that our techniques 1) make these servers invulnerable to known security attacks that exploit memory errors, and 2) enable the servers to continue to operate successfully to service legitimate requests and satisfy the needs of their users even after attacks trigger their memory errors. We observed several reasons for this successful continued execution. When the memory errors occur in irrelevant computations, failure-oblivious computing enables the server to execute through the memory errors to continue on to execute the relevant computation. Even when the memory errors occur in relevant computations, failure-oblivious computing converts requests that trigger unanticipated and dangerous execution paths into anticipated invalid inputs, which the error-handling logic in the server rejects. Because servers tend to have small error propagation distances (localized errors in the computation for one request tend to have little or no effect on the computations for subsequent requests), redirecting reads that would otherwise cause addressing errors and discarding writes that would otherwise corrupt critical data structures (such as the call stack) localizes the effect of the memory errors, prevents addressing exceptions from terminating the computation, and enables the server to continue on to successfully process subsequent requests. The overall result is a substantial extension of the range of requests that the server can successfully process."
]
} |
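Failure-oblivious computing, as described above, discards invalid writes and manufactures values for invalid reads so execution continues. The real system instruments C memory operations at compile time; the following Python wrapper merely mimics the policy at the level of a bounds-checked buffer:

```python
# Failure-oblivious access in miniature: invalid writes are discarded and
# invalid reads return a manufactured value, instead of aborting execution.
class FailureObliviousBuffer:
    def __init__(self, size, manufactured=0):
        self._data = [0] * size
        self._manufactured = manufactured

    def read(self, i):
        if 0 <= i < len(self._data):
            return self._data[i]
        return self._manufactured          # manufacture a value for bad reads

    def write(self, i, value):
        if 0 <= i < len(self._data):
            self._data[i] = value          # only in-bounds writes take effect

buf = FailureObliviousBuffer(4)
buf.write(99, 123)                         # out of bounds: silently discarded
print(buf.read(99))                        # -> 0, no crash
```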
1603.07631 | 2306262874 | We propose BanditRepair, a system that systematically explores and assesses a set of possible runtime patches. The system is grounded in so-called bandit algorithms, online machine learning algorithms designed to constantly balance exploitation and exploration. BanditRepair's runtime patches are based on modifying the execution state for repairing null dereferences. BanditRepair constantly trades the ratio of automatically handled failures for the search for new runtime patches and vice versa. We evaluate the system with 16 null dereference field bugs, where BanditRepair identifies a total of 8460 different runtime patches, which are composed of 1 up to 8 decisions (execution modifications) taken in a row. We are the first to finely characterize the search space and the outcomes of runtime repair based on execution modification. | @cite_8 proposes ClearView, a system for automatically repairing errors in production. The system monitors the execution at the level of low-level registers to learn invariants. Those invariants are then monitored, and if a violation of an invariant is detected, ClearView forces its restoration. From an engineering perspective, the difference is that ours reasons on decision sequences, while ClearView analyzes each decision in isolation. From a scientific perspective, our work finely characterizes the search space and the outcomes of runtime repair based on execution modification. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2099866050"
],
"abstract": [
"We present ClearView, a system for automatically patching errors in deployed software. ClearView works on stripped Windows x86 binaries without any need for source code, debugging information, or other external information, and without human intervention. ClearView (1) observes normal executions to learn invariants thatcharacterize the application's normal behavior, (2) uses error detectors to distinguish normal executions from erroneous executions, (3) identifies violations of learned invariants that occur during erroneous executions, (4) generates candidate repair patches that enforce selected invariants by changing the state or flow of control to make the invariant true, and (5) observes the continued execution of patched applications to select the most successful patch. ClearView is designed to correct errors in software with high availability requirements. Aspects of ClearView that make it particularly appropriate for this context include its ability to generate patches without human intervention, apply and remove patchesto and from running applications without requiring restarts or otherwise perturbing the execution, and identify and discard ineffective or damaging patches by evaluating the continued behavior of patched applications. ClearView was evaluated in a Red Team exercise designed to test its ability to successfully survive attacks that exploit security vulnerabilities. A hostile external Red Team developed ten code injection exploits and used these exploits to repeatedly attack an application protected by ClearView. ClearView detected and blocked all of the attacks. For seven of the ten exploits, ClearView automatically generated patches that corrected the error, enabling the application to survive the attacks and continue on to successfully process subsequent inputs. Finally, the Red Team attempted to make Clear-View apply an undesirable patch, but ClearView's patch evaluation mechanism enabled ClearView to identify and discard both ineffective patches and damaging patches."
]
} |
1603.07631 | 2306262874 | We propose BanditRepair, a system that systematically explores and assesses a set of possible runtime patches. The system is grounded in so-called bandit algorithms, online machine learning algorithms designed to constantly balance exploitation and exploration. BanditRepair's runtime patches are based on modifying the execution state for repairing null dereferences. BanditRepair constantly trades the ratio of automatically handled failures for the search for new runtime patches and vice versa. We evaluate the system with 16 null dereference field bugs, where BanditRepair identifies a total of 8460 different runtime patches, which are composed of 1 up to 8 decisions (execution modifications) taken in a row. We are the first to finely characterize the search space and the outcomes of runtime repair based on execution modification. | @cite_2 presents a system to defend against deadlocks at runtime. The system first detects the synchronization patterns of deadlocks, and when such a pattern is detected, the system avoids recurrences of the deadlock with additional locks. The pattern detection is related to the detector of instances of the fault model under consideration. However, they do not explore and compare alternative locking strategies. We note that our core algorithm may be plugged on top of their system to explore the search space of locking sequences. | {
"cite_N": [
"@cite_2"
],
"mid": [
"1591458180"
],
"abstract": [
"Deadlock immunity is a property by which programs, once afflicted by a given deadlock, develop resistance against future occurrences of that and similar deadlocks. We describe a technique that enables programs to automatically gain such immunity without assistance from programmers or users. We implemented the technique for both Java and POSIX threads and evaluated it with several real systems, including MySQL, JBoss, SQLite, Apache ActiveMQ, Limewire, and Java JDK. The results demonstrate effectiveness against real deadlock bugs, while incurring modest performance overhead and scaling to 1024 threads. We therefore conclude that deadlock immunity offers programmers and users an attractive tool for coping with elusive deadlocks."
]
} |
1603.07631 | 2306262874 | We propose BanditRepair, a system that systematically explores and assesses a set of possible runtime patches. The system is grounded in so-called bandit algorithms, online machine learning algorithms designed to constantly balance exploitation and exploration. BanditRepair's runtime patches are based on modifying the execution state for repairing null dereferences. BanditRepair constantly trades the ratio of automatically handled failures for the search for new runtime patches and vice versa. We evaluate the system with 16 null dereference field bugs, where BanditRepair identifies a total of 8460 different runtime patches, which are composed of 1 up to 8 decisions (execution modifications) taken in a row. We are the first to finely characterize the search space and the outcomes of runtime repair based on execution modification. | Assure @cite_19 is a self-healing system based on checkpointing and error virtualization. Error virtualization consists of handling an unknown and unrecoverable error with error-handling code that is already present in the system yet designed for handling other errors. While Assure does runtime repair by opportunistically reusing recovery code that is already present, ours handles failures by modifying the state or flow according to a runtime patch model. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2152475836"
],
"abstract": [
"Software failures in server applications are a significant problem for preserving system availability. We present ASSURE, a system that introduces rescue points that recover software from unknown faults while maintaining both system integrity and availability, by mimicking system behavior under known error conditions. Rescue points are locations in existing application code for handling a given set of programmer-anticipated failures, which are automatically repurposed and tested for safely enabling fault recovery from a larger class of (unanticipated) faults. When a fault occurs at an arbitrary location in the program, ASSURE restores execution to an appropriate rescue point and induces the program to recover execution by virtualizing the program's existing error-handling facilities. Rescue points are identified using fuzzing, implemented using a fast coordinated checkpoint-restart mechanism that handles multi-process and multi-threaded applications, and, after testing, are injected into production code using binary patching. We have implemented an ASSURE Linux prototype that operates without application source code and without base operating system kernel changes. Our experimental results on a set of real-world server applications and bugs show that ASSURE enabled recovery for all of the bugs tested with fast recovery times, has modest performance overhead, and provides automatic self-healing orders of magnitude faster than current human-driven patch deployment methods."
]
} |
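Error virtualization, as used by Assure, maps an unanticipated fault onto an error path the application already handles. A toy decorator-based sketch of a rescue point; the error type and the server function are hypothetical:

```python
# Toy error virtualization: an unanticipated exception at a "rescue point" is
# converted into an error the application already knows how to handle.
def rescue_point(handled_error):
    def wrap(fn):
        def guarded(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                raise handled_error    # reuse the existing error-handling path
        return guarded
    return wrap

class TemporaryFailure(Exception):     # an error the server already handles
    pass

@rescue_point(TemporaryFailure("retry later"))
def serve(request):
    return request["payload"].upper()  # KeyError becomes TemporaryFailure

try:
    serve({})                          # malformed request
except TemporaryFailure as e:
    print("handled gracefully:", e)
```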
1603.07631 | 2306262874 | We propose BanditRepair, a system that systematically explores and assesses a set of possible runtime patches. The system is grounded in so-called bandit algorithms, online machine learning algorithms designed to constantly balance exploitation and exploration. BanditRepair's runtime patches are based on modifying the execution state for repairing null dereferences. BanditRepair constantly trades the ratio of automatically handled failures for the search for new runtime patches and vice versa. We evaluate the system with 16 null dereference field bugs, where BanditRepair identifies a total of 8460 different runtime patches, which are composed of 1 up to 8 decisions (execution modifications) taken in a row. We are the first to finely characterize the search space and the outcomes of runtime repair based on execution modification. | @cite_27 repair web applications at runtime with a set of manually written, API-specific alternative rules. This set can be seen as a hardcoded set of runtime patches. On the contrary, our system does not require a list of alternatives but instead relies on an abstract runtime patch model that is automatically instantiated at runtime. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2007777090"
],
"abstract": [
"We present a technique that finds and executes workarounds for faulty Web applications automatically and at runtime. Automatic workarounds exploit the inherent redundancy of Web applications, whereby a functionality of the application can be obtained through different sequences of invocations of Web APIs. In general, runtime workarounds are applied in response to a failure, and require that the application remain in a consistent state before and after the execution of a workaround. Therefore, they are ideally suited for interactive Web applications, since those allow the user to act as a failure detector with minimal effort, and also either use read-only state or manage their state through a transactional data store. In this paper we focus on faults found in the access libraries of widely used Web applications such as Google Maps. We start by classifying a number of reported faults of the Google Maps and YouTube APIs that have known workarounds. From those we derive a number of general and API-specific program-rewriting rules, which we then apply to other faults for which no workaround is known. Our experiments show that workarounds can be readily deployed within Web applications, through a simple client-side plug-in, and that program-rewriting rules derived from elementary properties of a common library can be effective in finding valid and previously unknown workarounds."
]
} |
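The cited approach encodes workarounds as rewriting rules that substitute equivalent API call sequences when a call fails. A compact sketch of such a fallback chain; the API functions and the single rule are hypothetical stand-ins for the cited Web APIs:

```python
# Rule-based workarounds: try the primary call, then equivalent rewritten
# alternatives in order, until one succeeds.
def with_workarounds(primary, alternatives):
    def call(*args, **kwargs):
        for candidate in [primary] + alternatives:
            try:
                return candidate(*args, **kwargs)
            except Exception:
                continue
        raise RuntimeError("no workaround succeeded")
    return call

def set_zoom_buggy(level):              # hypothetical faulty API entry point
    raise ValueError("known fault")

def zoom_via_bounds(level):             # equivalent call sequence (one rule)
    return f"zoomed to {level} via bounding box"

set_zoom = with_workarounds(set_zoom_buggy, [zoom_via_bounds])
print(set_zoom(12))
```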
1603.07631 | 2306262874 | We propose BanditRepair, a system that systematically explores and assesses a set of possible runtime patches. The system is grounded in so-called bandit algorithms, online machine learning algorithms designed to constantly balance exploitation and exploration. BanditRepair's runtime patches are based on modifying the execution state for repairing null dereferences. BanditRepair constantly trades the ratio of automatically handled failures for the search for new runtime patches and vice versa. We evaluate the system with 16 null dereference field bugs, where BanditRepair identifies a total of 8460 different runtime patches, which are composed of 1 up to 8 decisions (execution modifications) taken in a row. We are the first to finely characterize the search space and the outcomes of runtime repair based on execution modification. | Berger and Zorn @cite_16 show that it is possible to effectively tolerate memory errors and provide probabilistic memory safety by randomizing the memory allocation and providing memory replication. The work by @cite_23 exploits a specific hardware feature called ECC memory for detecting illegal memory accesses at runtime. The idea of the paper is to use the consistency checks of the ECC memory to detect illegal memory accesses (for instance due to buffer overflows). Both techniques preserve the program's semantics in the normal, error-free case. On the contrary, our approach is meant to reason about the search space of execution modifications that are not semantically equivalent, where a decision taken at one point can impact the rest of the computation. | {
"cite_N": [
"@cite_16",
"@cite_23"
],
"mid": [
"2136938453",
"2098809490"
],
"abstract": [
"Applications written in unsafe languages like C and C++ are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. Such errors can lead to program crashes, security vulnerabilities, and unpredictable behavior. We present DieHard, a runtime system that tolerates these errors while probabilistically maintaining soundness. DieHard uses randomization and replication to achieve probabilistic memory safety by approximating an infinite-sized heap. DieHard's memory manager randomizes the location of objects in a heap that is at least twice as large as required. This algorithm prevents heap corruption and provides a probabilistic guarantee of avoiding memory errors. For additional safety, DieHard can operate in a replicated mode where multiple replicas of the same application are run simultaneously. By initializing each replica with a different random seed and requiring agreement on output, the replicated version of Die-Hard increases the likelihood of correct execution because errors are unlikely to have the same effect across all replicas. We present analytical and experimental results that show DieHard's resilience to a wide range of memory errors, including a heap-based buffer overflow in an actual application.",
"Memory leaks and memory corruption are two major forms of software bugs that severely threaten system availability and security. According to the US-CERT vulnerability notes database, 68 of all reported vulnerabilities in 2003 were caused by memory leaks or memory corruption. Dynamic monitoring tools, such as the state-of-the-art Purify, are commonly used to detect memory leaks and memory corruption. However, most of these tools suffer from high overhead, with up to a 20 times slowdown, making them infeasible to be used for production-runs. This paper proposes a tool called SafeMem to detect memory leaks and memory corruption on-the-fly during production-runs. This tool does not rely on any new hardware support. Instead, it makes a novel use of existing ECC memory technology and exploits intelligent dynamic memory usage behavior analysis to detect memory leaks and corruption. We have evaluated SafeMem with seven real-world applications that contain memory leak or memory corruption bugs. SafeMem detects all tested bugs with low overhead (only 1.6 -14.4 ), 2-3 orders of magnitudes smaller than Purify. Our results also show that ECC-protection is effective in pruning false positives for memory leak detection, and in reducing the amount of memory waste (by a factor of 64-74) used for memory monitoring in memory corruption detection compared to page-protection."
]
} |
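Across the 1603.07631 rows, the recurring abstract frames runtime patch selection as a bandit problem that balances exploiting patches known to work against exploring new ones. A minimal UCB1 sketch over candidate patches; the patch names and their hidden viability rates are invented, and the paper itself may use a different bandit policy:

```python
# UCB1 over candidate runtime patches: on each failure, pick a patch by upper
# confidence bound and update its empirical viability.
import math, random

def ucb1_select(counts, rewards):
    for i, c in enumerate(counts):
        if c == 0:
            return i                          # try every patch at least once
    total = sum(counts)
    return max(range(len(counts)),
               key=lambda i: rewards[i] / counts[i]
               + math.sqrt(2 * math.log(total) / counts[i]))

patches = ["skip-statement", "inject-null-safe-default", "return-early"]
viability = [0.2, 0.8, 0.5]                   # hidden from the algorithm
counts, rewards = [0] * 3, [0.0] * 3
random.seed(1)
for _ in range(500):                          # one iteration per failure
    i = ucb1_select(counts, rewards)
    counts[i] += 1
    rewards[i] += 1.0 if random.random() < viability[i] else 0.0
print("most exploited patch:", patches[counts.index(max(counts))])
```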
1603.07810 | 2606316830 | What makes images similar? To measure the similarity between images, they are typically embedded in a feature-vector space, in which their distance preserves the relative dissimilarity. However, when learning such similarity embeddings the simplifying assumption is commonly made that images are only compared to one unique measure of similarity. A main reason for this is that contradicting notions of similarities cannot be captured in a single space. To address this shortcoming, we propose Conditional Similarity Networks (CSNs) that learn embeddings differentiated into semantically distinct subspaces that capture the different notions of similarities. CSNs jointly learn a disentangled embedding, where features for different similarities are encoded in separate dimensions, as well as masks that select and reweight relevant dimensions to induce a subspace that encodes a specific similarity notion. We show that our approach learns interpretable image representations with visually relevant semantic subspaces. Further, when evaluating on triplet questions from multiple similarity notions our model even outperforms the accuracy obtained by training individual specialized networks for each notion separately. | Similarity-based learning has emerged as a broad field of interest in modern computer vision and has been used in many contexts. Disconnected from the input image, triplet-based similarity embeddings can be learned using crowd-kernels @cite_3 . Further, @cite_15 introduce a probabilistic treatment for triplets and learn an adaptive crowd kernel. Similar work has been generalized to multi-view and clustering settings by Amid and Ukkonen @cite_5 as well as Van der Maaten and Hinton @cite_26 . A combination of triplet embeddings with input kernels was presented by @cite_16 , but this work did not include joint feature and embedding learning. An early approach to connecting input features with embeddings has been to learn image similarity functions through ranking @cite_6 . | {
"cite_N": [
"@cite_26",
"@cite_3",
"@cite_6",
"@cite_5",
"@cite_15",
"@cite_16"
],
"mid": [
"2149489931",
"2088247287",
"1532499126",
"1859119539",
"2951342632",
"2951944936"
],
"abstract": [
"Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize \"central\" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities.",
"This paper considers the problem of learning an embedding of data based on similarity triplets of the form “A is more similar to B than to C”. This learning setting is of relevance to scenarios in which we wish to model human judgements on the similarity of objects. We argue that in order to obtain a truthful embedding of the underlying data, it is insufficient for the embedding to satisfy the constraints encoded by the similarity triplets. In particular, we introduce a new technique called t-Distributed Stochastic Triplet Embedding (t-STE) that collapses similar points and repels dissimilar points in the embedding — even when all triplet constraints are satisfied. Our experimental evaluation on three data sets shows that as a result, t-STE is much better than existing techniques at revealing the underlying data structure.",
"Learning a measure of similarity between pairs of objects is an important generic problem in machine learning. It is particularly useful in large scale applications like searching for an image that is similar to a given image or finding videos that are relevant to a given video. In these tasks, users look for objects that are not only visually similar but also semantically related to a given object. Unfortunately, the approaches that exist today for learning such semantic similarity do not scale to large data sets. This is both because typically their CPU and storage requirements grow quadratically with the sample size, and because many methods impose complex positivity constraints on the space of learned similarity functions. The current paper presents OASIS, an Online Algorithm for Scalable Image Similarity learning that learns a bilinear similarity measure over sparse representations. OASIS is an online dual approach using the passive-aggressive family of learning algorithms with a large margin criterion and an efficient hinge loss cost. Our experiments show that OASIS is both fast and accurate at a wide range of scales: for a data set with thousands of images, it achieves better results than existing state-of-the-art methods, while being an order of magnitude faster. For large, web scale, data sets, OASIS can be trained on more than two million images from 150K text queries within 3 days on a single CPU. On this large scale data set, human evaluations showed that 35 of the ten nearest neighbors of a given test image, as found by OASIS, were semantically relevant to that image. This suggests that query independent similarity could be accurately learned even for large scale data sets that could not be handled before.",
"For humans, it is usually easier to make statements about the similarity of objects in relative, rather than absolute terms. Moreover, subjective comparisons of objects can be based on a number of different and independent attributes. For example, objects can be compared based on their shape, color, etc. In this paper, we consider the problem of uncovering these hidden attributes given a set of relative distance judgments in the form of triplets. The attribute that was used to generate a particular triplet in this set is unknown. Such data occurs, e.g., in crowdsourcing applications where the triplets are collected from a large group of workers. We propose the Multiview Triplet Embedding (MVTE) algorithm that produces a number of low-dimensional maps, each corresponding to one of the hidden attributes. The method can be used to assess how many different attributes were used to create the triplets, as well as to assess the difficulty of a distance comparison task, and find objects that have multiple interpretations in relation to the other objects.",
"We introduce an algorithm that, given n objects, learns a similarity matrix over all n^2 pairs, from crowdsourced data alone. The algorithm samples responses to adaptively chosen triplet-based relative-similarity queries. Each query has the form \"is object 'a' more similar to 'b' or to 'c'?\" and is chosen to be maximally informative given the preceding responses. The output is an embedding of the objects into Euclidean space (like MDS); we refer to this as the \"crowd kernel.\" SVMs reveal that the crowd kernel captures prominent and subtle features across a number of domains, such as \"is striped\" among neckties and \"vowel vs. consonant\" among letters.",
"This paper presents our work on \"SNaCK,\" a low-dimensional concept embedding algorithm that combines human expertise with automatic machine similarity kernels. Both parts are complimentary: human insight can capture relationships that are not apparent from the object's visual similarity and the machine can help relieve the human from having to exhaustively specify many constraints. We show that our SNaCK embeddings are useful in several tasks: distinguishing prime and nonprime numbers on MNIST, discovering labeling mistakes in the Caltech UCSD Birds (CUB) dataset with the help of deep-learned features, creating training datasets for bird classifiers, capturing subjective human taste on a new dataset of 10,000 foods, and qualitatively exploring an unstructured set of pictographic characters. Comparisons with the state-of-the-art in these tasks show that SNaCK produces better concept embeddings that require less human supervision than the leading methods."
]
} |
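The triplet supervision discussed above reduces to constraints of the form "a is closer to b than to c". A small NumPy sketch of the standard margin-based hinge loss over such triplets; the embedding is random here, not learned, and this is a generic formulation rather than t-STE's probabilistic one:

```python
# Margin-based triplet hinge loss: push d(a, b) below d(a, c) by a margin.
import numpy as np

def triplet_hinge_loss(X, triplets, margin=1.0):
    a, b, c = (X[triplets[:, k]] for k in range(3))
    d_ab = ((a - b) ** 2).sum(axis=1)      # squared distance to the near item
    d_ac = ((a - c) ** 2).sum(axis=1)      # squared distance to the far item
    return np.maximum(0.0, margin + d_ab - d_ac).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))               # 10 items in a 2-D embedding
triplets = np.array([[0, 1, 2], [3, 4, 5]])  # rows: (anchor, near, far)
print(triplet_hinge_loss(X, triplets))
```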
1603.07810 | 2606316830 | What makes images similar? To measure the similarity between images, they are typically embedded in a feature-vector space, in which their distance preserves the relative dissimilarity. However, when learning such similarity embeddings the simplifying assumption is commonly made that images are only compared to one unique measure of similarity. A main reason for this is that contradicting notions of similarities cannot be captured in a single space. To address this shortcoming, we propose Conditional Similarity Networks (CSNs) that learn embeddings differentiated into semantically distinct subspaces that capture the different notions of similarities. CSNs jointly learn a disentangled embedding, where features for different similarities are encoded in separate dimensions, as well as masks that select and reweight relevant dimensions to induce a subspace that encodes a specific similarity notion. We show that our approach learns interpretable image representations with visually relevant semantic subspaces. Further, when evaluating on triplet questions from multiple similarity notions our model even outperforms the accuracy obtained by training individual specialized networks for each notion separately. | A foundational line of work combining similarity learning with neural network models revolves around Siamese networks @cite_8 @cite_21 , which use pairwise distances to learn embeddings discriminatively. In contrast to pairwise comparisons, triplets have a key advantage due to their flexibility in capturing a variety of higher-order similarity constraints rather than the binary similar/dissimilar statement for pairs. Neural networks that learn visual features from triplet-based similarities have been used by @cite_22 and @cite_17 for face verification and fine-grained visual categorization. A key insight from these works is that semantics as captured by triplet embeddings are a natural way to represent complex class structures when dealing with problems of high-dimensional categorization, and greatly boost the ability of models to share information between classes. | {
"cite_N": [
"@cite_17",
"@cite_21",
"@cite_22",
"@cite_8"
],
"mid": [
"2096733369",
"2138621090",
"1975517671",
"2157364932"
],
"abstract": [
"Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.",
"Dimensionality reduction involves mapping a set of high dimensional input points onto a low dimensional manifold so that 'similar\" points in input space are mapped to nearby points on the manifold. We present a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold. The learning relies solely on neighborhood relationships and does not require any distancemeasure in the input space. The method can learn mappings that are invariant to certain transformations of the inputs, as is demonstrated with a number of experiments. Comparisons are made to other techniques, in particular LLE.",
"Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.",
"We present a method for training a similarity metric from data. The method can be used for recognition or verification applications where the number of categories is very large and not known during training, and where the number of training samples for a single category is very small. The idea is to learn a function that maps input patterns into a target space such that the L sub 1 norm in the target space approximates the \"semantic\" distance in the input space. The method is applied to a face verification task. The learning process minimizes a discriminative loss function that drives the similarity metric to be small for pairs of faces from the same person, and large for pairs from different persons. The mapping from raw to the target space is a convolutional network whose architecture is designed for robustness to geometric distortions. The system is tested on the Purdue AR face database which has a very high degree of variability in the pose, lighting, expression, position, and artificial occlusions such as dark glasses and obscuring scarves."
]
} |
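The CSN abstract repeated in these rows describes masks that select and reweight embedding dimensions to induce one subspace per similarity notion. The essence fits in a few lines of NumPy; the four-dimensional embeddings and hand-set masks below are purely illustrative (in CSNs both are learned):

```python
# Conditional distances via per-notion masks over one shared embedding.
import numpy as np

def conditional_distance(x, y, mask):
    """Weighted squared distance in the subspace induced by a notion's mask."""
    return (mask * (x - y) ** 2).sum()

emb = {"shoe_a": np.array([0.9, 0.1, 0.5, 0.4]),
       "shoe_b": np.array([0.9, 0.1, 0.1, 0.9])}
masks = {"category": np.array([1.0, 1.0, 0.0, 0.0]),   # first two dimensions
         "color":    np.array([0.0, 0.0, 1.0, 1.0])}   # last two dimensions
for notion, m in masks.items():
    print(notion, conditional_distance(emb["shoe_a"], emb["shoe_b"], m))
# Same pair of items: identical under "category", far apart under "color".
```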
1603.07810 | 2606316830 | What makes images similar? To measure the similarity between images, they are typically embedded in a feature-vector space, in which their distance preserves the relative dissimilarity. However, when learning such similarity embeddings the simplifying assumption is commonly made that images are only compared to one unique measure of similarity. A main reason for this is that contradicting notions of similarities cannot be captured in a single space. To address this shortcoming, we propose Conditional Similarity Networks (CSNs) that learn embeddings differentiated into semantically distinct subspaces that capture the different notions of similarities. CSNs jointly learn a disentangled embedding, where features for different similarities are encoded in separate dimensions, as well as masks that select and reweight relevant dimensions to induce a subspace that encodes a specific similarity notion. We show that our approach learns interpretable image representations with visually relevant semantic subspaces. Further, when evaluating on triplet questions from multiple similarity notions our model even outperforms the accuracy obtained by training individual specialized networks for each notion separately. | Disentangling representations is a major topic in the recent machine learning literature and has, for example, been tackled using Boltzmann machines by @cite_25 . @cite_27 propose information-theoretic factorizations to improve unsupervised adversarial networks. Within this stream of research, the work closest to ours is that of @cite_11 on representation learning, which introduces a joint generative model over inputs and triplets to learn a factorized latent space. However, the focus of that work is the generative aspect of disentangling representations and proof-of-concept applications to low-dimensional data. Our work introduces a convolutional embedding architecture that forgoes the generative pathway in favor of exploring applications to embed high-dimensional image data. We thus demonstrate that the generative interpretation is not required to reap the benefits of Conditional Similarity Networks and demonstrate in particular their use in common computer vision tasks. | {
"cite_N": [
"@cite_27",
"@cite_25",
"@cite_11"
],
"mid": [
"2434741482",
"2157617585",
"2261396597"
],
"abstract": [
"This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.",
"Many latent factors of variation interact to generate sensory data; for example, pose, morphology and expression in face images. In this work, we propose to learn manifold coordinates for the relevant factors of variation and to model their joint interaction. Many existing feature learning algorithms focus on a single task and extract features that are sensitive to the task-relevant factors and invariant to all others. However, models that just extract a single set of invariant features do not exploit the relationships among the latent factors. To address this, we propose a higher-order Boltzmann machine that incorporates multiplicative interactions among groups of hidden units that each learn to encode a distinct factor of variation. Furthermore, we propose correspondence-based training strategies that allow effective disentangling. Our model achieves state-of-the-art emotion recognition and face verification performance on the Toronto Face Database. We also demonstrate disentangled features learned on the CMU Multi-PIE dataset.",
"Representation learning systems typically rely on massive amounts of labeled data in order to be trained to high accuracy. Recently, high-dimensional parametric models like neural networks have succeeded in building rich representations using either compressive, reconstructive or supervised criteria. However, the semantic structure inherent in observations is oftentimes lost in the process. Human perception excels at understanding semantics but cannot always be expressed in terms of labels. Thus, or , for example crowdsourcing, are often employed to generate similarity constraints using an implicit similarity function encoded in human perception. In this work we propose to combine with a in order to transfer implicit privileged oracle knowledge into explicit nonlinear Bayesian latent factor models of the observations. We use a fast variational algorithm to learn the joint model and demonstrate applicability to a well-known image dataset. We show how implicit triplet information can provide rich information to learn representations that outperform previous metric learning approaches as well as generative models without this side-information in a variety of predictive tasks. In addition, we illustrate that the proposed approach compartmentalizes the latent spaces semantically which allows interpretation of the latent variables."
]
} |
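The CSN row above hinges on two mechanisms: a shared disentangled embedding and per-notion masks that select and reweight its dimensions, trained with a triplet objective. Below is a minimal PyTorch-style sketch of that idea; the embedding sizes, mask initialization, and margin are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a Conditional Similarity Network head (assumptions:
# feature/embedding sizes, mask init, and margin are illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalSimilarityHead(nn.Module):
    def __init__(self, feat_dim=512, embed_dim=64, n_notions=4):
        super().__init__()
        self.embed = nn.Linear(feat_dim, embed_dim)   # shared disentangled embedding
        # One learnable mask per similarity notion; each mask selects and
        # reweights the dimensions relevant to that notion.
        self.masks = nn.Parameter(0.9 + 0.1 * torch.rand(n_notions, embed_dim))

    def forward(self, feats, notion):
        x = self.embed(feats)
        m = F.relu(self.masks[notion])                # keep masks non-negative
        return x * m                                  # masked subspace embedding

def triplet_loss(head, f_anchor, f_pos, f_neg, notion, margin=0.2):
    a, p, n = (head(f, notion) for f in (f_anchor, f_pos, f_neg))
    d_ap = (a - p).pow(2).sum(-1)
    d_an = (a - n).pow(2).sum(-1)
    return F.relu(d_ap - d_an + margin).mean()
```

Because each notion only rescales dimensions of one shared embedding, a single network can answer triplet questions under several mutually contradictory notions of similarity.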
1603.07810 | 2606316830 | What makes images similar? To measure the similarity between images, they are typically embedded in a feature-vector space, in which their distances preserve the relative dissimilarities. However, when learning such similarity embeddings the simplifying assumption is commonly made that images are only compared to one unique measure of similarity. A main reason for this is that contradicting notions of similarities cannot be captured in a single space. To address this shortcoming, we propose Conditional Similarity Networks (CSNs) that learn embeddings differentiated into semantically distinct subspaces that capture the different notions of similarities. CSNs jointly learn a disentangled embedding where features for different similarities are encoded in separate dimensions as well as masks that select and reweight relevant dimensions to induce a subspace that encodes a specific similarity notion. We show that our approach learns interpretable image representations with visually relevant semantic subspaces. Further, when evaluating on triplet questions from multiple similarity notions, our model even outperforms the accuracy obtained by training individual specialized networks for each notion separately. | We also discuss the notion of attention in our work, employing gates to attend to the aforementioned subspaces of the inferred embeddings when focusing on particular visual tasks. This term may be confused with spatial attention, such as that used in the DRAW model @cite_14 , but bears similarity insofar as it shows that the ability to gate the model's focus onto relevant dimensions (in our case in latent space rather than observed space) is beneficial both to the semantics and to the quantitative performance of our model. | {
"cite_N": [
"@cite_14"
],
"mid": [
"1850742715"
],
"abstract": [
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye."
]
} |
1603.07609 | 2950314916 | This work examines the impact of cross-linguistic transfer on grammatical errors in English as a Second Language (ESL) texts. Using a computational framework that formalizes the theory of Contrastive Analysis (CA), we demonstrate that language-specific error distributions in ESL writing can be predicted from the typological properties of the native language and their relation to the typology of English. Our typology-driven model enables us to obtain accurate estimates of such distributions without access to any ESL data for the target languages. Furthermore, we present a strategy for adjusting our method to low-resource languages that lack typological documentation, using a bootstrapping approach which approximates native language typology from ESL texts. Finally, we show that our framework is instrumental for linguistic inquiry seeking to identify first language factors that contribute to a wide range of difficulties in second language acquisition. | Previous work on grammatical error correction that examined determiner and preposition errors @cite_3 @cite_8 incorporated native-language-specific priors in models that are otherwise trained on standard English text. Our work extends the native-language-tailored treatment of grammatical errors to a much larger set of error types. More importantly, that approach is limited by the availability of manual error annotations for the target language in order to obtain the required error counts. Our framework enables us to bypass this annotation bottleneck by predicting language-specific priors from typological information. | {
"cite_N": [
"@cite_3",
"@cite_8"
],
"mid": [
"2101196113",
"2238150896"
],
"abstract": [
"We consider the problem of correcting errors made by English as a Second Language (ESL) writers and address two issues that are essential to making progress in ESL error correction - algorithm selection and model adaptation to the first language of the ESL learner. A variety of learning algorithms have been applied to correct ESL mistakes, but often comparisons were made between incomparable data sets. We conduct an extensive, fair comparison of four popular learning methods for the task, reversing conclusions from earlier evaluations. Our results hold for different training sets, genres, and feature sets. A second key issue in ESL error correction is the adaptation of a model to the first language of the writer. Errors made by non-native speakers exhibit certain regularities and, as we show, models perform much better when they use knowledge about error patterns of the non-native writers. We propose a novel way to adapt a learned algorithm to the first language of the writer that is both cheaper to implement and performs better than other adaptation methods.",
"This paper identifies and examines the key principles underlying building a state-of-the-art grammatical error correction system. We do this by analyzing the Illinois system that placed first among seventeen teams in the recent CoNLL-2013 shared task on grammatical error correction. The system focuses on five different types of errors common among non-native English writers. We describe four design principles that are relevant for correcting all of these errors, analyze the system along these dimensions, and show how each of these dimensions contributes to the performance."
]
} |
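The row above predicts language-specific error priors from typological features rather than from annotated ESL data. One minimal way to realize that idea is nearest-neighbor transfer over typological feature vectors, sketched below; the feature encoding, similarity measure, and weighting scheme are assumptions for illustration, not the paper's actual model.

```python
# Illustrative sketch (not the paper's model): estimate a native language's
# ESL error-type distribution from typologically similar documented languages.
import numpy as np

def predict_error_prior(target_typology, known_typologies, known_error_dists, k=3):
    """target_typology: (T,) binary typological features of the target L1;
    known_typologies: (L, T) features for L documented languages;
    known_error_dists: (L, E) per-language error-type distributions."""
    # Hamming similarity between the target and each documented language.
    sims = (known_typologies == target_typology).mean(axis=1)
    nearest = np.argsort(-sims)[:k]
    weights = sims[nearest] / sims[nearest].sum()
    # Weighted average of the k most typologically similar languages.
    return weights @ known_error_dists[nearest]
```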
1603.07415 | 2304651290 | Modern deep neural network based object detection methods typically classify candidate proposals using their interior features. However, global and local surrounding contexts that are believed to be valuable for object detection are not fully exploited by existing methods yet. In this work, we take a step towards understanding what is a robust practice to extract and utilize contextual information to facilitate object detection in practice. Specifically, we consider the following two questions: "how to identify useful global contextual information for detecting a certain object?" and "how to exploit local context surrounding a proposal for better inferring its contents?". We provide preliminary answers to these questions through developing a novel Attention to Context Convolution Neural Network (AC-CNN) based object detection model. AC-CNN effectively incorporates global and local contextual information into the region-based CNN (e.g. Fast RCNN) detection model and provides better object detection performance. It consists of one attention-based global contextualized (AGC) sub-network and one multi-scale local contextualized (MLC) sub-network. To capture global context, the AGC sub-network recurrently generates an attention map for an input image to highlight useful global contextual locations, through multiple stacked Long Short-Term Memory (LSTM) layers. For capturing surrounding local context, the MLC sub-network exploits both the inside and outside contextual information of each specific proposal at multiple scales. The global and local context are then fused together for making the final decision for detection. Extensive experiments on PASCAL VOC 2007 and VOC 2012 well demonstrate the superiority of the proposed AC-CNN over well-established baselines. In particular, AC-CNN outperforms the popular Fast-RCNN by 2.0% and 2.2% on VOC 2007 and VOC 2012 in terms of mAP, respectively. | Recently, LSTM has shown outstanding performance for the tasks of image captioning @cite_26 , video description @cite_20 @cite_14 , people detection @cite_21 and action recognition @cite_30 , benefiting from its excellent ability to model long-range information. Most of those existing works tend to adopt a CNN accompanied by several LSTMs to address specific visual recognition problems. Specifically, Sharma et al. @cite_30 proposed a soft visual attention model based on LSTMs for action recognition. Yao et al. @cite_20 proposed to use CNN features and an LSTM decoder to generate video descriptions. Stewart et al. @cite_21 employed a recurrent LSTM layer for people detection. In this work, we offer the first research attempt to apply LSTM to learning useful global contextual information with guidance from annotated class labels. Feature cubes of the entire image are taken as the input to a recurrent model consisting of multiple LSTM layers. With the recurrent model, some contextual slices beneficial for the detection task are iteratively highlighted to provide powerful feature representations for object detection. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_26",
"@cite_21",
"@cite_20"
],
"mid": [
"2172806452",
"2136036867",
"2950178297",
"2951132567",
"2950307714"
],
"abstract": [
"We propose a soft attention based model for the task of action recognition in videos. We use multi-layered Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units which are deep both spatially and temporally. Our model learns to focus selectively on parts of the video frames and classifies videos after taking a few glimpses. The model essentially learns which parts in the frames are relevant for the task at hand and attaches higher importance to them. We evaluate the model on UCF-11 (YouTube Action), HMDB-51 and Hollywood2 datasets and analyze how the model focuses its attention depending on the scene and the action being performed.",
"Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"Current people detectors operate either by scanning an image in a sliding window fashion or by classifying a discrete set of proposals. We propose a model that is based on decoding an image into a set of people detections. Our system takes an image as input and directly outputs a set of distinct detection hypotheses. Because we generate predictions jointly, common post-processing steps such as non-maximum suppression are unnecessary. We use a recurrent LSTM layer for sequence generation and train our model end-to-end with a new loss function that operates on sets of detections. We demonstrate the effectiveness of our approach on the challenging task of detecting people in crowded scenes.",
"Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions."
]
} |
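The AC-CNN row above describes an AGC sub-network in which stacked LSTM steps recurrently produce an attention map over the image's feature cube. A minimal PyTorch-style sketch of that recurrence follows; the dimensions, the initialization from average pooling, and the additive scoring function are assumptions for illustration, not the authors' exact architecture.

```python
# Sketch of recurrent global-context attention: an LSTM state repeatedly
# scores every location of a C x H x W feature cube to highlight useful
# global context. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalContextAttention(nn.Module):
    def __init__(self, channels=512, hidden=256, steps=3):
        super().__init__()
        self.hidden = hidden
        self.steps = steps
        self.cell = nn.LSTMCell(channels, hidden)
        self.score = nn.Linear(hidden + channels, 1)   # scores one location

    def forward(self, feat):                           # feat: (B, C, H, W)
        B, C, H, W = feat.shape
        locs = feat.flatten(2).transpose(1, 2)         # (B, H*W, C)
        h = feat.new_zeros(B, self.hidden)
        c = feat.new_zeros(B, self.hidden)
        ctx = locs.mean(dim=1)                         # start from average pooling
        for _ in range(self.steps):
            h, c = self.cell(ctx, (h, c))
            q = h.unsqueeze(1).expand(-1, H * W, -1)   # broadcast state to locations
            e = self.score(torch.cat([q, locs], dim=-1)).squeeze(-1)
            attn = F.softmax(e, dim=1)                 # (B, H*W) attention map
            ctx = (attn.unsqueeze(-1) * locs).sum(dim=1)  # attended global context
        return ctx, attn.view(B, H, W)
```

The returned map plays the role of the highlighted "contextual slices": the context vector it induces can then be fused with per-proposal local features for the final detection decision.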
1603.07371 | 2307626824 | In technology mapping, enumeration of subcircuits or cuts to be replaced by a standard cell is an important step that decides both the quality of the solution and the execution speed. In this work, we view cuts as sets of edges instead of sets of nodes and, based on this view, provide a classification of cuts. It is shown that if enumeration is restricted to a subclass of cuts called unidirectional cuts, the quality of the solution does not degrade. We also show that such cuts are equivalent to a known class of cuts called strong line cuts, first proposed in [14]. We propose an efficient enumeration method based on a novel graph pruning algorithm that utilizes network flow to approximate the minimum strong line cut. The runtimes for the proposed enumeration method are shown to be quite practical for enumeration of a large number of cuts. | The importance of cut computation in TM for FPGAs was first identified by Cong et al. @cite_24 . They developed a novel and elegant network-flow-based algorithm that directly identified a single, depth-optimal, @math -feasible node cut, without enumerating cuts. Later, Pan et al. @cite_3 @cite_15 developed an efficient algorithm for enumerating cuts that avoided the large computational requirements of network flow. More recently, Ling et al. @cite_18 developed a novel scheme for implicitly encoding cuts using Binary Decision Diagrams (BDDs). This representation allowed for extraction of cuts when the value of a cut could be computed recursively. However, the authors admit that the BDD approach is not very well suited for cut enumeration since non-cuts, which dominate cuts, are also implicitly included and need to be pruned during enumeration. | {
"cite_N": [
"@cite_24",
"@cite_15",
"@cite_18",
"@cite_3"
],
"mid": [
"2105715355",
"2013845995",
"2060526835",
"2025617772"
],
"abstract": [
"The field programmable gate-array (FPGA) has become an important technology in VLSI ASIC designs. In the past few years, a number of heuristic algorithms have been proposed for technology mapping in lookup-table (LUT) based FPGA designs, but none of them guarantees optimal solutions for general Boolean networks and little is known about how far their solutions are away from the optimal ones. This paper presents a theoretical breakthrough which shows that the LUT-based FPGA technology mapping problem for depth minimization can be solved optimally in polynomial time. A key step in our algorithm is to compute a minimum height K-feasible cut in a network, which is solved optimally in polynomial time based on network flow computation. Our algorithm also effectively minimizes the number of LUT's by maximizing the volume of each cut and by several post-processing operations. Based on these results, we have implemented an LUT-based FPGA mapping package called FlowMap. We have tested FlowMap on a large set of benchmark examples and compared it with other LUT-based FPGA mapping algorithms for delay optimization, including Chortle-d, MIS-pga-delay, and DAG-Map. FlowMap reduces the LUT network depth by up to 7 and reduces the number of LUT's by up to 50 compared to the three previous methods. >",
"We study the technology mapping problem for sequential circuits for look-up table (LUT) based field programmable gate arrays (FPGAs). Existing approaches to the problem simply remove the flip-flops (FFs), then map the remaining combinational logic, and finally put the FFs back. These approaches ignore the sequential nature of a circuit and assume the positions of the FFs are fixed. However, FFs in a sequential circuit can be reposistioned by a functionality-preserving transformation called retiming. As a result, existing approaches can only consider a very small portion of the available solution space. We propose in this paper a novel approach to the technology mapping problem. In our approach, retiming is integrated into the technology mapping process so as to consider the full solution space. We then present a polynomial technology mapping algorithm that, for a given circuit, produces a mapping solution with the minimum clock period among all possible ways of retiming. The effectiveness of the algorithm is also demonstrated experimentally.",
"While the covering algorithm has been perfected recently by the iterative approaches, such as DAOmap and IMap, its application has been limited to technology mapping. The main factor preventing the covering problem's migration to other logic transformations, such as elimination and resynthesis region identification found in SIS and FBDD, is the exponential number of alternative cuts that have to be evaluated. Traditional methods of cut generation do not scale beyond a cut size of 6. In this paper, a symbolic method that can enumerate all cuts is proposed without any pruning, up to a cut size of 10. We show that it can outperform traditional methods by an order of magnitude and, as a result, scales to 100K gate benchmarks. As a practical driver, the covering problem applied to elimination is shown where it can not only produce competitive area, but also provide more than 6times average runtime reduction of the total runtime in FBDD, a BDD based logic synthesis tool with a reported order of magnitude faster runtime than SIS and commercial tools with negligible impact on area.",
"In this paper, w e presen t a new retiming-based technology mapping algorithm for look-up table-based field programmable gate arrays. The algorithm is based on a novel iterative procedure for computing all k -cuts of all nodes in a sequen tialcircuit, in the presence of retiming. The algorithm completely avoids flow computation whic his the bottleneck of previous algorithms. Due to the fact that k is very small in practice, the procedure for computing all k -cuts is v ery fast. Experimental results indicate the overall algorithm is very efficient in practice."
]
} |
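For concreteness, the bottom-up cut enumeration that the row above attributes to Pan et al. can be sketched in a few lines for networks of 2-input nodes: the cuts of a node are its trivial cut plus all size-bounded unions of its fanins' cuts, with dominated cuts pruned. This is a toy illustration of classic @math -feasible cut enumeration, not the paper's strong-line-cut enumerator.

```python
# Toy bottom-up k-feasible cut enumeration for a DAG of 2-input nodes.
def enumerate_cuts(fanins, k=4):
    """fanins: dict node -> (a, b) for internal nodes; absent => primary input.
    Returns dict node -> list of cuts, each cut a frozenset of leaf nodes."""
    nodes = set(fanins) | {x for ab in fanins.values() for x in ab}
    order, seen = [], set()

    def visit(n):                        # DFS topological order, inputs first
        if n in seen:
            return
        seen.add(n)
        for p in fanins.get(n, ()):
            visit(p)
        order.append(n)

    for n in sorted(nodes):
        visit(n)

    cuts = {}
    for n in order:
        if n not in fanins:              # primary input: only the trivial cut
            cuts[n] = [frozenset([n])]
            continue
        a, b = fanins[n]
        merged = {frozenset([n])}        # the trivial cut of n itself
        for ca in cuts[a]:
            for cb in cuts[b]:
                c = ca | cb
                if len(c) <= k:          # keep only k-feasible unions
                    merged.add(c)
        # prune dominated cuts (a cut is redundant if a proper subset exists)
        cuts[n] = [c for c in merged if not any(d < c for d in merged)]
    return cuts

# Example: f = (x AND y) AND (y AND z)
print(enumerate_cuts({"g1": ("x", "y"), "g2": ("y", "z"), "f": ("g1", "g2")}, k=3))
```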
1603.07371 | 2307626824 | In technology mapping, enumeration of subcircuits or cuts to be replaced by a standard cell is an important step that decides both the quality of the solution and the execution speed. In this work, we view cuts as sets of edges instead of sets of nodes and, based on this view, provide a classification of cuts. It is shown that if enumeration is restricted to a subclass of cuts called unidirectional cuts, the quality of the solution does not degrade. We also show that such cuts are equivalent to a known class of cuts called strong line cuts, first proposed in [14]. We propose an efficient enumeration method based on a novel graph pruning algorithm that utilizes network flow to approximate the minimum strong line cut. The runtimes for the proposed enumeration method are shown to be quite practical for enumeration of a large number of cuts. | All the works identified above, and many others, have demonstrated that cut-based mapping, the core of which involves cut enumeration, leads to solutions far superior to those of the traditional graph tree-matching based algorithms. Cut enumeration has also found uses in related applications such as re-synthesis through rewriting @cite_12 , application-specific instruction set extension generation and optimization @cite_1 , hardware-software co-design @cite_11 , model checking in verification @cite_14 , and SAT problem preprocessing for simplification @cite_22 . | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_1",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"2126662120",
"2157003622",
"2120624822",
"2125778714"
],
"abstract": [
"",
"SAT solvers are often challenged with very hard problems that remain unsolved after hours of CPU time. The research community meets the challenge in two ways: (1) by improving the SAT solver technology, for example, perfecting heuristics for variable ordering, and (2) by inventing new ways of constructing simpler SAT problems, either using domain specific information during the translation from the original problem to CNF, or by applying a more universal CNF simplification procedure after the translation. This paper explores preprocessing of circuit-based SAT problems using recent advances in logic synthesis. Two fast logic synthesis techniques are considered: DAG-aware logic minimization and a novel type of structural technology mapping, which reduces the size of the CNF derived from the circuit. These techniques are experimentally compared to CNF-based preprocessing. The conclusion is that the proposed techniques are complementary to CNF-based preprocessing and speedup SAT solving substantially on industrial examples.",
"Many commercially available embedded processors are capable of extending their base instruction set for a specific domain of applications. While steady progress has been made in the tools and methodologies of automatic instruction set extension for configurable processors, the limited data bandwidth available in the core processor (e.g., the number of simultaneous accesses to the register file) becomes a potential performance bottleneck. In this paper, we first present a quantitative analysis of the data bandwidth limitation in configurable processors, and then propose a novel low-cost architectural extension and associated compilation techniques to address the problem. Specifically, we embed a single control bit in the instruction op-codes to selectively copy the execution results to a set of hash-mapped shadow registers in the write-back stage. This can efficiently reduce the communication overhead due to data transfers between the core processor and the custom logic. We also present a novel simultaneous global shadow register binding with a hash function generation algorithm to take full advantage of the extension. The application of our approach leads to a nearly optimal performance speedup",
"We describe an optimization method for combinational and sequential logic networks, with emphasis on scalability and the scope of optimization. The proposed resynthesis (a) is capable of substantial logic restructuring, (b) is customizable to solve a variety of optimization tasks, and (c) has reasonable runtime on industrial designs. The approach uses don't cares computed for a window surrounding a node and can take into account external don't cares (e.g. unreachable states). It uses a SAT solver and interpolation to find a new representation for a node. This representation can be in terms of inputs from other nodes in the window thus effecting Boolean re-substitution. Experimental results on 6-input LUT networks after high effort synthesis show substantial reductions in area and delay. When applied to 20 large academic benchmarks, the LUT count and logic level is reduced by 45.0 and 12.2 , respectively. The longest runtime for synthesis and mapping is about two minutes. When applied to a set of 14 industrial benchmarks ranging up to 83K 6-LUTs, the LUT count and logic level is reduced by 11.8 and 16.5 , respectively. Experimental results on 6-input LUT networks after high-effort synthesis show substantial reductions in area and delay. The longest runtime is about 30 minutes.",
"This paper presents an RTL generation scheme for a SimpleScalar PISA instruction set architecture with system calls to implement C programs. The scheme utilizes ASIPmeister, a processor generation tool. The RTL generated is available for download. The second part of the paper shows a method of reducing the PISA instruction set and generating a processor for a given application. This reduction and generation can be performed within an hour, making this one of the fastest methods of generating an application specific processor. For five benchmark applications, we show that on average, processor size can be reduced by 30 , energy consumed reduced by 24 , and performance improved by 24 ."
]
} |
1603.07371 | 2307626824 | In technology mapping, enumeration of subcircuits or cuts to be replaced by a standard cell is an important step that decides both the quality of the solution and the execution speed. In this work, we view cuts as sets of edges instead of sets of nodes and, based on this view, provide a classification of cuts. It is shown that if enumeration is restricted to a subclass of cuts called unidirectional cuts, the quality of the solution does not degrade. We also show that such cuts are equivalent to a known class of cuts called strong line cuts, first proposed in [14]. We propose an efficient enumeration method based on a novel graph pruning algorithm that utilizes network flow to approximate the minimum strong line cut. The runtimes for the proposed enumeration method are shown to be quite practical for enumeration of a large number of cuts. | The use of a line dependency graph (LDG) derived from a DAG was proposed by Kagaris @cite_6 to compute the maximum strong cut in a circuit for the purpose of delay testing. Based on the observation that an LDG is a transitively-oriented graph, hence a comparability graph @cite_5 , they provide an efficient and elegant algorithm that computes a maximum independent set of the LDG using network flow. This set represents a maximum strong cut in the corresponding DAG. While their approach generated interest in the area of delay testing, we will demonstrate that there is still greater opportunity for further exploration and exploitation of the DAG-LDG duality for strong cut enumeration. | {
"cite_N": [
"@cite_5",
"@cite_6"
],
"mid": [
"2143474538",
"1969257177"
],
"abstract": [
"Algorithmic Graph Theory and Perfect Graphs, first published in 1980, has become the classic introduction to the field. This new Annals edition continues to convey the message that intersection graph models are a necessary and important tool for solving real-world problems. It remains a stepping stone from which the reader may embark on one of many fascinating research trails. The past twenty years have been an amazingly fruitful period of research in algorithmic graph theory and structured families of graphs. Especially important have been the theory and applications of new intersection graph models such as generalizations of permutation graphs and interval graphs. These have lead to new families of perfect graphs and many algorithmic results. These are surveyed in the new Epilogue chapter in this second edition. New edition of the \"Classic\" book on the topic Wonderful introduction to a rich research area Leading author in the field of algorithmic graph theory Beautifully written for the new mathematician or computer scientist Comprehensive treatment",
"We present a polynomial-time algorithm that finds the maximum weighted independent set of a transitive graph. The studied problem finds applications in a variety of VLSI contexts, including path delay fault testing, scheduling in high-level synthesis, and channel routing in physical design automation. The algorithm has been implemented and incorporated in a CAD tool for path delay fault testing. We experimentally verify its impact in the latter context."
]
} |
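The row above relies on the fact that a maximum independent set of a transitively-oriented (comparability) graph is a maximum antichain of the underlying partial order, computable in polynomial time with flow or matching techniques. The sketch below computes the antichain size for the unweighted case via Dilworth's theorem (maximum antichain size = n minus a maximum bipartite matching on the transitively closed DAG); the cited network-flow algorithm is more general in that it also handles weights.

```python
# Size of a maximum antichain (= max independent set of the comparability
# graph) of a transitively CLOSED DAG, via Dilworth's theorem and a simple
# augmenting-path bipartite matching. Unweighted toy sketch only.
def max_antichain_size(n, edges):
    adj = [[] for _ in range(n)]
    for u, v in edges:                 # edges must include the transitive closure
        adj[u].append(v)
    match = [-1] * n                   # match[v] = left node matched to right copy v

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match[v] < 0 or augment(match[v], seen):
                match[v] = u
                return True
        return False

    matching = sum(augment(u, set()) for u in range(n))
    # Minimum chain partition has n - matching chains; by Dilworth this
    # equals the maximum antichain size of the partial order.
    return n - matching

# Chain 0 < 1 < 2 (transitively closed) plus an isolated element 3:
print(max_antichain_size(4, [(0, 1), (1, 2), (0, 2)]))  # -> 2
```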
1603.07063 | 2951729963 | By taking the semantic object parsing task as an exemplar application scenario, we propose the Graph Long Short-Term Memory (Graph LSTM) network, which is the generalization of LSTM from sequential data or multi-dimensional data to general graph-structured data. Particularly, instead of evenly and fixedly dividing an image into pixels or patches as in existing multi-dimensional LSTM structures (e.g., Row, Grid and Diagonal LSTMs), we take each arbitrary-shaped superpixel as a semantically consistent node, and adaptively construct an undirected graph for each image, where the spatial relations of the superpixels are naturally used as edges. Constructed on such an adaptive graph topology, the Graph LSTM is more naturally aligned with the visual patterns in the image (e.g., object boundaries or appearance similarities) and provides a more economical information propagation route. Furthermore, for each optimization step over Graph LSTM, we propose to use a confidence-driven scheme to update the hidden and memory states of nodes progressively till all nodes are updated. In addition, for each node, the forget gates are adaptively learned to capture different degrees of semantic correlation with neighboring nodes. Comprehensive evaluations on four diverse semantic object parsing datasets well demonstrate the significant superiority of our Graph LSTM over other state-of-the-art solutions. | There has been increasing research interest in the semantic object parsing problem, including general object parsing @cite_4 @cite_29 @cite_7 @cite_35 @cite_42 , person part segmentation @cite_18 @cite_38 and human parsing @cite_26 @cite_0 @cite_23 @cite_6 @cite_12 @cite_19 @cite_27 . To capture rich structure information on top of advanced CNN architectures, one common approach is the combination of CNNs and CRFs @cite_3 @cite_30 @cite_20 @cite_29 , where the CNN outputs are treated as unary potentials while the CRF further incorporates pairwise or higher-order factors. Instead of learning features only from local convolutional kernels as in these previous methods, we incorporate global context via the novel Graph LSTM structure to capture long-distance dependencies on the superpixels. The dependency field of Graph LSTM can effectively cover the entire image context. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_18",
"@cite_38",
"@cite_4",
"@cite_26",
"@cite_7",
"@cite_29",
"@cite_42",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_20",
"@cite_12"
],
"mid": [
"",
"",
"2158865742",
"2346977708",
"1903370114",
"2074621908",
"2020206277",
"792160549",
"1948751323",
"2094311777",
"1923697677",
"2121339428",
"1962739028",
"2204578866",
"2010785631",
"2102492119",
"1195044660"
],
"abstract": [
"",
"",
"Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms average- and max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014.",
"Parsing articulated objects, e.g. humans and animals, into semantic parts (e.g. body, head and arms, etc.) from natural images is a challenging and fundamental problem for computer vision. A big difficulty is the large variability of scale and location for objects and their corresponding parts. Even limited mistakes in estimating scale and location will degrade the parsing output and cause errors in boundary details. To tackle these difficulties, we propose a \"Hierarchical Auto-Zoom Net\" (HAZN) for object part parsing which adapts to the local scales of objects and parts. HAZN is a sequence of two \"Auto-Zoom Net\" (AZNs), each employing fully convolutional networks that perform two tasks: (1) predict the locations and scales of object instances (the first AZN) or their parts (the second AZN); (2) estimate the part scores for predicted object instance or part regions. Our model can adaptively \"zoom\" (resize) predicted image regions into their proper scales to refine the parsing. We conduct extensive experiments over the PASCAL part datasets on humans, horses, and cows. For humans, our approach significantly outperforms the state-of-the-arts by 5 mIOU and is especially better at segmenting small instances and small parts. We obtain similar improvements for parsing cows and horses over alternative methods. In summary, our strategy of first zooming into objects and then zooming into parts is very effective. It also enables us to process different regions of the image at different scales adaptively so that, for example, we do not need to waste computational resources scaling the entire image.",
"In this paper, we study the problem of semantic part segmentation for animals. This is more challenging than standard object detection, object segmentation and pose estimation tasks because semantic parts of animals often have similar appearance and highly varying shapes. To tackle these challenges, we build a mixture of compositional models to represent the object boundary and the boundaries of semantic parts. And we incorporate edge, appearance, and semantic part cues into the compositional model. Given part-level segmentation annotation, we develop a novel algorithm to learn a mixture of compositional models under various poses and viewpoints for certain animal classes. Furthermore, a linear complexity algorithm is offered for efficient inference of the compositional model using dynamic programming. We evaluate our method for horse and cow using a newly annotated dataset on Pascal VOC 2010 which has pixelwise part labels. Experimental results demonstrate the effectiveness of our method.",
"In this paper we demonstrate an effective method for parsing clothing in fashion photographs, an extremely challenging problem due to the large number of possible garment items, variations in configuration, garment appearance, layering, and occlusion. In addition, we provide a large novel dataset and tools for labeling garment items, to enable future research on clothing estimation. Finally, we present intriguing initial results on using clothing estimates to improve pose identification, and demonstrate a prototype application for pose-independent visual garment retrieval.",
"This paper addresses the problem of semantic part parsing (segmentation) of cars, i.e.assigning every pixel within the car to one of the parts (e.g.body, window, lights, license plates and wheels). We formulate this as a landmark identification problem, where a set of landmarks specifies the boundaries of the parts. A novel mixture of graphical models is proposed, which dynamically couples the landmarks to a hierarchy of segments. When modeling pairwise relation between landmarks, this coupling enables our model to exploit the local image contents in addition to spatial deformation, an aspect that most existing graphical models ignore. In particular, our model enforces appearance consistency between segments within the same part. Parsing the car, including finding the optimal coupling between landmarks and segments in the hierarchy, is performed by dynamic programming. We evaluate our method on a subset of PASCAL VOC 2010 car images and on the car subset of 3D Object Category dataset (CAR3D). We show good results and, in particular, quantify the effectiveness of using the segment appearance consistency in terms of accuracy of part localization and segmentation.",
"Segmenting semantic objects from images and parsing them into their respective semantic parts are fundamental steps towards detailed object understanding in computer vision. In this paper, we propose a joint solution that tackles semantic object and part segmentation simultaneously, in which higher object-level context is provided to guide part segmentation, and more detailed part-level localization is utilized to refine object segmentation. Specifically, we first introduce the concept of semantic compositional parts (SCP) in which similar semantic parts are grouped and shared among different objects. A two-channel fully convolutional network (FCN) is then trained to provide the SCP and object potentials at each pixel. At the same time, a compact set of segments can also be obtained from the SCP predictions of the network. Given the potentials and the generated segments, in order to explore long-range context, we finally construct an efficient fully connected conditional random field (FCRF) to jointly predict the final object and part labels. Extensive evaluation on three different datasets shows that our approach can mutually enhance the performance of object and part segmentation, and outperforms the current state-of-the-art on both tasks.",
"Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline.",
"Clothing is one of the most informative cues of human appearance. In this paper, we propose a novel multi-person clothing segmentation algorithm for highly occluded images. The key idea is combining blocking models to address the person-wise occlusions. In contrary to the traditional layered model that tries to solve the full layer ranking problem, the proposed blocking model partitions the problem into a series of pair-wise ones and then determines the local blocking relationship based on individual and contextual information. Thus, it is capable of dealing with cases with a large number of people. Additionally, we propose a layout model formulated as Markov Network which incorporates the blocking relationship to pursue an approximately optimal clothing layout for group people. Experiments demonstrated on a group images dataset show the effectiveness of our algorithm.",
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"Clothing recognition is an extremely challenging problem due to wide variation in clothing item appearance, layering, and style. In this paper, we tackle the clothing parsing problem using a retrieval based approach. For a query image, we find similar styles from a large database of tagged fashion images and use these examples to parse the query. Our approach combines parsing from: pre-trained global clothing models, local clothing models learned on the fly from retrieved examples, and transferred parse masks (paper doll item transfer) from retrieved examples. Experimental evaluation shows that our approach significantly outperforms state of the art in parsing accuracy.",
"Both parametric and non-parametric approaches have demonstrated encouraging performances in the human parsing task, namely segmenting a human image into several semantic regions (e.g., hat, bag, left arm, face). In this work, we aim to develop a new solution with the advantages of both methodologies, namely supervision from annotated data and the flexibility to use newly annotated (possibly uncommon) images, and present a quasi-parametric human parsing model. Under the classic K Nearest Neighbor (KNN)-based nonparametric framework, the parametric Matching Convolutional Neural Network (M-CNN) is proposed to predict the matching confidence and displacements of the best matched region in the testing image for a particular semantic region in one KNN image. Given a testing image, we first retrieve its KNN images from the annotated manually-parsed human image corpus. Then each semantic region in each KNN image is matched with confidence to the testing image using M-CNN, and the matched regions from all KNN images are further fused, followed by a superpixel smoothing procedure to obtain the ultimate human parsing result. The M-CNN differs from the classic CNN [12] in that the tailored cross image matching filters are introduced to characterize the matching between the testing image and the semantic region of a KNN image. The cross image matching filters are defined at different convolutional layers, each aiming to capture a particular range of displacements. Comprehensive evaluations over a large dataset with 7,700 annotated human images well demonstrate the significant performance gain from the quasi-parametric model over the state-of-the-arts [29, 30], for the human parsing task.",
"In this work, we address the human parsing task with a novel Contextualized Convolutional Neural Network (Co-CNN) architecture, which well integrates the cross-layer context, global image-level context, within-super-pixel context and cross-super-pixel neighborhood context into a unified network. Given an input human image, Co-CNN produces the pixel-wise categorization in an end-to-end way. First, the cross-layer context is captured by our basic local-to-global-to-local structure, which hierarchically combines the global semantic structure and the local fine details within the cross-layers. Second, the global image-level label prediction is used as an auxiliary objective in the intermediate layer of the Co-CNN, and its outputs are further used for guiding the feature learning in subsequent convolutional layers to leverage the global image-level context. Finally, to further utilize the local super-pixel contexts, the within-super-pixel smoothing and cross-super-pixel neighbourhood voting are formulated as natural sub-components of the Co-CNN to achieve the local label consistency in both training and testing process. Comprehensive evaluations on two public datasets well demonstrate the significant superiority of our Co-CNN architecture over other state-of-the-arts for human parsing. In particular, the F-1 score on the large dataset [15] reaches 76.95 by Co-CNN, significantly higher than 62.81 and 64.38 by the state-of-the-art algorithms, M-CNN [21] and ATR [15], respectively.",
"In this work, we address the problem of human parsing, namely partitioning the human body into semantic regions, by using the novel Parselet representation. Previous works often consider solving the problem of human pose estimation as the prerequisite of human parsing. We argue that these approaches cannot obtain optimal pixel level parsing due to the inconsistent targets between these tasks. In this paper, we propose to use Parselets as the building blocks of our parsing model. Parselets are a group of parsable segments which can generally be obtained by low-level over-segmentation algorithms and bear strong semantic meaning. We then build a Deformable Mixture Parsing Model (DMPM) for human parsing to simultaneously handle the deformation and multi-modalities of Parselets. The proposed model has two unique characteristics: (1) the possible numerous modalities of Parse let ensembles are exhibited as the And-Or\" structure of sub-trees, (2) to further solve the practical problem of Parselet occlusion or absence, we directly model the visibility property at some leaf nodes. The DMPM thus directly solves the problem of human parsing by searching for the best graph configuration from a pool of Parse let hypotheses without intermediate tasks. Comprehensive evaluations demonstrate the encouraging performance of the proposed approach.",
"Convolutional neural networks with many layers have recently been shown to achieve excellent results on many high-level tasks such as image classification, object detection and more recently also semantic segmentation. Particularly for semantic segmentation, a two-stage procedure is often employed. Hereby, convolutional networks are trained to provide good local pixel-wise features for the second step being traditionally a more global graphical model. In this work we unify this two-stage process into a single joint training algorithm. We demonstrate our method on the semantic image segmentation task and show encouraging results on the challenging PASCAL VOC 2012 dataset.",
"In this paper we tackle the problem of clothing parsing: Our goal is to segment and classify different garments a person is wearing. We frame the problem as the one of inference in a pose-aware Conditional Random Field (CRF) which exploits appearance, figure ground segmentation, shape and location priors for each garment as well as similarities between segments, and symmetries between different human body parts. We demonstrate the effectiveness of our approach on the Fashionista dataset [1] and show that we can obtain a significant improvement over the state-of-the-art."
]
} |
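To make the Graph LSTM update in the row above concrete, the sketch below shows a single node update: the node aggregates its neighbors' hidden states and learns one forget gate per neighbor, so each neighbor's memory contributes according to its correlation with the node. Weight shapes, the averaging scheme, and the exact gate wiring are illustrative assumptions rather than the paper's exact equations.

```python
# One Graph LSTM node update, sketched in NumPy. A superpixel node averages
# its neighbors' hidden states and uses per-neighbor forget gates.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def graph_lstm_step(x_v, h_nb, c_v, c_nb, W, U):
    """x_v: (d_in,) node input; h_nb: (K, d) neighbor hidden states;
    c_v: (d,) node memory; c_nb: (K, d) neighbor memories;
    W, U: dicts of input/hidden weight matrices (illustrative shapes)."""
    h_bar = h_nb.mean(axis=0)                          # aggregated neighbor context
    i = sigmoid(W["i"] @ x_v + U["i"] @ h_bar)         # input gate
    o = sigmoid(W["o"] @ x_v + U["o"] @ h_bar)         # output gate
    g = np.tanh(W["g"] @ x_v + U["g"] @ h_bar)         # candidate memory
    f_v = sigmoid(W["fv"] @ x_v + U["fv"] @ h_bar)     # self forget gate
    # Adaptive forget gates: one per neighbor, so each neighbor's memory
    # contributes according to its semantic correlation with this node.
    f_nb = sigmoid(x_v @ W["f"].T + h_nb @ U["f"].T)   # (K, d)
    c_new = i * g + f_v * c_v + (f_nb * c_nb).mean(axis=0)
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy usage with random weights:
rng = np.random.default_rng(0)
d_in, d, K = 8, 16, 4
W = {k: 0.1 * rng.standard_normal((d, d_in)) for k in ("i", "o", "g", "f", "fv")}
U = {k: 0.1 * rng.standard_normal((d, d)) for k in ("i", "o", "g", "f", "fv")}
h_new, c_new = graph_lstm_step(rng.standard_normal(d_in), rng.standard_normal((K, d)),
                               rng.standard_normal(d), rng.standard_normal((K, d)), W, U)
```

In the paper the nodes are visited in a confidence-driven order until all are updated; the sketch covers only a single node's transition.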
1603.07080 | 2952796274 | With the fast-growing demand for location-based services in indoor environments, indoor positioning based on fingerprinting has attracted a lot of interest due to its high accuracy. In this paper, we present a novel deep learning based indoor fingerprinting system using Channel State Information (CSI), which is termed DeepFi. Based on three hypotheses on CSI, the DeepFi system architecture includes an off-line training phase and an on-line localization phase. In the off-line training phase, deep learning is utilized to train all the weights of a deep network as fingerprints. Moreover, a greedy learning algorithm is used to train the weights layer-by-layer to reduce complexity. In the on-line localization phase, we use a probabilistic method based on the radial basis function to obtain the estimated location. Experimental results are presented to confirm that DeepFi can effectively reduce the location error compared with three existing methods in two representative indoor environments. | Ranging-based localization computes distances to at least three access points and leverages geometrical models for location estimation. These schemes are mainly classified into two categories: power-based and time-based. For power-based approaches, the prevalent log-distance path loss (LDPL) model is used to estimate distances based on RSS, where some measurements are utilized to train the parameters of the LDPL model @cite_7 . For example, EZ @cite_9 is a configuration-free localization scheme, where a genetic algorithm is used for solving the RSS-distance equations. The LDPL model and truncated singular value decomposition (SVD) are used to build an RSS-distance map for localization, which is adaptive to indoor environmental dynamics @cite_7 . CSI-based ranging has been proposed to overcome the instability of RSS in indoor environments. For instance, FILA exploits CSI from the PHY layer to mitigate the multipath effect in the time domain, and then trains the parameters of the LDPL model to obtain the relationship between the effective CSI and distance @cite_13 . | {
"cite_N": [
"@cite_9",
"@cite_13",
"@cite_7"
],
"mid": [
"2170240475",
"2005059864",
"2168821867"
],
"abstract": [
"While WiFi-based indoor localization is attractive, the need for a significant degree of pre-deployment effort is a key challenge. In this paper, we ask the question: can we perform indoor localization with no pre-deployment effort? Our setting is an indoor space, such as an office building or a mall, with WiFi coverage but where we do not assume knowledge of the physical layout, including the placement of the APs. Users carrying WiFi-enabled devices such as smartphones traverse this space in normal course. The mobile devices record Received Signal Strength (RSS) measurements corresponding to APs in their view at various (unknown) locations and report these to a localization server. Occasionally, a mobile device will also obtain and report a location fix, say by obtaining a GPS lock at the entrance or near a window. The centerpiece of our work is the EZ Localization algorithm, which runs on the localization server. The key intuition is that all of the observations reported to the server, even the many from unknown locations, are constrained by the physics of wireless propagation. EZ models these constraints and then uses a genetic algorithm to solve them. The results from our deployment in two different buildings are promising. Despite the absence of any explicit pre-deployment calibration, EZ yields a median localization error of 2m and 7m, respectively, in a small building and a large building, which is only somewhat worse than the 0.7m and 4m yielded by the best-performing but calibration-intensive Horus scheme [29] from prior work.",
"Indoor positioning systems have received increasing attention for supporting location-based services in indoor environments. WiFi-based indoor localization has been attractive due to its open access and low cost properties. However, the distance estimation based on received signal strength indicator (RSSI) is easily affected by the temporal and spatial variance due to the multipath effect, which contributes to most of the estimation errors in current systems. How to eliminate such effect so as to enhance the indoor localization performance is a big challenge. In this work, we analyze this effect across the physical layer and account for the undesirable RSSI readings being reported. We explore the frequency diversity of the subcarriers in OFDM systems and propose a novel approach called FILA, which leverages the channel state information (CSI) to alleviate multipath effect at the receiver. We implement the FILA system on commercial 802.11 NICs, and then evaluate its performance in different typical indoor scenarios. The experimental results show that the accuracy and latency of distance calculation can be significantly enhanced by using CSI. Moreover, FILA can significantly improve the localization accuracy compared with the corresponding RSSI approach.",
"RSSI is known to be a fickle indicator of whether a wireless link will work, for many reasons. This greatly complicates operation because it requires testing and adaptation to find the best rate, transmit power or other parameter that is tuned to boost performance. We show that, for the first time, wireless packet delivery can be accurately predicted for commodity 802.11 NICs from only the channel measurements that they provide. Our model uses 802.11n Channel State Information measurements as input to an OFDM receiver model we develop by using the concept of effective SNR. It is simple, easy to deploy, broadly useful, and accurate. It makes packet delivery predictions for 802.11a g SISO rates and 802.11n MIMO rates, plus choices of transmit power and antennas. We report testbed experiments that show narrow transition regions ("
]
} |
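For reference, the LDPL model that the ranging schemes in the row above fit from RSS measurements has the standard textbook form below; inverting it gives the distance estimate that is then fed to trilateration against three or more access points. The notation is the conventional one, not necessarily the cited papers' exact symbols.

```latex
% Log-distance path loss (LDPL) model commonly used for RSS-based ranging.
% P(d): received power (dBm) at distance d; d_0: reference distance;
% n: path-loss exponent trained from measurements; X_\sigma: zero-mean
% Gaussian shadowing term.
P(d) = P(d_0) - 10\,n\,\log_{10}\!\left(\frac{d}{d_0}\right) + X_\sigma,
\qquad
\hat{d} = d_0 \cdot 10^{\frac{P(d_0) - P(d)}{10\,n}}
```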
1603.07234 | 2952198593 | End-to-end learning methods have achieved impressive results in many areas of computer vision. At the same time, these methods still suffer from a degradation in performance when testing on new datasets that stem from a different distribution. This is known as the domain shift effect. Recently proposed adaptation methods focus on retraining the network parameters. However, this requires access to all (labeled) source data, a large amount of (unlabeled) target data, and plenty of computational resources. In this work, we propose a lightweight alternative that allows adapting to the target domain based on a limited number of target samples in a matter of minutes rather than hours, days or even weeks. To this end, we first analyze the output of each convolutional layer from a domain adaptation perspective. Surprisingly, we find that domain shift effects pop up already at the very first layer. We then propose a new domain adaptation method, where first-layer convolutional filters that are badly affected by the domain shift are reconstructed based on less affected ones. This improves the performance of the deep network on various benchmark datasets. | Shallow DA. So far, domain adaptation (DA) has mostly been studied in the context of image representations based on handcrafted features. Methods (see @cite_30 for a survey) tackle the problem in different ways, such as the feature augmentation scheme of @cite_17 or instance reweighting @cite_6 @cite_14 , which tries to correct the shift by re-weighting the source samples based on their similarity with the target domain. Another interesting line of work is the use of a latent feature space @cite_27 @cite_16 , which has led to the development of subspace-based DA methods @cite_28 @cite_11 @cite_15 @cite_18 @cite_19 . Especially the work of @cite_19 is worth mentioning here, as it aims at adapting a model in an online fashion, somewhat similar in spirit to our work. Most of these methods have mainly been evaluated on the Office benchmark @cite_13 that comes with precomputed SURF features. However, when applied to deep features (i.e., activations of the last layer of a pretrained convolutional neural network), which capture high-level object information rather than edges and gradients, they do not seem as powerful as before @cite_20 @cite_31 . Therefore, more recent works use deep-learning-like methods to reduce the domain shift. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_14",
"@cite_11",
"@cite_31",
"@cite_28",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"1982696459",
"2329664094",
"",
"2128053425",
"2953360861",
"2104068492",
"2008635359",
"2005295545",
"2115403315",
"1910772337",
"2150846797",
"1722318740",
"",
"2952362487"
],
"abstract": [
"In pattern recognition and computer vision, one is often faced with scenarios where the training data used to learn a model have different distribution from the data on which the model is applied. Regardless of the cause, any distributional change that occurs after learning a classifier can degrade its performance at test time. Domain adaptation tries to mitigate this degradation. In this article, we provide a survey of domain adaptation methods for visual recognition. We discuss the merits and drawbacks of existing domain adaptation approaches and identify promising avenues for research in this rapidly evolving field.",
"We propose a novel method for unsupervised domain adaptation. Traditional machine learning algorithms often fail to generalize to new input distributions, causing reduced accuracy. Domain adaptation attempts to compensate for the performance degradation by transferring and adapting source knowledge to target domain. Existing unsupervised methods project domains into a lower-dimensional space and attempt to align the subspace bases, effectively learning a mapping from source to target points or vice versa. However, they fail to take into account the difference of the two distributions in the subspaces, resulting in misalignment even after adaptation. We present a unified view of existing subspace mapping based methods and develop a generalized approach that also aligns the distributions as well as the subspace bases. We provide a detailed evaluation of our approach on benchmark datasets and show improved results over published approaches.",
"",
"Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.",
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.",
"In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyper parameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.",
"Automatic facial action unit (AFA) detection from video is a long-standing problem in facial expression analysis. Most approaches emphasize choices of features and classifiers. They neglect individual differences in target persons. People vary markedly in facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) and behavior. Individual differences can dramatically influence how well generic classifiers generalize to previously unseen persons. While a possible solution would be to train person-specific classifiers, that often is neither feasible nor theoretically compelling. The alternative that we propose is to personalize a generic classifier in an unsupervised manner (no additional labels for the test subjects are required). We introduce a transductive learning method, which we refer to Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific biases. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. To evaluate the effectiveness of STM, we compared STM to generic classifiers and to cross-domain learning methods in three major databases: CK+, GEMEP-FERA and RU-FACS. STM outperformed generic classifiers in all.",
"We pose the following question: what happens when test data not only differs from training data, but differs from it in a continually evolving way? The classic domain adaptation paradigm considers the world to be separated into stationary domains with clear boundaries between them. However, in many real-world applications, examples cannot be naturally separated into discrete domains, but arise from a continuously evolving underlying process. Examples include video with gradually changing lighting and spam email with evolving spammer tactics. We formulate a novel problem of adapting to such continuous domains, and present a solution based on smoothly varying embeddings. Recent work has shown the utility of considering discrete visual domains as fixed points embedded in a manifold of lower-dimensional subspaces. Adaptation can be achieved via transforms or kernels learned between such stationary source and target subspaces. We propose a method to consider non-stationary domains, which we refer to as Continuous Manifold Adaptation (CMA). We treat each target sample as potentially being drawn from a different subspace on the domain manifold, and present a novel technique for continuous transform-based adaptation. Our approach can learn to distinguish categories using training data collected at some point in the past, and continue to update its model of the categories for some time into the future, without receiving any additional labels. Experiments on two visual datasets demonstrate the value of our approach for several popular feature representations.",
"Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean miscrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.",
"Domain adaptation (DA) has gained a lot of success in the recent years in computer vision to deal with situations where the learning process has to transfer knowledge from a source to a target domain. In this paper, we introduce a novel unsupervised DA approach based on both subspace alignment and selection of landmarks similarly distributed between the two domains. Those landmarks are selected so as to reduce the discrepancy between the domains and then are used to non linearly project the data in the same space where an efficient subspace alignment (in closed-form) is performed. We carry out a large experimental comparison in visual domain adaptation showing that our new method outperforms the most recent unsupervised DA approaches.",
"Sparse coding learns a set of basis functions such that each input signal can be well approximated by a linear combination of just a few of the bases. It has attracted increasing interest due to its state-of-the-art performance in BoW based image representation. However, when labeled and unlabeled images are sampled from different distributions, they may be quantized into different visual words of the codebook and encoded with different representations, which may severely degrade classification performance. In this paper, we propose a Transfer Sparse Coding (TSC) approach to construct robust sparse representations for classifying cross-distribution images accurately. Specifically, we aim to minimize the distribution divergence between the labeled and unlabeled images, and incorporate this criterion into the objective function of sparse coding to make the new representations robust to the distribution difference. Experiments show that TSC can significantly outperform state-of-the-art methods on three types of computer vision datasets.",
"Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.",
"",
"We describe an approach to domain adaptation that is appropriate exactly in the case when one has enough target'' data to do slightly better than just using only source'' data. Our approach is incredibly simple, easy to implement as a preprocessing step (10 lines of Perl!) and outperforms state-of-the-art approaches on a range of datasets. Moreover, it is trivially extended to a multi-domain adaptation problem, where one has data from a variety of different domains."
]
} |
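
Several of the subspace-based DA methods cited in the row above (notably the closed-form subspace alignment abstract) reduce to a few linear-algebra steps. Below is a minimal sketch in that style, assuming PCA bases and an assumed subspace dimension; it is not a faithful reimplementation of any single cited paper:

```python
import numpy as np
from sklearn.decomposition import PCA

def subspace_alignment(source, target, d=50):
    """Align the source PCA subspace with the target one (closed form),
    then project both domains. d is an assumed subspace dimension."""
    xs = PCA(n_components=d).fit(source).components_.T  # D x d source basis
    xt = PCA(n_components=d).fit(target).components_.T  # D x d target basis
    m = xs.T @ xt                                       # d x d alignment matrix
    source_aligned = source @ xs @ m                    # aligned source features
    target_projected = target @ xt                      # projected target features
    return source_aligned, target_projected
```

A classifier trained on the aligned source features can then be applied directly to the projected target features.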
1603.07057 | 2951552869 | Face recognition capabilities have recently made extraordinary leaps. Though this progress is at least partially due to ballooning training set sizes -- huge numbers of face images downloaded and labeled for identity -- it is not clear if the formidable task of collecting so many images is truly necessary. We propose a far more accessible means of increasing training data sizes for face recognition systems. Rather than manually harvesting and labeling more faces, we simply synthesize them. We describe novel methods of enriching an existing dataset with important facial appearance variations by manipulating the faces it contains. We further apply this synthesis approach when matching query images represented using a standard convolutional neural network. The effect of training and testing with synthesized images is extensively tested on the LFW and IJB-A (verification and identification) benchmarks and Janus CS2. The performances obtained by our approach match state of the art results reported by systems trained on millions of downloaded images. | Face recognition Face recognition is one of the central problems in computer vision and, as such, work on this problem is extensive. As with many other computer vision problems, face recognition performances skyrocketed with the introduction of deep learning techniques and in particular CNNs. Though CNNs have been used for face recognition as far back as @cite_34 , only when massive amounts of data became available did their performance soar. This was originally demonstrated by the Facebook DeepFace system @cite_41 , which used an architecture not unlike the one used by @cite_34 , but with over 4 million images used for training they obtained far more impressive results. | {
"cite_N": [
"@cite_41",
"@cite_34"
],
"mid": [
"2145287260",
"2144354855"
],
"abstract": [
"In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance.",
"We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer."
]
} |
1603.07057 | 2951552869 | Face recognition capabilities have recently made extraordinary leaps. Though this progress is at least partially due to ballooning training set sizes -- huge numbers of face images downloaded and labeled for identity -- it is not clear if the formidable task of collecting so many images is truly necessary. We propose a far more accessible means of increasing training data sizes for face recognition systems. Rather than manually harvesting and labeling more faces, we simply synthesize them. We describe novel methods of enriching an existing dataset with important facial appearance variations by manipulating the faces it contains. We further apply this synthesis approach when matching query images represented using a standard convolutional neural network. The effect of training and testing with synthesized images is extensively tested on the LFW and IJB-A (verification and identification) benchmarks and Janus CS2. The performances obtained by our approach match state of the art results reported by systems trained on millions of downloaded images. | Since then, CNN-based recognition systems have continuously crossed performance barriers, with notable examples including the Deep-ID 1-3 systems @cite_10 @cite_25 @cite_24 . They, and many others since, developed and trained their systems using far fewer training images, at the cost of somewhat more elaborate network architectures. | {
"cite_N": [
"@cite_24",
"@cite_10",
"@cite_25"
],
"mid": [
"2140609507",
"1998808035",
""
],
"abstract": [
"The state-of-the-art of face recognition has been significantly advanced by the emergence of deep learning. Very deep neural networks recently achieved great success on general object recognition because of their superb learning capacity. This motivates us to investigate their effectiveness on face recognition. This paper proposes two very deep neural network architectures, referred to as DeepID3, for face recognition. These two architectures are rebuilt from stacked convolution and inception layers proposed in VGG net and GoogLeNet to make them suitable to face recognition. Joint face identification-verification supervisory signals are added to both intermediate and final feature extraction layers during training. An ensemble of the proposed two architectures achieves 99.53 LFW face verification accuracy and 96.0 LFW rank-1 face identification accuracy, respectively. A further discussion of LFW face verification result is given in the end.",
"This paper proposes to learn a set of high-level feature representations through deep learning, referred to as Deep hidden IDentity features (DeepID), for face verification. We argue that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set. Moreover, the generalization capability of DeepID increases as more face classes are to be predicted at training. DeepID features are taken from the last hidden layer neuron activations of deep convolutional networks (ConvNets). When learned as classifiers to recognize about 10, 000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy, these deep ConvNets gradually form compact identity-related features in the top layers with only a small number of hidden neurons. The proposed features are extracted from various face regions to form complementary and over-complete representations. Any state-of-the-art classifiers can be learned based on these high-level representations for face verification. 97:45 verification accuracy on LFW is achieved with only weakly aligned faces.",
""
]
} |
1603.07057 | 2951552869 | Face recognition capabilities have recently made extraordinary leaps. Though this progress is at least partially due to ballooning training set sizes -- huge numbers of face images downloaded and labeled for identity -- it is not clear if the formidable task of collecting so many images is truly necessary. We propose a far more accessible means of increasing training data sizes for face recognition systems. Rather than manually harvesting and labeling more faces, we simply synthesize them. We describe novel methods of enriching an existing dataset with important facial appearance variations by manipulating the faces it contains. We further apply this synthesis approach when matching query images represented using a standard convolutional neural network. The effect of training and testing with synthesized images is extensively tested on the LFW and IJB-A (verification and identification) benchmarks and Janus CS2. The performances obtained by our approach match state of the art results reported by systems trained on millions of downloaded images. | Though novel network architecture designs can lead to better performance, further improvement can be achieved by collecting more training data. This has been demonstrated by the Google FaceNet team @cite_26 , who developed and trained their system on 200 million images. Besides improving results, they also offered a fascinating analysis of the consequences of adding more data: apparently, there is a significant diminishing returns effect when training with increasing image numbers. Thus, the leap in performance obtained by going from thousands of images to millions is substantial but increasing the numbers further provides smaller and smaller benefits. One way to explain this is that the data they and others used suffers from a long tail phenomenon @cite_32 , where most subjects in these huge datasets have very few images available for the network to learn intra-subject appearance variations from. | {
"cite_N": [
"@cite_26",
"@cite_32"
],
"mid": [
"2096733369",
"1509966554"
],
"abstract": [
"Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.",
"Pushing by big data and deep convolutional neural network (CNN), the performance of face recognition is becoming comparable to human. Using private large scale training datasets, several groups achieve very high performance on LFW, i.e., 97 to 99 . While there are many open source implementations of CNN, none of large scale face dataset is publicly available. The current situation in the field of face recognition is that data is more important than algorithm. To solve this problem, this paper proposes a semi-automatical way to collect face images from Internet and builds a large scale dataset containing about 10,000 subjects and 500,000 images, called CASIAWebFace. Based on the database, we use a 11-layer CNN to learn discriminative representation and obtain state-of-theart accuracy on LFW and YTF. The publication of CASIAWebFace will attract more research groups entering this field and accelerate the development of face recognition in the wild."
]
} |
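
The FaceNet abstract above describes learning an embedding in which Euclidean distance directly encodes face similarity. Below is a minimal sketch of the triplet loss used for that kind of training; the margin value is an assumption:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull anchors toward same-identity positives and push them at least
    `margin` away from different-identity negatives (squared L2 distances).
    Inputs are batches of embedding vectors; margin=0.2 is an assumed value."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return float(np.maximum(0.0, d_pos - d_neg + margin).mean())
```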
1603.07057 | 2951552869 | Face recognition capabilities have recently made extraordinary leaps. Though this progress is at least partially due to ballooning training set sizes -- huge numbers of face images downloaded and labeled for identity -- it is not clear if the formidable task of collecting so many images is truly necessary. We propose a far more accessible means of increasing training data sizes for face recognition systems. Rather than manually harvesting and labeling more faces, we simply synthesize them. We describe novel methods of enriching an existing dataset with important facial appearance variations by manipulating the faces it contains. We further apply this synthesis approach when matching query images represented using a standard convolutional neural network. The effect of training and testing with synthesized images is extensively tested on the LFW and IJB-A (verification and identification) benchmarks and Janus CS2. The performances obtained by our approach match state of the art results reported by systems trained on millions of downloaded images. | These methods were all evaluated on the LFW dataset, which has for some time been the de facto standard for measuring face recognition performance. Many of these LFW results, however, are already reaching near-perfect performances, suggesting that LFW is no longer a challenging benchmark for today's systems. Another relevant benchmark, also frequently used to report performances, is the YouTube Faces (YTF) set @cite_14 . It contains unconstrained face videos rather than images, but it too is quickly being saturated. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2098017479"
],
"abstract": [
"Computer vision systems have demonstrated considerable improvement in recognizing and verifying faces in digital images. Still, recognizing faces appearing in unconstrained, natural conditions remains a challenging task. In this paper, we present a face-image, pair-matching approach primarily developed and tested on the “Labeled Faces in the Wild” (LFW) benchmark that reflects the challenges of face recognition from unconstrained images. The approach we propose makes the following contributions. 1) We present a family of novel face-image descriptors designed to capture statistics of local patch similarities. 2) We demonstrate how unlabeled background samples may be used to better evaluate image similarities. To this end, we describe a number of novel, effective similarity measures. 3) We show how labeled background samples, when available, may further improve classification performance, by employing a unique pair-matching pipeline. We present state-of-the-art results on the LFW pair-matching benchmarks. In addition, we show our system to be well suited for multilabel face classification (recognition) problem, on both the LFW images and on images from the laboratory controlled multi-PIE database."
]
} |
1603.07057 | 2951552869 | Face recognition capabilities have recently made extraordinary leaps. Though this progress is at least partially due to ballooning training set sizes -- huge numbers of face images downloaded and labeled for identity -- it is not clear if the formidable task of collecting so many images is truly necessary. We propose a far more accessible means of increasing training data sizes for face recognition systems. Rather than manually harvesting and labeling more faces, we simply synthesize them. We describe novel methods of enriching an existing dataset with important facial appearance variations by manipulating the faces it contains. We further apply this synthesis approach when matching query images represented using a standard convolutional neural network. The effect of training and testing with synthesized images is extensively tested on the LFW and IJB-A (verification and identification) benchmarks and Janus CS2. The performances obtained by our approach match state of the art results reported by systems trained on millions of downloaded images. | Data augmentation Data augmentation techniques are transformations applied to the images used for training or testing, but without altering their labels. Such methods are well known to improve the performance of CNN-based methods and prevent overfitting @cite_16 . These methods, however, typically involve generic image processing operations which do not exploit knowledge of the underlying problem domain to synthesize new appearance variations. | {
"cite_N": [
"@cite_16"
],
"mid": [
"1994002998"
],
"abstract": [
"The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. Source code and models to reproduce the experiments in the paper is made publicly available."
]
} |
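
The augmentation row above distinguishes generic, label-preserving transforms from the domain-specific synthesis the paper proposes. Here is a minimal sketch of the generic kind; the flip probability and crop ratio are assumptions:

```python
import numpy as np

def augment(image, rng=None):
    """Generic label-preserving augmentation: random horizontal flip
    plus a random 90% crop (both parameter choices are illustrative)."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < 0.5:
        image = image[:, ::-1]                 # horizontal flip
    h, w = image.shape[:2]
    ch, cw = int(0.9 * h), int(0.9 * w)        # crop size
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    return image[y:y + ch, x:x + cw]
```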
1603.07057 | 2951552869 | Face recognition capabilities have recently made extraordinary leaps. Though this progress is at least partially due to ballooning training set sizes -- huge numbers of face images downloaded and labeled for identity -- it is not clear if the formidable task of collecting so many images is truly necessary. We propose a far more accessible means of increasing training data sizes for face recognition systems. Rather than manually harvesting and labeling more faces, we simply synthesize them. We describe novel methods of enriching an existing dataset with important facial appearance variations by manipulating the faces it contains. We further apply this synthesis approach when matching query images represented using a standard convolutional neural network. The effect of training and testing with synthesized images is extensively tested on the LFW and IJB-A (verification and identification) benchmarks and Janus CS2. The performances obtained by our approach match state of the art results reported by systems trained on millions of downloaded images. | Face synthesis for face recognition The idea that face images can be synthetically generated in order to aid face recognition systems is not new. To our knowledge, it was originally proposed in @cite_15 and then effectively used by @cite_41 and @cite_40 . Contrary to us, they all produced frontal faces which are presumably better aligned and easier to compare. They did not use other transformations to generate new images (e.g., other poses, facial expressions). More importantly, their images were used to reduce appearance variability, whereas we propose the opposite: to dramatically increase it to improve both training and testing. | {
"cite_N": [
"@cite_41",
"@cite_40",
"@cite_15"
],
"mid": [
"2145287260",
"1916406603",
"2136863438"
],
"abstract": [
"In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance.",
"“Frontalization” is the process of synthesizing frontal facing views of faces appearing in single unconstrained photos. Recent reports have suggested that this process may substantially boost the performance of face recognition systems. This, by transforming the challenging problem of recognizing faces viewed from unconstrained viewpoints to the easier problem of recognizing faces in constrained, forward facing poses. Previous frontalization methods did this by attempting to approximate 3D facial shapes for each query image. We observe that 3D face shape estimation from unconstrained photos may be a harder problem than frontalization and can potentially introduce facial misalignments. Instead, we explore the simpler approach of using a single, unmodified, 3D surface as an approximation to the shape of all input faces. We show that this leads to a straightforward, efficient and easy to implement method for frontalization. More importantly, it produces aesthetic new frontal views and is surprisingly effective when used for face recognition and gender estimation.",
"We present a data-driven method for estimating the 3D shapes of faces viewed in single, unconstrained photos (aka \"in-the-wild\"). Our method was designed with an emphasis on robustness and efficiency - with the explicit goal of deployment in real-world applications which reconstruct and display faces in 3D. Our key observation is that for many practical applications, warping the shape of a reference face to match the appearance of a query, is enough to produce realistic impressions of the query's 3D shape. Doing so, however, requires matching visual features between the (possibly very different) query and reference images, while ensuring that a plausible face shape is produced. To this end, we describe an optimization process which seeks to maximize the similarity of appearances and depths, jointly, to those of a reference model. We describe our system for monocular face shape reconstruction and present both qualitative and quantitative experiments, comparing our method against alternative systems, and demonstrating its capabilities. Finally, as a testament to its suitability for real-world applications, we offer an open, on-line implementation of our system, providing unique means of instant 3D viewing of faces appearing in web photos."
]
} |
1603.07235 | 2302840847 | Face hallucination, which is the task of generating a high-resolution face image from a low-resolution input image, is a well-studied problem that is useful in widespread application areas. Face hallucination is particularly challenging when the input face resolution is very low (e.g., 10 x 12 pixels) and/or the image is captured in an uncontrolled setting with large pose and illumination variations. In this paper, we revisit the algorithm introduced in [1] and present a deep interpretation of this framework that achieves state-of-the-art under such challenging scenarios. In our deep network architecture the global and local constraints that define a face can be efficiently modeled and learned end-to-end using training data. Conceptually our network design can be partitioned into two sub-networks: the first one implements the holistic face reconstruction according to global constraints, and the second one enhances face-specific details and enforces local patch statistics. We optimize the deep network using a new loss function for super-resolution that combines reconstruction error with a learned face quality measure in an adversarial setting, producing improved visual results. We conduct extensive experiments in both controlled and uncontrolled setups and show that our algorithm improves the state of the art both numerically and visually. | Basic building blocks of our algorithm are well-known neural network architectures such as encoder @cite_8 @cite_28 @cite_4 , convolutional @cite_17 , and deconvolutional @cite_31 @cite_41 neural nets. Our architectural design enables effective learning of global and local constraints that are important for the face upsampling task using these well-known building blocks. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_28",
"@cite_41",
"@cite_31",
"@cite_17"
],
"mid": [
"2184188583",
"2100495367",
"2025768430",
"1903029394",
"",
"2310919327"
],
"abstract": [
"Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.",
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.",
"Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"",
""
]
} |
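
The encoder references in the row above (notably the denoising autoencoder abstract) reduce to a corrupt-encode-decode-reconstruct loop. Below is a minimal forward-pass sketch with assumed layer sizes, nonlinearity, and corruption rate; training updates are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoising_ae_loss(x, w_enc, w_dec, corruption=0.3):
    """Corrupt the input, encode, decode, and score reconstruction of the
    clean input. Corruption rate and tanh nonlinearity are assumptions."""
    corrupted = x * (rng.random(x.shape) > corruption)  # randomly zero inputs
    hidden = np.tanh(corrupted @ w_enc)                 # encoder
    recon = hidden @ w_dec                              # linear decoder
    return float(np.mean((recon - x) ** 2))             # reconstruction error

x = rng.random((8, 64))                        # batch of 8 inputs, 64-dim
w_enc = 0.1 * rng.standard_normal((64, 16))    # 16-dim code (assumed size)
w_dec = 0.1 * rng.standard_normal((16, 64))
print(denoising_ae_loss(x, w_enc, w_dec))
```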
1603.07235 | 2302840847 | Face hallucination, which is the task of generating a high-resolution face image from a low-resolution input image, is a well-studied problem that is useful in widespread application areas. Face hallucination is particularly challenging when the input face resolution is very low (e.g., 10 x 12 pixels) and/or the image is captured in an uncontrolled setting with large pose and illumination variations. In this paper, we revisit the algorithm introduced in [1] and present a deep interpretation of this framework that achieves state-of-the-art under such challenging scenarios. In our deep network architecture the global and local constraints that define a face can be efficiently modeled and learned end-to-end using training data. Conceptually our network design can be partitioned into two sub-networks: the first one implements the holistic face reconstruction according to global constraints, and the second one enhances face-specific details and enforces local patch statistics. We optimize the deep network using a new loss function for super-resolution that combines reconstruction error with a learned face quality measure in an adversarial setting, producing improved visual results. We conduct extensive experiments in both controlled and uncontrolled setups and show that our algorithm improves the state of the art both numerically and visually. | Recently, generative adversarial networks (GANs) @cite_22 have been proposed as an alternative to learn deep generative models. In the GAN framework, a generative network learns to generate samples from a given data distribution, while simultaneously a discriminative network learns to identify the samples that are generated from this network. Since then, GANs have been successfully used for image @cite_22 @cite_18 , scene @cite_15 , and sequence synthesis @cite_42 tasks. In this paper, we use the GAN framework to learn a discriminative network which evaluates face quality, while at the same time optimizing the face super-resolution network according to the learned quality measure. | {
"cite_N": [
"@cite_18",
"@cite_42",
"@cite_22",
"@cite_15"
],
"mid": [
"2951523806",
"2175711684",
"2099471712",
"2173520492"
],
"abstract": [
"In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (). Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40 of the time, compared to 10 for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.",
"The ability to predict future states of the environment is a central pillar of intelligence. At its core, effective prediction requires an internal model of the world and an understanding of the rules by which the world changes. Here, we explore the internal models developed by deep neural networks trained using a loss based on predicting future frames in synthetic video sequences, using a CNN-LSTM-deCNN framework. We first show that this architecture can achieve excellent performance in visual sequence prediction tasks, including state-of-the-art performance in a standard 'bouncing balls' dataset (, 2009). Using a weighted mean-squared error and adversarial loss (, 2014), the same architecture successfully extrapolates out-of-the-plane rotations of computer-generated faces. Furthermore, despite being trained end-to-end to predict only pixel-level information, our Predictive Generative Networks learn a representation of the latent structure of the underlying three-dimensional objects themselves. Importantly, we find that this representation is naturally tolerant to object transformations, and generalizes well to new tasks, such as classification of static images. Similar models trained solely with a reconstruction loss fail to generalize as effectively. We argue that prediction can serve as a powerful unsupervised loss for learning rich internal representations of high-level object features.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations."
]
} |
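
The GAN abstract above defines training as a minimax game between a generator and a discriminator. Below is a minimal sketch of the two loss terms, using the common non-saturating variant for the generator; `d_real` and `d_fake` are assumed to be discriminator output probabilities:

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-8):
    """d_real: D(x) on real samples; d_fake: D(G(z)) on generated samples.
    The discriminator maximizes log D(x) + log(1 - D(G(z))); the generator
    here maximizes log D(G(z)) (non-saturating variant of the minimax loss)."""
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss
```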
1603.07044 | 2303829361 | We apply a general recurrent neural network (RNN) encoder framework to community question answering (cQA) tasks. Our approach does not rely on any linguistic processing, and can be applied to different languages or domains. Further improvements are observed when we extend the RNN encoders with a neural attention mechanism that encourages reasoning over entire sequences. To deal with practical issues such as data sparsity and imbalanced labels, we apply various techniques such as transfer learning and multitask learning. Our experiments on the SemEval-2016 cQA task show a 10% improvement in MAP score compared to an information retrieval-based approach, and achieve comparable performance to a strong handcrafted feature-based method. | Earlier work on community question answering relied heavily on feature engineering, linguistic tools, and external resources. @cite_14 and @cite_2 utilized rich non-textual features such as the answerer's profile. @cite_17 syntactically analyzed the question and extracted named entity features. @cite_15 demonstrated that a textual entailment system can enhance the cQA task by casting question answering as logical entailment. | {
"cite_N": [
"@cite_15",
"@cite_14",
"@cite_17",
"@cite_2"
],
"mid": [
"2112644606",
"2102956348",
"",
"2057415299"
],
"abstract": [
"Work on the semantics of questions has argued that the relation between a question and its answer(s) can be cast in terms of logical entailment. In this paper, we demonstrate how computational systems designed to recognize textual entailment can be used to enhance the accuracy of current open-domain automatic question answering (Q A) systems. In our experiments, we show that when textual entailment information is used to either filter or rank answers returned by a Q A system, accuracy can be increased by as much as 20 overall.",
"New types of document collections are being developed by various web services. The service providers keep track of non-textual features such as click counts. In this paper, we present a framework to use non-textual features to predict the quality of documents. We also show our quality measure can be successfully incorporated into the language modeling-based retrieval model. We test our approach on a collection of question and answer pairs gathered from a community based question answering service where people ask and answer questions. Experimental results using our quality measure show a significant improvement over our baseline.",
"",
"Question answering (QA) helps one go beyond traditional keywords-based querying and retrieve information in more precise form than given by a document or a list of documents. Several community-based QA (CQA) services have emerged allowing information seekers pose their information need as questions and receive answers from their fellow users. A question may receive multiple answers from multiple users and the asker or the community can choose the best answer. While the asker can thus indicate if he was satisfied with the information he received, there is no clear way of evaluating the quality of that information. We present a study to evaluate and predict the quality of an answer in a CQA setting. We chose Yahoo! Answers as such CQA service and selected a small set of questions, each with at least five answers. We asked Amazon Mechanical Turk workers to rate the quality of each answer for a given question based on 13 different criteria. Each answer was rated by five different workers. We then matched their assessments with the actual asker's rating of a given answer. We show that the quality criteria we used faithfully match with asker's perception of a quality answer. We furthered our investigation by extracting various features from questions, answers, and the users who posted them, and training a number of classifiers to select the best answer using those features. We demonstrate a high predictability of our trained models along with the relative merits of each of the features for such prediction. These models support our argument that in case of CQA, contextual information such as a user's profile, can be critical in evaluating and predicting content quality."
]
} |
1603.07044 | 2303829361 | We apply a general recurrent neural network (RNN) encoder framework to community question answering (cQA) tasks. Our approach does not rely on any linguistic processing, and can be applied to different languages or domains. Further improvements are observed when we extend the RNN encoders with a neural attention mechanism that encourages reasoning over entire sequences. To deal with practical issues such as data sparsity and imbalanced labels, we apply various techniques such as transfer learning and multitask learning. Our experiments on the SemEval-2016 cQA task show a 10% improvement in MAP score compared to an information retrieval-based approach, and achieve comparable performance to a strong handcrafted feature-based method. | More recent work incorporated word vectors into the feature extraction system and, based on them, designed different distance metrics for questions and answers @cite_11 @cite_12 . While these approaches showed effectiveness, it is difficult to generalize them to common cQA tasks, since linguistic tools and external resources may be restrictive in other languages and the features are highly customized for each cQA task. | {
"cite_N": [
"@cite_12",
"@cite_11"
],
"mid": [
"2251181482",
"2153045205"
],
"abstract": [
"Continuous word and phrase vectors have proven useful in a number of NLP tasks. Here we describe our experience using them as a source of features for the SemEval-2015 task 3, consisting of two community question answering subtasks: Answer Selection for categorizing answers as potential, good, and bad with regards to their corresponding questions; and YES NO inference for predicting a yes, no, or unsure response to a YES NO question using all of its good answers. Our system ranked 6th and 1st in the English answer selection and YES NO inference subtasks respectively, and 2nd in the Arabic answer selection subtask.",
"In this paper, we describe our system for SemEval-2015 Task 3: Answer Selection in Community Question Answering. In this task, the systems are required to identify the good or potentially good answers from the answer thread in Community Question Answering collections. Our system combines 16 features belong to 5 groups to predict answer quality. Our final model achieves the best result in subtask A for English, both in accuracy and F1score."
]
} |
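
The row above describes systems that turn word vectors into question-answer distance features. Here is a minimal sketch of one such feature; mean pooling is an assumed choice, not necessarily what the cited systems use:

```python
import numpy as np

def cosine_qa_feature(question_vecs, answer_vecs):
    """Cosine similarity between mean-pooled question and answer embeddings.
    Inputs: (num_words, dim) arrays of pretrained word vectors."""
    q = question_vecs.mean(axis=0)
    a = answer_vecs.mean(axis=0)
    return float(q @ a / (np.linalg.norm(q) * np.linalg.norm(a) + 1e-8))
```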
1603.07044 | 2303829361 | We apply a general recurrent neural network (RNN) encoder framework to community question answering (cQA) tasks. Our approach does not rely on any linguistic processing, and can be applied to different languages or domains. Further improvements are observed when we extend the RNN encoders with a neural attention mechanism that encourages reasoning over entire sequences. To deal with practical issues such as data sparsity and imbalanced labels, we apply various techniques such as transfer learning and multitask learning. Our experiments on the SemEval-2016 cQA task show a 10% improvement in MAP score compared to an information retrieval-based approach, and achieve comparable performance to a strong handcrafted feature-based method. | Very recent work on answer selection also involved the use of neural networks. @cite_8 used an LSTM to construct a joint vector based on both the question and the answer, and then cast it as a learning-to-rank problem. @cite_7 proposed several convolutional neural network (CNN) architectures for cQA. Our method differs in that an RNN encoder is applied here, and by adding an attention mechanism we jointly learn which words in the question to focus on, which also enables qualitative analysis. During classification, we feed the extracted vector into a feed-forward neural network directly, instead of using mean/max pooling over the time steps. | {
"cite_N": [
"@cite_7",
"@cite_8"
],
"mid": [
"1957644519",
"2251202616"
],
"abstract": [
"We apply a general deep learning framework to address the non-factoid question answering task. Our approach does not rely on any linguistic tools and can be applied to different languages or domains. Various architectures are presented and compared. We create and release a QA corpus and setup a new QA task in the insurance domain. Experimental results demonstrate superior performance compared to the baseline methods and various technologies give further improvements. For this highly challenging task, the top-1 accuracy can reach up to 65.3 on a test set, which indicates a great potential for practical use.",
"In this paper, we present an approach that address the answer sentence selection problem for question answering. The proposed method uses a stacked bidirectional Long-Short Term Memory (BLSTM) network to sequentially read words from question and answer sentences, and then outputs their relevance scores. Unlike prior work, this approach does not require any syntactic parsing or external knowledge resources such as WordNet which may not be available in some domains or languages. The full system is based on a combination of the stacked BLSTM relevance model and keywords matching. The results of our experiments on a public benchmark dataset from TREC show that our system outperforms previous work which requires syntactic features and external knowledge resources."
]
} |
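
The row above contrasts the paper's attention-augmented RNN encoder with mean/max pooling over time steps. Below is a minimal sketch of dot-product attention over per-word encoder states; the scoring function is an assumption, since the paper's exact formulation is not given here:

```python
import numpy as np

def attend(encoder_states, query):
    """encoder_states: (T, d) one hidden state per word; query: (d,).
    Returns attention weights over words and the attended summary vector."""
    scores = encoder_states @ query               # (T,) relevance per word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over time steps
    context = weights @ encoder_states            # weighted sum of states
    return weights, context
```

The weights make it possible to inspect which question words the model focuses on, which is the kind of qualitative analysis the related-work passage mentions.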
1603.07076 | 2953271464 | We propose a viewpoint invariant model for 3D human pose estimation from a single depth image. To achieve this, our discriminative model embeds local regions into a learned viewpoint invariant feature space. Formulated as a multi-task learning problem, our model is able to selectively predict partial poses in the presence of noise and occlusion. Our approach leverages a convolutional and recurrent network architecture with a top-down error feedback mechanism to self-correct previous pose estimates in an end-to-end manner. We evaluate our model on a previously published depth dataset and a newly collected human pose dataset containing 100K annotated depth images from extreme viewpoints. Experiments show that our model achieves competitive performance on frontal views while achieving state-of-the-art performance on alternate viewpoints. | . Several methods have been proposed for human pose estimation, including edge-based histograms of the human body @cite_55 and silhouette contours @cite_21 . More general techniques using pictorial structures @cite_39 @cite_36 @cite_49 and deformable part models @cite_15 continued to build appearance models for each local body part independently. Subsequently, higher-level part-based models were developed to capture more complex body part relationships and obtain more discriminative templates @cite_9 @cite_3 @cite_43 @cite_19 @cite_54 . | {
"cite_N": [
"@cite_36",
"@cite_55",
"@cite_21",
"@cite_9",
"@cite_54",
"@cite_3",
"@cite_39",
"@cite_43",
"@cite_19",
"@cite_49",
"@cite_15"
],
"mid": [
"2143478373",
"",
"2116735535",
"2097151019",
"2009647132",
"2143487029",
"2030536784",
"132911827",
"2128271252",
"2074587583",
"2168356304"
],
"abstract": [
"In this work, we address the problem of estimating 2d human pose from still images. Recent methods that rely on discriminatively trained deformable parts organized in a tree model have shown to be very successful in solving this task. Within such a pictorial structure framework, we address the problem of obtaining good part templates by proposing novel, non-linear joint regressors. In particular, we employ two-layered random forests as joint regressors. The first layer acts as a discriminative, independent body part classifier. The second layer takes the estimated class distributions of the first one into account and is thereby able to predict joint locations by modeling the interdependence and co-occurrence of the parts. This results in a pose estimation framework that takes dependencies between body parts already for joint localization into account and is thus able to circumvent typical ambiguities of tree structures, such as for legs and arms. In the experiments, we demonstrate that our body parts dependent joint regressors achieve a higher joint localization accuracy than tree-based state-of-the-art methods.",
"",
"We present an image-based approach to infer 3D structure parameters using a probabilistic \"shape+structure\" model. The 3D shape of an object class is represented by sets of contours from silhouette views simultaneously observed from multiple calibrated cameras, while structural features of interest on the object are denoted by a number of 3D locations. A prior density over the multiview shape and corresponding structure is constructed with a mixture of probabilistic principal components analyzers. Given a novel set of contours, we infer the unknown structure parameters from the new shape's Bayesian reconstruction. Model matching and parameter inference are done entirely in the image domain and require no explicit 3D construction. Our shape model enables accurate estimation of structure despite segmentation errors or missing views in the input silhouettes, and it works even with only a single input view. Using a training set of thousands of pedestrian images generated from a synthetic model, we can accurately infer the 3D locations of 19 joints on the body based on observed silhouette contours from real images.",
"In this paper we consider the challenging problem of articulated human pose estimation in still images. We observe that despite high variability of the body articulations, human motions and activities often simultaneously constrain the positions of multiple body parts. Modelling such higher order part dependencies seemingly comes at a cost of more expensive inference, which resulted in their limited use in state-of-the-art methods. In this paper we propose a model that incorporates higher order part dependencies while remaining efficient. We achieve this by defining a conditional model in which all body parts are connected a-priori, but which becomes a tractable tree-structured pictorial structures model once the image observations are available. In order to derive a set of conditioning variables we rely on the poselet-based features that have been shown to be effective for people detection but have so far found limited application for articulated human pose estimation. We demonstrate the effectiveness of our approach on three publicly available pose estimation benchmarks improving or being on-par with state of the art in each case.",
"",
"Typical approaches to articulated pose estimation combine spatial modelling of the human body with appearance modelling of body parts. This paper aims to push the state-of-the-art in articulated pose estimation in two ways. First we explore various types of appearance representations aiming to substantially improve the body part hypotheses. And second, we draw on and combine several recently proposed powerful ideas such as more flexible spatial models as well as image-conditioned spatial models. In a series of experiments we draw several important conclusions: (1) we show that the proposed appearance representations are complementary, (2) we demonstrate that even a basic tree-structure spatial human body model achieves state-of-the-art performance when augmented with the proper appearance representation, and (3) we show that the combination of the best performing appearance model with a flexible image-conditioned spatial model achieves the best result, significantly improving over the state of the art, on the Leeds Sports Poses'' and Parse'' benchmarks.",
"In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.",
"While human pose estimation (HPE) techniques usually process each test image independently, in real applications images come in collections containing interdependent images. Often several images have similar backgrounds or show persons wearing similar clothing (foreground). We present a novel human pose estimation technique to exploit these dependencies by sharing appearance models between images. Our technique automatically determines which images in the collection should share appearance. We extend the state-of-the art HPE model of Yang and Ramanan to include our novel appearance sharing cues and demonstrate on the highly challenging Leeds Sports Poses dataset that they lead to better results than traditional single-image pose estimation.",
"We propose a multimodal, decomposable model for articulated human pose estimation in monocular images. A typical approach to this problem is to use a linear structured model, which struggles to capture the wide range of appearance present in realistic, unconstrained images. In this paper, we instead propose a model of human pose that explicitly captures a variety of pose modes. Unlike other multimodal models, our approach includes both global and local pose cues and uses a convex objective and joint training for mode selection and pose estimation. We also employ a cascaded mode selection step which controls the trade-off between speed and accuracy, yielding a 5x speedup in inference and learning. Our model outperforms state-of-the-art approaches across the accuracy-speed trade-off curve for several pose datasets. This includes our newly-collected dataset of people in movies, FLIC, which contains an order of magnitude more labeled data for training and testing than existing datasets.",
"We present a technique for estimating the spatial layout of humans in still images--the position of the head, torso and arms. The theme we explore is that once a person is localized using an upper body detector, the search for their body parts can be considerably simplified using weak constraints on position and appearance arising from that detection. Our approach is capable of estimating upper body pose in highly challenging uncontrolled images, without prior knowledge of background, clothing, lighting, or the location and scale of the person in the image. People are only required to be upright and seen from the front or the back (not side). We evaluate the stages of our approach experimentally using ground truth layout annotation on a variety of challenging material, such as images from the PASCAL VOC 2008 challenge and video frames from TV shows and feature films. We also propose and evaluate techniques for searching a video dataset for people in a specific pose. To this end, we develop three new pose descriptors and compare their classification and retrieval performance to two baselines built on state-of-the-art object detection models.",
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function."
]
} |
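As a worked equation for the part-based models surveyed in the row above, the tree-structured pictorial-structures objective they share can be written as follows (our own schematic notation, not the exact formulation of any single cited paper):

```latex
% Pictorial structures: part locations l_i, unary appearance costs m_i,
% pairwise deformation ("spring") costs d_ij over the tree edges E.
\begin{equation*}
L^{*} = \arg\min_{L=(l_1,\dots,l_n)}
  \sum_{i=1}^{n} m_i(l_i) + \sum_{(i,j)\in E} d_{ij}(l_i, l_j)
\end{equation*}
```

Because E is a tree, this minimization is solved exactly by dynamic programming from the leaves to the root, which is what keeps inference in these models tractable.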
1603.07076 | 2953271464 | We propose a viewpoint invariant model for 3D human pose estimation from a single depth image. To achieve this, our discriminative model embeds local regions into a learned viewpoint invariant feature space. Formulated as a multi-task learning problem, our model is able to selectively predict partial poses in the presence of noise and occlusion. Our approach leverages a convolutional and recurrent network architecture with a top-down error feedback mechanism to self-correct previous pose estimates in an end-to-end manner. We evaluate our model on a previously published depth dataset and a newly collected human pose dataset containing 100K annotated depth images from extreme viewpoints. Experiments show that our model achieves competitive performance on frontal views while achieving state-of-the-art performance on alternate viewpoints. | These models continued to evolve, attempting to capture even higher-level part features. Convolutional networks @cite_6 @cite_44 , a class of representation learning methods @cite_52 , began to exhibit performance gains not only in human pose estimation, but also in various other areas of computer vision @cite_0 . Since valid human poses represent a much lower-dimensional manifold in the high-dimensional input space, it is difficult to directly regress from the input image to output poses with a convolutional network. As a solution to this, researchers framed the problem as a multi-task learning problem where human joints must first be detected and then precisely localized @cite_20 @cite_22 @cite_10 (a schematic multi-task loss is sketched after this row). The authors of @cite_30 enforce global pose consistency with a Markov random field representing human anatomical constraints. Follow-up work @cite_28 combines a convolutional network part-detector with a part-based spatial model into a unified framework. | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_28",
"@cite_52",
"@cite_6",
"@cite_44",
"@cite_0",
"@cite_10",
"@cite_20"
],
"mid": [
"1816678934",
"1948369226",
"2136391815",
"2163922914",
"2154579312",
"2951005624",
"",
"2949812103",
"2052678124"
],
"abstract": [
"Abstract: This paper introduces a new architecture for human pose estimation using a multi- layer convolutional network architecture and a modified learning technique that learns low-level features and higher-level weak spatial models. Unconstrained human pose estimation is one of the hardest problems in computer vision, and our new architecture and learning schema shows significant improvement over the current state-of-the-art results. The main contribution of this paper is showing, for the first time, that a specific variation of deep learning is able to outperform all existing traditional architectures on this task. The paper also discusses several lessons learned while researching alternatives, most notably, that it is possible to learn strong low-level feature detectors on features that might even just cover a few pixels in the image. Higher-level spatial models improve somewhat the overall result, but to a much lesser extent then expected. Many researchers previously argued that the kinematic structure and top-down information is crucial for this domain, but with our purely bottom up, and weak spatial model, we could improve other more complicated architectures that currently produce the best results. This mirrors what many other researchers, like those in the speech recognition, object recognition, and other domains have experienced.",
"We propose a new learning-based method for estimating 2D human pose from a single image, using Dual-Source Deep Convolutional Neural Networks (DS-CNN). Recently, many methods have been developed to estimate human pose by using pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective. In this paper, we propose to integrate both the local (body) part appearance and the holistic view of each local part for more accurate human pose estimation. Specifically, the proposed DS-CNN takes a set of image patches (category-independent object proposals for training and multi-scale sliding windows for testing) as the input and then learns the appearance of each local part by considering their holistic views in the full body. Using DS-CNN, we achieve both joint detection, which determines whether an image patch contains a body joint, and joint localization, which finds the exact location of the joint in the image patch. Finally, we develop an algorithm to combine these joint detection localization results from all the image patches for estimating the human pose. The experimental results show the effectiveness of the proposed method by comparing to the state-of-the-art human-pose estimation methods based on pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective.",
"This paper proposes a new hybrid architecture that consists of a deep Convolu-tional Network and a Markov Random Field. We show how this architecture is successfully applied to the challenging problem of articulated human pose estimation in monocular images. The architecture can exploit structural domain constraints such as geometric relationships between body joint locations. We show that joint training of these two model paradigms improves performance and allows us to significantly outperform existing state-of-the-art techniques.",
"The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.",
"We present an application of back-propagation networks to handwritten digit recognition. Minimal preprocessing of the data was required, but architecture of the network was highly constrained and specifically designed for the task. The input of the network consists of normalized images of isolated digits. The method has 1 error rate and about a 9 reject rate on zipcode digits provided by the U.S. Postal Service.",
"Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.",
"",
"This paper focuses on structured-output learning using deep neural networks for 3D human pose estimation from monocular images. Our network takes an image and 3D pose as inputs and outputs a score value, which is high when the image-pose pair matches and low otherwise. The network structure consists of a convolutional neural network for image feature extraction, followed by two sub-networks for transforming the image features and pose into a joint embedding. The score function is then the dot-product between the image and pose embeddings. The image-pose embedding and score function are jointly trained using a maximum-margin cost function. Our proposed framework can be interpreted as a special form of structured support vector machines where the joint feature space is discriminatively learned using deep neural networks. We test our framework on the Human3.6m dataset and obtain state-of-the-art results compared to other recent methods. Finally, we present visualizations of the image-pose embedding space, demonstrating the network has learned a high-level embedding of body-orientation and pose-configuration.",
"We propose a heterogeneous multi-task learning framework for human pose estimation from monocular images using a deep convolutional neural network. In particular, we simultaneously learn a human pose regressor and sliding-window body-part and joint-point detectors in a deep network architecture. We show that including the detection tasks helps to regularize the network, directing it to converge to a good solution. We report competitive and state-of-art results on several datasets. We also empirically show that the learned neurons in the middle layer of our network are tuned to localized body parts."
]
} |
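The detect-then-localize multi-task framing sketched in the row above can be illustrated with a small example, assuming PyTorch; the trunk, head shapes, and loss weighting below are illustrative assumptions rather than any cited paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskPoseNet(nn.Module):
    def __init__(self, n_joints=14):
        super().__init__()
        # Shared convolutional trunk over a single-channel depth image.
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.detect = nn.Linear(32, n_joints)       # joint visibility logits
        self.locate = nn.Linear(32, 2 * n_joints)   # (x, y) per joint

    def forward(self, depth):
        h = self.trunk(depth)
        return self.detect(h), self.locate(h)

def multitask_loss(det_logits, coords, vis_labels, gt_coords, w=1.0):
    # Detection loss over all joints; regression loss only on visible joints,
    # which is what allows partial-pose prediction under occlusion.
    det = F.binary_cross_entropy_with_logits(det_logits, vis_labels)
    mask = vis_labels.repeat_interleave(2, dim=1)    # expand to (x, y) pairs
    reg = (mask * (coords - gt_coords) ** 2).sum() / mask.sum().clamp(min=1)
    return det + w * reg

net = MultiTaskPoseNet()
det, loc = net(torch.randn(8, 1, 64, 64))
loss = multitask_loss(det, loc, torch.ones(8, 14), torch.randn(8, 28))
```

Masking the regression loss with the visibility labels is the simple mechanism that lets such a model commit only to joints it can actually see, echoing the selective partial-pose prediction described in the abstract above.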
1603.07076 | 2953271464 | We propose a viewpoint invariant model for 3D human pose estimation from a single depth image. To achieve this, our discriminative model embeds local regions into a learned viewpoint invariant feature space. Formulated as a multi-task learning problem, our model is able to selectively predict partial poses in the presence of noise and occlusion. Our approach leverages a convolutional and recurrent network architecture with a top-down error feedback mechanism to self-correct previous pose estimates in an end-to-end manner. We evaluate our model on a previously published depth dataset and a newly collected human pose dataset containing 100K annotated depth images from extreme viewpoints. Experiments show that our model achieves competitive performance on frontal views while achieving state-of-the-art performance on alternate viewpoints. | Because human pose estimation is ultimately a structured prediction task, it is difficult for convolutional networks to correctly regress the full pose in a single pass. Recently, iterative refinement techniques have been proposed to address this issue. In @cite_31 , a multi-stage system of convolutional networks was proposed for predicting facial point locations. Each stage refines the output from the previous stage given a local region of the input. Building on this work, DeepPose @cite_40 uses a cascade of convolutional networks for full-body pose estimation. In another body of work, instead of predicting absolute human joint locations, the authors of @cite_51 refine pose estimates by predicting error feedback (i.e. corrections) at each iteration (a minimal sketch of this refinement loop follows this row). | {
"cite_N": [
"@cite_40",
"@cite_31",
"@cite_51"
],
"mid": [
"2113325037",
"1976948919",
"1537698211"
],
"abstract": [
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regres- sors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formula- tion which capitalizes on recent advances in Deep Learn- ing. We present a detailed empirical analysis with state-of- art or better performance on four academic benchmarks of diverse real-world images.",
"We propose a new approach for estimation of the positions of facial key points with three-level carefully designed convolutional networks. At each level, the outputs of multiple networks are fused for robust and accurate estimation. Thanks to the deep structures of convolutional networks, global high-level features are extracted over the whole face region at the initialization stage, which help to locate high accuracy key points. There are two folds of advantage for this. First, the texture context information over the entire face is utilized to locate each key point. Second, since the networks are trained to predict all the key points simultaneously, the geometric constraints among key points are implicitly encoded. The method therefore can avoid local minimum caused by ambiguity and data corruption in difficult image samples due to occlusions, large pose variations, and extreme lightings. The networks at the following two levels are trained to locally refine initial predictions and their inputs are limited to small regions around the initial predictions. Several network structures critical for accurate and robust facial point detection are investigated. Extensive experiments show that our approach outperforms state-of-the-art methods in both detection accuracy and reliability.",
"Hierarchical feature extractors such as Convolutional Networks (ConvNets) have achieved impressive performance on a variety of classification tasks using purely feedforward processing. Feedforward architectures can learn rich representations of the input space but do not explicitly model dependencies in the output spaces, that are quite structured for tasks such as articulated human pose estimation or object segmentation. Here we propose a framework that expands the expressive power of hierarchical feature extractors to encompass both input and output spaces, by introducing top-down feedback. Instead of directly predicting the outputs in one go, we use a self-correcting model that progressively changes an initial solution by feeding back error predictions, in a process we call Iterative Error Feedback (IEF). IEF shows excellent performance on the task of articulated pose estimation in the challenging MPII and LSP benchmarks, matching the state-of-the-art without requiring ground truth scale annotation."
]
} |
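The iterative error feedback idea referenced above reduces to a short loop: keep a current pose estimate and repeatedly add a bounded, predicted correction. Below is a minimal numpy sketch; the corrector is a hypothetical stand-in, whereas in @cite_51 it is a convolutional network that also sees a rendering of the current pose.

```python
import numpy as np

def iterative_error_feedback(image_feat, y0, predict_correction,
                             steps=4, max_step=20.0):
    """image_feat: input-image features; y0: initial mean pose, shape (J, 2)."""
    y = y0.copy()
    for _ in range(steps):
        eps = predict_correction(image_feat, y)   # predicted error, shape (J, 2)
        eps = np.clip(eps, -max_step, max_step)   # bounded corrections
        y = y + eps                               # refine the estimate
    return y

# Toy usage: a corrector that proposes 40% of the gap to a fixed target pose;
# even with bounded steps the estimate converges toward the target.
target = np.array([[50.0, 80.0], [60.0, 120.0]])
corrector = lambda feat, y: 0.4 * (target - y)
print(iterative_error_feedback(None, np.zeros((2, 2)), corrector, steps=8))
```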
1603.07076 | 2953271464 | We propose a viewpoint invariant model for 3D human pose estimation from a single depth image. To achieve this, our discriminative model embeds local regions into a learned viewpoint invariant feature space. Formulated as a multi-task learning problem, our model is able to selectively predict partial poses in the presence of noise and occlusion. Our approach leverages a convolutional and recurrent network architecture with a top-down error feedback mechanism to self-correct previous pose estimates in an end-to-end manner. We evaluate our model on a previously published depth dataset and a newly collected human pose dataset containing 100K annotated depth images from extreme viewpoints. Experiments show that our model achieves competitive performance on frontal views while achieving state-of-the-art performance on alternate viewpoints. | Both generative and discriminative models have been proposed. Generative models (i.e. top-down approaches) fit a human body template, with parametric or non-parametric methods, to the input data. Dense point clouds provided by depth sensors motivate the use of iterative closest point algorithms @cite_41 @cite_32 @cite_11 @cite_7 (the basic rigid ICP loop is sketched after this row) and database lookups @cite_47 . To further constrain the output space similar to RGB methods, graphical models @cite_59 @cite_38 impose kinematic constraints to improve full-body pose estimation. Other methods such as kernel methods with kinematic chain structures @cite_1 and template fitting with Gaussian mixture models @cite_34 have been proposed. | {
"cite_N": [
"@cite_38",
"@cite_7",
"@cite_41",
"@cite_1",
"@cite_32",
"@cite_59",
"@cite_47",
"@cite_34",
"@cite_11"
],
"mid": [
"",
"2169172998",
"64404897",
"1900223672",
"1515963346",
"2081198212",
"2157938685",
"1977039804",
"1709959390"
],
"abstract": [
"",
"This paper proposes a tracking system called VooDoo for 3D tracking of human body movements based on a 3D body model and the iterative closest point (ICP) algorithm. The proposed approach is able to incorporate raw data from different input sensors, as well as results from feature trackers in 2D or 3D. All input data is processed within the same model fitting step by modeling all input measurements in 3D model space. The system has been implemented and runs in realtime at appr. 10-14 Hz. Experiments with complex human movements exhibit the characteristics and advantages of the proposed approach",
"Tracking human pose in real-time is a difficult problem with many interesting applications. Existing solutions suffer from a variety of problems, especially when confronted with unusual human poses. In this paper, we derive an algorithm for tracking human pose in real-time from depth sequences based on MAP inference in a probabilistic temporal model. The key idea is to extend the iterative closest points (ICP) objective by modeling the constraint that the observed subject cannot enter free space, the area of space in front of the true range measurements. Our primary contribution is an extension to the articulated ICP algorithm that can efficiently enforce this constraint. The resulting filter runs at 125 frames per second using a single desktop CPU core. We provide extensive experimental results on challenging real-world data, which show that the algorithm outperforms the previous state-of-the-art trackers both in computational efficiency and accuracy.",
"In this paper, we address the problem of human pose estimation through a novel articulated Gaussian kernel correlation function which is applied to human pose tracking from a single depth sensor. We first derive a unified Gaussian kernel correlation that can generalize the previous Sum-of-Gaussians (SoG)-based methods for the similarity measure between a template and the observation. Furthermore, we develop an articulated Gaussian kernel correlation by embedding a tree-structured skeleton model, which enables us to estimate the full-body pose parameters. Also, the new kernel correlation framework can easily penalize undesired body intersection which is more natural than the clamping function in previous methods. Our algorithm is general, simple yet effective and can achieve real-time performance. The experimental results on a public depth dataset are promising and competitive when compared with state-of-the-art algorithms.",
"This paper focuses on real-time markerless motion capture. The body pose of a person is estimated from depth images using an Iterative Closest Point algorithm. We present a very efficient approach, that estimates up to 28 degrees of freedom from 1000 data points with 4Hz. This is achieved by nonlinear optimization techniques using an analytically derived Jacobian and highly optimized correspondence search.",
"Depth-images-based human pose estimation is facing two challenges: how to extract features which are discriminative to variations in human poses and robust against noise, and how to reliably learn body joints based on their dependence structure. To tackle the first problem, we propose a novel 3D Local Shape Context feature extracted from human body silhouette to characterise the local structure of body joints. To tackle the second problem, we incorporate a graphical model into regression forests to exploit structural constrains. Experiments demonstrate that our method can efficiently learn local body structures and localise joints. Compared with the state-of-the-art methods, our method significantly improves the accuracy of pose estimation from depth images.",
"This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud. The initial estimation is then refined by directly fitting the body configuration with the observation (e.g., the input depth). In addition to the new system architecture, our other contributions include modifying a point cloud smoothing technique to deal with very noisy input depth maps, a point cloud alignment and pose search algorithm that is view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-art methods.",
"In this paper we present a novel real-time algorithm for simultaneous pose and shape estimation for articulated objects, such as human beings and animals. The key of our pose estimation component is to embed the articulated deformation model with exponential-maps-based parametrization into a Gaussian Mixture Model. Benefiting from the probabilistic measurement model, our algorithm requires no explicit point correspondences as opposed to most existing methods. Consequently, our approach is less sensitive to local minimum and well handles fast and complex motions. Extensive evaluations on publicly available datasets demonstrate that our method outperforms most state-of-art pose estimation algorithms with large margin, especially in the case of challenging motions. Moreover, our novel shape adaptation algorithm based on the same probabilistic model automatically captures the shape of the subjects during the dynamic pose estimation process. Experiments show that our shape estimation method achieves comparable accuracy with state of the arts, yet requires neither parametric model nor extra calibration procedure.",
"The iterative closest point (ICP) algorithm [2] is a popular method for modeling 3D objects from range data. The classical ICP algorithm rests on a rigid surface assumption. Building on recent work on nonrigid object models [5; 16; 9], this paper presents an ICP algorithm capable of modeling nonrigid objects, where individual scans may be subject to local deformations. We describe an integrated mathematical framework for simultaneously registering scans and recovering the surface configuration. To tackle the resulting high-dimensional optimization problems, we introduce a hierarchical method that first matches a coarse skeleton of scan points, then adapts local scan patches. The approach is implemented for a mobile robot capable of acquiring 3D models of objects."
]
} |
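Since several of the generative methods above build on iterative closest point fitting, a minimal rigid ICP loop may help fix ideas: alternate nearest-neighbour correspondences with a closed-form SVD (Kabsch) alignment. This is a generic numpy/scipy sketch, not any cited paper's articulated tracker, which would apply such updates per body segment under kinematic constraints.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(src, dst, iters=20):
    """Align point set src (N, 3) to dst (M, 3); returns the transformed src."""
    cur = src.copy()
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(cur)                 # 1. closest-point matches
        matched = dst[idx]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)              # 2. optimal rotation (Kabsch)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t                      # 3. apply the rigid update
    return cur

# Toy usage: recover a small rotation of a random cloud.
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
aligned = icp_rigid(pts @ Rz.T, pts)
print(np.abs(aligned - pts).max())  # residual; should be near zero here
```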
1603.07076 | 2953271464 | We propose a viewpoint invariant model for 3D human pose estimation from a single depth image. To achieve this, our discriminative model embeds local regions into a learned viewpoint invariant feature space. Formulated as a multi-task learning problem, our model is able to selectively predict partial poses in the presence of noise and occlusion. Our approach leverages a convolutional and recurrent network architecture with a top-down error feedback mechanism to self-correct previous pose estimates in an end-to-end manner. We evaluate our model on a previously published depth dataset and a newly collected human pose dataset containing 100K annotated depth images from extreme viewpoints. Experiments show that our model achieves competitive performance on frontal views while achieving state-of-the-art performance on alternate viewpoints. | Discriminative methods (i.e. bottom-up approaches) detect instances of body parts instead of fitting a skeleton template. In @cite_8 , a random forest classifier was trained for body part segmentation from a single depth image, and mean shift was then used to estimate joint locations (the per-pixel depth-comparison features behind this line of work are sketched after this row). This work inspired an entire line of depth-based pose estimation research exploring regression tree methods: Hough forests @cite_42 , random ferns @cite_23 , and random tree walks @cite_26 have been proposed in recent years. | {
"cite_N": [
"@cite_42",
"@cite_26",
"@cite_23",
"@cite_8"
],
"mid": [
"2000205118",
"1912860515",
"2237001323",
"2172156083"
],
"abstract": [
"We present a new approach to general-activity human pose estimation from depth images, building on Hough forests. We extend existing techniques in several ways: real time prediction of multiple 3D joints, explicit learning of voting weights, vote compression to allow larger training sets, and a comparison of several decision-tree training objectives. Key aspects of our work include: regression directly from the raw depth image, without the use of an arbitrary intermediate representation; applicability to general motions (not constrained to particular activities) and the ability to localize occluded as well as visible body joints. Experimental results demonstrate that our method produces state of the art results on several data sets including the challenging MSRC-5000 pose estimation test set, at a speed of about 200 frames per second. Results on silhouettes suggest broader applicability to other imaging modalities.",
"The availability of accurate depth cameras have made real-time human pose estimation possible; however, there are still demands for faster algorithms on low power processors. This paper introduces 1000 frames per second pose estimation method on a single core CPU. A large computation gain is achieved by random walk sub-sampling. Instead of training trees for pixel-wise classification, a regression tree is trained to estimate the probability distribution to the direction toward the particular joint, relative to the current position. At test time, the direction for the random walk is randomly chosen from a set of representative directions. The new position is found by a constant step toward the direction, and the distribution for next direction is found at the new position. The continual random walk through 3D space will eventually produce an expectation of step positions, which we estimate as the joint position. A regression tree is built separately for each joint. The number of random walk steps can be assigned for each joint so that the computation time is consistent regardless of the size of body segmentation. The experiments show that even with large computation gain, the accuracy is higher or comparable to the state-of-the-art pose estimation methods.",
"In recent years, many systems for motion analysis of infants have been developed which either use markers or lack 3D information. We propose a system that can be trained fast and flexibly to fit the requirements of markerless 3D movement analysis of infants. Random Ferns are used as an efficient and robust alternative to Random Forests to find the 3D positions of body joints in single depth images. The training time is reduced by several orders of magnitude compared to the Kinect approach using a similar amount of data. Our system is trained in 9 hours on a 32 core workstation opposed to 24 hours on a 1000 core cluster, achieving comparable accuracy to the Kinect SDK on a publicly available pose estimation benchmark dataset containing adults. On manually annotated recordings of an infant, we obtain an average distance error over all joints of 41 mm. Building on the proposed approach, we aim to develop an automated, unintrusive, cheap and objective system for the early detection of infantile movement disorders like cerebral palsy using 3D motion analysis techniques.",
"We propose a new method to quickly and accurately predict 3D positions of body joints from a single depth image, using no temporal information. We take an object recognition approach, designing an intermediate body parts representation that maps the difficult pose estimation problem into a simpler per-pixel classification problem. Our large and highly varied training dataset allows the classifier to estimate body parts invariant to pose, body shape, clothing, etc. Finally we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs at 200 frames per second on consumer hardware. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state of the art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching."
]
} |
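The regression-tree family above rests on cheap per-pixel depth-comparison features. The sketch below, in the spirit of @cite_8, compares the depth at two offsets scaled by the inverse depth at the pixel (making the response roughly depth-invariant) and feeds the features to a stock random forest; the offsets, sizes, and classifier are illustrative assumptions, not the paper's own training procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def depth_features(depth, pixels, offsets):
    """depth: (H, W) map; pixels: (N, 2) row/col; offsets: (F, 2, 2) pairs (u, v)."""
    H, W = depth.shape
    d = depth[pixels[:, 0], pixels[:, 1]]                        # depth at pixel
    feats = np.empty((len(pixels), len(offsets)))
    for f, (u, v) in enumerate(offsets):
        # Scale each offset by 1/depth and clamp lookups to the image bounds.
        pu = np.clip((pixels + u / d[:, None]).astype(int), 0, [H - 1, W - 1])
        pv = np.clip((pixels + v / d[:, None]).astype(int), 0, [H - 1, W - 1])
        feats[:, f] = depth[pu[:, 0], pu[:, 1]] - depth[pv[:, 0], pv[:, 1]]
    return feats

# Toy usage with synthetic depth data and made-up part labels.
rng = np.random.default_rng(0)
depth = rng.uniform(1.0, 4.0, size=(64, 64))
pixels = rng.integers(0, 64, size=(500, 2))
offsets = rng.uniform(-40.0, 40.0, size=(32, 2, 2))
labels = rng.integers(0, 3, size=500)
clf = RandomForestClassifier(n_estimators=20, random_state=0)
clf.fit(depth_features(depth, pixels, offsets), labels)
```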
1603.07076 | 2953271464 | We propose a viewpoint invariant model for 3D human pose estimation from a single depth image. To achieve this, our discriminative model embeds local regions into a learned viewpoint invariant feature space. Formulated as a multi-task learning problem, our model is able to selectively predict partial poses in the presence of noise and occlusion. Our approach leverages a convolutional and recurrent network architecture with a top-down error feedback mechanism to self-correct previous pose estimates in an end-to-end manner. We evaluate our model on a previously published depth dataset and a newly collected human pose dataset containing 100K annotated depth images from extreme viewpoints. Experiments show that our model achieves competitive performance on frontal views while achieving state-of-the-art performance on alternate viewpoints. | One popular approach to model occlusions is to treat visibility as a binary mask and jointly reason over this mask and the input images @cite_18 @cite_4 . Other approaches, such as @cite_29 @cite_33 , include templates for occluded versions of each part. More sophisticated models introduce occlusion priors @cite_35 @cite_50 or semantic information @cite_48 . | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_4",
"@cite_33",
"@cite_48",
"@cite_29",
"@cite_50"
],
"mid": [
"2000723188",
"1922481186",
"2095727456",
"",
"2125337569",
"166750225",
""
],
"abstract": [
"We present a unified occlusion model for object instance detection under arbitrary viewpoint. Whereas previous approaches primarily modeled local coherency of occlusions or attempted to learn the structure of occlusions from data, we propose to explicitly model occlusions by reasoning about 3D interactions of objects. Our approach accurately represents occlusions under arbitrary viewpoint without requiring additional training data, which can often be difficult to obtain. We validate our model by incorporating occlusion reasoning with the state-of-the-art LINE2D and Gradient Network methods for object instance detection and demonstrate significant improvement in recognizing texture-less objects under severe occlusions.",
"Human pose estimation from depth data has made significant progress in recent years and commercial sensors estimate human poses in real-time. However, state-of-the-art methods fail in many situations when the humans are partially occluded by objects. In this work, we introduce a semantic occlusion model that is incorporated into a regression forest approach for human pose estimation from depth data. The approach exploits the context information of occluding objects like a table to predict the locations of occluded joints. In our experiments on synthetic and real data, we show that our occlusion model increases the joint estimation accuracy and outperforms the commercial Kinect 2 SDK for occluded joints.",
"We propose a structured Hough voting method for detecting objects with heavy occlusion in indoor environments. First, we extend the Hough hypothesis space to include both object location and its visibility pattern, and design a new score function that accumulates votes for object detection and occlusion prediction. In addition, we explore the correlation between objects and their environment, building a depth-encoded object-context model based on RGB-D data. Particularly, we design a layered context representation and allow image patches from both objects and backgrounds voting for the object hypotheses. We demonstrate that using a data-driven 2.1D representation we can learn visual codebooks with better quality, and more interpretable detection results in terms of spatial relationship between objects and viewer. We test our algorithm on two challenging RGB-D datasets with significant occlusion and intraclass variation, and demonstrate the superior performance of our method.",
"",
"The bounding box representation employed by many popular object detection models [3, 6] implicitly assumes all pixels inside the box belong to the object. This assumption makes this representation less robust to the object with occlusion [16]. In this paper, we augment the bounding box with a set of binary variables each of which corresponds to a cell indicating whether the pixels in the cell belong to the object. This segmentation-aware representation explicitly models and accounts for the supporting pixels for the object within the bounding box thus more robust to occlusion. We learn the model in a structured output framework, and develop a method that efficiently performs both inference and learning using this rich representation. The method is able to use segmentation reasoning to achieve improved detection results with richer output (cell level segmentation) on the Street Scenes and Pascal VOC 2007 datasets. Finally, we present a globally coherent object model using our rich representation to account for object-object occlusion resulting in a more coherent image understanding.",
"Deformable part-based models [1, 2] achieve state-of-the-art performance for object detection, but rely on heuristic initialization during training due to the optimization of non-convex cost function. This paper investigates limitations of such an initialization and extends earlier methods using additional supervision. We explore strong supervision in terms of annotated object parts and use it to (i) improve model initialization, (ii) optimize model structure, and (iii) handle partial occlusions. Our method is able to deal with sub-optimal and incomplete annotations of object parts and is shown to benefit from semi-supervised learning setups where part-level annotation is provided for a fraction of positive examples only. Experimental results are reported for the detection of six animal classes in PASCAL VOC 2007 and 2010 datasets. We demonstrate significant improvements in detection performance compared to the LSVM [1] and the Poselet [3] object detectors.",
""
]
} |
1603.07076 | 2953271464 | We propose a viewpoint invariant model for 3D human pose estimation from a single depth image. To achieve this, our discriminative model embeds local regions into a learned viewpoint invariant feature space. Formulated as a multi-task learning problem, our model is able to selectively predict partial poses in the presence of noise and occlusion. Our approach leverages a convolutional and recurrent network architecture with a top-down error feedback mechanism to self-correct previous pose estimates in an end-to-end manner. We evaluate our model on a previously published depth dataset and a newly collected human pose dataset containing 100K annotated depth images from extreme viewpoints. Experiments show that our model achieves competitive performance on frontal views while achieving state-of-the-art performance on alternate viewpoints. | For rigid body pose estimation and 3D object analysis, several descriptors have been proposed. Given the success of SIFT @cite_56 , there have been several attempts at embedding rotational and translational invariance @cite_57 @cite_37 @cite_62 . Other features such as viewpoint invariant 3D feature maps @cite_66 , histograms of 3D joint locations @cite_63 , multifractal spectrum @cite_27 , volumetric attention models @cite_65 , and volumetric convolutional filters @cite_64 @cite_58 have been proposed for 3D modeling. Instead of proposing invariant features, @cite_24 trained a classifier for each viewpoint. Building on the success of representation learning from RGB, discriminative pose estimation from the depth domain, viewpoint invariant features, and occlusion modeling, we design a model which achieves viewpoint invariant 3D human pose estimation. | {
"cite_N": [
"@cite_37",
"@cite_62",
"@cite_64",
"@cite_65",
"@cite_56",
"@cite_57",
"@cite_24",
"@cite_27",
"@cite_63",
"@cite_58",
"@cite_66"
],
"mid": [
"2133605096",
"",
"1493004075",
"2472272718",
"2124386111",
"2123456673",
"",
"2014801693",
"2145546283",
"2211722331",
"2111187045"
],
"abstract": [
"The robust alignment of images and scenes seen from widely different viewpoints is an important challenge for camera and scene reconstruction. This paper introduces a novel class of viewpoint independent local features for robust registration and novel algorithms to use the rich information of the new features for 3D scene alignment and large scale scene reconstruction. The key point of our approach consists of leveraging local shape information for the extraction of an invariant feature descriptor. The advantages of the novel viewpoint invariant patch (VIP) are: that the novel features are invariant to 3D camera motion and that a single VIP correspondence uniquely defines the 3D similarity transformation between two scenes. In the paper we demonstrate how to use the properties of the VIPs in an efficient matching scheme for 3D scene alignment. The algorithm is based on a hierarchical matching method which tests the components of the similarity transformation sequentially to allow efficient matching and 3D scene alignment. We evaluate the novel features on real data with known ground truth information and show that the features can be used to reconstruct large scale urban scenes.",
"",
"We present a system for the detection of small and potentially obscured obstacles in vegetated terrain. The key novelty of this system is the coupling of a volumetric occupancy map with a 3D Convolutional Neural Network (CNN), which to the best of our knowledge has not been previously done. This architecture allows us to train an extremely efficient and highly accurate system for detection tasks from raw occupancy data. We apply this method to the problem of detecting safe landing zones for autonomous helicopters from LiDAR point clouds. Current methods for this problem rely on heuristic rules and use simple geometric features. These heuristics break down in the presence of low vegetation, as they do not distinguish between vegetation that may be landed on and solid objects that should be avoided. We evaluate the system with a combination of real and synthetic range data. We show our system outperforms various benchmarks, including a system integrating various hand-crafted point cloud features from the literature.",
"We present an attention-based model that reasons on human body shape and motion dynamics to identify individuals in the absence of RGB information, hence in the dark. Our approach leverages unique 4D spatio-temporal signatures to address the identification problem across days. Formulated as a reinforcement learning task, our model is based on a combination of convolutional and recurrent neural networks with the goal of identifying small, discriminative regions indicative of human identity. We demonstrate that our model produces state-of-the-art results on several published datasets given only depth images. We further study the robustness of our model towards viewpoint, appearance, and volumetric changes. Finally, we share insights gleaned from interpretable 2D, 3D, and 4D visualizations of our model's spatio-temporal attention.",
"An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.",
"We propose a novel and robust model to represent and learn generic 3D object categories. We aim to solve the problem of true 3D object categorization for handling arbitrary rotations and scale changes. Our approach is to capture a compact model of an object category by linking together diagnostic parts of the objects from different viewing points. We emphasize on the fact that our \"parts\" are large and discriminative regions of the objects that are composed of many local invariant features. Instead of recovering a full 3D geometry, we connect these parts through their mutual homographic transformation. The resulting model is a compact summarization of both the appearance and geometry information of the object class. We propose a framework in which learning is done via minimal supervision compared to previous works. Our results on categorization show superior performances to state-of-the-art algorithms such as (, 2006). Furthermore, we have compiled a new 3D object dataset that consists of 10 different object categories. We have tested our algorithm on this dataset and have obtained highly promising results.",
"",
"Image texture provides a rich visual description of the surfaces in the scene. Many texture signatures based on various statistical descriptions and various local measurements have been developed. Existing signatures, in general, are not invariant to 3D geometric transformations, which is a serious limitation for many applications. In this paper we introduce a new texture signature, called the multifractal spectrum (MFS). The MFS is invariant under the bi-Lipschitz map, which includes view-point changes and non-rigid deformations of the texture surface, as well as local affine illumination changes. It provides an efficient framework combining global spatial invariance and local robust measurements. Intuitively, the MFS could be viewed as a \"better histogram\" with greater robustness to various environmental changes and the advantage of capturing some geometrical distribution information encoded in the texture. Experiments demonstrate that the MFS codes the essential structure of textures with very low dimension, and thus represents an useful tool for texture classification.",
"In this paper, we present a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skelet al joint locations from Kinect depth maps using 's method [6]. The HOJ3D computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolutions of those visual words are modeled by discrete hidden Markov models (HMMs). In addition, due to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset. Our dataset is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals in varied views. Our method is real-time and achieves superior results on the challenging 3D action dataset. We also tested our algorithm on the MSR Action 3D dataset and our algorithm outperforms [25] on most of the cases.",
"Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.",
"This paper presents a 3D approach to multi-view object class detection. Most existing approaches recognize object classes for a particular viewpoint or combine classifiers for a few discrete views. We propose instead to build 3D representations of object classes which allow to handle viewpoint changes and intra-class variability. Our approach extracts a set of pose and class discriminant features from synthetic 3D object models using a filtering procedure, evaluates their suitability for matching to real image data and represents them by their appearance and 3D position. We term these representations 3D Feature Maps. For recognizing an object class in an image we match the synthetic descriptors to the real ones in a 3D voting scheme. Geometric coherence is reinforced by means of a robust pose estimation which yields a 3D bounding box in addition to the 2D localization. The precision of the 3D pose estimation is evaluated on a set of images of a calibrated scene. The 2D localization is evaluated on the PASCAL 2006 dataset for motorbikes and cars, showing that its performance can compete with state-of-the-art 2D object detectors."
]
} |
1603.07012 | 2951368271 | Determining the intended sense of words in text - word sense disambiguation (WSD) - is a long-standing problem in natural language processing. Recently, researchers have shown promising results using word vectors extracted from a neural network language model as features in WSD algorithms. However, a simple average or concatenation of word vectors for each word in a text loses the sequential and syntactic information of the text. In this paper, we study WSD with a sequence learning neural net, LSTM, to better capture the sequential and syntactic patterns of the text. To alleviate the lack of training data in all-words WSD, we employ the same LSTM in a semi-supervised label propagation classifier. We demonstrate state-of-the-art results, especially on verbs. | The development of large lexical resources, such as WordNet @cite_17 and BabelNet @cite_10 , has enabled knowledge-based algorithms which show promising results on all-words prediction tasks @cite_35 @cite_9 @cite_25 . WSD algorithms based on supervised learning are generally believed to perform better than knowledge-based WSD algorithms, but they need large training sets to perform well @cite_26 @cite_0 @cite_33 @cite_24 . Acquiring large training sets is costly. In this paper, we show that a supervised WSD algorithm can perform well with @math training examples per sense. | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_33",
"@cite_9",
"@cite_0",
"@cite_24",
"@cite_10",
"@cite_25",
"@cite_17"
],
"mid": [
"2250718062",
"",
"2436001372",
"2251529656",
"1988325893",
"2101293500",
"2120699290",
"2131540451",
""
],
"abstract": [
"In this paper, we present a novel approach to joint word sense disambiguation (WSD) and entity linking (EL) that combines a set of complementary objectives in an extensible multi-objective formalism. During disambiguation the system performs continuous optimization to find optimal probability distributions over candidate senses. The performance of our system on nominal WSD as well as EL improves state-ofthe-art results on several corpora. These improvements demonstrate the importance of combining complementary objectives in a joint model for robust disambiguation.",
"",
"Word sense disambiguation (WSD) is the ability to identify the meaning of words in context in a computational manner. WSD is considered an AI-complete problem, that is, a task whose solution is at least as hard as the most difficult problems in artificial intelligence. We introduce the reader to the motivations for solving the ambiguity of words and provide a description of the task. We overview supervised, unsupervised, and knowledge-based approaches. The assessment of WSD systems is discussed in the context of the Senseval Semeval campaigns, aiming at the objective evaluation of systems participating in several different disambiguation tasks. Finally, applications, open problems, and future directions are discussed.",
"This paper presents the SemEval-2013 task on multilingual Word Sense Disambiguation. We describe our experience in producing a multilingual sense-annotated corpus for the task. The corpus is tagged with BabelNet 1.1.1, a freely-available multilingual encyclopedic dictionary and, as a byproduct, WordNet 3.0 and the Wikipedia sense inventory. We present and analyze the results of participating systems, and discuss future directions.",
"This paper presents the coarse-grained English all-words task at SemEval-2007. We describe our experience in producing a coarse version of the WordNet sense inventory and preparing the sense-tagged corpus for the task. We present the results of participating systems and discuss future directions.",
"Word sense disambiguation (WSD) systems based on supervised learning achieved the best performance in SensEval and SemEval workshops. However, there are few publicly available open source WSD systems. This limits the use of WSD in other applications, especially for researchers whose research interests are not in WSD. In this paper, we present IMS, a supervised English all-words WSD system. The flexible framework of IMS allows users to integrate different preprocessing tools, additional features, and different classifiers. By default, we use linear support vector machines as the classifier with multiple knowledge-based features. In our implementation, IMS achieves state-of-the-art results on several SensEval and SemEval tasks.",
"We present an automatic approach to the construction of BabelNet, a very large, wide-coverage multilingual semantic network. Key to our approach is the integration of lexicographic and encyclopedic knowledge from WordNet and Wikipedia. In addition, Machine Translation is applied to enrich the resource with lexical information for all languages. We first conduct in vitro experiments on new and existing gold-standard datasets to show the high quality and coverage of BabelNet. We then show that our lexical resource can be used successfully to perform both monolingual and cross-lingual Word Sense Disambiguation: thanks to its wide lexical coverage and novel semantic relations, we are able to achieve state-of the-art results on three different SemEval evaluation tasks.",
"In this paper we present the Multilingual AllWords Sense Disambiguation and Entity Linking task. Word Sense Disambiguation (WSD) and Entity Linking (EL) are well-known problems in the Natural Language Processing field and both address the lexical ambiguity of language. Their main difference lies in the kind of meaning inventories that are used: EL uses encyclopedic knowledge, while WSD uses lexicographic information. Our aim with this task is to analyze whether, and if so, how, using a resource that integrates both kinds of inventories (i.e., BabelNet 2.5.1) might enable WSD and EL to be solved by means of similar (even, the same) methods. Moreover, we investigate this task in a multilingual setting and for some specific domains.",
""
]
} |
1603.07012 | 2951368271 | Determining the intended sense of words in text - word sense disambiguation (WSD) - is a long standing problem in natural language processing. Recently, researchers have shown promising results using word vectors extracted from a neural network language model as features in WSD algorithms. However, a simple average or concatenation of word vectors for each word in a text loses the sequential and syntactic information of the text. In this paper, we study WSD with a sequence learning neural net, LSTM, to better capture the sequential and syntactic patterns of the text. To alleviate the lack of training data in all-words WSD, we employ the same LSTM in a semi-supervised label propagation classifier. We demonstrate state-of-the-art results, especially on verbs. | In the past few years, much progress has been made on using neural networks to learn word embeddings @cite_30 @cite_32 , construct language models @cite_18 , perform sentiment analysis @cite_28 , perform machine translation @cite_27 , and tackle many other NLP applications. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_28",
"@cite_32",
"@cite_27"
],
"mid": [
"2950133940",
"",
"2251939518",
"2125031621",
"2949888546"
],
"abstract": [
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"",
"Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive negative classification from 80 up to 85.4 . The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7 , an improvement of 9.7 over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.",
"We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by , and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. We find that another embedding method, NCE, is implicitly factorizing a similar matrix, where each cell is the (shifted) log conditional probability of a word given its context. We show that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks. When dense low-dimensional vectors are preferred, exact factorization with SVD can achieve solutions that are at least as good as SGNS's solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. We conjecture that this stems from the weighted nature of SGNS's factorization.",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier."
]
} |
1603.07012 | 2951368271 | Determining the intended sense of words in text - word sense disambiguation (WSD) - is a long standing problem in natural language processing. Recently, researchers have shown promising results using word vectors extracted from a neural network language model as features in WSD algorithms. However, a simple average or concatenation of word vectors for each word in a text loses the sequential and syntactic information of the text. In this paper, we study WSD with a sequence learning neural net, LSTM, to better capture the sequential and syntactic patterns of the text. To alleviate the lack of training data in all-words WSD, we employ the same LSTM in a semi-supervised label propagation classifier. We demonstrate state-of-the-art results, especially on verbs. | In our work, we start with a baseline classifier which uses @math -dimensional embeddings trained on a @math billion word news corpus using Word2Vec @cite_30 . The vocabulary consists of the most frequent @math words, without lemmatization or case normalization. Sense embeddings are computed by averaging the context embeddings of sentences which have been labeled with that sense. To classify a word in a context, we assign the word sense whose embedding has maximal cosine similarity with the embedding of the context. This classifier has similar performance to the best classifier in @cite_34 when SemCor is used as a source of labeled sentences. The Word2Vec embeddings are trained using a bag of words model, i.e. without considering word order in the training context, and word order is also not considered in the classification context. In we show that using a more expressive language model which takes account of word order yields significant improvements. | {
"cite_N": [
"@cite_30",
"@cite_34"
],
"mid": [
"2950133940",
"2518202280"
],
"abstract": [
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"Recent years have seen a dramatic growth in the popularity of word embeddings mainly owing to their ability to capture semantic information from massive amounts of textual content. As a result, many tasks in Natural Language Processing have tried to take advantage of the potential of these distributional models. In this work, we study how word embeddings can be used in Word Sense Disambiguation, one of the oldest tasks in Natural Language Processing and Artificial Intelligence. We propose different methods through which word embeddings can be leveraged in a state-of-the-art supervised WSD system architecture, and perform a deep analysis of how different parameters affect performance. We show how a WSD system that makes use of word embeddings alone, if designed properly, can provide significant performance improvement over a state-ofthe-art WSD system that incorporates several standard WSD features."
]
} |
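The related-work text in the row above spells out a concrete baseline: average the context word vectors of labeled sentences into per-sense embeddings, then assign the sense whose embedding is most cosine-similar to the query context. Below is a minimal sketch of that procedure, assuming a dict `word_vec` of pre-trained vectors and a list `labeled` of `(context_words, sense)` pairs; all identifiers are illustrative, not taken from the paper's code.

```python
import numpy as np

def context_embedding(words, word_vec):
    """Bag-of-words embedding: average the vectors of in-vocabulary words."""
    vecs = [word_vec[w] for w in words if w in word_vec]
    return np.mean(vecs, axis=0) if vecs else None

def build_sense_embeddings(labeled, word_vec):
    """Average the context embeddings of all sentences labeled with a sense."""
    by_sense = {}
    for words, sense in labeled:
        emb = context_embedding(words, word_vec)
        if emb is not None:
            by_sense.setdefault(sense, []).append(emb)
    return {s: np.mean(embs, axis=0) for s, embs in by_sense.items()}

def classify(context_words, sense_emb, word_vec):
    """Pick the sense whose embedding has maximal cosine similarity
    with the embedding of the query context."""
    c = context_embedding(context_words, word_vec)
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(sense_emb, key=lambda s: cos(sense_emb[s], c))
```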
1603.07012 | 2951368271 | Determining the intended sense of words in text - word sense disambiguation (WSD) - is a long standing problem in natural language processing. Recently, researchers have shown promising results using word vectors extracted from a neural network language model as features in WSD algorithms. However, a simple average or concatenation of word vectors for each word in a text loses the sequential and syntactic information of the text. In this paper, we study WSD with a sequence learning neural net, LSTM, to better capture the sequential and syntactic patterns of the text. To alleviate the lack of training data in all-words WSD, we employ the same LSTM in a semi-supervised label propagation classifier. We demonstrate state-of-the-art results, especially on verbs. | Semi-supervised learning has previously been applied successfully to word sense disambiguation. In @cite_21 bootstrapping was used to learn a high-precision WSD classifier. A low-recall classifier was learned from a small set of labeled examples, and the labeled set was then extended with those sentences from an unlabeled corpus which the classifier could label with high confidence. The classifier was then retrained, and this iterative training process continued to convergence. Additional heuristics helped to maintain the stability of the bootstrapping process. The method was evaluated on a small data set. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2101210369"
],
"abstract": [
"This paper presents an unsupervised learning algorithm for sense disambiguation that, when trained on unannotated English text, rivals the performance of supervised techniques that require time-consuming hand annotations. The algorithm is based on two powerful constraints---that words tend to have one sense per discourse and one sense per collocation---exploited in an iterative bootstrapping procedure. Tested accuracy exceeds 96 ."
]
} |
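The row above describes the classic bootstrapping loop: train on a small seed set, label an unlabeled pool, keep only high-confidence predictions, and retrain to convergence. Here is a hedged sketch of that loop; scikit-learn's `LogisticRegression` is used purely as a stand-in classifier (the original work used decision lists), and the confidence threshold is an assumed parameter.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bootstrap(X_seed, y_seed, X_pool, threshold=0.95, max_rounds=10):
    """Self-training: grow the labeled set with confidently labeled examples."""
    X, y = X_seed.copy(), y_seed.copy()
    pool = X_pool.copy()
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    for _ in range(max_rounds):
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        keep = proba.max(axis=1) >= threshold          # high-confidence only
        if not keep.any():
            break                                      # nothing confident left
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, clf.classes_[proba[keep].argmax(axis=1)]])
        pool = pool[~keep]
        clf = LogisticRegression(max_iter=1000).fit(X, y)  # retrain
    return clf
```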
1603.07012 | 2951368271 | Determining the intended sense of words in text - word sense disambiguation (WSD) - is a long standing problem in natural language processing. Recently, researchers have shown promising results using word vectors extracted from a neural network language model as features in WSD algorithms. However, a simple average or concatenation of word vectors for each word in a text loses the sequential and syntactic information of the text. In this paper, we study WSD with a sequence learning neural net, LSTM, to better capture the sequential and syntactic patterns of the text. To alleviate the lack of training data in all-words WSD, we employ the same LSTM in a semi-supervised label propagation classifier. We demonstrate state-of-the-art results, especially on verbs. | In @cite_36 , a label propagation algorithm was proposed for word sense disambiguation and compared to bootstrapping and an SVM supervised classifier. Label propagation can achieve better performance because it assigns labels to optimize a global objective, whereas bootstrapping propagates labels based on local similarity of examples. | {
"cite_N": [
"@cite_36"
],
"mid": [
"2127358574"
],
"abstract": [
"Shortage of manually sense-tagged data is an obstacle to supervised word sense disambiguation methods. In this paper we investigate a label propagation based semi-supervised learning algorithm for WSD, which combines labeled and unlabeled data in learning process to fully realize a global consistency assumption: similar examples should have similar labels. Our experimental results on benchmark corpora indicate that it consistently outperforms SVM when only very few labeled examples are available, and its performance is also better than monolingual bootstrapping, and comparable to bilingual bootstrapping."
]
} |
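The row above contrasts label propagation's global objective with bootstrapping's local updates. One common formulation is the normalized-graph iteration of Zhou et al. — the row does not specify which variant @cite_36 uses, so this choice is an assumption — which repeatedly averages each node's label distribution with its neighbors' while clamping toward the initial labels:

```python
import numpy as np

def label_propagation(W, Y0, alpha=0.99, tol=1e-6, max_iter=1000):
    """Propagate labels over a similarity graph.
    W:  (n, n) symmetric affinity matrix with zero diagonal.
    Y0: (n, k) one-hot rows for labeled nodes, zero rows for unlabeled.
    Iterates Y <- alpha * S @ Y + (1 - alpha) * Y0, where S is the
    symmetrically normalized affinity matrix; the fixed point optimizes
    a global consistency objective."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    Y = Y0.astype(float).copy()
    for _ in range(max_iter):
        Y_next = alpha * (S @ Y) + (1 - alpha) * Y0
        if np.abs(Y_next - Y).max() < tol:
            Y = Y_next
            break
        Y = Y_next
    return Y.argmax(axis=1)       # predicted label index for every node
```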
1603.06987 | 2951483660 | When given a single frame of the video, humans can not only interpret the content of the scene, but they are also able to forecast the near future. This ability is mostly driven by their rich prior knowledge about the visual world, in terms of both (i) the dynamics of moving agents and (ii) the semantics of the scene. In this work we exploit the interplay between these two key elements to predict scene-specific motion patterns. First, we extract patch descriptors encoding the probability of moving to the adjacent patches, and the probability of being in that particular patch or changing behavior. Then, we introduce a Dynamic Bayesian Network which exploits this scene-specific knowledge for trajectory prediction. Experimental results demonstrate that our method is able to accurately predict trajectories and transfer predictions to a novel scene characterized by similar elements. | In activity recognition @cite_19 @cite_5 @cite_22 the main aim is to discover and label the actions of the agents observed in the scene. Depending on the level of granularity, we could be interested in atomic actions such as pedestrians walking in the scene, or in activities involving a group of people @cite_2 @cite_8 @cite_3 @cite_18 @cite_31 . Trajectory-based activity analysis is a traditional way of modeling human activities @cite_19 @cite_24 , which leverages motion statistics to recognize different activities. In this context, some recent work has shown that directly modeling the impact of the environment, such as the trajectories of nearby pedestrians, can lead to better recognition models. To this end, social forces and contextual relationships have been effectively used to model human-human interactions in crowded environments @cite_27 @cite_7 @cite_39 @cite_17 @cite_0 @cite_9 . Other works also show that prior knowledge of goals yields better human-activity recognition and tracking @cite_12 @cite_33 @cite_21 . From a complementary perspective, @cite_13 @cite_6 take advantage of trajectories to infer the semantics of the elements of the scene, such as road and sidewalk. | {
"cite_N": [
"@cite_22",
"@cite_3",
"@cite_2",
"@cite_5",
"@cite_18",
"@cite_8",
"@cite_21",
"@cite_39",
"@cite_17",
"@cite_7",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_12",
"@cite_33",
"@cite_9",
"@cite_24",
"@cite_0",
"@cite_31",
"@cite_13"
],
"mid": [
"2171544105",
"100367037",
"2163415258",
"2054225134",
"",
"1986517930",
"2225887246",
"",
"",
"2532516272",
"1711926650",
"2037865596",
"2164489414",
"2122469558",
"2090229683",
"2424778531",
"2133235827",
"2020209171",
"2950240388",
"2584479935"
],
"abstract": [
"This paper addresses a new problem, that of multiscale activity recognition. Our goal is to detect and localize a wide range of activities, including individual actions and group activities, which may simultaneously co-occur in high-resolution video. The video resolution allows for digital zoom-in (or zoom-out) for examining fine details (or coarser scales), as needed for recognition. The key challenge is how to avoid running a multitude of detectors at all spatiotemporal scales, and yet arrive at a holistically consistent video interpretation. To this end, we use a three-layered AND-OR graph to jointly model group activities, individual actions, and participating objects. The AND-OR graph allows a principled formulation of efficient, cost-sensitive inference via an explore-exploit strategy. Our inference optimally schedules the following computational processes: 1) direct application of activity detectors --- called α process; 2) bottom-up inference based on detecting activity parts --- called β process; and 3) top-down inference based on detecting activity context --- called γ process. The scheduling iteratively maximizes the log-posteriors of the resulting parse graphs. For evaluation, we have compiled and benchmarked a new dataset of high-resolution videos of group and individual activities co-occurring in a courtyard of the UCLA campus.",
"We present a coherent, discriminative framework for simultaneously tracking multiple people and estimating their collective activities. Instead of treating the two problems separately, our model is grounded in the intuition that a strong correlation exists between a person's motion, their activity, and the motion and activities of other nearby people. Instead of directly linking the solutions to these two problems, we introduce a hierarchy of activity types that creates a natural progression that leads from a specific person's motion to the activity of the group as a whole. Our model is capable of jointly tracking multiple people, recognizing individual activities (atomic activities), the interactions between pairs of people (interaction activities), and finally the behavior of groups of people (collective activities). We also propose an algorithm for solving this otherwise intractable joint inference problem by combining belief propagation with a version of the branch and bound algorithm equipped with integer programming. Experimental results on challenging video datasets demonstrate our theoretical claims and indicate that our model achieves the best collective activity classification results to date.",
"We propose a discriminative model for recognizing group activities. Our model jointly captures the group activity, the individual person actions, and the interactions among them. Two new types of contextual information, group-person interaction and person-person interaction, are explored in a latent variable framework. Different from most of the previous latent structured models which assume a predefined structure for the hidden layer, e.g. a tree structure, we treat the structure of the hidden layer as a latent variable and implicitly infer it during learning and inference. Our experimental results demonstrate that by inferring this contextual information together with adaptive structures, the proposed model can significantly improve activity recognition performance.",
"We present a novel approach for automatically discovering spatio-temporal patterns in complex dynamic scenes. Similarly to recent non-object centric methods, we use low level visual cues to detect atomic activities and then construct clip histograms. Differently from previous works, we formulate the task of discovering high level activity patterns as a prototype learning problem where the correlation among atomic activities is explicitly taken into account when grouping clip histograms. Interestingly at the core of our approach there is a convex optimization problem which allows us to efficiently extract patterns at multiple levels of detail. The effectiveness of our method is demonstrated on publicly available datasets.",
"",
"A novel approach is presented to locate dense crowd of people in a network of fixed cameras given the severely degraded background subtracted silhouettes. The problem is formulated as a sparsity constrained inverse problem using an adaptive dictionary constructed on-line. The framework has no constraint on the number of cameras neither on the surface to be monitored. Even with a single camera, partially occluded and grouped people are correctly detected and segmented. Qualitative results are presented in indoor and outdoor scenes.",
"Online Multi-Object Tracking (MOT) has wide applications in time-critical video analysis scenarios, such as robot navigation and autonomous driving. In tracking-by-detection, a major challenge of online MOT is how to robustly associate noisy object detections on a new video frame with previously tracked objects. In this work, we formulate the online MOT problem as decision making in Markov Decision Processes (MDPs), where the lifetime of an object is modeled with a MDP. Learning a similarity function for data association is equivalent to learning a policy for the MDP, and the policy learning is approached in a reinforcement learning fashion which benefits from both advantages of offline-learning and online-learning for data association. Moreover, our framework can naturally handle the birth death and appearance disappearance of targets by treating them as state transitions in the MDP while leveraging existing online single object tracking methods. We conduct experiments on the MOT Benchmark [24] to verify the effectiveness of our method.",
"",
"",
"Object tracking typically relies on a dynamic model to predict the object's location from its past trajectory. In crowded scenarios a strong dynamic model is particularly important, because more accurate predictions allow for smaller search regions, which greatly simplifies data association. Traditional dynamic models predict the location for each target solely based on its own history, without taking into account the remaining scene objects. Collisions are resolved only when they happen. Such an approach ignores important aspects of human behavior: people are driven by their future destination, take into account their environment, anticipate collisions, and adjust their trajectories at an early stage in order to avoid them. In this work, we introduce a model of dynamic social behavior, inspired by models developed for crowd simulation. The model is trained with videos recorded from birds-eye view at busy locations, and applied as a motion model for multi-people tracking from a vehicle-mounted camera. Experiments on real sequences show that accounting for social interactions and scene knowledge improves tracking performance, especially during occlusions.",
"Existing methods for video scene analysis are primarily concerned with learning motion patterns or models for anomaly detection. We present a novel form of video scene analysis where scene element categories such as roads, parking areas, sidewalks and entrances, can be segmented and categorized based on the behaviors of moving objects in and around them. We view the problem from the perspective of categorical object recognition, and present an approach for unsupervised learning of functional scene element categories. Our approach identifies functional regions with similar behaviors in the same scene and or across scenes, by clustering histograms based on a trajectory-level, behavioral codebook. Experiments are conducted on two outdoor webcam video scenes with low frame rates and poor quality. Unsupervised classification results are presented for each scene independently, and also jointly where models learned on one scene are applied to the other.",
"We propose a novel nonparametric Bayesian model, dual hierarchical Dirichlet processes (Dual-HDP), for trajectory analysis and semantic region modeling in surveillance settings, in an unsupervised way. In our approach, trajectories are treated as documents and observations of an object on a trajectory are treated as words in a document. Trajectories are clustered into different activities. Abnormal trajectories are detected as samples with low likelihoods. The semantic regions, which are intersections of paths commonly taken by objects, related to activities in the scene are also modeled. Dual-HDP advances the existing hierarchical Dirichlet processes (HDP) language model. HDP only clusters co-occurring words from documents into topics and automatically decides the number of topics. Dual-HDP co-clusters both words and documents. It learns both the numbers of word topics and document clusters from data. Under our problem settings, HDP only clusters observations of objects, while Dual-HDP clusters both observations and trajectories. Experiments are evaluated on two data sets, radar tracks collected from a maritime port and visual tracks collected from a parking lot.",
"In this paper we introduce a novel method to detect and localize abnormal behaviors in crowd videos using Social Force model. For this purpose, a grid of particles is placed over the image and it is advected with the space-time average of optical flow. By treating the moving particles as individuals, their interaction forces are estimated using social force model. The interaction force is then mapped into the image plane to obtain Force Flow for every pixel in every frame. Randomly selected spatio-temporal volumes of Force Flow are used to model the normal behavior of the crowd. We classify frames as normal and abnormal by using a bag of words approach. The regions of anomalies in the abnormal frames are localized using interaction forces. The experiments are conducted on a publicly available dataset from University of Minnesota for escape panic scenarios and a challenging dataset of crowd videos taken from the web. The experiments show that the proposed method captures the dynamics of the crowd behavior successfully. In addition, we have shown that the social force approach outperforms similar approaches based on pure optical flow.",
"We present a detection-based three-level hierarchical association approach to robustly track multiple objects in crowded environments from a single camera. At the low level, reliable tracklets (i.e. short tracks for further analysis) are generated by linking detection responses based on conservative affinity constraints. At the middle level, these tracklets are further associated to form longer tracklets based on more complex affinity measures. The association is formulated as a MAP problem and solved by the Hungarian algorithm. At the high level, entries, exits and scene occluders are estimated using the already computed tracklets, which are used to refine the final trajectories. This approach is applied to the pedestrian class and evaluated on two challenging datasets. The experimental results show a great improvement in performance compared to previous methods.",
"In this paper, we propose a long-term motion model for visual object tracking. In crowded street scenes, persistent occlusions are a frequent challenge for tracking algorithm and a robust, long-term motion model could help in these situations. Motivated by progresses in robot motion planning, we propose to construct a set of ‘plausible’ plans for each person, which are composed of multiple long-term motion prediction hypotheses that do not include redundancies, unnecessary loops or collisions with other objects. Constructing plausible plan is the key step in utilizing motion planning in object tracking, which has not been fully investigate in robot motion planning. We propose a novel method of efficiently constructing disjoint plans in different homotopy classes, based on winding numbers and winding angles of planned paths around all obstacles. As the goals can be specified by winding numbers and winding angles, we can avoid redundant plans in the same homotopy class and multiple whirls or loops around a single obstacle. We test our algorithm on a challenging, real-world dataset, and compare our algorithm with Linear Trajectory Avoidance and a simplified linear planning model. We find that our algorithm outperforms both algorithms in most sequences.",
"Pedestrians follow different trajectories to avoid obstacles and accommodate fellow pedestrians. Any autonomous vehicle navigating such a scene should be able to foresee the future positions of pedestrians and accordingly adjust its path to avoid collisions. This problem of trajectory prediction can be viewed as a sequence generation task, where we are interested in predicting the future trajectory of people based on their past positions. Following the recent success of Recurrent Neural Network (RNN) models for sequence prediction tasks, we propose an LSTM model which can learn general human movement and predict their future trajectories. This is in contrast to traditional approaches which use hand-crafted functions such as Social forces. We demonstrate the performance of our method on several public datasets. Our model outperforms state-of-the-art methods on some of these datasets. We also analyze the trajectories predicted by our model to demonstrate the motion behaviour learned by our model.",
"This paper presents a survey of trajectory-based activity analysis for visual surveillance. It describes techniques that use trajectory data to define a general set of activities that are applicable to a wide range of scenes and environments. Events of interest are detected by building a generic topographical scene description from underlying motion structure as observed over time. The scene topology is automatically learned and is distinguished by points of interest and motion characterized by activity paths. The methods we review are intended for real-time surveillance through definition of a diverse set of events for further analysis triggering, including virtual fencing, speed profiling, behavior classification, anomaly detection, and object interaction.",
"We present a novel method for multiple people tracking that leverages a generalized model for capturing interactions among individuals. At the core of our model lies a learned dictionary of interaction feature strings which capture relationships between the motions of targets. These feature strings, created from low-level image features, lead to a much richer representation of the physical interactions between targets compared to hand-specified social force models that previous works have introduced for tracking. One disadvantage of using social forces is that all pedestrians must be detected in order for the forces to be applied, while our method is able to encode the effect of undetected targets, making the tracker more robust to partial occlusions. The interaction feature strings are used in a Random Forest framework to track targets according to the features surrounding them. Results on six publicly available sequences show that our method outperforms state-of-the-art approaches in multiple people tracking.",
"Online Multiple Target Tracking (MTT) is often addressed within the tracking-by-detection paradigm. Detections are previously extracted independently in each frame and then objects trajectories are built by maximizing specifically designed coherence functions. Nevertheless, ambiguities arise in presence of occlusions or detection errors. In this paper we claim that the ambiguities in tracking could be solved by a selective use of the features, by working with more reliable features if possible and exploiting a deeper representation of the target only if necessary. To this end, we propose an online divide and conquer tracker for static camera scenes, which partitions the assignment problem in local subproblems and solves them by selectively choosing and combining the best features. The complete framework is cast as a structural learning task that unifies these phases and learns tracker parameters from examples. Experiments on two different datasets highlights a significant improvement of tracking performances (MOTA +10 ) over the state of the art.",
"In this paper, we describe an unsupervised learning framework to segment a scene into semantic regions and to build semantic scene models from long-term observations of moving objects in the scene. First, we introduce two novel similarity measures for comparing trajectories in far-field visual surveillance. The measures simultaneously compare the spatial distribution of trajectories and other attributes, such as velocity and object size, along the trajectories. They also provide a comparison confidence measure which indicates how well the measured image-based similarity approximates true physical similarity. We also introduce novel clustering algorithms which use both similarity and comparison confidence. Based on the proposed similarity measures and clustering methods, a framework to learn semantic scene models by trajectory analysis is developed. Trajectories are first clustered into vehicles and pedestrians, and then further grouped based on spatial and velocity distributions. Different trajectory clusters represent different activities. The geometric and statistical models of structures in the scene, such as roads, walk paths, sources and sinks, are automatically learned from the trajectory clusters. Abnormal activities are detected using the semantic scene models. The system is robust to low-level tracking errors."
]
} |
1603.06987 | 2951483660 | When given a single frame of the video, humans can not only interpret the content of the scene, but they are also able to forecast the near future. This ability is mostly driven by their rich prior knowledge about the visual world, in terms of both (i) the dynamics of moving agents and (ii) the semantics of the scene. In this work we exploit the interplay between these two key elements to predict scene-specific motion patterns. First, we extract patch descriptors encoding the probability of moving to the adjacent patches, and the probability of being in that particular patch or changing behavior. Then, we introduce a Dynamic Bayesian Network which exploits this scene-specific knowledge for trajectory prediction. Experimental results demonstrate that our method is able to accurately predict trajectories and transfer predictions to a novel scene characterized by similar elements. | Several works have shown that when a large training set is available, it is possible to use a simple approach based on data-driven visual similarity for retrieving and transferring knowledge to new queries in both image @cite_42 @cite_43 @cite_29 and video @cite_28 @cite_38 @cite_20 domains. We build on this idea for transferring the knowledge encoded in our navigation map, relying on patch-level similarities between our input scene and a training set of previously observed visual scenes. | {
"cite_N": [
"@cite_38",
"@cite_28",
"@cite_29",
"@cite_42",
"@cite_43",
"@cite_20"
],
"mid": [
"",
"1537098388",
"1542723449",
"2171011251",
"2125849446",
""
],
"abstract": [
"",
"When given a single static picture, humans can not only interpret the instantaneous content captured by the image, but also they are able to infer the chain of dynamic events that are likely to happen in the near future. Similarly, when a human observes a short video, it is easy to decide if the event taking place in the video is normal or unexpected, even if the video depicts a an unfamiliar place for the viewer. This is in contrast with work in surveillance and outlier event detection, where the models rely on thousands of hours of video recorded at a single place in order to identify what constitutes an unusual event. In this work we present a simple method to identify videos with unusual events in a large collection of short video clips. The algorithm is inspired by recent approaches in computer vision that rely on large databases. In this work we show how, relying on large collections of videos, we can retrieve other videos similar to the query to build a simple model of the distribution of expected motions for the query. Consequently, the model can evaluate how unusual is the video as well as make event predictions. We show how a very simple retrieval model is able to provide reliable results.",
"This paper presents a simple and effective nonparametric approach to the problem of image parsing, or labeling image regions (in our case, superpixels produced by bottom-up segmentation) with their categories. This approach requires no training, and it can easily scale to datasets with tens of thousands of images and hundreds of labels. It works by scene-level matching with global image descriptors, followed by superpixel-level matching with local features and efficient Markov random field (MRF) optimization for incorporating neighborhood context. Our MRF setup can also compute a simultaneous labeling of image regions into semantic classes (e.g., tree, building, car) and geometric classes (sky, vertical, ground). Our system outperforms the state-of-the-art non-parametric method based on SIFT Flow on a dataset of 2,688 images and 33 labels. In addition, we report per-pixel rates on a larger dataset of 15,150 images and 170 labels. To our knowledge, this is the first complete evaluation of image parsing on a dataset of this size, and it establishes a new benchmark for the problem.",
"What can you do with a million images? In this paper we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data-driven, requiring no annotations or labelling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of results for each input image and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.",
"In this paper we propose a novel nonparametric approach for object recognition and scene parsing using dense scene alignment. Given an input image, we retrieve its best matches from a large database with annotated images using our modified, coarse-to-fine SIFT flow algorithm that aligns the structures within two images. Based on the dense scene correspondence obtained from the SIFT flow, our system warps the existing annotations, and integrates multiple cues in a Markov random field framework to segment and recognize the query image. Promising experimental results have been achieved by our nonparametric scene parsing system on a challenging database. Compared to existing object recognition approaches that require training for each object category, our system is easy to implement, has few parameters, and embeds contextual information naturally in the retrieval alignment procedure.",
""
]
} |
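The row above transfers knowledge to a new scene through patch-level visual similarity against previously observed scenes. Below is a minimal retrieval sketch of that idea; the descriptor choice, the value of `k`, and the averaging of retrieved per-patch statistics are all illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def transfer_patch_statistics(train_desc, train_stats, query_desc, k=5):
    """For each query patch, retrieve the k visually closest training patches
    and average the navigation statistics attached to them.
    train_desc:  (n, d) appearance descriptors of training patches.
    train_stats: (n, m) per-patch statistics to transfer.
    query_desc:  (q, d) descriptors of the new scene's patches."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_desc)
    _, idx = nn.kneighbors(query_desc)      # (q, k) neighbor indices
    return train_stats[idx].mean(axis=1)    # (q, m) transferred statistics
```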
1603.06937 | 2950762923 | This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a "stacked hourglass" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods. | There are several examples of methods making successive predictions for pose estimation. @cite_56 use what they refer to as Iterative Error Feedback. A set of predictions is included with the input, and each pass through the network further refines these predictions. Their method requires multi-stage training and the weights are shared across each iteration. @cite_45 build on the work of multi-stage pose machines @cite_37 but now with the use of ConvNets for feature extraction. Given our use of intermediate supervision, our work is similar in spirit to these methods, but our building block (the hourglass module) is different. Hu & Ramanan @cite_55 have an architecture more similar to ours that can also be used for multiple stages of predictions, but their model ties weights in the bottom-up and top-down portions of computation as well as across iterations. | {
"cite_N": [
"@cite_55",
"@cite_37",
"@cite_45",
"@cite_56"
],
"mid": [
"2022508996",
"2174722029",
"2255781698",
"1537698211"
],
"abstract": [
"Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.",
"State-of-the-art approaches for articulated human pose estimation are rooted in parts-based graphical models. These models are often restricted to tree-structured representations and simple parametric potentials in order to enable tractable inference. However, these simple dependencies fail to capture all the interactions between body parts. While models with more complex interactions can be defined, learning the parameters of these models remains challenging with intractable or approximate inference. In this paper, instead of performing inference on a learned graphical model, we build upon the inference machine framework and present a method for articulated human pose estimation. Our approach incorporates rich spatial interactions among multiple parts and information across parts of different scales. Additionally, the modular framework of our approach enables both ease of implementation without specialized optimization solvers, and efficient inference. We analyze our approach on two challenging datasets with large pose variation and outperform the state-of-the-art on these benchmarks.",
"Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.",
"Hierarchical feature extractors such as Convolutional Networks (ConvNets) have achieved impressive performance on a variety of classification tasks using purely feedforward processing. Feedforward architectures can learn rich representations of the input space but do not explicitly model dependencies in the output spaces, that are quite structured for tasks such as articulated human pose estimation or object segmentation. Here we propose a framework that expands the expressive power of hierarchical feature extractors to encompass both input and output spaces, by introducing top-down feedback. Instead of directly predicting the outputs in one go, we use a self-correcting model that progressively changes an initial solution by feeding back error predictions, in a process we call Iterative Error Feedback (IEF). IEF shows excellent performance on the task of articulated pose estimation in the challenging MPII and LSP benchmarks, matching the state-of-the-art without requiring ground truth scale annotation."
]
} |
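The related-work text in the row above describes iterative refinement in which current predictions are fed back with the input and a network with weights shared across passes outputs corrections, with every pass supervised. A toy PyTorch sketch of that pattern follows; this is not the hourglass module or any cited architecture, and all layer sizes are made-up placeholders.

```python
import torch
import torch.nn as nn

class IterativeRefiner(nn.Module):
    """Repeatedly feed predictions back with the image; one shared network
    outputs an additive correction at every pass (supervise each output)."""
    def __init__(self, in_ch=3, n_joints=16, width=64, n_iters=4):
        super().__init__()
        self.n_iters = n_iters
        self.net = nn.Sequential(           # weights shared across iterations
            nn.Conv2d(in_ch + n_joints, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, n_joints, 3, padding=1),
        )

    def forward(self, img):
        b, _, h, w = img.shape
        pred = torch.zeros(b, self.net[-1].out_channels, h, w, device=img.device)
        outputs = []
        for _ in range(self.n_iters):
            delta = self.net(torch.cat([img, pred], dim=1))
            pred = pred + delta             # refine the running prediction
            outputs.append(pred)
        return outputs                      # one prediction per iteration
```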
1603.06937 | 2950762923 | This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a "stacked hourglass" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods. | build on their work in @cite_20 with a cascade to refine predictions. This serves to increase efficiency and reduce memory usage of their method while improving localization performance in the high-precision range @cite_33 . One consideration is that for many failure cases a refinement of position within a local window would not offer much improvement since error cases often consist of either occluded or misattributed limbs. For both situations, any further evaluation at a local scale will not improve the prediction. There are variations to the pose estimation problem which include the use of additional features such as depth or motion cues @cite_34 @cite_46 @cite_38 . Also, there is the more challenging task of simultaneous annotation of multiple people @cite_47 @cite_54 . In addition, there is work like that of @cite_43 that performs human part segmentation based on fully convolutional networks @cite_53 . Our work focuses solely on the task of keypoint localization of a single person's pose from an RGB image. | {
"cite_N": [
"@cite_38",
"@cite_47",
"@cite_33",
"@cite_54",
"@cite_53",
"@cite_43",
"@cite_46",
"@cite_34",
"@cite_20"
],
"mid": [
"",
"2952435733",
"1485037422",
"",
"1903029394",
"2398640840",
"",
"2121969814",
"2952422028"
],
"abstract": [
"",
"This paper presents an approach to parsing humans when there is significant occlusion. We model humans using a graphical model which has a tree structure building on recent work [32, 6] and exploit the connectivity prior that, even in presence of occlusion, the visible nodes form a connected subtree of the graphical model. We call each connected subtree a flexible composition of object parts. This involves a novel method for learning occlusion cues. During inference we need to search over a mixture of different flexible models. By exploiting part sharing, we show that this inference can be done extremely efficiently requiring only twice as many computations as searching for the entire object (i.e., not modeling occlusion). We evaluate our model on the standard benchmarked \"We Are Family\" Stickmen dataset and obtain significant performance improvements over the best alternative algorithms.",
"This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art on the NYU-v2 depth dataset with an accuracy of 64.5 . We illustrate the labeling of indoor scenes in videos sequences that could be processed in real-time using appropriate hardware such as an FPGA.",
"",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"This paper addresses the problem of human body part segmentation in conventional RGB images, which has several applications in robotics, such as learning from demonstration and human-robot handovers. The proposed solution is based on Convolutional Neural Networks (CNNs). We present a network architecture that assigns each pixel to one of a predefined set of human body part classes, such as head, torso, arms, legs. After initializing weights with a very deep convolutional network for image classification, the network can be trained end-to-end and yields precise class predictions at the original input resolution. Our architecture particularly improves on over-fitting issues in the up-convolutional part of the network. Relying only on RGB rather than RGB-D images also allows us to apply the approach outdoors. The network achieves state-of-the-art performance on the PASCAL Parts dataset. Moreover, we introduce two new part segmentation datasets, the Freiburg sitting people dataset and the Freiburg people in disaster dataset. We also present results obtained with a ground robot and an unmanned aerial vehicle.",
"",
"In this work, we propose a novel and efficient method for articulated human pose estimation in videos using a convolutional network architecture, which incorporates both color and motion features. We propose a new human body pose dataset, FLIC-motion (This dataset can be downloaded from http: cs.nyu.edu ajain accv2014 .), that extends the FLIC dataset [1] with additional motion features. We apply our architecture to this dataset and report significantly better performance than current state-of-the-art pose detection systems.",
"This paper proposes a new hybrid architecture that consists of a deep Convolutional Network and a Markov Random Field. We show how this architecture is successfully applied to the challenging problem of articulated human pose estimation in monocular images. The architecture can exploit structural domain constraints such as geometric relationships between body joint locations. We show that joint training of these two model paradigms improves performance and allows us to significantly outperform existing state-of-the-art techniques."
]
} |
1603.06503 | 2305533977 | We study the use of greedy feature selection methods for morphosyntactic tagging under a number of different conditions. We compare a static ordering of features to a dynamic ordering based on mutual information statistics, and we apply the techniques to standalone taggers as well as joint systems for tagging and parsing. Experiments on five languages show that feature selection can result in more compact models as well as higher accuracy under all conditions, but also that a dynamic ordering works better than a static ordering and that joint systems benefit more than standalone taggers. We also show that the same techniques can be used to select which morphosyntactic categories to predict in order to maximize syntactic accuracy in a joint system. Our final results represent a substantial improvement of the state of the art for several languages, while at the same time reducing both the number of features and the running time by up to 80% in some cases. | Since morphosyntactic tagging interacts with other tasks such as word segmentation and syntactic parsing, there has been an increasing interest in joint models that integrate tagging with these other tasks. This line of work includes joint tagging and word segmentation @cite_23 , joint tagging and named entity recognition @cite_28 , joint tagging and parsing @cite_29 @cite_17 @cite_20 @cite_18 @cite_1 , and even joint word segmentation, tagging and parsing @cite_25 . These studies often show improved accuracy from joint inference in one or all of the tasks involved. | {
"cite_N": [
"@cite_18",
"@cite_28",
"@cite_29",
"@cite_1",
"@cite_23",
"@cite_25",
"@cite_20",
"@cite_17"
],
"mid": [
"1541975828",
"8499301",
"",
"2162258831",
"2160097208",
"2250747734",
"2252268838",
"2122179236"
],
"abstract": [
"Most current dependency parsers presuppose that input words have been morphologically disambiguated using a part-of-speech tagger before parsing begins. We present a transition-based system for joint part-of-speech tagging and labeled dependency parsing with non-projective trees. Experimental evaluation on Chinese, Czech, English and German shows consistent improvements in both tagging and parsing accuracy when compared to a pipeline system, which lead to improved state-of-the-art results for all languages.",
"We present a machine learning-based method for jointly labeling POS tags and named entities. This joint labeling is performed by utilizing factor graphs. The variables of part of speech and named entity labels are connected by factors so the tagger jointly determines the best labeling for the two labeling tasks. Using the feature sets of SZTENER and the POS-tagger magyarlanc, we built a model that is able to outperform both of the original taggers.",
"",
"Morphological processes in Semitic languages deliver space-delimited words which introduce multiple, distinct, syntactic units into the structure of the input sentence. These words are in turn highly ambiguous, breaking the assumption underlying most parsers that the yield of a tree for a given sentence is known in advance. Here we propose a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. Using a treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling technique our model outperforms previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12 over the best published results so far.",
"For Chinese POS tagging, word segmentation is a preliminary step. To avoid error propagation and improve segmentation by utilizing POS information, segmentation and tagging can be performed simultaneously. A challenge for this joint approach is the large combined search space, which makes efficient decoding very hard. Recent research has explored the integration of segmentation and POS tagging, by decoding under restricted versions of the full combined search space. In this paper, we propose a joint segmentation and POS tagging model that does not impose any hard constraints on the interaction between word and POS information. Fast decoding is achieved by using a novel multiple-beam search algorithm. The system uses a discriminative statistical model, trained using the generalized perceptron algorithm. The joint model gives an error reduction in segmentation accuracy of 14.6 and an error reduction in tagging accuracy of 12.2 , compared to the traditional pipeline approach.",
"We propose the first joint model for word segmentation, POS tagging, and dependency parsing for Chinese. Based on an extension of the incremental joint model for POS tagging and dependency parsing (, 2011), we propose an efficient character-based decoding method that can combine features from state-of-the-art segmentation, POS tagging, and dependency parsing models. We also describe our method to align comparable states in the beam, and how we can combine features of different characteristics in our incremental framework. In experiments using the Chinese Treebank (CTB), we show that the accuracies of the three tasks can be improved significantly over the baseline models, particularly by 0.6 for POS tagging and 2.4 for dependency parsing. We also perform comparison experiments with the partially joint models.",
"We address the problem of joint part-of-speech (POS) tagging and dependency parsing in Chinese. In Chinese, some POS tags are often hard to disambiguate without considering longrange syntactic information. Also, the traditional pipeline approach to POS tagging and dependency parsing may suffer from the problem of error propagation. In this paper, we propose the first incremental approach to the task of joint POS tagging and dependency parsing, which is built upon a shift-reduce parsing framework with dynamic programming. Although the incremental approach encounters difficulties with underspecified POS tags of look-ahead words, we overcome this issue by introducing so-called delayed features. Our joint approach achieved substantial improvements over the pipeline and baseline systems in both POS tagging and dependency parsing task, achieving the new state-of-the-art performance on this joint task.",
"This paper describes an empirical study of high-performance dependency parsers based on a semi-supervised learning approach. We describe an extension of semi-supervised structured conditional models (SS-SCMs) to the dependency parsing problem, whose framework is originally proposed in (Suzuki and Isozaki, 2008). Moreover, we introduce two extensions related to dependency parsing: The first extension is to combine SS-SCMs with another semi-supervised approach, described in (, 2008). The second extension is to apply the approach to second-order parsing models, such as those described in (Carreras, 2007), using a two-stage semi-supervised learning approach. We demonstrate the effectiveness of our proposed methods on dependency parsing experiments using two widely used test collections: the Penn Treebank for English, and the Prague Dependency Tree-bank for Czech. Our best results on test data in the above datasets achieve 93.79 parent-prediction accuracy for English, and 88.05 for Czech."
]
} |
1603.06503 | 2305533977 | We study the use of greedy feature selection methods for morphosyntactic tagging under a number of different conditions. We compare a static ordering of features to a dynamic ordering based on mutual information statistics, and we apply the techniques to standalone taggers as well as joint systems for tagging and parsing. Experiments on five languages show that feature selection can result in more compact models as well as higher accuracy under all conditions, but also that a dynamic ordering works better than a static ordering and that joint systems benefit more than standalone taggers. We also show that the same techniques can be used to select which morphosyntactic categories to predict in order to maximize syntactic accuracy in a joint system. Our final results represent a substantial improvement of the state of the art for several languages, while at the same time reducing both the number of features and the running time by up to 80% in some cases. | Feature selection has been a staple of statistical NLP since its beginnings, notably selection via frequency cut-offs in part-of-speech tagging @cite_11 . Since then efforts have been made to tie feature selection with model optimization. For instance, used greedy forward selection with respect to model log-likelihood to select features for named entity recognition. Sparse priors, such as L1 regularization, are a common feature selection technique that trades off feature sparsity with the model's objective @cite_8 . extended such sparse regularization techniques to allow a model to deselect entire feature templates, potentially saving entire blocks of feature extraction computation. However, current systems still tend to employ millions of features without selection, relying primarily on model regularization to combat overfitting. Selection of morphological attributes has been carried out previously in and selection of features under similar constraints was carried out by . | {
"cite_N": [
"@cite_8",
"@cite_11"
],
"mid": [
"2133737073",
"17986227"
],
"abstract": [
"This paper presents a comparative study of five parameter estimation algorithms on four NLP tasks. Three of the five algorithms are well-known in the computational linguistics community: Maximum Entropy (ME) estimation with L2 regularization, the Averaged Perceptron (AP), and Boosting. We also investigate ME estimation with L1 regularization using a novel optimization algorithm, and BLasso, which is a version of Boosting with Lasso (L1) regularization. We first investigate all of our estimators on two re-ranking tasks: a parse selection task and a language model (LM) adaptation task. Then we apply the best of these estimators to two additional tasks involving conditional sequence models: a Conditional Markov Model (CMM) for part of speech tagging and a Conditional Random Field (CRF) for Chinese word segmentation. Our experiments show that across tasks, three of the estimators — ME estimation with L1 or L2 regularization, and AP — are in a near statistical tie for first place.",
"Linear models have enjoyed great success in structured prediction in NLP. While a lot of progress has been made on efficient training with several loss functions, the problem of endowing learners with a mechanism for feature selection is still unsolved. Common approaches employ ad hoc filtering or L1-regularization; both ignore the structure of the feature space, preventing practicioners from encoding structural prior knowledge. We fill this gap by adopting regularizers that promote structured sparsity, along with efficient algorithms to handle them. Experiments on three tasks (chunking, entity recognition, and dependency parsing) show gains in performance, compactness, and model interpretability."
]
} |
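
To make the greedy selection procedure from the record above concrete, here is a minimal sketch of forward feature selection with a dynamic ordering: before each greedy step, the remaining feature templates are re-ranked by their mutual information with the tag labels. This is an illustration rather than the paper's implementation; `features`, `evaluate`, and `patience` are hypothetical names, and the MI score here ignores interactions with already-selected templates.

```python
import math
from collections import Counter

def mutual_information(column, labels):
    """Empirical mutual information between one feature column and the tags."""
    n = len(labels)
    joint, px, py = Counter(zip(column, labels)), Counter(column), Counter(labels)
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

def greedy_forward_selection(features, labels, evaluate, patience=3):
    """features: dict of template name -> per-token feature values;
    evaluate: callable scoring a template subset on held-out data.
    Candidates are re-ranked by MI before every step (dynamic ordering)."""
    selected, best, misses = [], float("-inf"), 0
    remaining = set(features)
    while remaining and misses < patience:
        ranked = sorted(remaining,
                        key=lambda f: mutual_information(features[f], labels),
                        reverse=True)
        cand = ranked[0]
        remaining.remove(cand)
        score = evaluate(selected + [cand])
        if score > best:
            selected.append(cand)  # keep the template only if it helps
            best, misses = score, 0
        else:
            misses += 1
    return selected, best
```

A static ordering corresponds to computing `ranked` once before the loop instead of on every iteration.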
1603.06398 | 2951824707 | Shadows often create unwanted artifacts in photographs, and removing them can be very challenging. Previous shadow removal methods often produce de-shadowed regions that are visually inconsistent with the rest of the image. In this work we propose a fully automatic shadow region harmonization approach that improves the appearance compatibility of the de-shadowed region as typically produced by previous methods. It is based on a shadow-guided patch-based image synthesis approach that reconstructs the shadow region using patches sampled from non-shadowed regions. The result is then refined based on the reconstruction confidence to handle unique image patterns. Many shadow removal results and comparisons show the effectiveness of our improvement. Quantitative evaluation on a benchmark dataset suggests that our automatic shadow harmonization approach effectively improves upon the state-of-the-art. | Shadow removal is an extensively studied problem and modern approaches are well summarized in recent surveys @cite_21 @cite_6 . Shadow analysis is also closely related to intrinsic image decomposition @cite_4 @cite_19 -- the problem of separating an image into reflectance and illumination components -- though shadow removal focuses on the illumination variation caused by occluded light sources. | {
"cite_N": [
"@cite_19",
"@cite_21",
"@cite_4",
"@cite_6"
],
"mid": [
"2113404166",
"83478774",
"39428922",
"1976363322"
],
"abstract": [
"The intrinsic image decomposition aims to retrieve “intrinsic” properties of an image, such as shading and reflectance. To make it possible to quantitatively compare different approaches to this problem in realistic settings, we present a ground-truth dataset of intrinsic image decompositions for a variety of real-world objects. For each object, we separate an image of it into three components: Lambertian shading, reflectance, and specularities. We use our dataset to quantitatively compare several existing algorithms; we hope that this dataset will serve as a means for evaluating future work on intrinsic images.",
"Abstract. Shadow detection and removal in real scene images is always a challenging but yet intriguing problem. In contrast with the rapidly expanding and continuous interests on this area, the authors are unaware of any comprehensive surveys on this topic. This paper aimed to give a comprehensive and critical survey of current shadow detection and removal methods. Algorithms are categorized into there sets by their different functions and assumptions about the scenes. A discussion of reasonable evaluation is given at the end of this survey. 1 Introduction Shadows and shadings in images have long been disruptive to computer vision algorithms. They appear as surface features, when in fact they are caused by the interaction between light and objects. This may lead to problems in scene understanding, object segmentation, tracking, recognition, etc. Because of the undesirable effects of shadows on image analysis, much attention was paid to the area of shadow detection and removal over the past decades and covered many specific applications such as traffic surveillance [1, 2], face recognition [3, 4, 5] and image segmentation [6]. In spite of these extensive studies, more researches focus on providing a general method for arbitrary scene images and thereby obtaining “visually pleasing” shadow free images. In contrast with the rapidly expanding interests on shadow removal, no comprehensive survey is reported on this particular topic. A list of recent work on this area is reported in [7] but algorithm details are missed. A survey was conducted by A. Prati [8, 9] on the moving cast shadow detection, which is the part of current interests. This paper aims to give a relatively comprehensive study on the current methods of detecting and removing shadows in both still and moving images. Before going into the detailed algorithms, we first review some different kinds of shadows in natural scenes.",
"We suggest that an appropriate role of early visual processing is to describe a scene in terms of intrinsic (vertical) characteristics -- such as range, orientation, reflectance, and incident illumination -- of the surface element visible at each point in the image. Support for this idea comes from three sources: the obvious utility of intrinsic characteristics for higher-level scene analysis; the apparent ability of humans to determine these characteristics, regardless of viewing conditions or familiarity with the scene; and a theoretical argument that such a description is obtainable, by a noncognitive and nonpurposive process, at least, for simple scene domains. The central problem in recovering intrinsic scene characteristics is that the information is confounded in the original light-intensity image: a single intensity value encodes all the characteristics of the corresponding scene point. Recovery depends on exploiting constraints, derived from assumptions about the nature of the scene and the physics of the imaging process.",
"This paper presents a survey and a comparative evaluation of recent techniques for moving cast shadow detection. We identify shadow removal as a critical step for improving object detection and tracking. The survey covers methods published during the last decade, and places them in a feature-based taxonomy comprised of four categories: chromacity, physical, geometry and textures. A selection of prominent methods across the categories is compared in terms of quantitative performance measures (shadow detection and discrimination rates, colour desaturation) as well as qualitative observations. Furthermore, we propose the use of tracking performance as an unbiased approach for determining the practical usefulness of shadow detection methods. The evaluation indicates that all shadow detection approaches make different contributions and all have individual strength and weaknesses. Out of the selected methods, the geometry-based technique has strict assumptions and is not generalisable to various environments, but it is a straightforward choice when the objects of interest are easy to model and their shadows have different orientation. The chromacity based method is the fastest to implement and run, but it is sensitive to noise and less effective in low saturated scenes. The physical method improves upon the accuracy of the chromacity method by adapting to local shadow models, but fails when the spectral properties of the objects are similar to that of the background. The small-region texture based method is especially robust for pixels whose neighbourhood is textured, but may take longer to implement and is the most computationally expensive. The large-region texture based method produces the most accurate results, but has a significant computational load due to its multiple processing steps."
]
} |
1603.06398 | 2951824707 | Shadows often create unwanted artifacts in photographs, and removing them can be very challenging. Previous shadow removal methods often produce de-shadowed regions that are visually inconsistent with the rest of the image. In this work we propose a fully automatic shadow region harmonization approach that improves the appearance compatibility of the de-shadowed region as typically produced by previous methods. It is based on a shadow-guided patch-based image synthesis approach that reconstructs the shadow region using patches sampled from non-shadowed regions. The result is then refined based on the reconstruction confidence to handle unique image patterns. Many shadow removal results and comparisons show the effectiveness of our improvement. Quantitative evaluation on a benchmark dataset suggests that our automatic shadow harmonization approach effectively improves upon the state-of-the-art. | Because of the inherently ambiguous nature of shadow detection and removal, many previous approaches require manual specification of the shadow region @cite_13 @cite_17 @cite_8 . Given this input, shadow removal can be posed as a matting or labeling problem. | {
"cite_N": [
"@cite_8",
"@cite_13",
"@cite_17"
],
"mid": [
"2117104107",
"2003145026",
""
],
"abstract": [
"In this article, we develop tools for shadow modification in images where a shadowed region is characterized by soft boundaries with varying sharpness along the shadow edges. Modeling shadow edges presents an interesting challenge because they can vary from infinitely sharp edges for shadows produced by a point light source to extremely soft edges for shadows produced by large area light sources. We propose an entirely image-based shadow editing tool for a single-input image. This technique for modeling, editing, and rendering shadow edges in a photograph or a synthetic image lets users separate the shadow from the rest of the image and make arbitrary adjustments to its position, sharpness, and intensity. These machine-adjustable photographs can offer interactivity that might improve images' expressiveness and help us investigate the influence of boundary sharpness on the perception of object-to-object contact, as well as understand how humans assess shadows to estimate object height above a ground plane",
"In this paper we propose a novel method for detecting and removing shadows from a single image thereby obtaining a high-quality shadow-free image. With minimal user assistance, we first identify shadowed and lit areas on the same surface in the scene using an illumination-invariant distance measure. These areas are used to estimate the parameters of an affine shadow formation model. A novel pyramid-based restoration process is then applied to produce a shadow-free image, while avoiding loss of texture contrast and introduction of noise. Unlike previous approaches, we account for varying shadow intensity inside the shadowed region by processing it from the interior towards the boundaries. Finally, to ensure a seamless transition between the original and the recovered regions we apply image inpainting along a thin border. We demonstrate that our approach produces results that are in most cases superior in quality to those of previous shadow removal methods. We also show that it is possible to easily composite the extracted shadow onto a new background or modify its size and direction in the original image.",
""
]
} |
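
The affine shadow formation model mentioned in the abstracts above (lit ≈ a·shadowed + b, per color channel) can be illustrated with a simple least-squares fit from paired samples of the same surface material inside and outside the shadow. This is a sketch under that simplifying assumption, not any cited paper's code; `shadow_pixels`, `lit_pixels`, and `mask` are hypothetical inputs.

```python
import numpy as np

def fit_affine_shadow_model(shadow_pixels, lit_pixels):
    """Fit lit = a * shadow + b per RGB channel by least squares.
    Inputs are (N, 3) float arrays of paired samples taken from the
    same surface inside and outside the shadow."""
    params = []
    for ch in range(3):
        x, y = shadow_pixels[:, ch], lit_pixels[:, ch]
        A = np.stack([x, np.ones_like(x)], axis=1)      # design matrix [x, 1]
        (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)  # minimise ||A @ [a, b] - y||
        params.append((a, b))
    return params

def relight_shadow_region(image, mask, params):
    """Apply the fitted per-channel affine correction inside the boolean mask."""
    out = image.astype(np.float64).copy()
    for ch, (a, b) in enumerate(params):
        channel = out[..., ch]
        channel[mask] = a * channel[mask] + b
    return np.clip(out, 0, 255).astype(np.uint8)
```

Note that, as several of the abstracts point out, such a per-channel affine correction cannot restore the dynamic range or noise characteristics lost inside the shadow, which is precisely the inconsistency the harmonization approach targets.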
1603.06398 | 2951824707 | Shadows often create unwanted artifacts in photographs, and removing them can be very challenging. Previous shadow removal methods often produce de-shadowed regions that are visually inconsistent with the rest of the image. In this work we propose a fully automatic shadow region harmonization approach that improves the appearance compatibility of the de-shadowed region as typically produced by previous methods. It is based on a shadow-guided patch-based image synthesis approach that reconstructs the shadow region using patches sampled from non-shadowed regions. The result is then refined based on the reconstruction confidence to handle unique image patterns. Many shadow removal results and comparisons show the effectiveness of our improvement. Quantitative evaluation on a benchmark dataset suggests that our automatic shadow harmonization approach effectively improves upon the state-of-the-art. | Patch-based synthesis has shown great success in image and video completion and other editing tasks since the introduction of @cite_16 -- a fast approximate method for computing patch-based dense correspondences. PatchMatch has been generalized to support scaling and rotation of patches @cite_5 as well as gain/bias of each individual color channel @cite_18 . This family of techniques has been widely used for finding patch correspondence in a rotation and scale-invariant manner, and can also handle differences in illumination conditions. | {
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_16"
],
"mid": [
"1763426478",
"2106505277",
"1993120651"
],
"abstract": [
"PatchMatch is a fast algorithm for computing dense approximate nearest neighbor correspondences between patches of two image regions [1]. This paper generalizes PatchMatch in three ways: (1) to find k nearest neighbors, as opposed to just one, (2) to search across scales and rotations, in addition to just translations, and (3) to match using arbitrary descriptors and distances, not just sum-of-squared-differences on patch colors. In addition, we offer new search and parallelization strategies that further accelerate the method, and we show performance improvements over standard kd-tree techniques across a variety of inputs. In contrast to many previous matching algorithms, which for efficiency reasons have restricted matching to sparse interest points, or spatially proximate matches, our algorithm can efficiently find global, dense matches, even while matching across all scales and rotations. This is especially useful for computer vision applications, where our algorithm can be used as an efficient general-purpose component. We explore a variety of vision applications: denoising, finding forgeries by detecting cloned regions, symmetry detection, and object detection.",
"This paper presents a new efficient method for recovering reliable local sets of dense correspondences between two images with some shared content. Our method is designed for pairs of images depicting similar regions acquired by different cameras and lenses, under non-rigid transformations, under different lighting, and over different backgrounds. We utilize a new coarse-to-fine scheme in which nearest-neighbor field computations using Generalized PatchMatch [ 2010] are interleaved with fitting a global non-linear parametric color model and aggregating consistent matching regions using locally adaptive constraints. Compared to previous correspondence approaches, our method combines the best of two worlds: It is dense, like optical flow and stereo reconstruction methods, and it is also robust to geometric and photometric variations, like sparse feature matching. We demonstrate the usefulness of our method using three applications for automatic example-based photograph enhancement: adjusting the tonal characteristics of a source image to match a reference, transferring a known mask to a new image, and kernel estimation for image deblurring.",
"This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. This one simple algorithm forms the basis for a variety of tools -- image retargeting, completion and reshuffling -- that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods."
]
} |
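
The PatchMatch algorithm summarized above rests on two observations: random sampling finds some good patch matches, and natural image coherence lets those matches propagate to neighboring patches. The following is a deliberately simplified, translation-only sketch of that loop for grayscale NumPy arrays (no rotations, scales, k nearest neighbors, or custom descriptors):

```python
import numpy as np

def patch_dist(A, B, ay, ax, by, bx, p):
    """Sum of squared differences between two p x p patches."""
    d = A[ay:ay + p, ax:ax + p] - B[by:by + p, bx:bx + p]
    return np.sum(d * d)

def patchmatch(A, B, p=7, iters=4, seed=0):
    """Approximate nearest-neighbour field from A to B via
    random init, propagation, and random search."""
    A, B = A.astype(np.float64), B.astype(np.float64)
    rng = np.random.default_rng(seed)
    Ah, Aw = A.shape[0] - p + 1, A.shape[1] - p + 1
    Bh, Bw = B.shape[0] - p + 1, B.shape[1] - p + 1
    nnf = np.stack([rng.integers(0, Bh, (Ah, Aw)),
                    rng.integers(0, Bw, (Ah, Aw))], axis=-1)
    cost = np.array([[patch_dist(A, B, y, x, *nnf[y, x], p)
                      for x in range(Aw)] for y in range(Ah)])

    def try_improve(y, x, by, bx):
        if 0 <= by < Bh and 0 <= bx < Bw:
            c = patch_dist(A, B, y, x, by, bx, p)
            if c < cost[y, x]:
                nnf[y, x], cost[y, x] = (by, bx), c

    for it in range(iters):
        step = 1 if it % 2 == 0 else -1  # alternate scan direction each pass
        ys = range(Ah) if step == 1 else range(Ah - 1, -1, -1)
        xs = range(Aw) if step == 1 else range(Aw - 1, -1, -1)
        for y in ys:
            for x in xs:
                # propagation: shift a neighbour's offset onto this patch
                for dy, dx in ((-step, 0), (0, -step)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < Ah and 0 <= nx < Aw:
                        try_improve(y, x, nnf[ny, nx][0] - dy, nnf[ny, nx][1] - dx)
                # random search in an exponentially shrinking window
                r = max(Bh, Bw)
                while r >= 1:
                    try_improve(y, x,
                                nnf[y, x][0] + rng.integers(-r, r + 1),
                                nnf[y, x][1] + rng.integers(-r, r + 1))
                    r //= 2
    return nnf
```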
1603.06398 | 2951824707 | Shadows often create unwanted artifacts in photographs, and removing them can be very challenging. Previous shadow removal methods often produce de-shadowed regions that are visually inconsistent with the rest of the image. In this work we propose a fully automatic shadow region harmonization approach that improves the appearance compatibility of the de-shadowed region as typically produced by previous methods. It is based on a shadow-guided patch-based image synthesis approach that reconstructs the shadow region using patches sampled from non-shadowed regions. The result is then refined based on the reconstruction confidence to handle unique image patterns. Many shadow removal results and comparisons show the effectiveness of our improvement. Quantitative evaluation on a benchmark dataset suggests that our automatic shadow harmonization approach effectively improves upon the state-of-the-art. | Existing shadow removal approaches often produce inconsistency artifacts in recovered shadow regions, due to the violation of the simplified shadow models they use; these include both color and texture inconsistencies. Specifically, most approaches cannot model the loss of dynamic range in shadow regions @cite_22 @cite_8 , which leads to inconsistent noise properties and texture characteristics between recovered shadow regions and non-shadow regions, such as the examples in res2 . Pixel-based approaches @cite_0 @cite_22 @cite_3 suffer from inaccuracies in the estimation of the shadow parameters, leading to color shifts or residual shadows in the recovered shadow regions ( ia (top)). To correct such artifacts some methods leverage shadow/non-shadow region correspondence @cite_13 @cite_17 @cite_14 , or region-based color transfer @cite_9 . However, they are still not robust to complex spatially-varying textures, complex reflection and shading properties. res1 (e) shows an example with a colorful translucent occluder, where the green color cannot be eliminated using any existing shadow removal methods. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_13",
"@cite_17"
],
"mid": [
"2131579142",
"2149550213",
"2117104107",
"2126236382",
"2034540741",
"1704999350",
"2003145026",
""
],
"abstract": [
"We present an interactive, robust and high quality method for fast shadow removal. To perform detection we use an on-the-fly learning approach guided by two rough user inputs for the pixels of the shadow and the lit area. From this we derive a fusion image that magnifies shadow boundary intensity change due to illumination variation. After detection, we perform shadow removal by registering the penumbra to a normalised frame which allows us to efficiently estimate non-uniform shadow illumination changes, resulting in accurate and robust removal. We also present the first reliable, validated and multi-scene category ground truth for shadow removal algorithms which overcomes limitations in existing data sets -- such as inconsistencies between shadow and shadow-free images and limited variations of shadows. Using our data, we perform the most thorough comparison of state of the art shadow removal methods to date. Our algorithm outperforms the state of the art, and we supply our P-code and evaluation data and scripts to encourage future open comparisons.",
"A method was recently devised for the recovery of an invariant image from a 3-band colour image. The invariant image, originally 1D greyscale but here derived as a 2D chromaticity, is independent of lighting, and also has shading removed: it forms an intrinsic image that may be used as a guide in recovering colour images that are independent of illumination conditions. Invariance to illuminant colour and intensity means that such images are free of shadows, as well, to a good degree. The method devised finds an intrinsic reflectivity image based on assumptions of Lambertian reflectance, approximately Planckian lighting, and fairly narrowband camera sensors. Nevertheless, the method works well when these assumptions do not hold. A crucial piece of information is the angle for an “invariant direction” in a log-chromaticity space. To date, we have gleaned this information via a preliminary calibration routine, using the camera involved to capture images of a colour target under different lights. In this paper, we show that we can in fact dispense with the calibration step, by recognizing a simple but important fact: the correct projection is that which minimizes entropy in the resulting invariant image. To show that this must be the case we first consider synthetic images, and then apply the method to real images. We show that not only does a correct shadow-free image emerge, but also that the angle found agrees with that recovered from a calibration. As a result, we can find shadow-free images for images with unknown camera, and the method is applied successfully to remove shadows from unsourced imagery.",
"In this article, we develop tools for shadow modification in images where a shadowed region is characterized by soft boundaries with varying sharpness along the shadow edges. Modeling shadow edges presents an interesting challenge because they can vary from infinitely sharp edges for shadows produced by a point light source to extremely soft edges for shadows produced by large area light sources. We propose an entirely image-based shadow editing tool for a single-input image. This technique for modeling, editing, and rendering shadow edges in a photograph or a synthetic image lets users separate the shadow from the rest of the image and make arbitrary adjustments to its position, sharpness, and intensity. These machine-adjustable photographs can offer interactivity that might improve images' expressiveness and help us investigate the influence of boundary sharpness on the perception of object-to-object contact, as well as understand how humans assess shadows to estimate object height above a ground plane",
"This paper presents a novel user-aided method for texture-preserving shadow removal from single images which only requires simple user input. Compared with the state-of-the-art, our algorithm addresses limitations in uneven shadow boundary processing and umbra recovery. We first detect an initial shadow boundary by growing a user specified shadow outline on an illumination-sensitive image. Interval-variable intensity sampling is introduced to avoid artefacts raised from uneven boundaries. We extract the initial scale field by applying local group intensity spline fittings around the shadow boundary. Bad intensity samples are replaced by their nearest alternatives based on a log-normal probability distribution of fitting errors. Finally, we use a gradual colour transfer to correct post-processing artefacts such as gamma correction and lossy compression. Compared with state-of-the-art methods, we offer highly user-friendly interaction, produce improved umbra recovery and improved processing given uneven shadow boundaries.",
"",
"Illumination conditions cause problems for many computer vision algorithms. In particular, shadows in an image can cause segmentation, tracking, or recognition algorithms to fail. In this paper we propose a method to process a 3-band colour image to locate, and subsequently remove shadows. The result is a 3-band colour image which contains all the original salient information in the image, except that the shadows are gone.We use the method set out in [1] to derive a 1-d illumination invariant shadow-free image. We then use this invariant image together with the original image to locate shadow edges. By setting these shadow edges to zero in an edge representation of the original image, and by subsequently re-integrating this edge representation by a method paralleling lightness recovery, we are able to arrive at our sought after full colour, shadow free image. Preliminary results reported in the paper show that the method is effective.A caveat for the application of the method is that we must have a calibrated camera. We show in this paper that a good calibration can be achieved simply by recording a sequence of images of a fixed outdoor scene over the course of a day. After calibration, only a single image is required for shadow removal. It is shown that the resulting calibration is close to that achievable using measurements of the camera's sensitivity functions.",
"In this paper we propose a novel method for detecting and removing shadows from a single image thereby obtaining a high-quality shadow-free image. With minimal user assistance, we first identify shadowed and lit areas on the same surface in the scene using an illumination-invariant distance measure. These areas are used to estimate the parameters of an affine shadow formation model. A novel pyramid-based restoration process is then applied to produce a shadow-free image, while avoiding loss of texture contrast and introduction of noise. Unlike previous approaches, we account for varying shadow intensity inside the shadowed region by processing it from the interior towards the boundaries. Finally, to ensure a seamless transition between the original and the recovered regions we apply image inpainting along a thin border. We demonstrate that our approach produces results that are in most cases superior in quality to those of previous shadow removal methods. We also show that it is possible to easily composite the extracted shadow onto a new background or modify its size and direction in the original image.",
""
]
} |
1603.06549 | 2949366051 | Compressed bitmap indexes are used in databases and search engines. Many bitmap compression techniques have been proposed, almost all relying primarily on run-length encoding (RLE). However, on unsorted data, we can get superior performance with a hybrid compression technique that uses both uncompressed bitmaps and packed arrays inside a two-level tree. An instance of this technique, Roaring, has recently been proposed. Due to its good performance, it has been adopted by several production platforms (e.g., Apache Lucene, Apache Spark, Apache Kylin and Druid). Yet there are cases where run-length encoded bitmaps are smaller than the original Roaring bitmaps---typically when the data is sorted so that the bitmaps contain long compressible runs. To better handle these cases, we build a new Roaring hybrid that combines uncompressed bitmaps, packed arrays and RLE compressed segments. The result is a new Roaring format that compresses better. Overall, our new implementation of Roaring can be several times faster (up to two orders of magnitude) than the implementations of traditional RLE-based alternatives (WAH, Concise, EWAH) while compressing better. We review the design choices and optimizations that make these good results possible. | EWAH is similar to WAH except that it uses a marker word that indicates the number of fill words to follow, their type, as well as the number of literal words to follow. Unlike WAH and Concise, which represent the bitmap as a series of @math -bit words, EWAH uses @math -bit words. The EWAH format @cite_21 supports a limited form of skipping because it uses marker words to record the length of the sequences of fill and literal words. For EWAH, if there are long sequences of literal words, one does not need to access them all when seeking data that is further along. @cite_15 @cite_20 found that EWAH offers better speed than WAH and Concise, and our own experiments support this observation (see ). | {
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_20"
],
"mid": [
"2081858681",
"1966678916",
"2248135252"
],
"abstract": [
"Bitmap indices are widely used for large read-only repositories in data warehouses and scientific databases. Their binary representation allows for the use of bitwise operations and specialized run-length compression techniques. Due to a trade-off between compression and query efficiency, bitmap compression schemes are aligned using a fixed encoding length size (typically the word length) to avoid explicit decompression during query time. In general, smaller encoding lengths provide better compression, but require more decoding during query execution. However, when the difference in size is considerable, it is possible for smaller encodings to also provide better execution time. We posit that a tailored encoding length for each bit vector will provide better performance than a one-size-fits-all approach. We present a framework that optimizes compression and query efficiency by allowing bitmaps to be compressed using variable encoding lengths while still maintaining alignment to avoid explicit decompression. Efficient algorithms are introduced to process queries over bitmaps compressed using different encoding lengths. An input parameter controls the aggressiveness of the compression providing the user with the ability to tune the tradeoff between space and query time. Our empirical study shows this approach achieves significant improvements in terms of both query time and compression ratio for synthetic and real data sets. Compared to 32-bit WAH, VAL-WAH produces up to 1.8× smaller bitmaps and achieves query times that are 30 faster.",
"Bitmap indexes must be compressed to reduce input output costs and minimize CPU usage. To accelerate logical operations (AND, OR, XOR) over bitmaps, we use techniques based on run-length encoding (RLE), such as Word-Aligned Hybrid (WAH) compression. These techniques are sensitive to the order of the rows: a simple lexicographical sort can divide the index size by 9 and make indexes several times faster. We investigate row-reordering heuristics. Simply permuting the columns of the table can increase the sorting efficiency by 40 . Secondary contributions include efficient algorithms to construct and aggregate bitmaps. The effect of word length is also reviewed by constructing 16-bit, 32-bit and 64-bit indexes. Using 64-bit CPUs, we find that 64-bit indexes are slightly faster than 32-bit indexes despite being nearly twice as large.",
"Bit-vectors are widely used for indexing and summarizing data due to their efficient processing in modern computers. Sparse bit-vectors can be further compressed to reduce their space requirement. Special compression schemes based on run-length encoders have been designed to avoid explicit decompression and minimize the decoding overhead during query execution. Moreover, highly compressed bit-vectors can exhibit a faster query time than the non-compressed ones. However, for hard-to-compress bit-vectors, compression does not speed up queries and can add considerable overhead. In these cases, bit-vectors are often stored verbatim (non-compressed). On the other hand, queries are answered by executing a cascade of bit-wise operations involving indexed bit-vectors and intermediate results. Often, even when the original bit-vectors are hard to compress, the intermediate results become sparse. It could be feasible to improve query performance by compressing these bit-vectors as the query is executed. In this scenario, it would be necessary to operate verbatim and compressed bit-vectors together. In this paper, we propose a hybrid framework where compressed and verbatim bitmaps can coexist and design algorithms to execute queries under this hybrid model. Our query optimizer is able to decide at run time when to compress or decompress a bit-vector. Our heuristics show that the applications using higher-density bitmaps can benefit from using this hybrid model, improving both their query time and memory utilization."
]
} |
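
The marker-word layout that the related-work paragraph above attributes to EWAH can be made concrete with a toy encoder over 64-bit words. In this sketch the markers are kept as Python tuples instead of being packed into real 64-bit marker words, so the field widths and on-disk layout are simplified assumptions:

```python
ALL_ONES = (1 << 64) - 1  # a 64-bit fill word of ones

def ewah_encode(words):
    """Group a sequence of 64-bit words into (marker, literals) runs.
    Each marker records the fill bit, how many fill words it stands for,
    and how many verbatim literal words follow."""
    out, i, n = [], 0, len(words)
    while i < n:
        fill_bit, fills = 0, 0
        if words[i] in (0, ALL_ONES):            # count a run of fill words
            fill_bit = 1 if words[i] == ALL_ONES else 0
            while i < n and words[i] == (ALL_ONES if fill_bit else 0):
                fills += 1
                i += 1
        literals = []
        while i < n and words[i] not in (0, ALL_ONES):
            literals.append(words[i])            # "dirty" words stored verbatim
            i += 1
        out.append(((fill_bit, fills, len(literals)), literals))
    return out

def ewah_decode(runs):
    words = []
    for (fill_bit, fills, _), literals in runs:
        words.extend([ALL_ONES if fill_bit else 0] * fills)
        words.extend(literals)
    return words

bitmap = [0, 0, 0, 0b1011, ALL_ONES, ALL_ONES, 0b1, 0]
assert ewah_decode(ewah_encode(bitmap)) == bitmap
```

Because each marker states how many fill and literal words follow it, a reader can hop from marker to marker without touching the literal words, which is the limited form of skipping the paragraph describes.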
1603.06549 | 2949366051 | Compressed bitmap indexes are used in databases and search engines. Many bitmap compression techniques have been proposed, almost all relying primarily on run-length encoding (RLE). However, on unsorted data, we can get superior performance with a hybrid compression technique that uses both uncompressed bitmaps and packed arrays inside a two-level tree. An instance of this technique, Roaring, has recently been proposed. Due to its good performance, it has been adopted by several production platforms (e.g., Apache Lucene, Apache Spark, Apache Kylin and Druid). Yet there are cases where run-length encoded bitmaps are smaller than the original Roaring bitmaps---typically when the data is sorted so that the bitmaps contain long compressible runs. To better handle these cases, we build a new Roaring hybrid that combines uncompressed bitmaps, packed arrays and RLE compressed segments. The result is a new Roaring format that compresses better. Overall, our new implementation of Roaring can be several times faster (up to two orders of magnitude) than the implementations of traditional RLE-based alternatives (WAH, Concise, EWAH) while compressing better. We review the design choices and optimizations that make these good results possible. | Besides WAH, Concise and EWAH, there are many other similar alternatives. We refer the interested reader to @cite_5 , who review over a dozen compressed bitmap formats. | {
"cite_N": [
"@cite_5"
],
"mid": [
"125841639"
],
"abstract": [
"With the growing popularity of Internet applications and the widespread use of mobile Internet, Internet traffic has maintained rapid growth over the past two decades. Internet Traffic Archival Systems (ITAS) for packets or flow records have become more and more widely used in network monitoring, network troubleshooting, and user behavior and experience analysis. Among the three key technologies in ITAS, we focus on bitmap index compression algorithm and give a detailed survey in this paper. The current state-of-the-art bitmap index encoding schemes include: BBC, WAH, PLWAH, EWAH, PWAH, CONCISE, COMPAX, VLC, DF-WAH, and VAL-WAH. Based on differences in segmentation, chunking, merge compress, and Near Identical (NI) features, we provide a thorough categorization of the state-of-the-art bitmap index compression algorithms. We also propose some new bitmap index encoding algorithms, such as SECOMPAX, ICX, MASC, and PLWAH+, and present the state diagrams for their encoding algorithms. We then evaluate their CPU and GPU implementations with a real Internet trace from CAIDA. Finally, we summarize and discuss the future direction of bitmap index compression algorithms. Beyond the application in network security and network forensic, bitmap index compression with faster bitwise-logical operations and reduced search space is widely used in analysis in genome data, geographical information system, graph databases, image retrieval, Internet of things, etc. It is expected that bitmap index compression will thrive and be prosperous again in Big Data era since 1980s."
]
} |
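
The hybrid container choice at the heart of Roaring, as the abstract above describes it, can be illustrated by comparing approximate storage costs of the three container types for a single 2^16-value chunk. The cost constants below are simplified assumptions for illustration, not the exact thresholds used by Roaring implementations:

```python
def count_runs(sorted_vals):
    """Number of maximal runs of consecutive integers."""
    runs, prev = 0, None
    for v in sorted_vals:
        if prev is None or v != prev + 1:
            runs += 1
        prev = v
    return runs

def best_container(chunk_values):
    """Pick the cheapest Roaring-style container for one 2^16 chunk."""
    card = len(chunk_values)
    runs = count_runs(sorted(chunk_values))
    array_cost = 2 * card        # one 16-bit value per element
    bitmap_cost = 8192           # fixed 2^16-bit bitset (8 KiB)
    run_cost = 4 * runs + 2      # 16-bit start + 16-bit length per run
    return min((array_cost, "array"), (bitmap_cost, "bitmap"),
               (run_cost, "run"))[1]

chunk = [5, 9] + list(range(1000, 1200)) + [40000]  # dominated by one long run
print(best_container(chunk))  # -> 'run'
```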
1603.06744 | 2304240348 | Many language generation tasks require the production of text conditioned on both structured and unstructured inputs. We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions. Crucially, our approach allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training. Using this framework, we address the problem of generating programming code from a mixed natural language and structured specification. We create two new data sets for this paradigm derived from the collectible trading card games Magic the Gathering and Hearthstone. On these, and a third preexisting corpus, we demonstrate that marginalising multiple predictors allows our model to outperform strong benchmarks. | While we target widely used programming languages, namely, Java and Python, our work is related to studies on the generation of any executable code. These include generating regular expressions @cite_33 , and the code for parsing input documents @cite_19 . Much research has also been invested in generating formal languages, such as database queries @cite_11 @cite_18 , agent-specific languages @cite_15 or smartphone instructions @cite_23 . Other work maps natural language into a sequence of actions for the generation of executable code @cite_21 . A considerable effort in this task has also focused on semantic parsing @cite_16 @cite_6 @cite_19 @cite_1 @cite_10 . Recently proposed models focus on Combinatory Categorial Grammars @cite_33 @cite_1 , Bayesian Tree Transducers @cite_6 @cite_19 and Probabilistic Context Free Grammars @cite_13 . The work in natural language programming @cite_2 @cite_30 , where users write lines of code from natural language, is also related to our work. Finally, the reverse mapping from code into natural language is explored in @cite_8 .
"cite_N": [
"@cite_13",
"@cite_30",
"@cite_18",
"@cite_33",
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_11"
],
"mid": [
"2124204950",
"2696813",
"2252136820",
"",
"2242083635",
"2122223050",
"2250623140",
"2147389891",
"2170732969",
"2166847432",
"2251185410",
"2107618763",
"2102258316",
"2158396456",
"2163274265"
],
"abstract": [
"Semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance. Here we approach it as a straightforward machine translation task, and demonstrate that standard machine translation components can be adapted into a semantic parser. In experiments on the multilingual GeoQuery corpus we find that our parser is competitive with the state of the art, and in some cases achieves higher accuracy than recently proposed purpose-built systems. These results support the use of machine translation methods as an informative baseline in semantic parsing evaluations, and suggest that research in semantic parsing could benefit from advances in machine translation.",
"We motivate the integration of programming by example and natural language programming by developing a system for specifying programs for simple text editing operations based on regular expressions. The programs are described with unconstrained natural language instructions, and providing one or more examples of input output. We show that natural language allows the system to deduce the correct program much more often and much faster than is possible with the input output example(s) alone, showing that natural language programming and programming by example can be combined in a way that overcomes the ambiguities that both methods suffer from individually and, at the same time, provides a more natural interface to the user.",
"In this paper, we train a semantic parser that scales up to Freebase. Instead of relying on annotated logical forms, which is especially expensive to obtain at large scale, we learn from question-answer pairs. The main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question. We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus. Second, we use a bridging operation to generate additional predicates based on neighboring predicates. On the dataset of Cai and Yates (2013), despite not having annotated logical forms, our system outperforms their state-of-the-art parser. Additionally, we collected a more realistic and challenging dataset of question-answer pairs and improves over a natural baseline.",
"",
"Pseudo-code written in natural language can aid the comprehension of source code in unfamiliar programming languages. However, the great majority of source code has no corresponding pseudo-code, because pseudo-code is redundant and laborious to create. If pseudo-code could be generated automatically and instantly from given source code, we could allow for on-demand production of pseudo-code without human effort. In this paper, we propose a method to automatically generate pseudo-code from source code, specifically adopting the statistical machine translation (SMT) framework. SMT, which was originally designed to translate between two natural languages, allows us to automatically learn the relationship between source code pseudo-code pairs, making it possible to create a pseudo-code generator with less human effort. In experiments, we generated English or Japanese pseudo-code from Python statements using SMT, and find that the generated pseudo-code is largely accurate, and aids code understanding.",
"In this paper, we present a reinforcement learning approach for mapping natural language instructions to sequences of executable actions. We assume access to a reward function that defines the quality of the executed actions. During training, the learner repeatedly constructs action sequences for a set of documents, executes those actions, and observes the resulting reward. We use a policy gradient algorithm to estimate the parameters of a log-linear model for action selection. We apply our method to interpret instructions in two domains --- Windows troubleshooting guides and game tutorials. Our results demonstrate that this method can rival supervised learning techniques while requiring few or no annotated training examples.",
"We propose a grammar induction technique for AMR semantic parsing. While previous grammar induction techniques were designed to re-learn a new parser for each target application, the recently annotated AMR Bank provides a unique opportunity to induce a single model for understanding broad-coverage newswire text and support a wide range of applications. We present a new model that combines CCG parsing to recover compositional aspects of meaning and a factor graph to model non-compositional phenomena, such as anaphoric dependencies. Our approach achieves 66.2 Smatch F1 score on the AMR bank, significantly outperforming the previous state of the art.",
"Many semantic parsing models use tree transformations to map between natural language and meaning representation. However, while tree transformations are central to several state-of-the-art approaches, little use has been made of the rich literature on tree automata. This paper makes the connection concrete with a tree transducer based semantic parsing model and suggests that other models can be interpreted in a similar framework, increasing the generality of their contributions. In particular, this paper further introduces a variational Bayesian inference algorithm that is applicable to a wide class of tree transducers, producing state-of-the-art semantic parsing results while remaining applicable to any domain employing probabilistic tree transducers.",
"We present a method for automatically generating input parsers from English specifications of input file formats. We use a Bayesian generative model to capture relevant natural language phenomena and translate the English specification into a specification tree, which is then translated into a C++ input parser. We model the problem as a joint dependency parsing and semantic role labeling task. Our method is based on two sources of information: (1) the correlation between the text and the specification tree and (2) noisy supervision as determined by the success of the generated C++ parser in reading input examples. Our results show that our approach achieves 80.0 F-Score accuracy compared to an F-Score of 66.7 produced by a state-of-the-art semantic parser on a dataset of input format specifications from the ACM International Collegiate Programming Contest (which were written in English for humans with no intention of providing support for automated processing). 1",
"This paper presents SmartSynth, a novel end-to-end programming system for synthesizing smartphone automation scripts from natural language descriptions. Our approach is unique in two key aspects. First, it involves a carefully designed domain-specific language that incorporates standard constructs from smartphone programming platforms to balance its expressivity and the ability to synthesize scripts from natural language. Second, our synthesis algorithm integrates techniques from two research areas: (1) It infers the set of components and their partial dataflow relations from the natural language description using techniques from the Natural Language Processing community; and (2) It uses techniques from the Program Synthesis community to infer missing dataflow relations via type-based synthesis and constructs scripts in a process akin to reverse parsing. SmartSynth also performs conversational interactions with the user when multiple top-ranked scripts exist or it cannot map part of the description to any component. Evaluated on 50 tasks collected from smartphone help forums, our system produces the intended scripts in real time for over 90 of the 640 natural language descriptions obtained from a user study for those tasks. SmartSynth has also been adapted to TouchDevelop, an end user-targeted programming environment on mobile platforms, with very promising results (see http: www.cs.ucdavis.edu su smartsynth.mp4 for a video demo). We believe that SmartSynth is a step toward fully personalized use of smartphones' increasingly rich functionalities.",
"We argue it is better to program in a natural language such as English, instead of a programming language like Java. A natural language interface for programming should result in greater readability, as well as making possible a more intuitive way of writing code. In contrast to previous controlled language systems, we allow unrestricted syntax, using wide-coverage syntactic and semantic methods to extract information from the user’s instructions. We also look at how people actually give programming instructions in English, collecting and annotating a corpus of such statements. We identify differences between sentences in this corpus and in typical newspaper text, and the effect they have on how we process the natural language input. Finally, we demonstrate a prototype system, that is capable of translating some English instructions into executable code.",
"This paper presents a method for inducing transformation rules that map natural-language sentences into a formal query or command language. The approach assumes a formal grammar for the target representation language and learns transformation rules that exploit the non-terminal symbols in this grammar. The learned transformation rules incrementally map a natural-language sentence or its syntactic parse tree into a parse-tree for the target formal language. Experimental results are presented for two corpora. one which maps English instructions into an existing formal coaching language for simulated RoboCup soccer agents, and another which maps English U.S.-geography questions into a database query language. We show that our method performs overall better and faster than previous approaches in both domains.",
"We present a novel statistical approach to semantic parsing, WASP, for constructing a complete, formal meaning representation of a sentence. A semantic parser is learned given a set of sentences annotated with their correct meaning representations. The main innovation of WASP is its use of state-of-the-art statistical machine translation techniques. A word alignment model is used for lexical acquisition, and the parsing model itself can be seen as a syntax-based translation model. We show that WASP performs favorably in terms of both accuracy and coverage compared to existing learning methods requiring similar amount of supervision, and shows better robustness to variations in task complexity and word order.",
"Using natural language to write programs is a touchstone problem for computational linguistics. We present an approach that learns to map natural-language descriptions of simple “if-then” rules to executable code. By training and testing on a large corpus of naturally-occurring programs (called “recipes”) and their natural language descriptions, we demonstrate the ability to effectively map language to code. We compare a number of semantic parsing approaches on the highly noisy training data collected from ordinary users, and find that loosely synchronous systems perform best.",
"This paper presents recent work using the CHILL parser acquisition system to automate the construction of a natural-language interface for database queries. CHILL treats parser acquisition as the learning of search-control rules within a logic program representing a shift-reduce parser and uses techniques from Inductive Logic Programming to learn relational control knowledge. Starting with a general framework for constructing a suitable logical form, CHILL is able to train on a corpus comprising sentences paired with database queries and induce parsers that map subsequent sentences directly into executable queries. Experimental results with a complete database-query application for U.S. geography show that CHILL is able to learn parsers that outperform a preexisting, hand-crafted counterpart. These results demonstrate the ability of a corpus-based system to produce more than purely syntactic representations. They also provide direct evidence of the utility of an empirical approach at the level of a complete natural language application."
]
} |
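The SmartSynth abstract above describes a two-stage synthesis algorithm: component inference from natural language, followed by type-based completion of missing dataflow relations. The following is a minimal, hypothetical Python sketch of that second stage; the component names, types, and greedy matching rule are illustrative assumptions, not the actual SmartSynth DSL or algorithm.

```python
# Hypothetical sketch of type-based dataflow completion in the spirit of the
# SmartSynth abstract: connect each component input to a prior component
# whose output type matches. Names and types below are illustrative only.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Component:
    name: str
    inputs: Dict[str, str]   # parameter name -> expected type
    output: Optional[str]    # type produced, or None

def complete_dataflow(components):
    """Greedily wire each input to the most recent producer of a
    matching type (a toy stand-in for type-based synthesis)."""
    wiring = {}
    producers = {}  # type -> name of the component that last produced it
    for comp in components:
        for param, ptype in comp.inputs.items():
            if ptype in producers:
                wiring[(comp.name, param)] = producers[ptype]
        if comp.output is not None:
            producers[comp.output] = comp.name
    return wiring

# "When I receive an SMS, reply with my current location."
script = [
    Component("ReceiveSMS", {}, "PhoneNumber"),
    Component("GetLocation", {}, "Location"),
    Component("SendSMS", {"to": "PhoneNumber", "body": "Location"}, None),
]
print(complete_dataflow(script))
# -> {('SendSMS', 'to'): 'ReceiveSMS', ('SendSMS', 'body'): 'GetLocation'}
```

In a full system, an ambiguous wiring (two producers of the same type) would trigger the conversational disambiguation the abstract mentions, rather than the silent last-writer-wins choice used in this toy.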
1603.06744 | 2304240348 | Many language generation tasks require the production of text conditioned on both structured and unstructured inputs. We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions. Crucially, our approach allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training. Using this framework, we address the problem of generating programming code from a mixed natural language and structured specification. We create two new data sets for this paradigm derived from the collectible trading card games Magic the Gathering and Hearthstone. On these, and a third preexisting corpus, we demonstrate that marginalising multiple predictors allows our model to outperform strong benchmarks. | Character-based sequence-to-sequence models have previously been used to generate code from natural language in @cite_14 . Inspired by these works, LPNs provide a richer framework by employing attention models @cite_7 , pointer networks @cite_32 and character-based embeddings @cite_12 . Our formulation can also be seen as a generalization of , who implement a special case where two predictors have the same granularity (a sub-token softmax and a pointer network). Finally, HMMs have been employed in neural models to marginalize over label sequences in @cite_5 @cite_0 by modeling transitions between labels. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_32",
"@cite_0",
"@cite_5",
"@cite_12"
],
"mid": [
"1881604308",
"2133564696",
"",
"",
"2158899491",
"2949563612"
],
"abstract": [
"This paper envisions an end-to-end program generation scenario using recurrent neural networks (RNNs): Users can express their intention in natural language; an RNN then automatically generates corresponding code in a characterby-by-character fashion. We demonstrate its feasibility through a case study and empirical analysis. To fully make such technique useful in practice, we also point out several cross-disciplinary challenges, including modeling user intention, providing datasets, improving model architectures, etc. Although much long-term research shall be addressed in this new field, we believe end-to-end program generation would become a reality in future decades, and we are looking forward to its practice.",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"",
"",
"We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.",
"We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form-function relationship in language, our \"composed\" word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish)."
]
} |
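The related_work entry above describes marginalising over multiple predictors, e.g. a sub-token softmax and a pointer network, when generating code. Below is a minimal numpy sketch of that mixture for a single decoding step; all weight names and shapes are illustrative assumptions, and the real Latent Predictor Network marginalises over whole predictor sequences at character granularity rather than single steps.

```python
# Toy sketch of mixing two predictors at one decoding step: a vocabulary
# softmax and an attention-style pointer over the input, combined with
# learned gate weights. Weight matrices and shapes are illustrative only.
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def next_symbol_probs(h, W_vocab, W_ptr, input_embeds, W_gate,
                      vocab, input_tokens):
    """Return a dict mapping each candidate surface token to its
    marginal probability under the two-predictor mixture."""
    p_vocab = softmax(W_vocab @ h)               # (|V|,) over the vocabulary
    p_ptr = softmax(input_embeds @ (W_ptr @ h))  # (T,) pointer over input positions
    gate = softmax(W_gate @ h)                   # (2,) mixture weights
    # Marginalize: a surface token may be emitted by either predictor.
    probs = {}
    for tok, p in zip(vocab, gate[0] * p_vocab):
        probs[tok] = probs.get(tok, 0.0) + p
    for tok, p in zip(input_tokens, gate[1] * p_ptr):
        probs[tok] = probs.get(tok, 0.0) + p
    return probs

rng = np.random.default_rng(0)
d, V, T = 8, 5, 3
probs = next_symbol_probs(
    h=rng.normal(size=d),
    W_vocab=rng.normal(size=(V, d)),
    W_ptr=rng.normal(size=(d, d)),
    input_embeds=rng.normal(size=(T, d)),
    W_gate=rng.normal(size=(2, d)),
    vocab=["if", "then", "(", ")", "end"],
    input_tokens=["Deal", "2", "damage"],  # e.g. Hearthstone card text
)
assert abs(sum(probs.values()) - 1.0) < 1e-9
```

Training would then maximise the log of this marginal probability for each gold symbol, summing the mixture terms of every predictor that could have emitted it, which is what makes the choice of predictor a latent variable.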