| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1612.07833 | 2566441234 | We introduce a new multi-modal task for computer systems, posed as a combined vision-language comprehension challenge: identifying the most suitable text describing a scene, given several similar options. Accomplishing the task entails demonstrating comprehension beyond just recognizing "keywords" (or key-phrases) and their corresponding visual concepts. Instead, it requires an alignment between the representations of the two modalities that achieves a visually-grounded "understanding" of various linguistic elements and their dependencies. This new task also admits an easy-to-compute and well-studied metric: the accuracy in detecting the true target among the decoys. The paper makes several contributions: an effective and extensible mechanism for generating decoys from (human-created) image captions; an instance of applying this mechanism, yielding a large-scale machine comprehension dataset (based on the COCO images and captions) that we make publicly available; human evaluation results on this dataset, informing a performance upper-bound; and several baseline and competitive learning approaches that illustrate the utility of the proposed task and dataset in advancing both image and language comprehension. We also show that, in a multi-task learning setting, the performance on the proposed task is positively correlated with the end-to-end task of image captioning. | In VQA tasks, the aim is to answer an input question correctly with respect to a given input image. In many variations of this task, answers are limited to single words or a binary response ("yes" or "no") @cite_16 . The Visual7W dataset @cite_31 contains answers in a richer format such as phrases, but limits questions to "wh-"-style (what, where, who, etc.). The Visual Genome dataset @cite_24 , on the other hand, can potentially define more complex questions and answers due to its extensive textual annotations. | {
"cite_N": [
"@cite_24",
"@cite_31",
"@cite_16"
],
"mid": [
"2949474740",
"2136462581",
""
],
"abstract": [
"Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked \"What vehicle is the person riding?\", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) in order to answer correctly that \"the person is riding a horse-drawn carriage\". In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 100K images where each image has an average of 21 objects, 18 attributes, and 18 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answers.",
"We have seen great progress in basic perceptual tasks such as object recognition and detection. However, AI models still fail to match humans in high-level vision tasks due to the lack of capacities for deeper reasoning. Recently the new task of visual question answering (QA) has been proposed to evaluate a model’s capacity for deep image understanding. Previous works have established a loose, global association between QA sentences and images. However, many questions and answers, in practice, relate to local regions in the images. We establish a semantic link between textual descriptions and image regions by object-level grounding. It enables a new type of QA with visual answers, in addition to textual answers used in previous work. We study the visual QA tasks in a grounded setting with a large collection of 7W multiple-choice QA pairs. Furthermore, we evaluate human performance and several baseline models on the QA tasks. Finally, we propose a novel LSTM model with spatial attention to tackle the 7W QA tasks.",
""
]
} |
1612.07833 | 2566441234 | We introduce a new multi-modal task for computer systems, posed as a combined vision-language comprehension challenge: identifying the most suitable text describing a scene, given several similar options. Accomplishing the task entails demonstrating comprehension beyond just recognizing "keywords" (or key-phrases) and their corresponding visual concepts. Instead, it requires an alignment between the representations of the two modalities that achieves a visually-grounded "understanding" of various linguistic elements and their dependencies. This new task also admits an easy-to-compute and well-studied metric: the accuracy in detecting the true target among the decoys. The paper makes several contributions: an effective and extensible mechanism for generating decoys from (human-created) image captions; an instance of applying this mechanism, yielding a large-scale machine comprehension dataset (based on the COCO images and captions) that we make publicly available; human evaluation results on this dataset, informing a performance upper-bound; and several baseline and competitive learning approaches that illustrate the utility of the proposed task and dataset in advancing both image and language comprehension. We also show that, in a multi-task learning setting, the performance on the proposed task is positively correlated with the end-to-end task of image captioning. | The DMC task admits a simple classification-based evaluation metric: the accuracy of selecting the true target. This is a clear advantage over the IC tasks, which often rely on imperfect metrics such as BLEU @cite_8 , ROUGE @cite_28 , METEOR @cite_15 , CIDEr @cite_32 , or SPICE @cite_38 . | {
"cite_N": [
"@cite_38",
"@cite_8",
"@cite_28",
"@cite_32",
"@cite_15"
],
"mid": [
"2950201573",
"2101105183",
"2108325777",
"2952574180",
"2123301721"
],
"abstract": [
"There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs coined SPICE. Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as 'which caption-generator best understands colors?' and 'can caption-generators count?'",
"Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.",
"In this paper we describe two new objective automatic evaluation methods for machine translation. The first method is based on longest common subsequence between a candidate translation and a set of reference translations. Longest common subsequence takes into account sentence level structure similarity naturally and identifies longest co-occurring in-sequence n-grams automatically. The second method relaxes strict n-gram matching to skip-bigram matching. Skip-bigram is any pair of words in their sentence order. Skip-bigram cooccurrence statistics measure the overlap of skip-bigrams between a candidate translation and a set of reference translations. The empirical results show that both methods correlate with human judgments very well in both adequacy and fluency.",
"Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric (CIDEr) that captures consensus, and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as a part of MS COCO evaluation server to enable systematic evaluation and benchmarking.",
"We describe METEOR, an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machineproduced translation and human-produced reference translations. Unigrams can be matched based on their surface forms, stemmed forms, and meanings; furthermore, METEOR can be easily extended to include more advanced matching strategies. Once all generalized unigram matches between the two strings have been found, METEOR computes a score for this matching using a combination of unigram-precision, unigram-recall, and a measure of fragmentation that is designed to directly capture how well-ordered the matched words in the machine translation are in relation to the reference. We evaluate METEOR by measuring the correlation between the metric scores and human judgments of translation quality. We compute the Pearson R correlation value between its scores and human quality assessments of the LDC TIDES 2003 Arabic-to-English and Chinese-to-English datasets. We perform segment-bysegment correlation, and show that METEOR gets an R correlation value of 0.347 on the Arabic data and 0.331 on the Chinese data. This is shown to be an improvement on using simply unigramprecision, unigram-recall and their harmonic F1 combination. We also perform experiments to show the relative contributions of the various mapping modules."
]
} |
1612.07833 | 2566441234 | We introduce a new multi-modal task for computer systems, posed as a combined vision-language comprehension challenge: identifying the most suitable text describing a scene, given several similar options. Accomplishing the task entails demonstrating comprehension beyond just recognizing "keywords" (or key-phrases) and their corresponding visual concepts. Instead, it requires an alignment between the representations of the two modalities that achieves a visually-grounded "understanding" of various linguistic elements and their dependencies. This new task also admits an easy-to-compute and well-studied metric: the accuracy in detecting the true target among the decoys. The paper makes several contributions: an effective and extensible mechanism for generating decoys from (human-created) image captions; an instance of applying this mechanism, yielding a large-scale machine comprehension dataset (based on the COCO images and captions) that we make publicly available; human evaluation results on this dataset, informing a performance upper-bound; and several baseline and competitive learning approaches that illustrate the utility of the proposed task and dataset in advancing both image and language comprehension. We also show that, in a multi-task learning setting, the performance on the proposed task is positively correlated with the end-to-end task of image captioning. | Related to our proposal is the work in @cite_11 , which frames image captioning as a ranking problem. While both share the idea of selecting captions from a large set, our framework has some important and distinctive components. First, we devise an algorithm for smart selection of candidate decoys, with the goal of selecting those that are sufficiently similar to the true targets to be challenging, yet still reliably identifiable by human raters.
Second, we have conducted a thorough human evaluation in order to establish a performance ceiling, while also quantifying the level to which current learning systems underperform. Lastly, we show that there exists a positive correlation between the performance on the DMC task and the performance on related vision+language tasks by proposing and experimenting with a multi-task learning model. Our work is also substantially different from their more recent work @cite_14 , where only one decoy is considered and its generation is either random, or focusing on visual concept similarity ("switching people or scenes") instead of our focus on both linguistic surface and paragraph vector embedding similarity. | {
"cite_N": [
"@cite_14",
"@cite_11"
],
"mid": [
"2512152365",
"68733909"
],
"abstract": [
"Current evaluation metrics for image description may be too coarse. We therefore propose a series of binary forced-choice tasks that each focus on a different aspect of the captions. We evaluate a number of different off-the-shelf image description systems. Our results indicate strengths and shortcomings of both generation and ranking based approaches.",
"The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated."
]
} |
1612.07828 | 2567101557 | With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data. | Many efforts have explored using synthetic data for various prediction tasks, including gaze estimation @cite_24 , text detection and classification in RGB images @cite_23 @cite_18 , font recognition @cite_46 , object detection @cite_37 @cite_27 , hand pose estimation in depth images @cite_34 @cite_12 , scene recognition in RGB-D @cite_26 , semantic segmentation of urban scenes @cite_7 , and human pose estimation @cite_10 @cite_44 @cite_47 @cite_11 @cite_14 @cite_1 . 
Gaidon et al. @cite_39 show that pre-training a deep neural network on synthetic data leads to improved performance. Our work is complementary to these approaches: we improve the realism of the simulator using unlabeled real data. | {
"cite_N": [
"@cite_37",
"@cite_18",
"@cite_26",
"@cite_47",
"@cite_7",
"@cite_14",
"@cite_1",
"@cite_39",
"@cite_24",
"@cite_44",
"@cite_27",
"@cite_23",
"@cite_46",
"@cite_34",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2952771913",
"1922126009",
"2283234189",
"2134557905",
"2431874326",
"2073246097",
"2467838519",
"2949907962",
"2299591120",
"2152926413",
"2211115409",
"2952302849",
"2077532029",
"2075156252",
"1950149599",
"",
"2101032778"
],
"abstract": [
"In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an average precision of 37.3%, which is a 56% relative improvement over existing methods. We then focus on the task of instance segmentation where we label pixels belonging to object instances found by our detector. For this task, we propose a decision forest approach that classifies pixels in the detection window as foreground or background using a family of unary and binary tests that query shape and geocentric pose features. Finally, we use the output from our object detectors in an existing superpixel classification framework for semantic scene segmentation and achieve a 24% relative improvement over current state-of-the-art for the object categories that we study. We believe advances such as those represented in this paper will facilitate the use of perception in fields like robotics.",
"In this work we present an end-to-end system for text spotting--localising and recognising text in natural scene images--and text based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character classifier based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query.",
"Scene understanding is a prerequisite to many high level tasks for any automated intelligent machine operating in real world environments. Recent attempts with supervised learning have shown promise in this direction but also highlighted the need for enormous quantity of supervised data --- performance increases in proportion to the amount of data used. However, this quickly becomes prohibitive when considering the manual labour needed to collect such data. In this work, we focus our attention on depth based semantic per-pixel labelling as a scene understanding problem and show the potential of computer graphics to generate virtually unlimited labelled data from synthetic 3D scenes. By carefully synthesizing training data with appropriate noise models we show comparable performance to state-of-the-art RGBD systems on NYUv2 dataset despite using only depth data as input and set a benchmark on depth-based segmentation on SUN RGB-D dataset. Additionally, we offer a route to generating synthesized frame or video data, and understanding of different factors influencing performance gains.",
"We assess the applicability of several popular learning methods for the problem of recognizing generic visual categories with invariance to pose, lighting, and surrounding clutter. A large dataset comprising stereo image pairs of 50 uniform-colored toys under 36 azimuths, 9 elevations, and 6 lighting conditions was collected (for a total of 194,400 individual images). The objects were 10 instances of 5 generic categories: four-legged animals, human figures, airplanes, trucks, and cars. Five instances of each category were used for training, and the other five for testing. Low-resolution grayscale images of the objects with various amounts of variability and surrounding clutter were used for training and testing. Nearest neighbor methods, support vector machines, and convolutional networks, operating on raw pixels or on PCA-derived features were tested. Test error rates for unseen object instances placed on uniform backgrounds were around 13% for SVM and 7% for convolutional nets. On a segmentation recognition task with highly cluttered images, SVM proved impractical, while convolutional nets yielded 16.7% error. A real-time version of the system was implemented that can detect and classify objects in natural scenes at around 10 frames per second.",
"Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. Recent revolutionary results of deep convolutional neural networks (DCNNs) foreshadow the advent of reliable classifiers to perform such visual tasks. However, DCNNs require learning of many parameters from raw images, thus, having a sufficient amount of diverse images with class annotations is needed. These annotations are obtained via cumbersome, human labour which is particularly challenging for semantic segmentation since pixel-level annotations are required. In this paper, we propose to use a virtual world to automatically generate realistic synthetic images with pixel-level annotations. Then, we address the question of how useful such data can be for semantic segmentation – in particular, when using a DCNN paradigm. In order to answer this question we have generated a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Then, we conduct experiments with DCNNs that show how the inclusion of SYNTHIA in the training stage significantly improves performance on the semantic segmentation task.",
"State-of-the-art methods for human detection and pose estimation require many training samples for best performance. While large, manually collected datasets exist, the captured variations w.r.t. appearance, shape and pose are often uncontrolled thus limiting the overall performance. In order to overcome this limitation we propose a new technique to extend an existing training set that allows to explicitly control pose and shape variations. For this we build on recent advances in computer graphics to generate samples with realistic appearance and background while modifying body shape and pose. We validate the effectiveness of our approach on the task of articulated human detection and articulated pose estimation. We report close to state of the art results on the popular Image Parsing [25] human pose estimation benchmark and demonstrate superior performance for articulated human detection. In addition we define a new challenge of combined articulated human detection and pose estimation in real-world scenes.",
"This paper addresses the problem of 3D human pose estimation in the wild. A significant challenge is the lack of training data, i.e., 2D images of humans annotated with 3D poses. Such data is necessary to train state-of-the-art CNN architectures. Here, we propose a solution to generate a large set of photorealistic synthetic images of humans with 3D pose annotations. We introduce an image-based synthesis engine that artificially augments a dataset of real images with 2D human pose annotations using 3D Motion Capture (MoCap) data. Given a candidate 3D pose our algorithm selects for each joint an image whose 2D pose locally matches the projected 3D pose. The selected images are then combined to generate a new synthetic image by stitching local image patches in a kinematically constrained manner. The resulting images are used to train an end-to-end CNN for full-body 3D pose estimation. We cluster the training data into a large number of pose classes and tackle pose estimation as a K-way classification problem. Such an approach is viable only with large training sets such as ours. Our method outperforms the state of the art in terms of 3D pose estimation in controlled environments (Human3.6M) and shows promising results for in-the-wild images (LSP). This demonstrates that CNNs trained on artificial images generalize well to real images.",
"Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual world cloning method, and validate our approach by building and publicly releasing a new video dataset, called Virtual KITTI (see this http URL), automatically labeled with accurate ground truth for object detection, tracking, scene and instance segmentation, depth, and optical flow. We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance. As the gap between real and virtual worlds is small, virtual worlds enable measuring the impact of various weather and imaging conditions on recognition performance, all other things being equal. We show these factors may affect drastically otherwise high-performing deep models for tracking.",
"Learning-based methods for appearance-based gaze estimation achieve state-of-the-art performance in challenging real-world settings but require large amounts of labelled training data. Learning-by-synthesis was proposed as a promising solution to this problem but current methods are limited with respect to speed, appearance variability, and the head pose and gaze angle distribution they can synthesize. We present UnityEyes, a novel method to rapidly synthesize large amounts of variable eye region images as training data. Our method combines a novel generative 3D model of the human eye region with a real-time rendering framework. The model is based on high-resolution 3D face scans and uses real-time approximations for complex eyeball materials and structures as well as anatomically inspired procedural geometry methods for eyelid animation. We show that these synthesized images can be used to estimate gaze in difficult in-the-wild scenarios, even for extreme gaze angles or in cases in which the pupil is fully occluded. We also demonstrate competitive gaze estimation results on a benchmark in-the-wild dataset, despite only using a light-weight nearest-neighbor algorithm. We are making our UnityEyes synthesis framework available online for the benefit of the research community.",
"Example-based methods are effective for parameter estimation problems when the underlying system is simple or the dimensionality of the input is low. For complex and high-dimensional problems such as pose estimation, the number of required examples and the computational complexity rapidly become prohibitively high. We introduce a new algorithm that learns a set of hashing functions that efficiently index examples relevant to a particular estimation task. Our algorithm extends locality-sensitive hashing, a recently developed method to find approximate neighbors in time sublinear in the number of examples. This method depends critically on the choice of hash functions that are optimally relevant to a particular estimation problem. Experiments demonstrate that the resulting algorithm, which we call parameter-sensitive hashing, can rapidly and accurately estimate the articulated pose of human figures from a large database of example images.",
"Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category.We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD-rendered images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark.",
"In this paper we introduce a new method for text detection in natural images. The method comprises two contributions: First, a fast and scalable engine to generate synthetic images of text in clutter. This engine overlays synthetic text to existing background images in a natural way, accounting for the local 3D scene geometry. Second, we use the synthetic images to train a Fully-Convolutional Regression Network (FCRN) which efficiently performs text detection and bounding-box regression at all locations and multiple scales in an image. We discuss the relation of FCRN to the recently-introduced YOLO detector, as well as other end-to-end object detection systems based on deep learning. The resulting detection network significantly out performs current methods for text detection in natural images, achieving an F-measure of 84.2 on the standard ICDAR 2013 benchmark. Furthermore, it can process 15 images per second on a GPU.",
"As font is one of the core design concepts, automatic font identification and similar font suggestion from an image or photo has been on the wish list of many designers. We study the Visual Font Recognition (VFR) problem [4] LFE, and advance the state-of-the-art remarkably by developing the DeepFont system. First of all, we build up the first available large-scale VFR dataset, named AdobeVFR, consisting of both labeled synthetic data and partially labeled real-world data. Next, to combat the domain mismatch between available training and testing data, we introduce a Convolutional Neural Network (CNN) decomposition approach, using a domain adaptation technique based on a Stacked Convolutional Auto-Encoder (SCAE) that exploits a large corpus of unlabeled real-world text images combined with synthetic data preprocessed in a specific way. Moreover, we study a novel learning-based model compression approach, in order to reduce the DeepFont model size without sacrificing its performance. The DeepFont system achieves an accuracy of higher than 80% (top-5) on our collected dataset, and also produces a good font similarity measure for font selection and suggestion. We also achieve around 6 times compression of the model without any visible loss of recognition accuracy.",
"We present a novel method for real-time continuous pose recovery of markerless complex articulable objects from a single depth image. Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction, and finally an inverse kinematics stage for stable real-time pose recovery. As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model.",
"We address the task of articulated pose estimation from video sequences. We consider an interactive setting where the initial pose is annotated in the first frame. Our system synthesizes a large number of hypothetical scenes with different poses and camera positions by applying geometric deformations to the first frame. We use these synthetic images to generate a custom labeled training set for the video in question. This training data is then used to learn a regressor (for future frames) that predicts joint locations from image data. Notably, our training set is so accurate that nearest-neighbor (NN) matching on low-resolution pixel features works well. As such, we name our underlying representation “tiny synthetic videos”. We present quantitative results on the Friends benchmark dataset that suggest our simple approach matches or exceeds the state-of-the-art.",
"",
"We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http://vision.imar.ro/human3.6m."
]
} |
1612.07828 | 2567101557 | With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data. | Ganin and Lempitsky @cite_15 use synthetic data in a domain adaptation setting where the learned features are invariant to the domain shift between synthetic and real images. Wang et al. @cite_46 train a Stacked Convolutional Auto-Encoder on synthetic and real data to learn the lower-level representations of their font detector ConvNet. Zhang et al. @cite_40 learn a Multichannel Autoencoder to reduce the domain shift between real and synthetic data.
In contrast to classical domain adaptation methods that adapt the features with respect to a specific prediction task, we bridge the gap between image distributions through adversarial training. This approach allows us to generate realistic training images which can be used to train any machine learning model, potentially for multiple tasks. | {
"cite_N": [
"@cite_46",
"@cite_40",
"@cite_15"
],
"mid": [
"2077532029",
"2172248380",
"1882958252"
],
"abstract": [
"As font is one of the core design concepts, automatic font identification and similar font suggestion from an image or photo has been on the wish list of many designers. We study the Visual Font Recognition (VFR) problem [4] LFE, and advance the state-of-the-art remarkably by developing the DeepFont system. First of all, we build up the first available large-scale VFR dataset, named AdobeVFR, consisting of both labeled synthetic data and partially labeled real-world data. Next, to combat the domain mismatch between available training and testing data, we introduce a Convolutional Neural Network (CNN) decomposition approach, using a domain adaptation technique based on a Stacked Convolutional Auto-Encoder (SCAE) that exploits a large corpus of unlabeled real-world text images combined with synthetic data preprocessed in a specific way. Moreover, we study a novel learning-based model compression approach, in order to reduce the DeepFont model size without sacrificing its performance. The DeepFont system achieves an accuracy of higher than 80% (top-5) on our collected dataset, and also produces a good font similarity measure for font selection and suggestion. We also achieve around 6 times compression of the model without any visible loss of recognition accuracy.",
"We propose a method for using synthetic data to help learning classifiers. Synthetic data, even if generated based on real data, normally results in a shift from the distribution of real data in feature space. To bridge the gap between the real and synthetic data, and jointly learn from synthetic and real data, this paper proposes a Multichannel Autoencoder (MCAE). We show that by using MCAE, it is possible to learn a better feature representation for classification. To evaluate the proposed approach, we conduct experiments on two types of datasets. Experimental results on two datasets validate the efficiency of our MCAE model and our methodology of generating synthetic data.",
"Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets."
]
} |
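The S+U refiner objective summarized in the abstract above combines a local adversarial term with a 'self-regularization' term that keeps the refined image close to the synthetic input. The snippet below is a minimal numerical sketch of that combined loss, not the paper's implementation; the function name, the toy arrays, and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def refiner_loss(refined, synthetic, disc_prob_real, lam=0.1):
    # Adversarial term: push the discriminator's per-patch "real"
    # probabilities for the refined image toward 1 (local adversarial loss).
    adv = -np.mean(np.log(disc_prob_real + 1e-8))
    # Self-regularization: L1 distance between refined output and the
    # synthetic input, so the simulator's annotations stay valid.
    self_reg = np.mean(np.abs(refined - synthetic))
    return adv + lam * self_reg

# Toy example: a refiner output that barely perturbs its synthetic input.
synthetic = np.zeros((8, 8))
refined = synthetic + 0.01
patch_probs = np.full((4, 4), 0.9)   # discriminator "real" prob per patch
loss = refiner_loss(refined, synthetic, patch_probs)
```

The loss drops as the discriminator is fooled more convincingly, and rises as the refiner drifts away from its synthetic input.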
1612.07828 | 2567101557 | With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data. | Johnson et al. @cite_25 transfer the style from a set of real images to the synthetic image by co-segmenting and then identifying similar regions. This approach requires users to select the top few matches from an image database. In contrast, we propose an end-to-end solution that does not require user intervention at inference time. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2123576187"
],
"abstract": [
"Computer-generated (CG) images have achieved high levels of realism. This realism, however, comes at the cost of long and expensive manual modeling, and often humans can still distinguish between CG and real images. We introduce a new data-driven approach for rendering realistic imagery that uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system only uses image processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our hybrid images appear more realistic than the originals."
]
} |
1612.07602 | 2585908559 | Connections between relations in relation extraction, which we call class ties, are common. In distantly supervised scenario, one entity tuple may have multiple relation facts. Exploiting class ties between relations of one entity tuple will be promising for distantly supervised relation extraction. However, previous models are not effective or ignore to model this property. In this work, to effectively leverage class ties, we propose to make joint relation extraction with a unified model that integrates convolutional neural network (CNN) with a general pairwise ranking framework, in which three novel ranking loss functions are introduced. Additionally, an effective method is presented to relieve the severe class imbalance problem from NR (not relation) for model training. Experiments on a widely used dataset show that leveraging class ties will enhance extraction and demonstrate the effectiveness of our model to learn class ties. Our model outperforms the baselines significantly, achieving state-of-the-art performance. | With the remarkable success of deep learning in CV and NLP @cite_3 , deep learning has been applied to relation extraction @cite_20 @cite_1 @cite_19 @cite_10 ; the specific architecture can be a CNN @cite_20 , an RNN @cite_5 , etc. One line of work proposes a piecewise convolutional neural network with multi-instance learning for DS-based relation extraction, which improves precision and recall significantly. Later work introduces attention mechanisms @cite_12 @cite_13 to select sentences, which relieves the wrong-labelling problem and uses all the information across sentences. However, these two deep learning based models only make separated extractions and thus cannot model class ties between relations. | {
"cite_N": [
"@cite_13",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_5",
"@cite_10",
"@cite_12",
"@cite_20"
],
"mid": [
"2133564696",
"2251135946",
"",
"2155454737",
"",
"2515462165",
"2949335953",
"1966443646"
],
"abstract": [
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"Two problems arise when using distant supervision for relation extraction. First, in this method, an already existing knowledge base is heuristically aligned to texts, and the alignment results are treated as labeled data. However, the heuristic alignment can fail, resulting in wrong label problem. In addition, in previous approaches, statistical models have typically been applied to ad hoc features. The noise that originates from the feature extraction process can cause poor performance. In this paper, we propose a novel model dubbed the Piecewise Convolutional Neural Networks (PCNNs) with multi-instance learning to address these two problems. To solve the first problem, distant supervised relation extraction is treated as a multi-instance problem in which the uncertainty of instance labels is taken into account. To address the latter problem, we avoid feature engineering and instead adopt convolutional architecture with piecewise max pooling to automatically learn relevant features. Experiments show that our method is effective and outperforms several competitive baseline methods.",
"",
"Relation classification is an important semantic processing task for which state-of-the-art systems still rely on costly handcrafted features. In this work we tackle the relation classification task using a convolutional neural network that performs classification by ranking (CR-CNN). We propose a new pairwise ranking loss function that makes it easy to reduce the impact of artificial classes. We perform experiments using the SemEval-2010 Task 8 dataset, which is designed for the task of classifying the relationship between two nominals marked in a sentence. Using CR-CNN, we outperform the state-of-the-art for this dataset and achieve an F1 of 84.1 without using any costly handcrafted features. Additionally, our experimental results show that: (1) our approach is more effective than CNN followed by a softmax classifier; (2) omitting the representation of the artificial class Other improves both precision and recall; and (3) using only word embeddings as input features is enough to achieve state-of-the-art results if we consider only the text between the two target nominals.",
"",
"",
"An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.",
"Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3 absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers."
]
} |
1612.07602 | 2585908559 | Connections between relations in relation extraction, which we call class ties, are common. In distantly supervised scenario, one entity tuple may have multiple relation facts. Exploiting class ties between relations of one entity tuple will be promising for distantly supervised relation extraction. However, previous models are not effective or ignore to model this property. In this work, to effectively leverage class ties, we propose to make joint relation extraction with a unified model that integrates convolutional neural network (CNN) with a general pairwise ranking framework, in which three novel ranking loss functions are introduced. Additionally, an effective method is presented to relieve the severe class imbalance problem from NR (not relation) for model training. Experiments on a widely used dataset show that leveraging class ties will enhance extraction and demonstrate the effectiveness of our model to learn class ties. Our model outperforms the baselines significantly, achieving state-of-the-art performance. | Deep learning to rank has been widely used in many problems to serve as a classification model. In image retrieval, deep semantic ranking has been applied to multi-label image retrieval. In text matching, learning to rank combined with a deep CNN has been adopted for matching short text pairs. In traditional supervised relation extraction, a pairwise loss function based on a CNN has been designed for single-label relation extraction. Building on the advantages of deep learning to rank, we propose pairwise learning to rank (LTR) @cite_8 combined with a CNN in our model, aiming to jointly extract multiple relations. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2149427297"
],
"abstract": [
"Learning to rank for Information Retrieval (IR) is a task to automatically construct a ranking model using training data, such that the model can sort new objects according to their degrees of relevance, preference, or importance. Many IR problems are by nature ranking problems, and many IR technologies can be potentially enhanced by using learning-to-rank techniques. The objective of this tutorial is to give an introduction to this research direction. Specifically, the existing learning-to-rank algorithms are reviewed and categorized into three approaches: the pointwise, pairwise, and listwise approaches. The advantages and disadvantages with each approach are analyzed, and the relationships between the loss functions used in these approaches and IR evaluation measures are discussed. Then the empirical evaluations on typical learning-to-rank methods are shown, with the LETOR collection as a benchmark dataset, which seems to suggest that the listwise approach be the most effective one among all the approaches. After that, a statistical ranking theory is introduced, which can describe different learning-to-rank algorithms, and be used to analyze their query-level generalization abilities. At the end of the tutorial, we provide a summary and discuss potential future work on learning to rank."
]
} |
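The pointwise/pairwise/listwise distinction in the learning-to-rank abstract above can be illustrated with a pairwise hinge loss, the kind of objective the pairwise approach optimizes: every mis-ordered (or insufficiently separated) pair of items is penalized. This is a generic sketch under assumed names and a margin of 1, not the tutorial's or the CR-CNN paper's exact loss.

```python
def pairwise_hinge_loss(scores, relevance, margin=1.0):
    """For every pair where item i is more relevant than item j, add
    max(0, margin - (s_i - s_j)): zero only when the more relevant item
    outscores the less relevant one by at least `margin`."""
    loss, pairs = 0.0, 0
    for s_i, r_i in zip(scores, relevance):
        for s_j, r_j in zip(scores, relevance):
            if r_i > r_j:
                loss += max(0.0, margin - (s_i - s_j))
                pairs += 1
    return loss / max(pairs, 1)

good = pairwise_hinge_loss([3.0, 1.0], [1, 0])   # correct order, wide margin
bad = pairwise_hinge_loss([1.0, 3.0], [1, 0])    # inverted order
```

A ranking that already respects relevance with enough margin incurs zero loss; inversions are penalized in proportion to how badly they violate the margin.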
1612.07215 | 2952481332 | Topic models have been successfully applied in lexicon extraction. However, most previous methods are limited to document-aligned data. In this paper, we try to address two challenges of applying topic models to lexicon extraction in non-parallel data: 1) hard to model the word relationship and 2) noisy seed dictionary. To solve these two challenges, we propose two new bilingual topic models to better capture the semantic information of each word while discriminating the multiple translations in a noisy seed dictionary. We extend the scope of topic models by inverting the roles of "word" and "document". In addition, to solve the problem of noise in seed dictionary, we incorporate the probability of translation selection in our models. Moreover, we also propose an effective measure to evaluate the similarity of words in different languages and select the optimal translation pairs. Experimental results using real world data demonstrate the utility and efficacy of the proposed models. | Lexicon extraction from non-parallel data was pioneered by @cite_11 and @cite_6 . Instead of parallel or comparable documents, they use a seed dictionary as the pivot. Generally, this kind of approach can be factorized into two steps: 1) construct a context vector for each word, and 2) compute the context similarities on the basis of pivot words (i.e., seed-dictionary entries). A common hypothesis is that a word and its translation tend to occur in similar contexts. Previous research has defined various correlation measures to construct a context vector representation for a word, including tf-idf @cite_6 and pointwise mutual information (PMI) @cite_12 . As for the similarity computation, cosine similarity @cite_6 , non-aligned signatures (NAS) @cite_15 , and Jensen-Shannon divergence @cite_9 , among others, can be used. | {
"cite_N": [
"@cite_9",
"@cite_6",
"@cite_15",
"@cite_12",
"@cite_11"
],
"mid": [
"1973550659",
"2041232209",
"2155983311",
"2020842694",
"2139812240"
],
"abstract": [
"Statistical methods to extract translational equivalents from non-parallel corpora hold the promise of ensuring the required coverage and domain customisation of lexicons as well as accelerating their compilation and maintenance. A challenge for these methods is rare, less common words and expressions, which often have low corpus frequencies. However, it is rare words such as newly introduced terminology and named entities that present the main interest for practical lexical acquisition. In this article, we study possibilities of improving the extraction of low-frequency equivalents from bilingual comparable corpora. Our work is carried out in the general framework which discovers equivalences between words of different languages using similarities between their occurrence patterns found in respective monolingual corpora. We develop a method that aims to compensate for insufficient amounts of corpus evidence on rare words: prior to measuring cross-language similarities, the method uses same-language corpus data to model co-occurrence vectors of rare words by predicting their unseen co-occurrences and smoothing rare, unreliable ones. Our experimental evaluation demonstrates that the proposed method delivers a consistent and significant improvement on the conventional approach to this task.",
"",
"Bilingual lexicons are fundamental resources. Modern automated lexicon generation methods usually require parallel corpora, which are not available for most language pairs. Lexicons can be generated using non-parallel corpora or a pivot language, but such lexicons are noisy. We present an algorithm for generating a high quality lexicon from a noisy one, which only requires an independent corpus for each language. Our algorithm introduces non-aligned signatures (NAS), a cross-lingual word context similarity score that avoids the over-constrained and inefficient nature of alignment-based methods. We use NAS to eliminate incorrect translations from the generated lexicon. We evaluate our method by improving the quality of noisy Spanish-Hebrew lexicons generated from two pivot English lexicons. Our algorithm substantially outperforms other lexicon generation methods.",
"We consider the problem of modeling annotated data---data with multiple types where the instance of one type (such as a caption) serves as a description of the other type (such as an image). We describe three hierarchical probabilistic mixture models which aim to describe such data, culminating in correspondence latent Dirichlet allocation, a latent variable model that is effective at modeling the joint distribution of both types and the conditional distribution of the annotation given the primary type. We conduct experiments on the Corel database of images and captions, assessing performance in terms of held-out likelihood, automatic annotation, and text-based image retrieval.",
"Common algorithms for sentence and word-alignment allow the automatic identification of word translations from parallel texts. This study suggests that the identification of word translations should also be possible with non-parallel and even unrelated texts. The method proposed is based on the assumption that there is a correlation between the patterns of word co-occurrences in texts of different languages."
]
} |
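The two-step pipeline described in the related-work passage above (build a context vector for each word over seed-dictionary pivot words, then compare words across languages by cosine similarity) can be sketched directly. The toy corpora below, and the simplifying assumption that pivot entries are already mapped to a shared vocabulary, are illustrative rather than taken from the paper.

```python
import math
from collections import Counter

def context_vector(word, corpus, pivots, window=2):
    """Count how often `word` co-occurs with pivot words (seed-dictionary
    entries) within a +/- `window` token context."""
    vec = Counter()
    for sent in corpus:
        for i, w in enumerate(sent):
            if w != word:
                continue
            context = sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]
            for c in context:
                if c in pivots:
                    vec[c] += 1
    return vec

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in set(u) | set(v))
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Toy monolingual corpora; `pivots` stands in for seed-dictionary entries
# already mapped to one shared vocabulary across both languages.
pivots = {"run", "park", "eat", "food", "road", "fuel"}
en = [["dog", "run", "park"], ["dog", "eat", "food"]]
fr = [["chien", "run", "park"], ["chien", "eat", "food"], ["voiture", "road", "fuel"]]

v_dog = context_vector("dog", en, pivots)
v_chien = context_vector("chien", fr, pivots)
v_voiture = context_vector("voiture", fr, pivots)
```

Since "dog" and "chien" share the same pivot contexts while "voiture" does not, the cosine score ranks "chien" as the better translation candidate, which is the hypothesis the passage states.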
1612.06778 | 2949495788 | We present a feature vector formation technique for documents - Sparse Composite Document Vector (SCDV) - which overcomes several shortcomings of the current distributional paragraph vector representations that are widely used for text representation. In SCDV, word embeddings are clustered to capture multiple semantic contexts in which words occur. They are then chained together to form document topic-vectors that can express complex, multi-topic documents. Through extensive experiments on multi-class and multi-label classification tasks, we outperform the previous state-of-the-art method, NTSG (, 2015a). We also show that SCDV embeddings perform well on heterogeneous tasks like Topic Coherence, context-sensitive Learning and Information Retrieval. Moreover, we achieve significant reduction in training and prediction times compared to other representation methods. SCDV achieves best of both worlds - better performance with lower time and space complexity. | These shortcomings are addressed in three novel composite document representations called Topical word embeddings (TWE-1, TWE-2 and TWE-3) by @cite_19 . TWE-1 learns word and topic embeddings by considering each topic as a pseudo word and builds the topical word embedding for each word-topic assignment. Here, the interaction between a word and the topic to which it is assigned is not considered. TWE-2 learns a topical word embedding for each word-topic assignment directly, by considering each word-topic pair as a pseudo word. Here, the interaction between a word and its assigned topic is considered but the vocabulary of pseudo-words blows up. For each word and each topic, TWE-3 builds distinct embeddings for the topic and word and concatenates them for each word-topic assignment. Here, the word embeddings are influenced by the corresponding topic embeddings, making words in the same topic less discriminative. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2262907013"
],
"abstract": [
"Distributed word representations have a rising interest in NLP community. Most of existing models assume only one vector for each individual word, which ignores polysemy and thus degrades their effectiveness for downstream tasks. To address this problem, some recent work adopts multiprototype models to learn multiple embeddings per word type. In this paper, we distinguish the different senses of each word by their latent topics. We present a general architecture to learn the word and topic embeddings efficiently, which is an extension to the Skip-Gram model and can model the interaction between words and topics simultaneously. The experiments on the word similarity and text classification tasks show our model outperforms state-of-the-art methods."
]
} |
1612.06778 | 2949495788 | We present a feature vector formation technique for documents - Sparse Composite Document Vector (SCDV) - which overcomes several shortcomings of the current distributional paragraph vector representations that are widely used for text representation. In SCDV, word embeddings are clustered to capture multiple semantic contexts in which words occur. They are then chained together to form document topic-vectors that can express complex, multi-topic documents. Through extensive experiments on multi-class and multi-label classification tasks, we outperform the previous state-of-the-art method, NTSG (, 2015a). We also show that SCDV embeddings perform well on heterogeneous tasks like Topic Coherence, context-sensitive Learning and Information Retrieval. Moreover, we achieve significant reduction in training and prediction times compared to other representation methods. SCDV achieves best of both worlds - better performance with lower time and space complexity. | @cite_19 proposed an architecture called the Neural Tensor Skip-Gram model (NTSG-1, NTSG-2, NTSG-3, NTSG-4) that learns multi-prototype word embeddings and uses a tensor layer to model the interaction of words and topics to capture different senses. @math outperforms other embedding methods like @math on the 20-newsgroup dataset by modeling context-sensitive embeddings in addition to topical-word embeddings. @math @cite_28 builds on @math by jointly learning the latent topic space and context-sensitive word embeddings. All three, @math , @math and @math , use @math and suffer from computational issues such as large training time, prediction time and storage space. They also embed document vectors in the same space as terms. Other works that harness topic modeling, such as @math @cite_20 , @math @cite_9 , @math @cite_4 , @math @cite_28 , @math @cite_3 , @math @cite_1 , @cite_12 and @math @cite_13 , suffer from similar issues. | {
"cite_N": [
"@cite_4",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_13",
"@cite_12",
"@cite_20"
],
"mid": [
"2516087440",
"2952207911",
"",
"2204007383",
"2250753706",
"2262907013",
"",
"2346917897",
"2577167923"
],
"abstract": [
"Word embedding maps words into a lowdimensional continuous embedding space by exploiting the local word collocation patterns in a small context window. On the other hand, topic modeling maps documents onto a low-dimensional topic space, by utilizing the global word collocation patterns in the same document. These two types of patterns are complementary. In this paper, we propose a generative topic embedding model to combine the two types of patterns. In our model, topics are represented by embedding vectors, and are shared across documents. The probability of each word is influenced by both its local context and its topic. A variational inference method yields the topic embeddings as well as the topic mixing proportions for each document. Jointly they represent the document in a low-dimensional continuous space. In two document classification tasks, our method performs better than eight existing methods, with fewer features. In addition, we illustrate with an example that our method can generate coherent topics even based on only one document.",
"Topic models have been widely used in discovering latent topics which are shared across documents in text mining. Vector representations, word embeddings and topic embeddings, map words and topics into a low-dimensional and dense real-value vector space, which have obtained high performance in NLP tasks. However, most of the existing models assume the result trained by one of them are perfect correct and used as prior knowledge for improving the other model. Some other models use the information trained from external large corpus to help improving smaller corpus. In this paper, we aim to build such an algorithm framework that makes topic models and vector representations mutually improve each other within the same corpus. An EM-style algorithm framework is employed to iteratively optimize both topic model and vector representations. Experimental results show that our model outperforms state-of-art methods on various NLP tasks.",
"",
"Latent Dirichlet Allocation (LDA) mining thematic structure of documents plays an important role in nature language processing and machine learning areas. However, the probability distribution from LDA only describes the statistical relationship of occurrences in the corpus and usually in practice, probability is not the best choice for feature representations. Recently, embedding methods have been proposed to represent words and documents by learning essential concepts and representations, such as Word2Vec and Doc2Vec. The embedded representations have shown more effectiveness than LDA-style representations in many tasks. In this paper, we propose the Topic2Vec approach which can learn topic representations in the same semantic vector space with words, as an alternative to probability. The experimental results show that Topic2Vec achieves interesting and meaningful results.",
"Continuous space word embeddings learned from large, unstructured corpora have been shown to be effective at capturing semantic regularities in language. In this paper we replace LDA’s parameterization of “topics” as categorical distributions over opaque word types with multivariate Gaussian distributions on the embedding space. This encourages the model to group words that are a priori known to be semantically related into topics. To perform inference, we introduce a fast collapsed Gibbs sampling algorithm based on Cholesky decompositions of covariance matrices of the posterior predictive distributions. We further derive a scalable algorithm that draws samples from stale posterior predictive distributions and corrects them with a Metropolis–Hastings step. Using vectors learned from a domain-general corpus (English Wikipedia), we report results on two document collections (20-newsgroups and NIPS). Qualitatively, Gaussian LDA infers different (but still very sensible) topics relative to standard LDA. Quantitatively, our technique outperforms existing models at dealing with OOV words in held-out documents.",
"Distributed word representations have a rising interest in NLP community. Most of existing models assume only one vector for each individual word, which ignores polysemy and thus degrades their effectiveness for downstream tasks. To address this problem, some recent work adopts multiprototype models to learn multiple embeddings per word type. In this paper, we distinguish the different senses of each word by their latent topics. We present a general architecture to learn the word and topic embeddings efficiently, which is an extension to the Skip-Gram model and can model the interaction between words and topics simultaneously. The experiments on the word similarity and text classification tasks show our model outperforms state-of-the-art methods.",
"",
"Distributed dense word vectors have been shown to be effective at capturing token-level semantic and syntactic regularities in language, while topic models can form interpretable representations over documents. In this work, we describe lda2vec, a model that learns dense word vectors jointly with Dirichlet-distributed latent document-level mixtures of topic vectors. In contrast to continuous dense document representations, this formulation produces sparse, interpretable document mixtures through a non-negative simplex constraint. Our method is simple to incorporate into existing automatic differentiation frameworks and allows for unsupervised document representations geared for use by scientists while simultaneously learning word vectors and the linear relationships between them.",
""
]
} |
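The SCDV construction described in the rows above (soft-cluster word embeddings, weight each word-cluster vector by idf, concatenate per-cluster vectors, average over the document, then sparsify) can be sketched in a few lines. All sizes, the choice of Gaussian-mixture clustering, and the threshold constant below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy setup (illustrative): 100-word vocabulary, 20-dim word embeddings,
# 5 soft semantic clusters, and an idf weight for each word.
vocab_size, dim, n_clusters = 100, 20, 5
word_vecs = rng.normal(size=(vocab_size, dim))
idf = rng.uniform(1.0, 5.0, size=vocab_size)

# Soft-cluster the word embeddings: a mixture model lets each word belong
# to several clusters with some probability, capturing multiple senses.
gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(word_vecs)
resp = gmm.predict_proba(word_vecs)                # (vocab, clusters)

# Word-cluster vectors: idf(w) * p(c|w) * embedding(w), concatenated over
# clusters, giving one (dim * n_clusters)-dim vector per word.
wcv = idf[:, None, None] * resp[:, :, None] * word_vecs[:, None, :]
wcv = wcv.reshape(vocab_size, dim * n_clusters)

def scdv(doc_word_ids, sparsity=0.04):
    """Average the word-cluster vectors of a document, then zero out
    entries below a fraction of the max to get a sparse document vector."""
    v = wcv[doc_word_ids].mean(axis=0)
    v[np.abs(v) < sparsity * np.abs(v).max()] = 0.0
    return v

doc = scdv([3, 17, 42, 42, 99])
print(doc.shape)   # (100,)
```

The sparsification step is what keeps the concatenated topic-vectors compact despite the blow-up in dimensionality from concatenating one sub-vector per cluster.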
1612.07153 | 2561391210 | Accuracy and efficiency are two key problems in large scale incremental Structure from Motion (SfM). In this paper, we propose a unified framework to divide the image set into clusters suitable for reconstruction as well as find multiple reliable and stable starting points. Image partitioning is performed in two steps. First, some small image groups are selected at places with high image density, and then all the images are clustered according to their optimal reconstruction paths to these image groups. This ensures that the scene is always reconstructed from dense places to sparse areas, which can reduce error accumulation when images have weak overlap. To enable faster speed, images outside the selected group in each cluster are further divided to achieve a greater degree of parallelism. Experiments show that our method achieves significant speedup, higher accuracy and better completeness. | Large-scale structure from motion has witnessed great development in recent years, and several complete structure from motion systems have been proposed. Snavely @cite_6 @cite_10 are among the first to propose a complete incremental SfM pipeline. The backbone of their system is a structure-from-motion approach that computes the photographers' locations and orientations, along with a sparse 3D point cloud. | {
"cite_N": [
"@cite_10",
"@cite_6"
],
"mid": [
"2129201358",
"2156598602"
],
"abstract": [
"There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like \"Notre Dame\" or \"Trevi Fountain.\" This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world's well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community.",
"We present a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface. Our system consists of an image-based modeling front end that automatically computes the viewpoint of each photograph as well as a sparse 3D model of the scene and image to model correspondences. Our photo explorer uses image-based rendering techniques to smoothly transition between photographs, while also enabling full 3D navigation and exploration of the set of images and world geometry, along with auxiliary information such as overhead maps. Our system also makes it easy to construct photo tours of scenic or historic locations, and to annotate image details, which are automatically transferred to other relevant images. We demonstrate our system on several large personal photo collections as well as images gathered from Internet photo sharing sites."
]
} |
1612.07153 | 2561391210 | Accuracy and efficiency are two key problems in large scale incremental Structure from Motion (SfM). In this paper, we propose a unified framework to divide the image set into clusters suitable for reconstruction as well as find multiple reliable and stable starting points. Image partitioning is performed in two steps. First, some small image groups are selected at places with high image density, and then all the images are clustered according to their optimal reconstruction paths to these image groups. This ensures that the scene is always reconstructed from dense places to sparse areas, which can reduce error accumulation when images have weak overlap. To enable faster speed, images outside the selected group in each cluster are further divided to achieve a greater degree of parallelism. Experiments show that our method achieves significant speedup, higher accuracy and better completeness. | Agarwal @cite_16 designed a system running on a collection of parallel distributed machines to efficiently reconstruct a city. They devoted considerable effort to reducing the cost of scheduling between different tasks, and computed a skeletal set of photographs @cite_8 instead of reconstructing all the images. Frahm @cite_1 improved on the work of @cite_16 by reconstructing a city on a single machine with multi-core CPUs and GPUs. They concatenated the global GIST descriptor @cite_19 with a subsampled image, then compressed the descriptor to a shorter binary code that is memory-efficient for GPU computation. They also generated dense 3D models using fast plane-sweeping stereo and efficient depth map fusion algorithms. | {
"cite_N": [
"@cite_19",
"@cite_16",
"@cite_1",
"@cite_8"
],
"mid": [
"1566135517",
"2163446794",
"2099443716",
""
],
"abstract": [
"In this paper, we propose a computational model of the recognition of real world scenes that bypasses the segmentation and the processing of individual objects or regions. The procedure is based on a very low dimensional representation of the scene, that we term the Spatial Envelope. We propose a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Then, we show that these dimensions may be reliably estimated using spectral and coarsely localized information. The model generates a multidimensional space in which scenes sharing membership in semantic categories (e.g., streets, highways, coasts) are projected closed together. The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category.",
"We present a system that can reconstruct 3D geometry from large, unorganized collections of photographs such as those found by searching for a given city (e.g., Rome) on Internet photo-sharing sites. Our system is built on a set of new, distributed computer vision algorithms for image matching and 3D reconstruction, designed to maximize parallelism at each stage of the pipeline and to scale gracefully with both the size of the problem and the amount of available computation. Our experimental results demonstrate that it is now possible to reconstruct city-scale image collections with more than a hundred thousand images in less than a day.",
"This paper introduces an approach for dense 3D reconstruction from unregistered Internet-scale photo collections with about 3 million images within the span of a day on a single PC (\"cloudless\"). Our method advances image clustering, stereo, stereo fusion and structure from motion to achieve high computational performance. We leverage geometric and appearance constraints to obtain a highly parallel implementation on modern graphics processors and multi-core architectures. This leads to two orders of magnitude higher performance on an order of magnitude larger dataset than competing state-of-the-art approaches.",
""
]
} |
1612.07153 | 2561391210 | Accuracy and efficiency are two key problems in large scale incremental Structure from Motion (SfM). In this paper, we propose a unified framework to divide the image set into clusters suitable for reconstruction as well as find multiple reliable and stable starting points. Image partitioning is performed in two steps. First, some small image groups are selected at places with high image density, and then all the images are clustered according to their optimal reconstruction paths to these image groups. This ensures that the scene is always reconstructed from dense places to sparse areas, which can reduce error accumulation when images have weak overlap. To enable faster speed, images outside the selected group in each cluster are further divided to achieve a greater degree of parallelism. Experiments show that our method achieves significant speedup, higher accuracy and better completeness. | Wu @cite_3 proposed a new SfM framework that has @math time complexity. He used top-scale feature matching to coarsely identify image overlap, which saved much time in image matching. During reconstruction, his method performed full bundle adjustment after the model grew by a certain ratio, and partial bundle adjustment on a constant number of recently added cameras, to reduce the accumulated cost of bundle adjustment. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2105303354"
],
"abstract": [
"The time complexity of incremental structure from motion (SfM) is often known as O(n^4) with respect to the number of cameras. As bundle adjustment (BA) being significantly improved recently by preconditioned conjugate gradient (PCG), it is worth revisiting how fast incremental SfM is. We introduce a novel BA strategy that provides good balance between speed and accuracy. Through algorithm analysis and extensive experiments, we show that incremental SfM requires only O(n) time on many major steps including BA. Our method maintains high accuracy by regularly re-triangulating the feature matches that initially fail to triangulate. We test our algorithm on large photo collections and long video sequences with various settings, and show that our method offers state of the art performance for large-scale reconstructions. The presented algorithm is available as part of VisualSFM at http: homes.cs.washington.edu ccwu vsfm ."
]
} |
1612.07153 | 2561391210 | Accuracy and efficiency are two key problems in large scale incremental Structure from Motion (SfM). In this paper, we propose a unified framework to divide the image set into clusters suitable for reconstruction as well as find multiple reliable and stable starting points. Image partitioning is performed in two steps. First, some small image groups are selected at places with high image density, and then all the images are clustered according to their optimal reconstruction paths to these image groups. This ensures that the scene is always reconstructed from dense places to sparse areas, which can reduce error accumulation when images have weak overlap. To enable faster speed, images outside the selected group in each cluster are further divided to achieve a greater degree of parallelism. Experiments show that our method achieves significant speedup, higher accuracy and better completeness. | Heinly @cite_11 advanced state-of-the-art SfM from city-scale modeling to world-scale modeling on a single computer. They also leverage the idea of iconic images to represent small image clusters. Database-side feature augmentation is applied so that an iconic image can cover a broader set of views. To handle world-scale image collections, their system stores an image's data in memory only when it is needed. | {
"cite_N": [
"@cite_11"
],
"mid": [
"1909224301"
],
"abstract": [
"We propose a novel, large-scale, structure-from-motion framework that advances the state of the art in data scalability from city-scale modeling (millions of images) to world-scale modeling (several tens of millions of images) using just a single computer. The main enabling technology is the use of a streaming-based framework for connected component discovery. Moreover, our system employs an adaptive, online, iconic image clustering approach based on an augmented bag-of-words representation, in order to balance the goals of registration, comprehensiveness, and data compactness. We demonstrate our proposal by operating on a recent publicly available 100 million image crowd-sourced photo collection containing images geographically distributed throughout the entire world. Results illustrate that our streaming-based approach does not compromise model completeness, but achieves unprecedented levels of efficiency and scalability."
]
} |
1612.07122 | 2562993896 | We consider nonadaptive group testing with @math items, of which @math are defective. We study a test design in which each item appears in nearly the same number of tests. For each item, we independently pick @math tests uniformly at random with replacement and place the item in those tests. We analyze the performance of these designs with simple and practical decoding algorithms in a range of sparsity regimes and show that the performance is consistently improved in comparison with standard Bernoulli designs. We show that our new design requires roughly 23% fewer tests than a Bernoulli design when paired with the simple decoding algorithms known as combinatorial orthogonal matching pursuit and definite defectives (DD). This gives the best known nonadaptive group testing performance for @math and the best proven performance with a practical decoding algorithm for all @math . We also give a converse result showing that the DD algorithm is optimal with respect to our randomized design when @math . We complement our theoretical results with simulations that show a notable improvement over Bernoulli designs in both sparse and dense regimes. | Kautz and Singleton @cite_3 observed that a construction based on a concatenation of constant-weight codes gives matrices with the so-called @math -disjunctness property (the union of any @math columns does not contain any other column). Such matrices give group testing designs guaranteeing that @math defectives can be recovered with zero probability of error in noiseless group testing (see for example [Chapter 7] du ). However, the group testing designs resulting from the construction of @cite_3 require @math tests. This is an example of the fact that the zero-error criterion requires considerably more tests than the @math required for the 'error probability tending to zero' criterion (see Definition ) that we study here. | {
"cite_N": [
"@cite_3"
],
"mid": [
"1968160142"
],
"abstract": [
"A binary superimposed code consists of a set of code words whose digit-by-digit Boolean sums (1 + 1 = 1) enjoy a prescribed level of distinguishability. These codes find their main application in the representation of document attributes within an information retrieval system, but might also be used as a basis for channel assignments to relieve congestion in crowded communications bands. In this paper some basic properties of nonrandom codes of this family are presented, and formulas and bounds relating the principal code parameters are derived. Finally, there are described several such code families based upon (1) q -nary conventional error-correcting codes, (2) combinatorial arrangements, such as block designs and Latin squares, (3) a graphical construction, and (4) the parity-check matrices of standard binary error-correcting codes."
]
} |
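The k-disjunctness property referenced in the record above (the Boolean sum of any k columns never contains any other column) can be verified directly for small binary matrices. The brute-force checker below is an illustrative sketch of the definition, not code from any of the cited works; it is exponential in k and only practical for tiny designs.

```python
import numpy as np
from itertools import combinations

def is_disjunct(X, k):
    """Return True iff binary matrix X (tests x items) is k-disjunct: the
    Boolean sum of any k columns never covers any other column."""
    X = np.asarray(X, dtype=bool)
    n = X.shape[1]
    for subset in combinations(range(n), k):
        union = X[:, list(subset)].any(axis=1)
        for j in range(n):
            if j in subset:
                continue
            # Column j is covered iff every 1 in column j also lies in
            # the union of the k chosen columns.
            if np.all(~X[:, j] | union):
                return False
    return True

# The identity matrix is k-disjunct for any k < n: each column has a 1 in
# a row where every other column has a 0.
print(is_disjunct(np.eye(4, dtype=int), 2))          # True

# A column contained in another column breaks 1-disjunctness.
print(is_disjunct(np.array([[1, 1], [0, 1]]), 1))    # False
```

Disjunctness is exactly what lets a zero-error decoder recover the k defectives from the test outcomes, which is why the Kautz-Singleton construction targets this property.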
1612.07122 | 2562993896 | We consider nonadaptive group testing with @math items, of which @math are defective. We study a test design in which each item appears in nearly the same number of tests. For each item, we independently pick @math tests uniformly at random with replacement and place the item in those tests. We analyze the performance of these designs with simple and practical decoding algorithms in a range of sparsity regimes and show that the performance is consistently improved in comparison with standard Bernoulli designs. We show that our new design requires roughly 23% fewer tests than a Bernoulli design when paired with the simple decoding algorithms known as combinatorial orthogonal matching pursuit and definite defectives (DD). This gives the best known nonadaptive group testing performance for @math and the best proven performance with a practical decoding algorithm for all @math . We also give a converse result showing that the DD algorithm is optimal with respect to our randomized design when @math . We complement our theoretical results with simulations that show a notable improvement over Bernoulli designs in both sparse and dense regimes. | Similarly, other subsequent papers have proposed forms of constant or near-constant column weight designs @cite_12 @cite_9 @cite_0 @cite_42 , but to our knowledge, none of these works provide non-trivial achievable rates for the vanishing error probability criterion, which is the focus of the present paper. Chan @cite_38 considered constant row weight designs, and found no improvement over Bernoulli designs. | {
"cite_N": [
"@cite_38",
"@cite_9",
"@cite_42",
"@cite_0",
"@cite_12"
],
"mid": [
"2068033507",
"2002503059",
"209134882",
"1986716935",
"1678954343"
],
"abstract": [
"We consider the problem of detecting a small subset of defective items from a large set via non-adaptive “random pooling” group tests. We consider both the case when the measurements are noiseless, and the case when the measurements are noisy (the outcome of each group test may be independently faulty with probability q). Order-optimal results for these scenarios are known in the literature. We give information-theoretic lower bounds on the query complexity of these problems, and provide corresponding computationally efficient algorithms that match the lower bounds up to a constant factor. To the best of our knowledge this work is the first to explicitly estimate such a constant that characterizes the gap between the upper and lower bounds for these problems.",
"The authors describe efficient methods for screening clone libraries, based on pooling schemes called “random k-sets designs.” In these designs, the pools in which any clone occurs are equally likely to be any possible selection of k from the v pools. The values of k and v can be chosen to optimize desirable properties. Random k-sets designs have substantial advantages over alternative pooling schemes: they are efficient, flexible, and easy to specify, require fewer pools, and have error-correcting and error-detecting capabilities. In addition, screening can often be achieved in only one pass, thus facilitating automation. For design comparison, the authors assume a binomial distribution for the number of “positive” clones, with parameters n, the number of clones, and c, the coverage. They propose the expected number of resolved positive clones - clones that are definitely positive based upon the pool assays - as a criterion for the efficiency of a pooling design. They determine the value of k that is optimal, with respect to this criterion, as a function of v, n, and c. They also describe superior k-sets designs called k-sets packing designs. As an illustration, they discuss a robotically implemented design for a 2.5-fold-coverage, human chromosome 16 YAC library of n = 1298 clones. They also estimate the probability that each clone is positive, given the pool-assay data and a model for experimental errors.",
"We use a simple, but nonstandard, incidence relation to construct probabilistic nonadaptive group testing algorithms that identify many positives in a population with a zero probability of yielding a false positive. More importantly, we give two-stage modifications of our nonadaptive algorithms that dramatically reduce the expected number of sufficient pools. For our algorithms, we give a lower bound on the probability of identifying all the positives and we compute the expected number of positives identified. Our method gives control over the pool sizes. In DNA library screening algorithms, where relatively small pools are generally preferred, having control over the pool sizes is an important practical consideration.",
"Random k-set pool designs are known to be efficient but suffer the presence of clones having identical test sets (hence indistinguishable). We propose a random pool design with distinct k-sets. We give an algorithm to generate the design and compute the expected numbers of unresolved negatives and unresolved positives. We show a relation between the two numbers which holds also for other random designs; hence, it can be used to obtain their expected numbers of unresolved positives which are not known. Among these other random designs, the random size-k design is a new design.",
""
]
} |
1612.07122 | 2562993896 | We consider nonadaptive group testing with @math items, of which @math are defective. We study a test design in which each item appears in nearly the same number of tests. For each item, we independently pick @math tests uniformly at random with replacement and place the item in those tests. We analyze the performance of these designs with simple and practical decoding algorithms in a range of sparsity regimes and show that the performance is consistently improved in comparison with standard Bernoulli designs. We show that our new design requires roughly 23% fewer tests than a Bernoulli design when paired with the simple decoding algorithms known as combinatorial orthogonal matching pursuit and definite defectives (DD). This gives the best known nonadaptive group testing performance for @math and the best proven performance with a practical decoding algorithm for all @math . We also give a converse result showing that the DD algorithm is optimal with respect to our randomized design when @math . We complement our theoretical results with simulations that show a notable improvement over Bernoulli designs in both sparse and dense regimes. | Mézard @cite_20 considered randomised designs with both constant row and column weights, and with constant column weights only. The paper used heuristics from statistical physics to suggest that such designs may beat Bernoulli designs. In our notation, they suggest the maximum achievable rate of these constant weight designs may be equal to our converse bound for all @math . (Our Theorem rigorously proves this for @math under our slightly different design.) The work of @cite_20 contains some non-rigorous steps; in particular, they make use of a 'no short loops' assumption that is only verified for @math and conjectured for @math , while experimentally being shown to fail for smaller values such as @math . | {
"cite_N": [
"@cite_20"
],
"mid": [
"2128073148"
],
"abstract": [
"The problem of Group Testing is to identify defective items out of a set of objects by means of pool queries of the form “Does the pool contain at least a defective?”. The aim is of course to perform detection with the fewest possible queries, a problem which has relevant practical applications in different fields including molecular biology and computer science. Here we study GT in the probabilistic setting focusing on the regime of small defective probability and large number of objects, p→0 and N→∞. We construct and analyze one-stage algorithms for which we establish the occurrence of a detection/non-detection phase transition resulting in a sharp threshold, M̄(p), for the number of tests. By optimizing the pool design we construct algorithms whose detection threshold follows the optimal scaling M̄ ∝ Np|log p|. Then we consider two-stage algorithms and analyze their performance for different choices of the first-stage pools. In particular, via a proper random choice of the pools, we construct algorithms which attain the optimal value (previously determined in (Mezard and Toninelli, arXiv:0706.3104)) for the mean number of tests required for complete detection. We finally discuss the optimal pool design in the case of finite p."
]
} |
1612.07122 | 2562993896 | We consider nonadaptive group testing with @math items, of which @math are defective. We study a test design in which each item appears in nearly the same number of tests. For each item, we independently pick @math tests uniformly at random with replacement and place the item in those tests. We analyze the performance of these designs with simple and practical decoding algorithms in a range of sparsity regimes and show that the performance is consistently improved in comparison with standard Bernoulli designs. We show that our new design requires roughly 23% fewer tests than a Bernoulli design when paired with the simple decoding algorithms known as combinatorial orthogonal matching pursuit and definite defectives (DD). This gives the best known nonadaptive group testing performance for @math and the best proven performance with a practical decoding algorithm for all @math . We also give a converse result showing that the DD algorithm is optimal with respect to our randomized design when @math . We complement our theoretical results with simulations that show a notable improvement over Bernoulli designs in both sparse and dense regimes. | D'yachkov @cite_21 studied list decoding with (exactly) constant column weight designs, and setting their list size to @math corresponds to insisting that COMP succeeds. However, they only considered the case that @math . In the limit as @math gets large, the rate @math obtained in their Claim 2 matches the rate for COMP given here in Theorem in the limit @math . | {
"cite_N": [
"@cite_21"
],
"mid": [
"1600887176"
],
"abstract": [
"We say that an s-subset of codewords of a binary code X is sL-bad in X if there exists an L-subset of other codewords in X whose disjunctive sum is covered by the disjunctive sum of the given s codewords. Otherwise, this s-subset of codewords is said to be sL-good in X. A binary code X is said to be a list-decoding disjunctive code of strength s and list size L (an sL-LD code) if it does not contain sL-bad subsets of codewords. We consider a probabilistic generalization of sL-LD codes; namely, we say that a code X is an almost disjunctive sL-LD code if the fraction of sL-good subsets of codewords in X is close to 1. Using the random coding method on the ensemble of binary constant-weight codes, we establish lower bounds on the capacity and error exponent of almost disjunctive sL-LD codes. For this ensemble, the obtained lower bounds are tight and show that the capacity of almost disjunctive sL-LD codes is greater than the zero-error capacity of disjunctive sL-LD codes."
]
} |
1612.07122 | 2562993896 | We consider the nonadaptive group testing with @math items, of which @math are defective. We study a test design in which each item appears in nearly the same number of tests. For each item, we independently pick @math tests uniformly at random with replacement and place the item in those tests. We analyze the performance of these designs with simple and practical decoding algorithms in a range of sparsity regimes and show that the performance is consistently improved in comparison with standard Bernoulli designs. We show that our new design requires roughly 23% fewer tests than a Bernoulli design when paired with the simple decoding algorithms known as combinatorial orthogonal matching pursuit and definite defectives (DD). This gives the best known nonadaptive group testing performance for @math and the best proven performance with a practical decoding algorithm for all @math . We also give a converse result showing that the DD algorithm is optimal with respect to our randomized design when @math . We complement our theoretical results with simulations that show a notable improvement over Bernoulli designs in both sparse and dense regimes. | A distinct line of works has sought designs that not only require a low number of tests, but also near-optimal decoding complexity (e.g., @math ) @cite_30 @cite_15 @cite_18 @cite_43 . However, our focus in this paper is on the required number of tests, for which the existing guarantees of such algorithms contain loose constants or extra logarithmic factors. | {
"cite_N": [
"@cite_30",
"@cite_43",
"@cite_15",
"@cite_18"
],
"mid": [
"2582049860",
"2002912175",
"2962706095",
"1537002715"
],
"abstract": [
"Group-testing refers to the problem of identifying (with high probability) a (small) subset of @math defectives from a (large) set of @math items via a “small” number of “pooled” tests (i.e., tests that have a positive outcome if at least one of the items being tested in the pool is defective, else have a negative outcome). For ease of presentation in this paper, we focus on the regime when @math for some @math . The tests may be noiseless or noisy , and the testing procedure may be adaptive (the pool defining a test may depend on the outcome of a previous test), or non-adaptive (each test is performed independent of the outcome of other tests). A rich body of the literature demonstrates that @math tests are information-theoretically necessary and sufficient for the group-testing problem, and provides algorithms that achieve this performance. However, it is only recently that reconstruction algorithms with computational complexities that are sub-linear in @math have started being investigated. In the scenario with adaptive tests with noisy outcomes, we present the first scheme that is simultaneously order-optimal (up to small constant factors) in both the number of tests and the decoding complexity ( @math in both the performance metrics). The total number of stages of our adaptive algorithm is “small” ( @math ). Similarly, in the scenario with non-adaptive tests with noisy outcomes, we present the first scheme that is simultaneously near-optimal in both the number of tests and the decoding complexity (via an algorithm that requires @math tests and has a decoding complexity of @math . Finally, we present an adaptive algorithm that only requires two stages, and for which both the number of tests and the decoding complexity scale as @math . For all three settings, the probability of error of our algorithms scales as @math . 
For each of the statements mentioned earlier about the order of the number of measurements, decoding complexity, and probability of error, we provide explicitly computed “small” universal factors in our theorem statements.",
"We consider the following \"efficiently decodable\" non-adaptive group testing problem. There is an unknown string x ∈ 0, 1 n with at most d ones in it. We are allowed to test any subset S ⊆ [n] of the indices. The answer to the test tells whether xi = 0 for all i ∈ S or not. The objective is to design as few tests as possible (say, t tests) such that x can be identified as fast as possible (say, poly(t)-time). Efficiently decodable non-adaptive group testing has applications in many areas, including data stream algorithms and data forensics. A non-adaptive group testing strategy can be represented by a t x n matrix, which is the stacking of all the characteristic vectors of the tests. It is well-known that if this matrix is d-disjunct, then any test outcome corresponds uniquely to an unknown input string. Furthermore, we know how to construct d-disjunct matrices with t = O(d2 log n) efficiently. However, these matrices so far only allow for a \"decoding\" time of O(nt), which can be exponentially larger than poly(t) for relatively small values of d. This paper presents a randomness efficient construction of d-disjunct matrices with t = O(d2 log n) that can be decoded in time poly(d) · t log2 t + O(t2). To the best of our knowledge, this is the first result that achieves an efficient decoding time and matches the best known O(d2 log n) bound on the number of tests. We also derandomize the construction, which results in a polynomial time deterministic construction of such matrices when d = O(log n log log n). A crucial building block in our construction is the notion of (d, l)-list disjunct matrices, which represent the more general \"list group testing\" problem whose goal is to output less than d + l positions in x, including all the (at most d) positions that have a one in them. List disjunct matrices turn out to be interesting objects in their own right and were also considered independently by [Cheraghchi, FCT 2009]. 
We present connections between list disjunct matrices, expanders, dispersers and disjunct matrices. List disjunct matrices have applications in constructing (d, l)-sparsity separator structures [Ganguly, ISAAC 2008] and in constructing tolerant testers for Reed-Solomon codes in the data stream model.",
"The group testing problem is to identify a population of K defective items from a set of n items by pooling groups of items. The result of a test for a group of items is positive if any of the items in the group is defective and negative otherwise. The goal is to judiciously group subsets of items such that defective items can be reliably recovered using the minimum number of tests, while also having a low-complexity decoder. We describe SAFFRON (Sparse-grAph codes Framework For gROup testiNg), a non-adaptive group testing scheme that recovers at least a (1 − ϵ)-fraction (for any arbitrarily small ϵ > 0) of K defective items with high probability with m = 6C(ϵ)K log 2 n tests, where C(ϵ) is a precisely characterized constant that depends only on o. For instance, it can provably recover at least (1 − 10−6)K defective items with m ≃ 68K log 2 n tests. The computational complexity of the decoding algorithm is O(K log n), which is order-optimal. Further, we describe a systematic methodology to robustify SAFFRON such that it can reliably recover the set of K defective items even in the presence of erroneous or noisy test results. We also propose Singleton-Only-SAFFRON, a variant of SAFFRON, that recovers all the K defective items with m = 2e(1+α)K log K log 2 n tests with probability 1 − O(1 Kα), where α > 0 is a constant. Our key intellectual contribution involves the pioneering use of powerful density-evolution methods of modern coding theory (e.g. sparse-graph codes) for efficient group testing design and performance analysis.",
"An @math m×n matrix @math A with column supports @math Si is k-separable if the disjunctions @math źiźKSi are all distinct over all sets @math K of cardinality k. While a simple counting bound shows that @math m>klog2n k rows are required for a separable matrix to exist, in fact it is necessary for m to be about a factor of k more than this. In this paper, we consider a weaker definition of almost k-separability', which requires that the disjunctions are mostly distinct'. We show using a random construction that these matrices exist with @math m=O(klogn) rows, which is optimal for @math k=O(n1-β). Further, by calculating explicit constants, we show how almost separable matrices give new bounds on the rate of nonadaptive group testing."
]
} |
1612.07222 | 2950179379 | Rank aggregation based on pairwise comparisons over a set of items has a wide range of applications. Although considerable research has been devoted to the development of rank aggregation algorithms, one basic question is how to efficiently collect a large amount of high-quality pairwise comparisons for the ranking purpose. Because of the advent of many crowdsourcing services, a crowd of workers are often hired to conduct pairwise comparisons with a small monetary reward for each pair they compare. Since different workers have different levels of reliability and different pairs have different levels of ambiguity, it is desirable to wisely allocate the limited budget for comparisons among the pairs of items and workers so that the global ranking can be accurately inferred from the comparison results. To this end, we model the active sampling problem in crowdsourced ranking as a Bayesian Markov decision process, which dynamically selects item pairs and workers to improve the ranking accuracy under a budget constraint. We further develop a computationally efficient sampling policy based on knowledge gradient as well as a moment matching technique for posterior approximation. Experimental evaluations on both synthetic and real data show that the proposed policy achieves high ranking accuracy with a lower labeling cost. | The dataset of partial rankings over items can be generated from a variety of sources including crowdsourcing services, online competition games (e.g., Microsoft's TrueSkill system), and online users' activities such as browsing, clicking and transactions that reveal certain preferences. Learning a global ranking of a large set of items by aggregating a collection of partial rankings/preferences has been an active research area for the past ten years (see, e.g., @cite_10 @cite_2 @cite_6 @cite_0 @cite_8 @cite_18 @cite_17 @cite_5). 
However, most work on rank aggregation considers a static estimation problem --- inferring a global ranking based on a pre-existing dataset. The problem we consider here is related to but significantly different from these works because we model crowdsourced ranking as a dynamic procedure where the inference of ranking and collection of data proceed concurrently and influence each other. | {
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_10",
"@cite_17"
],
"mid": [
"2140560120",
"1815702575",
"",
"",
"",
"2103850933",
"2115822807",
"2122350398"
],
"abstract": [
"There has been much interest recently in the problem of rank aggregation from pairwise data. A natural question that arises is: under what sorts of statistical assumptions do various rank aggregation algorithms converge to an 'optimal' ranking? In this paper, we consider this question in a natural setting where pairwise comparisons are drawn randomly and independently from some underlying probability distribution. We first show that, under a 'time-reversibility' or Bradley-Terry-Luce (BTL) condition on the distribution, the rank centrality (PageRank) and least squares (HodgeRank) algorithms both converge to an optimal ranking. Next, we show that a matrix version of the Borda count algorithm, and more surprisingly, an algorithm which performs maximum likelihood estimation under a BTL assumption, both converge to an optimal ranking under a 'low-noise' condition that is strictly more general than BTL. Finally, we propose a new SVM-based algorithm for rank aggregation from pairwise data, and show that this converges to an optimal ranking under an even more general condition that we term 'generalized low-noise'. In all cases, we provide explicit sample complexity bounds for exact recovery of an optimal ranking. Our experiments confirm our theoretical findings and help to shed light on the statistical behavior of various rank aggregation algorithms.",
"There are various parametric models for analyzing pairwise comparison data, including the Bradley-Terry-Luce (BTL) and Thurstone models, but their reliance on strong parametric assumptions is limiting. In this work, we study a flexible model for pairwise comparisons, under which the probabilities of outcomes are required only to satisfy a natural form of stochastic transitivity. This class includes parametric models including the BTL and Thurstone models as special cases, but is considerably more general. We provide various examples of models in this broader stochastically transitive class for which classical parametric models provide poor fits. Despite this greater flexibility, we show that the matrix of probabilities can be estimated at the same rate as in standard parametric models. On the other hand, unlike in the BTL and Thurstone models, computing the minimax-optimal estimator in the stochastically transitive model is non-trivial, and we explore various computationally tractable alternatives. We show that a simple singular value thresholding algorithm is statistically consistent but does not achieve the minimax rate. We then propose and study algorithms that achieve the minimax rate over interesting sub-classes of the full stochastically transitive class. We complement our theoretical results with thorough numerical simulations.",
"",
"",
"",
"In this paper we present a general treatment of the preference aggregation problem, in which multiple preferences over objects must be combined into a single consensus ranking. We consider two instances of this problem: unsupervised aggregation where no information about a target ranking is available, and supervised aggregation where ground truth preferences are provided. For each problem class we develop novel learning methods that are applicable to a wide range of preference types. Specifically, for unsupervised aggregation we introduce the Multinomial Preference model (MPM) which uses a multinomial generative process to model the observed preferences. For the supervised problem we develop a supervised extension for MPM and then propose two fully supervised models. The first model employs SVD factorization to derive effective item features, transforming the aggregation problems into a learning-to-rank one. The second model aims to eliminate the costly SVD factorization and instantiates a probabilistic CRF framework, deriving unary and pairwise potentials directly from the observed preferences. Using a probabilistic framework allows us to directly optimize the expectation of any target metric, such as NDCG or ERR. All the proposed models operate on pairwise preferences and can thus be applied to a wide range of preference types. We empirically validate the models on rank aggregation and collaborative filtering data sets and demonstrate superior empirical accuracy.",
"This paper is concerned with rank aggregation, which aims to combine multiple input rankings to get a better ranking. A popular approach to rank aggregation is based on probabilistic models on permutations, e.g., the Luce model and the Mallows model. However, these models have their limitations in either poor expressiveness or high computational complexity. To avoid these limitations, in this paper, we propose a new model, which is defined with a coset-permutation distance, and models the generation of a permutation as a stagewise process. We refer to the new model as coset-permutation distance based stagewise (CPS) model. The CPS model has rich expressiveness and can therefore be used in versatile applications, because many different permutation distances can be used to induce the coset-permutation distance. The complexity of the CPS model is low because of the stagewise decomposition of the permutation probability and the efficient computation of most coset-permutation distances. We apply the CPS model to supervised rank aggregation, derive the learning and inference algorithms, and empirically study their effectiveness and efficiency. Experiments on public datasets show that the derived algorithms based on the CPS model can achieve state-of-the-art ranking accuracy, and are much more efficient than previous algorithms.",
"Given a set V of n elements we wish to linearly order them given pairwise preference labels which may be non-transitive (due to irrationality or arbitrary noise). The goal is to linearly order the elements while disagreeing with as few pairwise preference labels as possible. Our performance is measured by two parameters: The number of disagreements (loss) and the query complexity (number of pairwise preference labels). Our algorithm adaptively queries at most O(e-6n log5 n) preference labels for a regret of e times the optimal loss. As a function of n, this is asymptotically better than standard (non-adaptive) learning bounds achievable for the same problem. Our main result takes us a step closer toward settling an open problem posed by learning-to-rank (from pairwise information) theoreticians and practitioners: What is a provably correct way to sample preference labels? To further show the power and practicality of our solution, we analyze a typical test case in which a large margin linear relaxation is used for efficiently solving the simpler learning problems in our decomposition."
]
} |
1612.07086 | 2951916837 | Language Models based on recurrent neural networks have dominated recent image caption generation tasks. In this paper, we introduce a Language CNN model which is suitable for statistical language modeling tasks and shows competitive performance in image captioning. In contrast to previous models which predict next word based on one previous word and hidden state, our language CNN is fed with all the previous words and can model the long-range dependencies of history words, which are critical for image captioning. The effectiveness of our approach is validated on two datasets MS COCO and Flickr30K. Our extensive experimental results show that our method outperforms the vanilla recurrent neural network based language models and is competitive with the state-of-the-art methods. | Rather than representing an image as a single feature vector from the top-layer of CNNs, some researchers have explored the structure of networks to explicitly or implicitly model the correlation between images and descriptions @cite_55 @cite_43 @cite_7 . Xu et al. @cite_55 incorporate spatial attention on convolutional features of an image into the encoder-decoder framework through the “soft” and “hard” attention mechanisms. Their work is followed by Yang et al. @cite_63 , whose method introduces a review network to improve the attention mechanism, and Liu et al. @cite_2 , whose approach is designed to improve the correctness of visual attention. Moreover, a variational autoencoder for image captioning is developed by Pu et al. @cite_38 . They use a CNN as the image encoder and use a deep generative deconvolutional network as the decoder together with a Gated Recurrent Unit (GRU) @cite_10 to generate image descriptions. | {
"cite_N": [
"@cite_38",
"@cite_7",
"@cite_55",
"@cite_43",
"@cite_63",
"@cite_2",
"@cite_10"
],
"mid": [
"2951326654",
"",
"2950178297",
"2575842049",
"2964049455",
"2410323755",
"2157331557"
],
"abstract": [
"A novel variational autoencoder is developed to model images, as well as associated labels or captions. The Deep Generative Deconvolutional Network (DGDN) is used as a decoder of the latent image features, and a deep Convolutional Neural Network (CNN) is used as an image encoder; the CNN is used to approximate a distribution for the latent DGDN features code. The latent code is also linked to generative models for labels (Bayesian support vector machine) or captions (recurrent neural network). When predicting a label caption for a new image at test, averaging is performed across the distribution of latent codes; this is computationally efficient as a consequence of the learned CNN-based encoder. Since the framework is capable of modeling the image in the presence absence of associated labels captions, a new semi-supervised setting is manifested for CNN learning with images; the framework even allows unsupervised CNN learning, based on images alone.",
"",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as the and of. Other words that may seem visual can often be predicted reliably just from the language model e.g., sign after behind a red stop or phone following talking on a cell. In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel. The model decides whether to attend to the image and where, in order to extract meaningful information for sequential word generation. We test our method on the COCO image captioning 2015 challenge dataset and Flickr30K. Our approach sets the new state-of-the-art by a significant margin.",
"We propose a novel extension of the encoder-decoder framework, called a review network. The review network is generic and can enhance any existing encoder- decoder model: in this paper, we consider RNN decoders with both CNN and RNN encoders. The review network performs a number of review steps with attention mechanism on the encoder hidden states, and outputs a thought vector after each review step; the thought vectors are used as the input of the attention mechanism in the decoder. We show that conventional encoder-decoders are a special case of our framework. Empirically, we show that our framework improves over state-of- the-art encoder-decoder systems on the tasks of image captioning and source code captioning.",
"Attention mechanisms have recently been introduced in deep learning for various tasks in natural language processing and computer vision. But despite their popularity, the \"correctness\" of the implicitly-learned attention maps has only been assessed qualitatively by visualization of several examples. In this paper we focus on evaluating and improving the correctness of attention in neural image captioning models. Specifically, we propose a quantitative evaluation metric for the consistency between the generated attention maps and human annotations, using recently released datasets with alignment between regions in images and entities in captions. We then propose novel models with different levels of explicit supervision for learning attention maps during training. The supervision can be strong when alignment between regions and caption entities are available, or weak when only object segments and categories are provided. We show on the popular Flickr30k and COCO datasets that introducing supervision of attention maps during training solidly improves both attention correctness and caption quality, showing the promise of making machine perception more human-like.",
"In this paper, we propose a novel neural network model called RNN Encoder‐ Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixedlength vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder‐Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases."
]
} |
1612.07086 | 2951916837 | Language Models based on recurrent neural networks have dominated recent image caption generation tasks. In this paper, we introduce a Language CNN model which is suitable for statistical language modeling tasks and shows competitive performance in image captioning. In contrast to previous models which predict next word based on one previous word and hidden state, our language CNN is fed with all the previous words and can model the long-range dependencies of history words, which are critical for image captioning. The effectiveness of our approach is validated on two datasets MS COCO and Flickr30K. Our extensive experimental results show that our method outperforms the vanilla recurrent neural network based language models and is competitive with the state-of-the-art methods. | In general, current recurrent neural network based approaches have shown powerful capability in modeling word sequences @cite_46 @cite_25 . However, the history-summarizing hidden states of RNNs are updated at each time step, which makes long-term memory rather difficult @cite_53 @cite_12 . Besides, we argue that current recurrent networks like LSTMs are not efficient at modeling the hierarchical structure in word sequences. All of these prompt us to explore a new language model to extract better sentence representations. Considering that ConvNets can be stacked to extract hierarchical features over long-range contexts and have received a lot of attention in many tasks @cite_23 , in this paper we design a language CNN to model words with long-term dependencies through multilayer ConvNets and to model the hierarchical representation through the bottom-up convolutional architecture. | {
"cite_N": [
"@cite_53",
"@cite_23",
"@cite_46",
"@cite_25",
"@cite_12"
],
"mid": [
"1800356822",
"2200374509",
"2951912364",
"2951805548",
""
],
"abstract": [
"Learning long term dependencies in recurrent networks is difficult due to vanishing and exploding gradients. To overcome this difficulty, researchers have developed sophisticated optimization techniques and network architectures. In this paper, we propose a simpler solution that use recurrent neural networks composed of rectified linear units. Key to our solution is the use of the identity matrix or its scaled version to initialize the recurrent weight matrix. We find that our solution is comparable to LSTM on our four benchmarks: two toy problems involving long-range temporal structures, a large language modeling problem and a benchmark speech recognition problem.",
"In the last few years, deep learning has led to very good performance on a variety of problems, such as visual recognition, speech recognition and natural language processing. Among different types of deep neural networks, convolutional neural networks have been most extensively studied. Due to the lack of training data and computing power in early days, it is hard to train a large high-capacity convolutional neural network without overfitting. After the rapid growth in the amount of the annotated data and the recent improvements in the strengths of graphics processor units (GPUs), the research on convolutional neural networks has been emerged swiftly and achieved state-of-the-art results on various tasks. In this paper, we provide a broad survey of the recent advances in convolutional neural networks. Besides, we also introduce some applications of convolutional neural networks in computer vision.",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
""
]
} |
1612.06950 | 2952907575 | We present a general approach to video understanding, inspired by semantic transfer techniques that have been successfully used for 2D image analysis. Our method considers a video to be a 1D sequence of clips, each one associated with its own semantics. The nature of these semantics -- natural language captions or other labels -- depends on the task at hand. A test video is processed by forming correspondences between its clips and the clips of reference videos with known semantics, following which, reference semantics can be transferred to the test video. We describe two matching methods, both designed to ensure that (a) reference clips appear similar to test clips and (b), taken together, the semantics of the selected reference clips is consistent and maintains temporal coherence. We use our method for video captioning on the LSMDC'16 benchmark, video summarization on the SumMe and TVSum benchmarks, Temporal Action Detection on the Thumos2014 benchmark, and sound prediction on the Greatest Hits benchmark. Our method not only surpasses the state of the art in four out of five benchmarks, but importantly, it is the only single method we know of that was successfully applied to such a diverse range of tasks. | Video annotation. Significant progress was made in the relatively short time since work on video annotation/caption generation began. Early methods such as @cite_57 @cite_1 @cite_13 @cite_21 attempted to cluster captions and videos and applied this to video retrieval. Others @cite_56 @cite_31 @cite_76 generated sentence representations by first identifying semantic video content (e.g., verb, noun, etc.) using classifiers tailored for particular objects and events. They then produced template-based sentences. This approach, however, does not scale well, since it requires substantial effort to provide suitable training data for the classifiers, and limits the possible sentences that the model can produce. | {
"cite_N": [
"@cite_21",
"@cite_1",
"@cite_56",
"@cite_57",
"@cite_31",
"@cite_76",
"@cite_13"
],
"mid": [
"2112429522",
"2172776790",
"2142900973",
"2150510224",
"2152984213",
"2251353663",
"2138445151"
],
"abstract": [
"Analysis on click-through data from a very large search engine log shows that users are usually interested in the top-ranked portion of returned search results. Therefore, it is crucial for search engines to achieve high accuracy on the top-ranked documents. While many methods exist for boosting video search performance, they either pay less attention to the above factor or encounter difficulties in practical applications. In this paper, we present a flexible and effective reranking method, called CR-Reranking, to improve the retrieval effectiveness. To offer high accuracy on the top-ranked results, CR-Reranking employs a cross-reference (CR) strategy to fuse multimodal cues. Specifically, multimodal features are first utilized separately to rerank the initial returned results at the cluster level, and then all the ranked clusters from different modalities are cooperatively used to infer the shots with high relevance. Experimental results show that the search quality, especially on the top-ranked results, is improved significantly.",
"The prevalence of video sharing websites brings the explosion of web videos and poses a tough challenge to the web video clustering for their indexing. This paper proposes a flexible multi-modal clustering method for web videos. This method achieves web video representation and similarity measurement by integrating the extracted visual features, semantic features and text features of videos to describe a web video more accurately. With the multi-modal combined similarity as input, the affinity propagation algorithm is employed for the clustering procedure. The clustering method is evaluated by experiments conducted on web video dataset and has a better performance than existing methods.",
"Despite a recent push towards large-scale object recognition, activity recognition remains limited to narrow domains and small vocabularies of actions. In this paper, we tackle the challenge of recognizing and describing activities \"in-the-wild\". We present a solution that takes a short video clip and outputs a brief sentence that sums up the main activity in the video, such as the actor, the action and its object. Unlike previous work, our approach works on out-of-domain actions: it does not require training videos of the exact activity. If it cannot find an accurate prediction for a pre-trained model, it finds a less specific answer that is also plausible from a pragmatic standpoint. We use semantic hierarchies learned from the data to help to choose an appropriate level of generalization, and priors learned from Web-scale natural language corpora to penalize unlikely combinations of actors/actions/objects, we also use a Web-scale language model to \"fill in\" novel verbs, i.e. when the verb does not appear in the training set. We evaluate our method on a large YouTube corpus and demonstrate it is able to generate short sentence descriptions of video clips better than baseline approaches.",
"This paper discusses a new method for automatic discovery and organization of descriptive concepts (labels) within large real-world corpora of user-uploaded multimedia, such as YouTube.com. Conversely, it also provides validation of existing labels, if any. While training, our method does not assume any explicit manual annotation other than the weak labels already available in the form of video title, description, and tags. Prior work related to such auto-annotation assumed that a vocabulary of labels of interest (e.g., indoor, outdoor, city, landscape) is specified a priori. In contrast, the proposed method begins with an empty vocabulary. It analyzes audiovisual features of 25 million YouTube.com videos -- nearly 150 years of video data -- effectively searching for consistent correlation between these features and text metadata. It autonomously extends the label vocabulary as and when it discovers concepts it can reliably identify, eventually leading to a vocabulary with thousands of labels and growing. We believe that this work significantly extends the state of the art in multimedia data mining, discovery, and organization based on the technical merit of the proposed ideas as well as the enormous scale of the mining exercise in a very challenging, unconstrained, noisy domain.",
"We present a holistic data-driven technique that generates natural-language descriptions for videos. We combine the output of state-of-the-art object and activity detectors with \"real-world\" knowledge to select the most probable subject-verb-object triplet for describing a video. We show that this knowledge, automatically mined from web-scale text corpora, enhances the triplet selection algorithm by providing it contextual information and leads to a four-fold increase in activity identification. Unlike previous methods, our approach can annotate arbitrary videos without requiring the expensive collection and annotation of a similar training video corpus. We evaluate our technique against a baseline that does not use text-mined knowledge and show that humans prefer our descriptions 61% of the time.",
"This paper integrates techniques in natural language processing and computer vision to improve recognition and description of entities and activities in real-world videos. We propose a strategy for generating textual descriptions of videos by using a factor graph to combine visual detections with language statistics. We use state-of-the-art visual recognition systems to obtain confidences on entities, activities, and scenes present in the video. Our factor graph model combines these detection confidences with probabilistic knowledge mined from text corpora to estimate the most likely subject, verb, object, and place. Results on YouTube videos show that our approach improves both the joint detection of these latent, diverse sentence components and the detection of some individual components when compared to using the vision system alone, as well as over a previous n-gram language-modeling approach. The joint detection allows us to automatically generate more accurate, richer sentential descriptions of videos with a wide array of possible content.",
"The TREC Video Retrieval Evaluation (TRECVID) 2011 was a TREC-style video analysis and retrieval evaluation, the goal of which remains to promote progress in content-based exploitation of digital video via open, metrics-based evaluation. Over the last ten years this effort has yielded a better understanding of how systems can effectively accomplish such processing and how one can reliably benchmark their performance. TRECVID is funded by the National Institute of Standards and Technology (NIST) and other US government agencies. Many organizations and individuals worldwide contribute significant time and effort"
]
} |
1612.06950 | 2952907575 | We present a general approach to video understanding, inspired by semantic transfer techniques that have been successfully used for 2D image analysis. Our method considers a video to be a 1D sequence of clips, each one associated with its own semantics. The nature of these semantics -- natural language captions or other labels -- depends on the task at hand. A test video is processed by forming correspondences between its clips and the clips of reference videos with known semantics, following which, reference semantics can be transferred to the test video. We describe two matching methods, both designed to ensure that (a) reference clips appear similar to test clips and (b), taken together, the semantics of the selected reference clips is consistent and maintains temporal coherence. We use our method for video captioning on the LSMDC'16 benchmark, video summarization on the SumMe and TVSum benchmarks, Temporal Action Detection on the Thumos2014 benchmark, and sound prediction on the Greatest Hits benchmark. Our method not only surpasses the state of the art, in four out of five benchmarks, but importantly, it is the only single method we know of that was successfully applied to such a diverse range of tasks. | More recently, and following the success of image annotation systems based on deep networks such as @cite_23 @cite_6 , similar techniques were applied to videos @cite_23 @cite_41 @cite_34 @cite_67 . Whereas image-based methods use convolutional neural networks (CNNs) for this purpose, applying them to video involves temporal data, which has led to the use of recurrent neural networks (RNNs), particularly long short-term memory (LSTM) networks @cite_3 . We also use CNN and LSTM models, but in fundamentally different ways, as we explain later in Sec. . | {
"cite_N": [
"@cite_67",
"@cite_41",
"@cite_6",
"@cite_3",
"@cite_23",
"@cite_34"
],
"mid": [
"2950307714",
"2952453038",
"2951912364",
"",
"2951183276",
"2950019618"
],
"abstract": [
"Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions.",
"We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.",
"",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\" in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.",
"Real-world videos often have complex dynamics; and methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem, we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD)."
]
} |
1612.06950 | 2952907575 | We present a general approach to video understanding, inspired by semantic transfer techniques that have been successfully used for 2D image analysis. Our method considers a video to be a 1D sequence of clips, each one associated with its own semantics. The nature of these semantics -- natural language captions or other labels -- depends on the task at hand. A test video is processed by forming correspondences between its clips and the clips of reference videos with known semantics, following which, reference semantics can be transferred to the test video. We describe two matching methods, both designed to ensure that (a) reference clips appear similar to test clips and (b), taken together, the semantics of the selected reference clips is consistent and maintains temporal coherence. We use our method for video captioning on the LSMDC'16 benchmark, video summarization on the SumMe and TVSum benchmarks, Temporal Action Detection on the Thumos2014 benchmark, and sound prediction on the Greatest Hits benchmark. Our method not only surpasses the state of the art, in four out of five benchmarks, but importantly, it is the only single method we know of that was successfully applied to such a diverse range of tasks. | Unlike video description, LSTM-based methods have been considered for summarization only very recently @cite_75 . Their use of LSTMs is also very different from ours. | {
"cite_N": [
"@cite_75"
],
"mid": [
"2963919999"
],
"abstract": [
"We propose a novel supervised learning technique for summarizing videos by automatically selecting keyframes or key subshots. Casting the task as a structured prediction problem, our main idea is to use Long Short-Term Memory (LSTM) to model the variable-range temporal dependency among video frames, so as to derive both representative and compact video summaries. The proposed model successfully accounts for the sequential structure crucial to generating meaningful video summaries, leading to state-of-the-art results on two benchmark datasets. In addition to advances in modeling techniques, we introduce a strategy to address the need for a large amount of annotated data for training complex learning approaches to summarization. There, our main idea is to exploit auxiliary annotated video summarization datasets, in spite of their heterogeneity in visual styles and contents. Specifically, we show that domain adaptation techniques can improve learning by reducing the discrepancies in the original datasets’ statistical properties."
]
} |
1612.06950 | 2952907575 | We present a general approach to video understanding, inspired by semantic transfer techniques that have been successfully used for 2D image analysis. Our method considers a video to be a 1D sequence of clips, each one associated with its own semantics. The nature of these semantics -- natural language captions or other labels -- depends on the task at hand. A test video is processed by forming correspondences between its clips and the clips of reference videos with known semantics, following which, reference semantics can be transferred to the test video. We describe two matching methods, both designed to ensure that (a) reference clips appear similar to test clips and (b), taken together, the semantics of the selected reference clips is consistent and maintains temporal coherence. We use our method for video captioning on the LSMDC'16 benchmark, video summarization on the SumMe and TVSum benchmarks, Temporal Action Detection on the Thumos2014 benchmark, and sound prediction on the Greatest Hits benchmark. Our method not only surpasses the state of the art, in four out of five benchmarks, but importantly, it is the only single method we know of that was successfully applied to such a diverse range of tasks. | Temporal action detection. Early work on video action recognition relied on hand-crafted space-time features @cite_61 @cite_42 @cite_29 . More recently, deep methods have been proposed @cite_71 @cite_48 @cite_36 , many of which learn deep visual and motion features @cite_73 @cite_30 @cite_58 @cite_33 . Along with the development of stronger methods, larger and more challenging benchmarks were proposed @cite_68 @cite_70 @cite_28 @cite_47 . Most datasets, however, used trimmed, temporally segmented videos, i.e., short clips that contain only a single action. | {
"cite_N": [
"@cite_61",
"@cite_30",
"@cite_33",
"@cite_36",
"@cite_48",
"@cite_29",
"@cite_42",
"@cite_58",
"@cite_70",
"@cite_28",
"@cite_68",
"@cite_71",
"@cite_47",
"@cite_73"
],
"mid": [
"1576762698",
"2952186347",
"1944615693",
"1586730761",
"2308045930",
"2105101328",
"2020163092",
"2952633803",
"",
"2126579184",
"2132195202",
"1983364832",
"24089286",
"2465313502"
],
"abstract": [
"Action Recognition in videos is an active research field that is fueled by an acute need, spanning several application domains. Still, existing systems fall short of the applications' needs in real-world scenarios, where the quality of the video is less than optimal and the viewpoint is uncontrolled and often not static. In this paper, we consider the key elements of motion encoding and focus on capturing local changes in motion directions. In addition, we decouple image edges from motion edges using a suppression mechanism, and compensate for global camera motion by using an especially fitted registration scheme. Combined with a standard bag-of-words technique, our methods achieves state-of-the-art performance in the most recent and challenging benchmarks.",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features [31] and deep-learned features [24]. Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods to transform convolutional feature maps, namely spatiotemporal normalization and channel normalization. The advantages of our features come from (i) TDDs are automatically learned and contain high discriminative capacity compared with those hand-crafted features; (ii) TDDs take account of the intrinsic characteristics of temporal dimension and introduce the strategies of trajectory-constrained sampling and pooling for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMDB51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features [31] and deep-learned features [24]. Our method also achieves superior performance to the state of the art on these datasets.",
"We address the problem of learning good features for understanding video data. We introduce a model that learns latent representations of image sequences from pairs of successive images. The convolutional architecture of our model allows it to scale to realistic image sizes whilst using a compact parametrization. In experiments on the NORB dataset, we show our model extracts latent \"flow fields\" which correspond to the transformation between the pair of input frames. We also use our model to extract low-level motion features in a multi-stage architecture for action recognition, demonstrating competitive performance on both the KTH and Hollywood2 datasets.",
"",
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.",
"Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8% accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"",
"With nearly one billion online videos viewed everyday, an emerging new frontier in computer vision research is recognition and search in video. While much effort has been devoted to the collection and annotation of large scalable static image datasets containing thousands of image categories, human action datasets lag far behind. Current action recognition databases contain on the order of ten different action categories collected under fairly controlled conditions. State-of-the-art performance on these datasets is now near ceiling and thus there is a need for the design and creation of new benchmarks. To address this issue we collected the largest action video database to-date with 51 action categories, which in total contain around 7,000 manually annotated clips extracted from a variety of sources ranging from digitized movies to YouTube. We use this database to evaluate the performance of two representative computer vision systems for action recognition and explore the robustness of these methods under various conditions such as camera motion, viewpoint, video quality and occlusion.",
"Understanding human actions in videos has been a central research theme in Computer Vision for decades, and much progress has been achieved over the years. Much of this progress was demonstrated on standard benchmarks used to evaluate novel techniques. These benchmarks and their evolution, provide a unique perspective on the growing capabilities of computerized action recognition systems. They demonstrate just how far machine vision systems have come while also underscore the gap that still remains between existing state-of-the-art performance and the needs of real-world applications. In this paper we provide a comprehensive survey of these benchmarks: from early examples, such as the Weizmann set, to recently presented, contemporary benchmarks. This paper further provides a summary of the results obtained in the last couple of years on the recent ASLAN benchmark, which was designed to reflect the many challenges modern Action Recognition systems are expected to overcome.",
"We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.",
"We introduce UCF101 which is currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips and 27 hours of video data. The database consists of realistic user uploaded videos containing camera motion and cluttered background. Additionally, we provide baseline action recognition results on this new dataset using standard bag of words approach with overall performance of 44.5%. To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips and also unconstrained nature of such clips.",
"Previous approaches to action recognition with deep features tend to process video frames only within a small temporal region, and do not model long-range dynamic information explicitly. However, such information is important for the accurate recognition of actions, especially for the discrimination of complex activities that share sub-actions, and when dealing with untrimmed videos. Here, we propose a representation, VLAD for Deep Dynamics (VLAD3), that accounts for different levels of video dynamics. It captures short-term dynamics with deep convolutional neural network features, relying on linear dynamic systems (LDS) to model medium-range dynamics. To account for long-range inhomogeneous dynamics, a VLAD descriptor is derived for the LDS and pooled over the whole video, to arrive at the final VLAD3 representation. An extensive evaluation was performed on Olympic Sports, UCF101 and THUMOS15, where the use of the VLAD3 representation leads to state-of-the-art results."
]
} |
1612.06950 | 2952907575 | We present a general approach to video understanding, inspired by semantic transfer techniques that have been successfully used for 2D image analysis. Our method considers a video to be a 1D sequence of clips, each one associated with its own semantics. The nature of these semantics -- natural language captions or other labels -- depends on the task at hand. A test video is processed by forming correspondences between its clips and the clips of reference videos with known semantics, following which, reference semantics can be transferred to the test video. We describe two matching methods, both designed to ensure that (a) reference clips appear similar to test clips and (b), taken together, the semantics of the selected reference clips is consistent and maintains temporal coherence. We use our method for video captioning on the LSMDC'16 benchmark, video summarization on the SumMe and TVSum benchmarks, Temporal Action Detection on the Thumos2014 benchmark, and sound prediction on the Greatest Hits benchmark. Our method not only surpasses the state of the art, in four out of five benchmarks, but importantly, it is the only single method we know of that was successfully applied to such a diverse range of tasks. | In order to tackle temporal action detection in untrimmed videos, @cite_74 encode visual features at different temporal resolutions, followed by a classifier that produces classification scores at different time scales. @cite_38 focus instead on a fast method for obtaining action proposals from untrimmed videos, which can later be fed to an action classifier. Instead of using action classifiers, our method relies on matching against a gallery of temporally segmented action clips. | {
"cite_N": [
"@cite_38",
"@cite_74"
],
"mid": [
"2519328139",
"2529163075"
],
"abstract": [
"Object proposals have contributed significantly to recent advances in object understanding in images. Inspired by the success of this approach, we introduce Deep Action Proposals (DAPs), an effective and efficient algorithm for generating temporal action proposals from long videos. We show how to take advantage of the vast capacity of deep learning models and memory cells to retrieve from untrimmed videos temporal segments, which are likely to contain actions. A comprehensive evaluation indicates that our approach outperforms previous work on a large scale action benchmark, runs at 134 FPS making it practical for large-scale scenarios, and exhibits an appealing ability to generalize, i.e. to retrieve good quality temporal proposals of actions unseen in training.",
"We investigate the feature design and classification architectures in temporal action localization. This application focuses on detecting and labeling actions in untrimmed videos, which brings more challenge than classifying presegmented videos. The major difficulty for action localization is the uncertainty of action occurrence and utilization of information from different scales. Two innovations are proposed to address this issue. First, we propose a Pyramid of Score Distribution Feature (PSDF) to capture the motion information at multiple resolutions centered at each detection window. This novel feature mitigates the influence of unknown action position and duration, and shows significant performance gain over previous detection approaches. Second, inter-frame consistency is further explored by incorporating PSDF into the state-of-the-art Recurrent Neural Networks, which gives additional performance gain in detecting actions in temporally untrimmed videos. We tested our action localization framework on the THUMOS'15 and MPII Cooking Activities Dataset, both of which show a large performance improvement over previous attempts."
]
} |
1612.06549 | 2950688853 | Neural networks with attention have proven effective for many natural language processing tasks. In this paper, we develop attention mechanisms for uncertainty detection. In particular, we generalize standardly used attention mechanisms by introducing external attention and sequence-preserving attention. These novel architectures differ from standard approaches in that they use external resources to compute attention weights and preserve sequence information. We compare them to other configurations along different dimensions of attention. Our novel architectures set the new state of the art on a Wikipedia benchmark dataset and perform similar to the state-of-the-art model on a biomedical benchmark which uses a large set of linguistic features. | Uncertainty has been extensively studied in linguistics and NLP @cite_25 @cite_19 @cite_37 , including modality @cite_15 @cite_4 @cite_20 and negation @cite_30 @cite_43 . , and conducted cross-domain experiments. Domains studied include news @cite_11 , biomedicine @cite_34 , Wikipedia @cite_31 and social media @cite_28 . Corpora such as FactBank @cite_11 are annotated in detail with respect to perspective, level of factuality and polarity. conducted uncertainty detection experiments on a version of FactBank extended by crowd sourcing. In this work, we use CoNLL 2010 shared task data @cite_10 since CoNLL provides larger train/test sets and the CoNLL annotation consists of only two labels (certain/uncertain) instead of various perspectives and degrees of uncertainty. When using uncertainty detection for information extraction tasks like KB population ( intro ), it is a reasonable first step to consider only two labels. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_31",
"@cite_4",
"@cite_28",
"@cite_19",
"@cite_43",
"@cite_15",
"@cite_34",
"@cite_10",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"2032021697",
"",
"2171972263",
"2147638277",
"",
"6169389",
"2115896319",
"",
"1250241298",
"",
"",
"",
""
],
"abstract": [
"This article explores a combination of deep and shallow approaches to the problem of resolving the scope of speculation and negation within a sentence, specifically in the domain of biomedical research literature. The first part of the article focuses on speculation. After first showing how speculation cues can be accurately identified using a very simple classifier informed only by local lexical context, we go on to explore two different syntactic approaches to resolving the in-sentence scopes of these cues. Whereas one uses manually crafted rules operating over dependency structures, the other automatically learns a discriminative ranking function over nodes in constituent trees. We provide an in-depth error analysis and discussion of various linguistic properties characterizing the problem, and show that although both approaches perform well in isolation, even better results can be obtained by combining them, yielding the best published results to date on the CoNLL-2010 Shared Task data. The last part of the article describes how our speculation system is ported to also resolve the scope of negation. With only modest modifications to the initial design, the system obtains state-of-the-art results on this task also.",
"",
"We investigate the automatic detection of sentences containing linguistic hedges using corpus statistics and syntactic patterns. We take Wikipedia as an already annotated corpus using its tagged weasel words which mark sentences and phrases as non-factual. We evaluate the quality of Wikipedia as training data for hedge detection, as well as shallow linguistic features.",
"Natural language understanding depends heavily on assessing veridicality (whether events mentioned in a text are viewed as happening or not), but little consideration is given to this property in current relation and event extraction systems. Furthermore, the work that has been done has generally assumed that veridicality can be captured by lexical semantic properties whereas we show that context and world knowledge play a significant role in shaping veridicality. We extend the FactBank corpus, which contains semantically driven veridicality annotations, with pragmatically informed ones. Our annotations are more complex than the lexical assumption predicts but systematic enough to be included in computational work on textual understanding. They also indicate that veridicality judgments are not always categorical, and should therefore be modeled as distributions. We build a classifier to automatically assign event veridicality distributions based on our new annotations. The classifier relies not only on lexical features like hedges or negations, but also on structural features and approximations of world knowledge, thereby providing a nuanced picture of the diverse factors that shape veridicality. "All I know is what I read in the papers" -Will Rogers",
"",
"",
"This article describes the resource-and system-building efforts of an 8-week Johns Hopkins University Human Language Technology Center of Excellence Summer Camp for Applied Language Exploration (SCALE-2009) on Semantically Informed Machine Translation (SIMT). We describe a new modality negation (MN) annotation scheme, the creation of a (publicly available) MN lexicon, and two automated MN taggers that we built using the annotation scheme and lexicon. Our annotation scheme isolates three components of modality and negation: a trigger (a word that conveys modality or negation), a target (an action associated with modality or negation), and a holder (an experiencer of modality). We describe how our MN lexicon was semi-automatically produced and we demonstrate that a structure-based MN tagger results in precision around 86 (depending on genre) for tagging of a standard LDC data set. We apply our MN annotation scheme to statistical machine translation using a syntactic framework that supports the inclusion of semantic annotations. Syntactic tags enriched with semantic annotations are assigned to parse trees in the target-language training texts through a process of tree grafting. Although the focus of our work is modality and negation, the tree grafting procedure is general and supports other types of semantic information. We exploit this capability by including named entities, produced by a pre-existing tagger, in addition to the MN elements produced by the taggers described here. The resulting system significantly outperformed a linguistically naive baseline model (Hiero), and reached the highest scores yet reported on the NIST 2009 Urdu-English test set. This finding supports the hypothesis that both syntactic and semantic information can improve translation quality.",
"",
"Uncertainty is an important linguistic phenomenon that is relevant in many fields of language processing. In its most general sense, it can be interpreted as lack of information: the hearer or the reader cannot be certain about some pieces of information. Thus, uncertain propositions are those whose truth value or reliability cannot be determined due to lack of information. Distinguishing between factual (i.e. true or false) and uncertain propositions is of primary importance both in linguistics and natural language processing applications. For instance, in information extraction an uncertain piece of information might be of some interest for an end-user as well, but such information must not be confused with factual textual evidence (reliable information) and the two should be kept separated. The main objective of this thesis is to detect uncertainty in English and Hungarian natural language texts. As opposed to earlier studies that focused on specific domains and were English-oriented, we will offer here a comprehensive approach to uncertainty detection, which can be easily adapted to the specific needs of many domains and languages. In our investigations, we will pay attention to create linguistically plausible models of uncertainty that will be exploited in creating manually annotated corpora that will serve as the base for the implementation of our uncertainty detectors for several domains, with the help of supervised machine learning techniques. Furthermore, we will also demonstrate that uncertainty detection can be fruitfully applied in a real-world application, namely, information extraction from clinical discharge summaries.",
"",
"",
"",
""
]
} |
1612.06549 | 2950688853 | Neural networks with attention have proven effective for many natural language processing tasks. In this paper, we develop attention mechanisms for uncertainty detection. In particular, we generalize standardly used attention mechanisms by introducing external attention and sequence-preserving attention. These novel architectures differ from standard approaches in that they use external resources to compute attention weights and preserve sequence information. We compare them to other configurations along different dimensions of attention. Our novel architectures set the new state of the art on a Wikipedia benchmark dataset and perform similar to the state-of-the-art model on a biomedical benchmark which uses a large set of linguistic features. | Several studies showed that CNNs can handle diverse sentence classification tasks, including sentiment analysis @cite_39 @cite_26 , relation classification @cite_42 @cite_35 and paraphrase detection @cite_33 . To our knowledge, we are the first to apply them to uncertainty detection. | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_33",
"@cite_42",
"@cite_39"
],
"mid": [
"2155454737",
"1924770834",
"2211192759",
"",
"2120615054"
],
"abstract": [
"Relation classification is an important semantic processing task for which state-of-the-art systems still rely on costly handcrafted features. In this work we tackle the relation classification task using a convolutional neural network that performs classification by ranking (CR-CNN). We propose a new pairwise ranking loss function that makes it easy to reduce the impact of artificial classes. We perform experiments using the SemEval-2010 Task 8 dataset, which is designed for the task of classifying the relationship between two nominals marked in a sentence. Using CR-CNN, we outperform the state-of-the-art for this dataset and achieve an F1 of 84.1 without using any costly handcrafted features. Additionally, our experimental results show that: (1) our approach is more effective than CNN followed by a softmax classifier; (2) omitting the representation of the artificial class Other improves both precision and recall; and (3) using only word embeddings as input features is enough to achieve state-of-the-art results if we consider only the text between the two target nominals.",
"In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM.",
"How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS), paraphrase identification (PI) and textual entailment (TE). Most prior work (i) deals with one individual task by fine-tuning a specific system; (ii) models each sentence's representation separately, rarely considering the impact of the other sentence; or (iii) relies fully on manually designed, task-specific linguistic features. This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences. We make three contributions. (i) The ABCNN can be applied to a wide variety of tasks that require modeling of sentence pairs. (ii) We propose three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes into consideration its counterpart. These interdependent sentence pair representations are more powerful than isolated sentence representations. (iii) ABCNNs achieve state-of-the-art performance on AS, PI and TE tasks. We release code at: https://github.com/yinwenpeng/Answer_Selection.",
"",
"The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25 error reduction in the last task with respect to the strongest baseline."
]
} |
1612.06549 | 2950688853 | Neural networks with attention have proven effective for many natural language processing tasks. In this paper, we develop attention mechanisms for uncertainty detection. In particular, we generalize standardly used attention mechanisms by introducing external attention and sequence-preserving attention. These novel architectures differ from standard approaches in that they use external resources to compute attention weights and preserve sequence information. We compare them to other configurations along different dimensions of attention. Our novel architectures set the new state of the art on a Wikipedia benchmark dataset and perform similar to the state-of-the-art model on a biomedical benchmark which uses a large set of linguistic features. | Attention has been mainly used for recurrent neural networks @cite_36 @cite_13 @cite_14 @cite_29 @cite_7 @cite_16 . We integrate attention into CNNs and show that this is beneficial for uncertainty detection. Few studies in vision integrated attention into CNNs @cite_21 @cite_9 @cite_0 but this has not been used often in NLP so far. Exceptions are , and . used several layers of local and global attention in a complex machine translation model with a large number of parameters. Our reimplementation of their network performed poorly for uncertainty detection (51.51/66.57 on wiki/bio); we suspect that the reason is that 's training set was an order of magnitude larger than ours. Our approach makes effective use of a much smaller training set. compared attention based input representations and attention based pooling. Instead, our goal is to keep the convolutional and pooling layers unchanged and combine their strengths with attention. applied a convolutional layer to compute attention weights. In this work, we concentrate on the commonly used feed-forward layers for that. Comparing them to other options, such as convolution, is an interesting direction for future work. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_36",
"@cite_29",
"@cite_21",
"@cite_9",
"@cite_0",
"@cite_16",
"@cite_13"
],
"mid": [
"2949615363",
"2133585753",
"2964308564",
"",
"2172010943",
"1928906481",
"2174492417",
"",
"1843891098"
],
"abstract": [
"Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.",
"We propose Neural Reasoner, a framework for neural network-based reasoning over natural language sentences. Given a question, Neural Reasoner can infer over multiple supporting facts and find an answer to the question in specific forms. Neural Reasoner has 1) a specific interaction-pooling mechanism, allowing it to examine multiple facts, and 2) a deep architecture, allowing it to model the complicated logical relations in reasoning tasks. Assuming no particular structure exists in the question and facts, Neural Reasoner is able to accommodate different types of reasoning and different forms of language expressions. Despite the model complexity, Neural Reasoner can still be trained effectively in an end-to-end manner. Our empirical studies show that Neural Reasoner can outperform existing neural reasoning systems with remarkable margins on two difficult artificial tasks (Positional Reasoning and Path Finding) proposed in [8]. For example, it improves the accuracy on Path Finding (10K) from 33.4% [6] to over 98%.",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consist of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"",
"Traditional convolutional neural networks (CNN) are stationary and feedforward. They neither change their parameters during evaluation nor use feedback from higher to lower layers. Real brains, however, do. So does our Deep Attention Selective Network (dasNet) architecture. DasNet's feedback structure can dynamically alter its convolutional filter sensitivities during classification. It harnesses the power of sequential processing to improve classification performance, by allowing the network to iteratively focus its internal attention on some of its convolutional filters. Feedback is trained through direct policy search in a huge million-dimensional parameter space, through scalable natural evolution strategies (SNES). On the CIFAR-10 and CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model.",
"Fine-grained classification is challenging because categories can only be discriminated by subtle and local differences. Variances in the pose, scale or rotation usually make the problem more difficult. Most fine-grained classification systems follow the pipeline of finding foreground object or object parts (where) to extract discriminative features (what).",
"We propose a novel attention based deep learning architecture for visual question answering task (VQA). Given an image and an image related natural language question, VQA generates the natural language answer for the question. Generating the correct answers requires the model's attention to focus on the regions corresponding to the question, because different questions inquire about the attributes of different image regions. We introduce an attention based configurable convolutional neural network (ABC-CNN) to learn such question-guided attention. ABC-CNN determines an attention map for an image-question pair by convolving the image feature map with configurable convolutional kernels derived from the question's semantics. We evaluate the ABC-CNN architecture on three benchmark VQA datasets: Toronto COCO-QA, DAQUAR, and VQA dataset. ABC-CNN model achieves significant improvements over state-of-the-art methods on these datasets. The question-guided attention generated by ABC-CNN is also shown to reflect the regions that are highly relevant to the questions.",
"",
"Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines."
]
} |
1612.06391 | 2584701505 | In meetings where important decisions get made, what items receive more attention may influence the outcome. We examine how different types of rhetorical (de-)emphasis -- including hedges, superlatives, and contrastive conjunctions -- correlate with what gets revisited later, controlling for item frequency and speaker. Our data consists of transcripts of recurring meetings of the Federal Reserve's Open Market Committee (FOMC), where important aspects of U.S. monetary policy are decided on. Surprisingly, we find that words appearing in the context of hedging, which is usually considered a way to express uncertainty, are more likely to be repeated in subsequent meetings, while strong emphasis indicated by superlatives has a slightly negative effect on word recurrence in subsequent meetings. We also observe interesting patterns in how these effects vary depending on social factors such as status and gender of the speaker. For instance, the positive effects of hedging are more pronounced for female speakers than for male speakers. | Another related line of work is accommodation and linguistic style matching @cite_7 @cite_3 , which study the phenomenon of people matching each other in conversations. Here we attempt to study how subtle presentational and rhetorical (de-)emphasis may influence the reception of a speaker's ideas and evaluate based on content words, in contrast with functional words to capture style. | {
"cite_N": [
"@cite_3",
"@cite_7"
],
"mid": [
"2114030807",
"2951056827"
],
"abstract": [
"Three experiments were conducted to determine the psychometric properties of language in dyadic interactions. Using text-analysis, it was possible to assess the degree to which people coordinate their word use in natural conversations. In Experiments 1 (n = 130) and 2 (n = 32), college students interacted in dyadic conversations in laboratory-based private Internet chat rooms. Experiment 3 analyzed the official transcripts of the Watergate tapes involving the dyadic interactions between President Richard Nixon and his aides H. R. Haldeman, John Erlichman, and John Dean. The results of the three studies offer substantial evidence that individuals in dyadic interactions exhibit linguistic style matching (LSM) on both the conversation level as well as on a turn-by-turn level. Furthermore, LSM is unrelated to ratings of the quality of the interaction by both participants and judges. We propose that a coordination-engagement hypothesis is a better description of linguistic behaviors than the coordination-rapport hypothesis that has been proposed in the nonverbal literature. For years social psychologists have exalted the power of the situation. We comfortably acknowledge that across different situations, with different people, we may act in a range of ways, or even talk using a variety of styles. Aware of this tendency, Gergen (1972) began his explorations of our shifting masks of identity. In writing letters to close friends, he realized that he came across as a "completely different person" in each letter. "In one, I was morose, pouring out a philosophy of existential sorrow; in another I was a lusty realist: in a third I was a lighthearted jokester" (p. 32). Based merely on his word choices, Gergen inadvertently varied his style to adapt to the recipients of his letters. This is a prime demonstration of our inherent knowledge of the mutability of our language with respect to varying social contexts.",
"Conversational participants tend to immediately and unconsciously adapt to each other's language styles: a speaker will even adjust the number of articles and other function words in their next utterance in response to the number in their partner's immediately preceding utterance. This striking level of coordination is thought to have arisen as a way to achieve social goals, such as gaining approval or emphasizing difference in status. But has the adaptation mechanism become so deeply embedded in the language-generation process as to become a reflex? We argue that fictional dialogs offer a way to study this question, since authors create the conversations but don't receive the social benefits (rather, the imagined characters do). Indeed, we find significant coordination across many families of function words in our large movie-script corpus. We also report suggestive preliminary findings on the effects of gender and other features; e.g., surprisingly, for articles, on average, characters adapt more to females than to males."
]
} |
1612.06335 | 2607899764 | We consider binary error correcting codes when errors are deletions. A basic challenge concerning deletion codes is determining @math , the zero-rate threshold of adversarial deletions, defined to be the supremum of all @math for which there exists a code family with rate bounded away from 0 capable of correcting a fraction @math of adversarial deletions. A recent construction of deletion-correcting codes [17] shows that @math , and the trivial upper bound, @math , is the best known. Perhaps surprisingly, we do not know whether or not @math . In this work, to gain further insight into deletion codes, we explore two related error models: oblivious deletions and online deletions, which are in between random and adversarial deletions in power. In the oblivious model, the channel can inflict an arbitrary pattern of @math deletions, picked without knowledge of the codeword. We prove the existence of binary codes of positive rate that can correct any fraction @math of oblivious deletions, establishing that the associated zero-rate threshold @math equals @math . For online deletions, where the channel decides whether to delete bit @math based only on knowledge of bits @math , define the deterministic zero-rate threshold for online deletions @math to be the supremum of @math for which there exist deterministic codes against an online channel causing @math deletions with low average probability of error. That is, the probability that a randomly chosen codeword is decoded incorrectly is small. We prove @math if and only if @math . | The model of oblivious errors (such as bit flips) has been studied in the information-theory literature as a particular case of arbitrarily varying channels with state constraints @cite_22 (see the related work section of @cite_20 for more background on this connection). In particular, for the case of bit flips, the capacity against the model of @math oblivious bit flips (for @math ) equals @math , matching the Shannon capacity of the binary symmetric channel that flips each bit independently with probability @math . (This special case was re-proved in @cite_15 by a different, simpler random coding argument compared to the original works @cite_17 @cite_13 .) Similarly, the capacity against the model of @math oblivious erasures is @math , matching the Shannon capacity of the binary erasure channel. Explicit codes of rate approaching @math to correct @math oblivious bit flips (in the sense of Theorem , with randomized encoding) were given in @cite_20 . This work also considered computationally bounded noise models, such as channels with bounded memory or with small circuits, and gave optimal rate codes for decoding against those models. These models are more general than oblivious errors, but still not as pessimistic as adversarial noise. | {
"cite_N": [
"@cite_22",
"@cite_15",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2139194398",
"2076613277",
"2963279554",
"2015204845"
],
"abstract": [
"",
"Let C = {x1, ..., xN} ⊆ {0,1}^n be an [n,N] binary error correcting code (not necessarily linear). Let e ∈ {0,1}^n be an error vector. A codeword x ∈ C is said to be disturbed by the error e if the closest codeword to x ⊕ e is no longer x. Let A_e be the subset of codewords in C that are disturbed by e. In this work, we study the size of A_e in random codes C (i.e., codes in which each codeword is chosen uniformly and independently at random from {0,1}^n ). Using recent results of Vu [Random Structures and Algorithms, vol. 20, no. 3, pp. 262-316, 2002] on the concentration of non-Lipschitz functions, we show that |A_e| is strongly concentrated for a wide range of values of N and ||e||. We apply this result in the study of communication channels we refer to as oblivious. Roughly speaking, a channel W(y|x) is said to be oblivious if the error distribution imposed by the channel is independent of the transmitted codeword x. A family of channels Ψ is said to be oblivious if every member W of the family is oblivious. In this work, we define oblivious and partially oblivious families of (not necessarily memoryless) channels and analyze their capacity. When considering the capacity of a family of channels Ψ, one must address the design of error correcting codes which allow communication under the uncertainty of which channel W ∈ Ψ is actually used. The oblivious channels we define have connections to arbitrarily varying channels with state constraints.",
"The capacity of an arbitrarily varying channel (AVC) is considered for deterministic codes with the average probability of error criterion and, typically, subject to a state constraint. First, sufficient conditions are provided that enable relatively simple decoding rules such as typicality, maximum mutual information, and minimum distance, to attain capacity. Then the (possibly noisy) OR channels and group adder channels are studied in detail. For the former the capacity is explicitly determined and shown to be attainable by minimum-distance decoding. Next, for a large class of additive AVCs, in addition to providing an intuitively suggestive simplification of the general AVC capacity formula, it is proven that capacity can be attained by a universal decoding rule. Finally, the effect of random state selections on capacity is studied. The merits and limitations of a previous mutual information game approach are also discussed.",
"We consider coding schemes for computationally bounded channels, which can introduce an arbitrary set of errors as long as (a) the fraction of errors is bounded with high probability by a parameter p and (b) the process that adds the errors can be described by a sufficiently “simple” circuit. Codes for such channel models are attractive since, like codes for standard adversarial errors, they can handle channels whose true behavior is unknown or varying over time. For two classes of channels, we provide explicit, efficiently encodable/decodable codes of optimal rate where only inefficiently decodable codes were previously known. In each case, we provide one encoder/decoder that works for every channel in the class. The encoders are randomized, and probabilities are taken over the (local, unknown to the decoder) coins of the encoder and those of the channel. Unique decoding for additive errors: We give the first construction of a polynomial-time encodable/decodable code for additive (a.k.a. oblivious) channels that achieves the Shannon capacity 1 − H(p). These are channels that add an arbitrary error vector e ∈ {0,1}^N of weight at most pN to the transmitted word; the vector e can depend on the code but not on the randomness of the encoder or the particular transmitted word. Such channels capture binary symmetric errors and burst errors as special cases. List decoding for polynomial-time channels: For every constant c > 0, we construct codes with optimal rate (arbitrarily close to 1 − H(p)) that efficiently recover a short list containing the correct message with high probability for channels describable by circuits of size at most N^c. Our construction is not fully explicit but rather Monte Carlo (we give an algorithm that, with high probability, produces an encoder/decoder pair that works for all time-N^c channels). We are not aware of any channel models considered in the information theory literature other than purely adversarial channels, which require more than linear-size circuits to implement. We justify the relaxation to list decoding with an impossibility result showing that, in a large range of parameters (p > 1/4), codes that are uniquely decodable for a modest class of channels (online, memoryless, nonuniform channels) cannot have positive rate.",
"Random coding theorems are proved for discrete memoryless arbitrarily varying channels (AVCs) with constraints on the transmitted codewords and channel state sequences. Two types of constraints are considered: peak (i.e. required for each n-length sequence almost surely) and average (over the message set or over an ensemble). For peak constraints on the codewords and on the channel state sequences, the AVC is shown to have a (strong) random coding capacity. If the codewords and/or the channel state sequences are constrained in the average sense, the AVCs do not possess (strong) capacities; only ε-capacities are shown to exist."
]
} |
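The rate 1 − H(p) quoted throughout this row is the Shannon capacity of the binary symmetric channel, where H is the binary entropy function. A small plain-Python check (the sample values of p are illustrative):

```python
import math

def binary_entropy(p: float) -> float:
    """H(p) = -p log2 p - (1-p) log2 (1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Shannon capacity of a binary symmetric channel with flip probability p."""
    return 1.0 - binary_entropy(p)

# Capacity is maximal at p = 0 and vanishes as p approaches 1/2.
print(bsc_capacity(0.0))   # 1.0
print(bsc_capacity(0.5))   # 0.0
print(round(bsc_capacity(0.11), 3))
```

The same complement-of-entropy shape is what the oblivious-channel results above match, which is why oblivious errors cost no more rate than purely random ones.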
1612.06335 | 2607899764 | We consider binary error correcting codes when errors are deletions. A basic challenge concerning deletion codes is determining @math , the zero-rate threshold of adversarial deletions, defined to be the supremum of all @math for which there exists a code family with rate bounded away from 0 capable of correcting a fraction @math of adversarial deletions. A recent construction of deletion-correcting codes [ 17] shows that @math , and the trivial upper bound, @math , is the best known. Perhaps surprisingly, we do not know whether or not @math . In this work, to gain further insight into deletion codes, we explore two related error models: oblivious deletions and online deletions, which are in between random and adversarial deletions in power. In the oblivious model, the channel can inflict an arbitrary pattern of @math deletions, picked without knowledge of the codeword. We prove the existence of binary codes of positive rate that can correct any fraction @math of oblivious deletions, establishing that the associated zero-rate threshold @math equals @math . For online deletions, where the channel decides whether to delete bit @math based only on knowledge of bits @math , define the deterministic zero-rate threshold for online deletions @math to be the supremum of @math for which there exist deterministic codes against an online channel causing @math deletions with low average probability of error. That is, the probability that a randomly chosen codeword is decoded incorrectly is small. We prove @math if and only if @math . | To the best of our knowledge, ours is the first work to address both oblivious and online deletions. We feel that given the large gaps in our understanding of coding against adversarial deletions, and the potential of taking advantage of less stringent models of deletion exhibited in this work, further study of these models seems timely and important. 
In particular, for online deletions as well, the zero-rate threshold is between @math and @math . It is an interesting challenge to ascertain if one can take advantage of the online restriction, and push some of the ideas in @cite_7 and this work, to enable decoding a fraction of online deletions approaching @math . | {
"cite_N": [
"@cite_7"
],
"mid": [
"2952657341"
],
"abstract": [
"We consider codes over fixed alphabets against worst-case symbol deletions. For any fixed @math , we construct a family of codes over alphabet of size @math with positive rate, which allow efficient recovery from a worst-case deletion fraction approaching @math . In particular, for binary codes, we are able to recover a fraction of deletions approaching @math . Previously, even non-constructively the largest deletion fraction known to be correctable with positive rate was @math , and around @math for the binary case. Our result pins down the largest fraction of correctable deletions for @math -ary codes as @math , since @math is an upper bound even for the simpler model of erasures where the locations of the missing symbols are known. Closing the gap between @math and @math for the limit of worst-case deletions correctable by binary codes remains a tantalizing open question."
]
} |
1612.06615 | 2953086983 | Robust visual tracking is a challenging computer vision problem, with many real-world applications. Most existing approaches employ hand-crafted appearance features, such as HOG or Color Names. Recently, deep RGB features extracted from convolutional neural networks have been successfully applied for tracking. Despite their success, these features only capture appearance information. On the other hand, motion cues provide discriminative and complementary information that can improve tracking performance. Contrary to visual tracking, deep motion features have been successfully applied for action recognition and video classification tasks. Typically, the motion features are learned by training a CNN on optical flow images extracted from large amounts of labeled videos. This paper presents an investigation of the impact of deep motion features in a tracking-by-detection framework. We further show that hand-crafted, deep RGB, and deep motion features contain complementary information. To the best of our knowledge, we are the first to propose fusing appearance information with deep motion features for visual tracking. Comprehensive experiments clearly suggest that our fusion approach with deep motion features outperforms standard methods relying on appearance information alone. | Discriminative tracking methods @cite_34 @cite_30 @cite_27 @cite_28 typically train a classifier or regressor for the task of differentiating the target from the background. These methods are often also termed tracking-by-detection approaches since they apply a discriminatively trained detector. Learning is performed online by extracting and labeling samples from the video frames. The training samples are often represented by e.g. raw image patches @cite_29 @cite_7 , image histograms and Haar features @cite_34 , color @cite_36 @cite_30 , or shape features @cite_4 @cite_2 . 
Lebeda @cite_0 use edge-points and estimate the transformation between frames to successfully perform tracking of texture-less objects. Čehovin @cite_35 employ a coupled-layer visual model, which combines local and global target appearance, to perform robust tracking of objects that have rapid appearance changes. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_4",
"@cite_7",
"@cite_28",
"@cite_36",
"@cite_29",
"@cite_0",
"@cite_27",
"@cite_2",
"@cite_34"
],
"mid": [
"182940129",
"2140595412",
"",
"161114242",
"2214352687",
"",
"",
"2099427078",
"",
"",
"2098941887"
],
"abstract": [
"We propose a multi-expert restoration scheme to address the model drift problem in online tracking. In the proposed scheme, a tracker and its historical snapshots constitute an expert ensemble, where the best expert is selected to restore the current tracker when needed based on a minimum entropy criterion, so as to correct undesirable model updates. The base tracker in our formulation exploits an online SVM on a budget algorithm and an explicit feature mapping method for efficient model update and inference. In experiments, our tracking method achieves substantially better overall performance than 32 trackers on a benchmark dataset of 50 video sequences under various evaluation settings. In addition, in experiments with a newly collected dataset of challenging sequences, we show that the proposed multi-expert restoration scheme significantly improves the robustness of our base tracker, especially in scenarios with frequent occlusions and repetitive appearance variations.",
"This paper addresses the problem of tracking objects which undergo rapid and significant appearance changes. We propose a novel coupled-layer visual model that combines the target's global and local appearance by interlacing two layers. The local layer in this model is a set of local patches that geometrically constrain the changes in the target's appearance. This layer probabilistically adapts to the target's geometric deformation, while its structure is updated by removing and adding the local patches. The addition of these patches is constrained by the global layer that probabilistically models the target's global visual properties, such as color, shape, and apparent local motion. The global visual properties are updated during tracking using the stable patches from the local layer. By this coupled constraint paradigm between the adaptation of the global and the local layer, we achieve a more robust tracking through significant appearance changes. We experimentally compare our tracker to 11 state-of-the-art trackers. The experimental results on challenging sequences confirm that our tracker outperforms the related trackers in many cases by having a smaller failure rate as well as better accuracy. Furthermore, the parameter analysis shows that our tracker is stable over a range of parameter values.",
"",
"Recent years have seen greater interest in the use of discriminative classifiers in tracking systems, owing to their success in object detection. They are trained online with samples collected during tracking. Unfortunately, the potentially large number of samples becomes a computational burden, which directly conflicts with real-time requirements. On the other hand, limiting the samples may sacrifice performance. Interestingly, we observed that, as we add more and more samples, the problem acquires circulant structure. Using the well-established theory of Circulant matrices, we provide a link to Fourier analysis that opens up the possibility of extremely fast learning and detection with the Fast Fourier Transform. This can be done in the dual space of kernel machines as fast as with linear classifiers. We derive closed-form solutions for training and detection with several types of kernels, including the popular Gaussian and polynomial kernels. The resulting tracker achieves performance competitive with the state-of-the-art, can be implemented with only a few lines of code and runs at hundreds of frames-per-second. MATLAB code is provided in the paper (see Algorithm 1).",
"Visual object tracking is challenging as target objects often undergo significant appearance changes caused by deformation, abrupt motion, background clutter and occlusion. In this paper, we exploit features extracted from deep convolutional neural networks trained on object recognition datasets to improve tracking accuracy and robustness. The outputs of the last convolutional layers encode the semantic information of targets and such representations are robust to significant appearance variations. However, their spatial resolution is too coarse to precisely localize targets. In contrast, earlier convolutional layers provide more precise localization but are less invariant to appearance changes. We interpret the hierarchies of convolutional layers as a nonlinear counterpart of an image pyramid representation and exploit these multiple levels of abstraction for visual tracking. Specifically, we adaptively learn correlation filters on each convolutional layer to encode the target appearance. We hierarchically infer the maximum response of each layer to locate targets. Extensive experimental results on a largescale benchmark dataset show that the proposed algorithm performs favorably against state-of-the-art methods.",
"",
"",
"Long term tracking of an object, given only a single instance in an initial frame, remains an open problem. We propose a visual tracking algorithm, robust to many of the difficulties which often occur in real-world scenes. Correspondences of edge-based features are used, to overcome the reliance on the texture of the tracked object and improve invariance to lighting. Furthermore we address long-term stability, enabling the tracker to recover from drift and to provide redetection following object disappearance or occlusion. The two-module principle is similar to the successful state-of-the-art long-term TLD tracker, however our approach extends to cases of low-textured objects. Besides reporting our results on the VOT Challenge dataset, we perform two additional experiments. Firstly, results on short-term sequences show the performance of tracking challenging objects which represent failure cases for competing state-of-the-art approaches. Secondly, long sequences are tracked, including one of almost 30000 frames which to our knowledge is the longest tracking sequence reported to date. This tests the re-detection and drift resistance properties of the tracker. All the results are comparable to the state-of-the-art on sequences with textured objects and superior on non-textured objects. The new annotated sequences are made publicly available.",
"",
"",
"Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (accurate estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we are able to avoid the need for an intermediate classification step. Our method uses a kernelized structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow for real-time application, we introduce a budgeting mechanism which prevents the unbounded growth in the number of support vectors which would otherwise occur during tracking. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased performance."
]
} |
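The circulant/Fourier-domain training mentioned in the correlation-filter abstracts above reduces, in its simplest single-channel linear form (a MOSSE-style filter rather than the kernelized variant), to a closed-form ridge solution. A sketch on a synthetic 1-D signal; the signal, desired Gaussian response, and regularizer lam are all illustrative:

```python
import numpy as np

def train_filter(x, y, lam=1e-3):
    """Closed-form ridge solution in the Fourier domain:
    H = X * conj(Y) / (|X|^2 + lam). Correlating the filter with the
    training signal then approximately reproduces the target response y."""
    X, Y = np.fft.fft(x), np.fft.fft(y)
    return X * np.conj(Y) / (np.abs(X) ** 2 + lam)

def respond(H, z):
    """Correlation response of a new signal z under filter H."""
    return np.real(np.fft.ifft(np.fft.fft(z) * np.conj(H)))

rng = np.random.default_rng(0)
n = 64
x = rng.standard_normal(n)                           # training "patch" (1-D stand-in)
y = np.exp(-0.5 * ((np.arange(n) - 20) / 2.0) ** 2)  # desired Gaussian peak at index 20
H = train_filter(x, y)
r = respond(H, x)
print(int(np.argmax(r)))  # the response peaks at the target location
```

Because all operations are element-wise in the Fourier domain, training and detection cost O(n log n), which is the source of the hundreds-of-FPS speeds reported in @cite_7.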
1612.06524 | 2951413377 | We explore 3D human pose estimation from a single RGB image. While many approaches try to directly predict 3D pose from image measurements, we explore a simple architecture that reasons through intermediate 2D pose predictions. Our approach is based on two key observations: (1) Deep neural nets have revolutionized 2D pose estimation, producing accurate 2D predictions even for poses with self occlusions. (2) Big-data sets of 3D mocap data are now readily available, making it tempting to lift predicted 2D poses to 3D through simple memorization (e.g., nearest neighbors). The resulting architecture is trivial to implement with off-the-shelf 2D pose estimation systems and 3D mocap libraries. Importantly, we demonstrate that such methods outperform almost all state-of-the-art 3D pose estimation systems, most of which directly try to regress 3D pose from 2D measurements. | Most existing work that makes use of deep features tends to formulate the problem as a direct 2D image to 3D pose regression task. Li @cite_5 use deep learning to train a regression model to predict 3D pose directly from images. Tekin @cite_32 integrate spatio-temporal features via an image sequence to learn a regression model for 3D pose mapping. We provide both a theoretical and empirical analysis that suggests that 2D pose may be a useful intermediate representation.
"cite_N": [
"@cite_5",
"@cite_32"
],
"mid": [
"2293220651",
"2270288817"
],
"abstract": [
"In this paper, we propose a deep convolutional neural network for 3D human pose estimation from monocular images. We train the network using two strategies: (1) a multi-task framework that jointly trains pose regression and body part detectors; (2) a pre-training strategy where the pose regressor is initialized using a network trained for body part detection. We compare our network on a large data set and achieve significant improvement over baseline methods. Human pose estimation is a structured prediction problem, i.e., the locations of each body part are highly correlated. Although we do not add constraints about the correlations between body parts to the network, we empirically show that the network has disentangled the dependencies among different body parts, and learned their correlations.",
"We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Previous approaches typically compute candidate poses in individual frames and then link them in a post-processing step to resolve ambiguities. By contrast, we directly regress from a spatio-temporal volume of bounding boxes to a 3D pose in the central frame. We further show that, for this approach to achieve its full potential, it is essential to compensate for the motion in consecutive frames so that the subject remains centered. This then allows us to effectively overcome ambiguities and improve upon the state-of-the-art by a large margin on the Human3.6m, HumanEva, and KTH Multiview Football 3D human pose estimation benchmarks."
]
} |
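The "lifting through simple memorization" idea in this row amounts, in its barest form, to a nearest-neighbour lookup of a predicted 2D pose against a library of paired (2D, 3D) mocap exemplars. A toy sketch; the orthographic projection, joint count, and library here are synthetic stand-ins, not the paper's actual pipeline:

```python
import numpy as np

def lift_2d_to_3d(pose_2d, library_2d, library_3d):
    """Return the stored 3D pose whose 2D projection is closest
    (Euclidean distance summed over all joints) to the query."""
    dists = np.linalg.norm(library_2d - pose_2d, axis=(1, 2))
    return library_3d[int(np.argmin(dists))]

# Toy library: 100 "mocap" poses, 14 joints each.
rng = np.random.default_rng(1)
library_3d = rng.standard_normal((100, 14, 3))
library_2d = library_3d[..., :2]  # orthographic projection, for the sketch only
# A noisy 2D prediction near exemplar 42, standing in for a 2D-pose-net output:
query = library_2d[42] + 0.01 * rng.standard_normal((14, 2))
lifted = lift_2d_to_3d(query, library_2d, library_3d)
print(np.array_equal(lifted, library_3d[42]))  # True: the right exemplar is retrieved
```

Real systems add alignment (e.g., normalizing scale and camera) before matching, but the retrieval core is this simple, which is the paper's point.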
1612.06589 | 2950692120 | This paper analyzes customer product-choice behavior based on the recency and frequency of each customer's page views on e-commerce sites. Recently, we devised an optimization model for estimating product-choice probabilities that satisfy monotonicity, convexity, and concavity constraints with respect to recency and frequency. This shape-restricted model delivered high predictive performance even when there were few training samples. However, typical e-commerce sites deal in many different varieties of products, so the predictive performance of the model can be further improved by integration of such product heterogeneity. For this purpose, we develop a novel latent-class shape-restricted model for estimating product-choice probabilities for each latent class of products. We also give a tailored expectation-maximization algorithm for parameter estimation. Computational results demonstrate that higher predictive performance is achieved with our latent-class model than with the previous shape-restricted model and common latent-class logistic regression. | One of the most active areas of clickstream research has been the analysis of online purchasing behavior of customers on e-commerce sites @cite_15 . Moe and Fader @cite_8 proposed a stochastic model for predicting online purchasing conversion rates based on an observed history of visits and purchases. Many other studies used logit and/or probit modeling based on various types of explanatory variables to predict online purchasing behavior @cite_21 @cite_17 @cite_27 @cite_40 @cite_10 , whereas @cite_38 applied several classification algorithms to predict customers' buying intentions. However, these studies focused on predicting customer visits that culminated in purchases, and so they did not assign a purchase probability to each product.
Although various studies have analyzed online product-choice behavior, most have emphasized more detailed data (e.g., multiplex data @cite_32 , social media profiles @cite_42 , and product reviews and ratings @cite_11 ) rather than clickstream data. In contrast to these studies, the present paper focuses on investigating the relationship between customer PVs and product-choice probabilities based on clickstream data. Our research will be of significant value to most e-commerce sites, for which the analysis of clickstream data is a major challenge. | {
"cite_N": [
"@cite_38",
"@cite_11",
"@cite_8",
"@cite_21",
"@cite_42",
"@cite_32",
"@cite_27",
"@cite_40",
"@cite_15",
"@cite_10",
"@cite_17"
],
"mid": [
"2055687847",
"641807840",
"2151987718",
"2125441033",
"115528473",
"1988935054",
"",
"2006550546",
"2048590610",
"1964378344",
"1986684834"
],
"abstract": [
"Nowadays customers would rather buy their needs online than visiting a retail store because of many reasons such as saving time. Therefore, in order to increase efficiency of online shopping websites, many companies have invested in researches toward prediction of users purchases and recommendation systems that may help and motivate a user to buy products that he may be interested in. However, most efforts in this area has been around classification and predictions based on users interests in specific types of products. In this paper, we have studied efficiency of numerous algorithms toward building a classification model to predict the probability of a complete purchase by users only based on their behavior models in the system and regardless of their interest. Therefore, we experimented accuracy of different algorithms and proposed a novel classification model that is able to predict whether a user will be interested in buying a certain set of products that are placed in the online shopping cart or not.",
"Predicting customer purchase behaviour is an interesting and challenging task. In e-commerce context, to tackle the challenge will confront a lot of new problems different from those in traditional business. This study investigates three factors that affect purchasing decision-making of customers in online shopping: the needs of customers, the popularity of products and the preference of the customers. Furthermore, exploiting purchase data and ratings of products in the e-commerce website, we propose methods to quantify the strength of these factors: (1) using associations between products to predict the needs of customers; (2) combining collaborative filtering and a hierarchical Bayesian discrete choice model to learn preference of customers; (3) building a support vector regression based model, called Heat model, to calculate the popularity of products; (4) developing a crowdsourcing approach based experimental platform to generate train set for learning Heat model. Combining these factors, a model, called COREL, is proposed to make purchase behaviour prediction for customers. Submitted a purchased product of a customer, the model can return top n the most possible purchased products of the customer in future. Experiments show that these factors play key roles in predictive model and COREL can greatly outperform the baseline methods.",
"This paper develops a model of conversion behavior (i.e., converting store visits into purchases) that predicts each customer's probability of purchasing based on an observed history of visits and purchases. We offer an individual-level probability model that allows for different forms of customer heterogeneity in a very flexible manner. Specifically, we decompose an individual's conversion behavior into two components: one for accumulating visit effects and another for purchasing threshold effects. Each component is allowed to vary across households as well as over time. Visit effects capture the notion that store visits can play different roles in the purchasing process. For example, some visits are motivated by planned purchases, while others are associated with hedonic browsing (akin to window shopping); our model is able to accommodate these (and several other) types of visit-purchase relationships in a logical, parsimonious manner. The purchasing threshold captures the psychological resistance to online purchasing that may grow or shrink as a customer gains more experience with the purchasing process at a given website. We test different versions of the model that vary in the complexity of these two key components and also compare our general framework with popular alternatives such as logistic regression. We find that the proposed model offers excellent statistical properties, including its performance in a holdout validation sample, and also provides useful managerial diagnostics about the patterns underlying online buyer behavior.",
"Clickstream data provide information about the sequence of pages or the path viewed by users as they navigate a website. We show how path information can be categorized and modeled using a dynamic multinomial probit model of Web browsing. We estimate this model using data from a major online bookseller. Our results show that the memory component of the model is crucial in accurately predicting a path. In comparison, traditional multinomial probit and first-order Markov models predict paths poorly. These results suggest that paths may reflect a user's goals, which could be helpful in predicting future movements at a website. One potential application of our model is to predict purchase conversion. We find that after only six viewings purchasers can be predicted with more than 40% accuracy, which is much better than the benchmark 7% purchase conversion prediction rate made without path information. This technique could be used to personalize Web designs and product offerings based upon a user's path.",
"In the era of social commerce, users often connect from e-commerce websites to social networking venues such as Facebook and Twitter. However, there have been few efforts on understanding the correlations between users' social media profiles and their e-commerce behaviors. This paper presents a system for predicting a user's purchase behaviors on e-commerce websites from the user's social media profile. We specifically aim at understanding if the user's profile information in a social network (for example Facebook) can be leveraged to predict what categories of products the user will buy from (for example eBay Electronics). The paper provides an extensive analysis on how users' Facebook profile information correlates to purchases on eBay, and analyzes the performance of different feature sets and learning algorithms on the task of purchase behavior prediction.",
"In the customer-centered marketplace, the understanding of customer behavior is a critical success factor. The big databases in an organization usually involve multiplex data such as static, time series, symbolic sequential and textual data which are separately stored in different databases of different sections. It poses a challenge to traditional centralized customer behavior prediction. In this study, a novel approach called collaborative multiple kernel support vector machine (C-MK-SVM) is developed for distributed customer behavior prediction using multiplex data. The alternating direction method of multipliers (ADMM) is used for the global optimization of the distributed sub-models in C-MK-SVM. Computational experiments on a practical retail dataset are reported. Computational results show that C-MK-SVM exhibits better customer behavior prediction performance and higher computational speed than support vector machine and multiple kernel support vector machine.",
"",
"Abstract The authors develop and estimate a model of online buying using clickstream data from a Web site that sells cars. The model predicts online buying by linking the purchase decision to what visitors do and to what they are exposed to while at the site. To overcome the challenges of predicting Internet buying, the authors decompose the purchase process into the completion of sequential nominal user tasks and account for heterogeneity across visitors at the county level. Using a sequence of binary probits, the authors model the visitor’s decision of whether to complete each task for the first time, given that the visitor has completed the previous tasks at least once. The results indicate that visitors’ browsing experiences and navigational behavior predict task completion for all decision levels. The results also indicate that the number of repeat visits per se is not diagnostic of buying propensity and that a site’s offering of sophisticated decision aids does not guarantee increased conversion rat...",
"Clickstream data are defined as the electronic record of Internet usage collected by Web servers or third-party services. The authors discuss the nature of clickstream data, noting key strengths and limitations of these data for research in marketing. The paper reviews major developments from the analysis of these data, covering advances in understanding (1) browsing and site usage behavior on the Internet, (2) the Internet's role and efficacy as a new medium for advertising and persuasion, and (3) shopping behavior on the Internet (i.e., electronic commerce). The authors outline opportunities for new research and highlight several emerging areas likely to grow in future importance. Inherent limitations of clickstream data for understanding and predicting the behavior of Internet users or researching marketing phenomena are also discussed.",
"This empirical study investigates the contribution of different types of predictors to the purchasing behaviour at an online store. We use logit modelling to predict whether or not a purchase is made during the next visit to the website using both forward and backward variable-selection techniques, as well as Furnival and Wilson’s global score search algorithm to find the best subset of predictors. We contribute to the literature by using variables from four different categories in predicting online-purchasing behaviour: (1) general clickstream behaviour at the level of the visit, (2) more detailed clickstream information, (3) customer demographics, and (4) historical purchase behaviour. The results show that predictors from all four categories are retained in the final (best subset) solution indicating that clickstream behaviour is important when determining the tendency to buy. We clearly indicate the contribution in predictive power of variables that were never used before in online purchasing studies. Detailed clickstream variables are the most important ones in classifying customers according to their online purchase behaviour. In doing so, we are able to highlight the advantage of e-commerce retailers of being able to capture an elaborate list of customer information.",
"Social shopping communities (SSCs) evolve from a linkage of social networking and online shopping. Apart from direct shopping features in shopbots (e.g., search fields), SSCs additionally offer user-generated social shopping features. These include recommendation lists, ratings, styles (i.e., assortments arranged by users), tags, and user profiles. Purchases can be made by following a link to a participating online shop (\"click-out\"). SSCs are experiencing high growth rates in consumer popularity (e.g., Polyvore attracts more than 6 million unique visitors per month). Thus, this business model has received considerable venture capital in recent years. By analyzing clickstream data, we investigate which factors, especially social shopping features, are significant for predicting purchasing behavior within SSCs. Our logit model includes about 2.73 million visiting sessions and shows that social shopping features exert a significant impact, both positive and negative. Tags and high ratings have a positive impact on a click-out. In contrast, the more lists and styles used, the less likely the user is to make a click-out. Yet, lists and styles seem to enhance site stickiness and browsing. Moreover, the more direct shopping features that are used, the less likely the user is to conduct a click-out. Increasing transaction costs and information overload could be potential reasons. We also found that community members are more likely to make a click-out than ordinary users. This implies that community members are more profitable."
]
} |
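The logit-style purchase prediction described in the clickstream abstracts above can be illustrated with a minimal sketch. This is not the cited studies' actual model: the two session features (pages viewed, minutes on site) and all data are synthetic, and the fit uses plain stochastic gradient descent rather than the variable-selection procedures the abstracts describe.

```python
import math
import random

def sigmoid(z):
    # Numerically safe logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def fit_logit(X, y, lr=0.01, epochs=300):
    """Fit a logistic regression by plain stochastic gradient descent."""
    w = [0.0] * (len(X[0]) + 1)  # w[0] is the intercept
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

random.seed(0)
# Hypothetical session features (pages viewed, minutes on site); purchases
# are generated so that long, deep sessions are more likely to convert.
X = [(random.uniform(1, 30), random.uniform(0.5, 20)) for _ in range(400)]
y = [1 if 0.2 * p + 0.5 * m - 6 + random.gauss(0, 1) > 0 else 0 for p, m in X]
w = fit_logit(X, y)
print(predict(w, (25, 15)) > predict(w, (2, 1)))  # deep session scores higher
```

The same fitting loop accepts any number of features, so the four predictor categories mentioned in the abstract (visit-level clickstream, detailed clickstream, demographics, purchase history) would simply become extra columns of `X`.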
1612.06589 | 2950692120 | This paper analyzes customer product-choice behavior based on the recency and frequency of each customer's page views on e-commerce sites. Recently, we devised an optimization model for estimating product-choice probabilities that satisfy monotonicity, convexity, and concavity constraints with respect to recency and frequency. This shape-restricted model delivered high predictive performance even when there were few training samples. However, typical e-commerce sites deal in many different varieties of products, so the predictive performance of the model can be further improved by integration of such product heterogeneity. For this purpose, we develop a novel latent-class shape-restricted model for estimating product-choice probabilities for each latent class of products. We also give a tailored expectation-maximization algorithm for parameter estimation. Computational results demonstrate that higher predictive performance is achieved with our latent-class model than with the previous shape-restricted model and common latent-class logistic regression. | Shape-restricted regression, which has its origin in earlier work @cite_5 @cite_24 @cite_33 , fits a nonparametric function to given data points under shape restrictions such as monotonicity, convexity, and concavity. Some popular examples include the estimation of utility, production, cost, and profit functions in economics @cite_34 @cite_41 and option pricing functions in finance @cite_6 . Various algorithms have been developed for shape-restricted regression @cite_19 @cite_25 @cite_37 @cite_39 @cite_0 @cite_7 , a special case of which is isotonic regression. This has many applications in statistics, operations research, and image processing @cite_13 . @cite_23 recently used the maximum likelihood method to estimate product-choice probabilities subject to the monotonicity, convexity, and concavity constraints with respect to recency and frequency. 
Their shape-restricted model was a new effective application of shape-restricted regression to the analysis of clickstream data. | {
"cite_N": [
"@cite_37",
"@cite_33",
"@cite_7",
"@cite_41",
"@cite_6",
"@cite_39",
"@cite_24",
"@cite_19",
"@cite_0",
"@cite_23",
"@cite_5",
"@cite_34",
"@cite_13",
"@cite_25"
],
"mid": [
"2494296723",
"2079313454",
"1967819130",
"2064922317",
"2151447307",
"2086688853",
"1975116882",
"2033062730",
"2795929191",
"2257459631",
"2127441697",
"",
"2006560229",
"1995237252"
],
"abstract": [
"In some statistical non-parametric models the mean of the random variable y has to satisfy specific constraints. We consider the case where the set defined by the constraints is a closed polyhedral cone K in R^k. For example, when the mean is required to be concave in x, the set of acceptable means is a closed convex cone defined by k-2 linear inequalities in R^k. The least squares estimate of the mean is then the projection of the data point y on the cone K. In this paper, we present an algorithm to find the least squares estimate of the mean in a finite number of steps. Other algorithms to solve this problem have been given before. The successive approximations in such algorithms are usually points on the faces of K. The solution here is reached by following a fixed line joining an arbitrary but suitably chosen initial point y0 to the data point y. The 1-dimensional subspace spanned by the generators of the cone K is divided into 2^k regions which can be described as the set of points with non-negative coordinates in mixed primal-dual bases relative to the cone K. Any point y belongs to one and only one of these regions S_j with corresponding basis B_j. The projection of y on K is immediately obtained from the expression of y in B_j by dropping the dual component of y.",
"Abstract A method is developed for obtaining maximum likelihood estimates of points on a surface of unspecified algebraic form when ordinates of the points are required to satisfy a set of linear inequalities. A production function with one variable input is considered in some detail. In this case the restrictions follow from the assumption of non-increasing returns. An illustrative computation is worked out using a procedure based on equivalence between the estimation problem and a certain saddle point problem. Alternative procedures for production functions with two variable inputs are sketched.",
"The objective of this article is to develop a computationally efficient estimator of the regression function subject to various shape constraints. In particular, nonparametric estimators of monotone and/or convex (concave) regression functions are obtained by using a nested sequence of Bernstein polynomials. One of the key distinguishing features of the proposed estimator is that a given shape constraint (e.g., monotonicity and/or convexity) is maintained for any finite sample size and satisfied over the entire support of the predictor space. Moreover, it is shown that the Bernstein polynomial based regression estimator can be obtained as a solution of a constrained least squares method and hence the estimator can be computed efficiently using a quadratic programming algorithm. Finally, the asymptotic properties (e.g., strong uniform consistency) of the estimator are established under very mild conditions, and finite sample properties are explored using several simulation studies and real data analysis. The predictive performances are compared with some of the existing methods.",
"Empirical economists using flexible functional forms often face the disturbing choice of drawing inferences from an approximation violating properties dictated by theory or imposing global restrictions that greatly restrict the flexibility of the functional form. Focusing on the cost function, this paper presents an alternative approach which imposes monotonicity and concavity properties only over the set of prices where inferences will be drawn. An application investigating elasticities for Berndt-Wood data set using the translog, generalized Leontief, and symmetric generalized McFadden flexible functional forms illustrates the technique. Copyright 1996 by John Wiley & Sons, Ltd.",
"Frequently, economic theory places shape restrictions on functional relationships between economic variables. This paper develops a method to constrain the values of the first and second derivatives of nonparametric locally polynomial estimators. We apply this technique to estimate the state price density (SPD), or risk-neutral density, implicit in the market prices of options. The option pricing function must be monotonic and convex. Simulations demonstrate that nonparametric estimates can be quite feasible in the small samples relevant for day-to-day option pricing, once appropriate theory-motivated shape restrictions are imposed. Using S&P500 option prices, we show that unconstrained nonparametric estimators violate the constraints during more than half the trading days in 1999, unlike the constrained estimator we propose.",
"The additive isotonic least-squares regression model has been fit using a sequential pooled adjacent violators algorithm, estimating each isotonic component in turn, and looping until convergence. However, the individual components are not, in general, estimable. The sum of the components, i.e. the expected value of the response, has a unique estimate, which can be found using a single cone projection. Estimators for the individual components are then easily obtained, which are unique if the conditions for estimability hold. Parametrically modelled covariates are easily included in the cone projection specification. The cone structure also provides information about the degrees of freedom of the fit, which can be used in inference methods, variable selection, and estimation of the model variance. Simulations show that these methods can compare favourably to standard parametric methods, even when the parametric assumptions are correct. The estimation and inference methods can be extended to other constraints such as convex regression or isotonic regression on partial orderings.",
"Abstract 2.1. Limitations of the parametric methods. In the previous sections we have studied the efficiency of various methods of estimating the force of mortality. The most efficient of these is, at least for large samples, the one given by the maximum likelihood method, and the rest of them have to be compared to this best estimate. However in this discussion the notion of efficiency is based on the assumption that the mortality intensity can be expressed by Makeham’s formula. How realistic is this in practice?",
"",
"",
"This paper investigates the relationship between customers' page views (PVs) and the probabilities of their product choices on e-commerce sites. For this purpose, we create a probability table consisting of product-choice probabilities for all recency and frequency combinations of each customers' previous PVs. To reduce the estimation error when there are few training samples, we develop optimization models for estimating the product-choice probabilities that satisfy monotonicity, convexity and concavity constraints with respect to recency and frequency. Computational results demonstrate that our method has clear advantages over logistic regression and kernel-based support vector machine.",
"",
"",
"The isotonic regression problem has applications in statistics, operations research, and image processing. In this paper a general framework for the isotonic regression algorithm is proposed. Under this framework, we discuss the isotonic regression problem in the case where the directed graph specifying the order restriction is a directed tree with n vertices. A new algorithm is presented for this case, which can be regarded as a generalization of the PAV algorithm of Using a simple tree structure such as the binomial heap, the algorithm can be implemented in O(n log n) time, improving the previously best known O(n 2 ) time algorithm. We also present linear time algorithms for special cases where the directed graph is a path or a star.",
"Abstract We consider the efficient estimation in the semiparametric additive isotonic regression model where each additive nonparametric component is assumed to be a monotone function. We show that the least-square estimator of the finite-dimensional regression coefficient is root- n consistent and asymptotically normal. Moreover, the isotonic estimator of each additive functional component is proved to have the oracle property, which means the additive component can be estimated with the highest asymptotic accuracy as if the other components were known. A fast algorithm is developed by iterating between a cyclic pool adjacent violators procedure and solving a standard ordinary least squares problem. Simulations are used to illustrate the performance of the proposed procedure and verify the oracle property."
]
} |
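Several of the reference abstracts above concern isotonic regression, the special case of shape-restricted regression. Its textbook solver is the pool-adjacent-violators (PAV) algorithm; the sketch below handles only the simple case of a nondecreasing fit in index order, not the tree-ordered or partially ordered variants discussed in the abstracts.

```python
def pav(y, w=None):
    """Pool Adjacent Violators: weighted least-squares fit that is
    nondecreasing in index order."""
    if w is None:
        w = [1.0] * len(y)
    # Each block holds [weighted mean, total weight, number of points].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, n1 + n2])
    fit = []
    for m, _, n in blocks:
        fit.extend([m] * n)
    return fit

print(pav([1, 3, 2, 4, 3, 5]))  # [1, 2.5, 2.5, 3.5, 3.5, 5]
```

Violating neighbours are pooled into a common mean, which is exactly the projection onto the monotone cone that the abstracts describe in more general settings.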
1612.06589 | 2950692120 | This paper analyzes customer product-choice behavior based on the recency and frequency of each customer's page views on e-commerce sites. Recently, we devised an optimization model for estimating product-choice probabilities that satisfy monotonicity, convexity, and concavity constraints with respect to recency and frequency. This shape-restricted model delivered high predictive performance even when there were few training samples. However, typical e-commerce sites deal in many different varieties of products, so the predictive performance of the model can be further improved by integration of such product heterogeneity. For this purpose, we develop a novel latent-class shape-restricted model for estimating product-choice probabilities for each latent class of products. We also give a tailored expectation-maximization algorithm for parameter estimation. Computational results demonstrate that higher predictive performance is achieved with our latent-class model than with the previous shape-restricted model and common latent-class logistic regression. | Latent-class (or mixture) regression @cite_14 @cite_1 is a traditional form of latent-class modeling @cite_2 @cite_36 . It is aimed at classifying a sample into latent classes and simultaneously forming a regression model within these classes. There are two main algorithms for maximum likelihood estimation of latent-class regression models: the Newton--Raphson algorithm @cite_28 and the EM algorithm @cite_35 @cite_30 . While these models are frequently used in marketing and business research to represent consumer heterogeneity, we apply latent-class modeling to the classification of a wide variety of products on e-commerce sites. For this purpose, we develop a latent-class model for estimating the product-choice probabilities that satisfy monotonicity, convexity, and concavity constraints. 
To the best of our knowledge, none of the existing studies have considered a latent-class model under such a variety of shape restrictions. We also demonstrate that our latent-class model is superior to latent-class logistic regression @cite_31 @cite_26 @cite_20 in terms of predictive performance based on clickstream data. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_14",
"@cite_26",
"@cite_36",
"@cite_28",
"@cite_1",
"@cite_2",
"@cite_31",
"@cite_20"
],
"mid": [
"2117853077",
"2049633694",
"2147953360",
"2799202302",
"",
"",
"1604544535",
"",
"2070120478",
"1977312174"
],
"abstract": [
"The first unified account of the theory, methodology, and applications of the EM algorithm and its extensions. Since its inception in 1977, the Expectation-Maximization (EM) algorithm has been the subject of intense scrutiny, dozens of applications, numerous extensions, and thousands of publications. The algorithm and its extensions are now standard tools applied to incomplete data problems in virtually every field in which statistical methods are used. Until now, however, no single source offered a complete and unified treatment of the subject. The EM Algorithm and Extensions describes the formulation of the EM algorithm, details its methodology, discusses its implementation, and illustrates applications in many statistical contexts. Employing numerous examples, Geoffrey McLachlan and Thriyambakam Krishnan examine applications both in evidently incomplete data situations---where data are missing, distributions are truncated, or observations are censored or grouped---and in a broad variety of situations in which incompleteness is neither natural nor evident. They point out the algorithm's shortcomings and explain how these are addressed in the various extensions. Areas of application discussed include: Regression; Medical imaging; Categorical data analysis; Finite mixture analysis; Factor analysis; Robust statistical modeling; Variance-components estimation; Survival analysis; Repeated-measures designs. For theoreticians, practitioners, and graduate students in statistics as well as researchers in the social and physical sciences, The EM Algorithm and Extensions opens the door to the tremendous potential of this remarkably versatile statistical tool.",
"",
"Despite the importance of data mining techniques to customer relationship management (CRM), there is a lack of a comprehensive literature review and a classification scheme for it. This is the first identifiable academic literature review of the application of data mining techniques to CRM. It provides an academic database of literature between the period of 2000-2006 covering 24 journals and proposes a classification scheme to classify the articles. Nine hundred articles were identified and reviewed for their direct relevance to applying data mining techniques to CRM. Eighty-seven articles were subsequently selected, reviewed and classified. Each of the 87 selected papers was categorized on four CRM dimensions (Customer Identification, Customer Attraction, Customer Retention and Customer Development) and seven data mining functions (Association, Classification, Clustering, Forecasting, Regression, Sequence Discovery and Visualization). Papers were further classified into nine sub-categories of CRM elements under different data mining techniques based on the major focus of each paper. The review and classification process was independently verified. Findings of this paper indicate that the research area of customer retention received most research attention. Of these, most are related to one-to-one marketing and loyalty programs respectively. On the other hand, classification and association models are the two commonly used models for data mining in CRM. Our analysis provides a roadmap to guide future research and facilitate knowledge accumulation and creation concerning the application of data mining techniques in CRM.",
"",
"",
"",
"Preface Jacques A. Hagenaars and Allan L. McCutcheon Part I. Introduction: 1. Latent class analysis Leo A. Goodman 2. Basic concepts and procedures in singe- and multiple-group latent class analysis Allan L. McCutcheon Part II. Classification and Measurement: 3. Latent class cluster analysis Jeroen K. Vermunt and Jay Magidson 4. Some examples of latent budget analysis and its extensions Peter G. M. van der Heijden, L. Andries van der Ark and Ab Mooijaart 5. Ordering the classes Marcel Croon 6. Comparison and choice Ulf Bockenholt 7. Three-parameter linear logistic latent class analysis Anton K. Formann and Thomas Kohlmann Part III. 8. Use of categorical and continuous covariates in latent class analysis C. Mitchell Dayton and George B. Macready 9. Directed loglinear modelling with latent variables Jacques A. Hagenaars 10. Latent class models for longitudinal data Linda M. Collins and Brian P. Flaherty 11. Latent markov chains Rolf Langeheine and Frank van de Pol Part IV. Unobserved heterogeneity and non-response: 12. A latent class approach to measuring the fit of a statistical model Tamas Rudas 13. Mixture regression models Michael Wedel and Wayne S. DeSarbo 14. A general latent class approach to unobserved heterogeneity in the analysis of event history data Jeroen K. Vermunt 15. Latent class models for contingency tables with missing data Christopher Winship, Robert D. Mare and John Robert Warren Appendices Index.",
"",
"In the present paper the linear logistic extension of latent class analysis is described. Thereby it is assumed that the item latent probabilities as well as the class sizes can be attributed to some explanatory variables. The basic equations of the model state the decomposition of the log-odds of the item latent probabilities and of the class sizes into weighted sums of basic parameters representing the effects of the predictor variables. Further, the maximum likelihood equations for these effect parameters and statistical tests for goodness-of-fit are given. Finally, an example illustrates the practical application of the model and the interpretation of the model parameters.",
"Marketing scholars commonly characterize market structure by studying the patterns of substitution implied by brand switching. Though the approach is useful, it typically ignores the destabilizing role of marketing variables (e.g., price) in switching behavior. The authors propose a flexible choice model that partitions the market into consumer segments differing in both brand preference and price sensitivity. The result is a unified description of market structure that links the pattern of brand switching to the magnitudes of own- and cross-price elasticities. The approach is applied in a study of competition between national brands and private labels in one product category."
]
} |
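The E-step/M-step alternation that the latent-class models above rely on can be illustrated on the simplest possible case. This is not the paper's tailored EM for shape-restricted product-choice probabilities, and not a mixture of regressions: it is a generic two-component univariate Gaussian mixture fitted to synthetic data, shown only to make the latent-class machinery concrete.

```python
import math
import random

def em_two_gaussians(x, iters=100):
    """EM for a two-component univariate Gaussian mixture."""
    mu = [min(x), max(x)]          # crude initialisation at the extremes
    sd = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each latent class for each point.
        resp = []
        for xi in x:
            dens = [pi[k] / (sd[k] * math.sqrt(2 * math.pi))
                    * math.exp(-0.5 * ((xi - mu[k]) / sd[k]) ** 2) for k in (0, 1)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate mixing weights, means, and standard deviations.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            sd[k] = math.sqrt(sum(r[k] * (xi - mu[k]) ** 2
                                  for r, xi in zip(resp, x)) / nk) or 1e-6
    return pi, mu, sd

random.seed(1)
x = [random.gauss(0, 1) for _ in range(300)] + [random.gauss(6, 1) for _ in range(300)]
pi, mu, sd = em_two_gaussians(x)
print(sorted(round(m, 1) for m in mu))  # recovers means near 0 and 6
```

A latent-class regression replaces the Gaussian densities in the E-step with per-class likelihoods of a regression model and refits each class's coefficients in the M-step; the alternation itself is unchanged.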
1612.06551 | 2566085600 | The term filter bubble has been coined to describe the situation of online users who---due to filtering algorithms---live in a personalised information universe biased towards their own interests. In this paper we use an agent-based simulation framework to measure the actual risk and impact of filter bubble effects occurring in online communities due to content- or author-based personalisation algorithms. Observing the strength of filter bubble effects allows for opposing the benefits to the risks of personalisation. In our simulation we observed that filter bubble effects occur as soon as users indicate preferences towards certain topics. We also saw that well-connected users are affected much more strongly than average or poorly connected users. Finally, our experimental setting indicated that the employed personalisation algorithm based on content features seems to bear a lower risk of filter bubble effects than one performing personalisation based on authors. | The importance and the effects of information propagation and perception in social networks and information portals have been analysed in various contexts. Serendipitous discovery of non-relevant items in digital libraries @cite_11 shows a general benefit of being inspiring for seeking information and thinking out of the box. The impact on political processes and democracy was discussed in @cite_8 . A psychological study addressed the effects of information presentation and forgetting processes in social networks @cite_4 . | {
"cite_N": [
"@cite_8",
"@cite_4",
"@cite_11"
],
"mid": [
"2087749183",
"2027615088",
"2163630478"
],
"abstract": [
"This paper explores the potentially perverse effects of voter microtargeting, delineating how the very same techniques that empower political candidates to be more efficient and effective in their campaigning may also undermine the political and social fabric of the democracies in which those candidates seek office. The first part of the paper reviews the apparent attraction and stated goals of voter microtargeting and the technical processes upon which it relies. The second part draws out the ethical and political implications of the practice, pointing to some troubling empirical findings. The paper considers how microtargeting contributes to (1) an increased willingness and ability to deliver messages on wedge issues that would be extremely divisive in a more public forum; (2) voter discrimination and de facto disenfranchisement, (3) a chilling of political participation due to perceived violations of voters' privacy, and (4) a general trend toward single issue politics that leads to increased partisanship among voters and ambiguous political mandates for elected representatives. The final part of the paper introduces Soap Box: a project (initiated by the author) to develop a website that will act as a clearinghouse for targeted political advertising. The website aims to make these messages available to all-comers, forcing campaigns to account for and reconcile the different positions they present to different audiences. The paper concludes with a discussion of the limits of an approach that seeks to combat the more worrisome aspects of voter microtageting by exposing the tailored messages to greater public scrutiny.",
"Although a burgeoning literature has shown that practice effects and socially shared retrieval-induced forgetting can reshape the memories of speakers and listeners involved in a conversation, it has generally failed to examine whether such effects can propagate through a sequence of conversational interactions. This lacuna is unfortunate, since sequences of social interactions are more common than single, isolated ones. The present research explores how people exposed to attitudinally biased selective practice propagate the practice and forgetting effects into subsequent conversations with attitudinally similar and dissimilar others and, through these conversations, affect subsequent acts of remembering. The research establishes that the propagation of retrieval-induced forgetting and practice effects is transitive. It also determines when attitude influences propagation. These findings are discussed in the context of the formation of collective memories.",
"While many digital libraries focus on supporting defined tasks that require targeted searching, there is potential for enabling serendipitous discovery that can serve multiple purposes from aiding with the targeted search to suggesting new approaches, methods and ideas. In this research we embedded a tool in a novel interface to suggest other pages to examine in order to assess how that tool might be used while doing focused searching. While only 40 of the participants used the tool, all assessed its usefulness or perceived usefulness. Most participants used it as a source of new terms and concepts to support their current tasks; a few noted the novelty and perceived its potential value in serving as a stimulant."
]
} |
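The agent-based measurement of filter bubble effects described in the abstract above can be caricatured in a few lines. The feed mechanism, the click probabilities, and the exposure-concentration measure below are all invented for illustration; they are not taken from the paper, which models whole communities rather than a single user.

```python
import random

def simulate(steps=2000, topics=5, personalised=True, seed=0):
    """Toy feed loop: one user with one preferred topic; a personalised
    feed over-samples topics the user clicked before (rich-get-richer)."""
    rng = random.Random(seed)
    pref = 0
    clicks = [1] * topics          # smoothed per-topic click counts
    seen = [0] * topics            # per-topic exposure counts
    for _ in range(steps):
        if personalised:
            # Sample the shown topic proportionally to past clicks.
            shown = rng.choices(range(topics), weights=clicks)[0]
        else:
            shown = rng.randrange(topics)
        seen[shown] += 1
        if rng.random() < (0.9 if shown == pref else 0.1):
            clicks[shown] += 1
    return seen

def top_share(seen):
    # Fraction of all exposures taken by the single most-shown topic.
    return max(seen) / sum(seen)

bubble = simulate(personalised=True)
flat = simulate(personalised=False)
print(top_share(bubble) > top_share(flat))  # personalisation concentrates exposure
```

Even this toy loop reproduces the qualitative finding that a bubble forms as soon as the user expresses a preference: the click-weighted sampling locks exposure onto a few topics, while the uniform feed stays close to a 1/`topics` share per topic.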
1612.06551 | 2566085600 | The term filter bubble has been coined to describe the situation of online users who---due to filtering algorithms---live in a personalised information universe biased towards their own interests. In this paper we use an agent-based simulation framework to measure the actual risk and impact of filter bubble effects occurring in online communities due to content- or author-based personalisation algorithms. Observing the strength of filter bubble effects allows for opposing the benefits to the risks of personalisation. In our simulation we observed that filter bubble effects occur as soon as users indicate preferences towards certain topics. We also saw that well-connected users are affected much more strongly than average or poorly connected users. Finally, our experimental setting indicated that the employed personalisation algorithm based on content features seems to bear a lower risk of filter bubble effects than one performing personalisation based on authors. | Simulation and probabilistic models as tools for the analysis of online communities have been established in recent years. use simulations to support policy modelling and strategic decisions @cite_2 . The dynamics of popularity among news stories were investigated and described by Hogg and Lerman @cite_12 @cite_1 . Self-enforcing content generation mechanisms and their representation as stochastic processes were investigated in @cite_3 . | {
"cite_N": [
"@cite_3",
"@cite_1",
"@cite_12",
"@cite_2"
],
"mid": [
"2007590415",
"2042740312",
"2143551222",
"160646531"
],
"abstract": [
"Online peer production systems have enabled people to coactively create, share, classify, and rate content on an unprecedented scale. This paper describes strong macroscopic regularities in how people contribute to peer production systems, and shows how these regularities arise from simple dynamical rules. First, it is demonstrated that the probability a person stops contributing varies inversely with the number of contributions he has made. This rule leads to a power law distribution for the number of contributions per person in which a small number of very active users make most of the contributions. The rule also implies that the power law exponent is proportional to the effort required to contribute, as justified by the data. Second, the level of activity per topic is shown to follow a lognormal distribution generated by a stochastic reinforcement mechanism. A small number of very popular topics thus accumulate the vast majority of contributions. These trends are demonstrated to hold across hundreds of millions of contributions to four disparate peer production systems of differing scope, interface style, and purpose.",
"The popularity of content in social media is unequally distributed, with some items receiving a disproportionate share of attention from users. Predicting which newly-submitted items will become popular is critically important for both the hosts of social media content and its consumers. Accurate and timely prediction would enable hosts to maximize revenue through differential pricing for access to content or ad placement. Prediction would also give consumers an important tool for filtering the content. Predicting the popularity of content in social media is challenging due to the complex interactions between content quality and how the social media site highlights its content. Moreover, most social media sites selectively present content that has been highly rated by similar users, whose similarity is indicated implicitly by their behavior or explicitly by links in a social network. While these factors make it difficult to predict popularity a priori, stochastic models of user behavior on these sites can allow predicting popularity based on early user reactions to new content. By incorporating the various mechanisms through which web sites display content, such models improve on predictions that are based on simply extrapolating from the early votes. Specifically, for one such site, the news aggregator Digg, we show how a stochastic model distinguishes the effect of the increased visibility due to the network from how interested users are in the content. We find a wide range of interest, distinguishing stories primarily of interest to users in the network (“niche interests”) from those of more general interest to the user community. This distinction is useful for predicting a story’s eventual popularity from users’ early reactions to the story.",
"Online social media provide multiple ways to find interesting content. One important method is highlighting content recommended by user’s friends. We examine this process on one such site, the news aggregator Digg. With a stochastic model of user behavior, we distinguish the effects of the content visibility and interestingness to users. We find a wide range of interest and distinguish stories primarily of interest to a users’ friends from those of interest to the entire user community. We show how this model predicts a story’s eventual popularity from users’ early reactions to it, and estimate the prediction reliability. This modeling framework can help evaluate alternative design choices for displaying content on the site.",
"The rapidly increasing popularity of Web 2.0 online communities originates in the ease of collaborative content creation and its sharing. As a result, more community members actively participate in the community and its data growth rates are continuously increasing. This poses the challenge for community platform operators on efficient administration and moderation to ensure the quality of content and to prevent violations of laws (e.g. copyright, privacy, illegal content) and community rules. Involvement of employed administrators who read and approve every piece of user-generated content is clearly the safest way of quality assurance. Since this is a time consuming task it does not scale up with Web 2.0 dimensions. So administrative functions are delegated to members of the community, the moderators. The strategy for choosing trustworthy moderators in big anonymous communities is specified in policies based on user reputation that is measured in bonus points. The proper balancing between community self-management and administration is crucial for the quality, attractiveness, and scalability of the entire community. Therefore understanding the mutual influences between community actors, reputation systems and platform policies that employ user reputation is crucial to ensure the overall quality, user acceptance, and success of the entire online community. Our objective is to predict the behavior in an online community for different policies, which may result in different overall quality of the community content. For this purpose we present our community analysis framework CoSiMo (an acronym for Community Simulation and Modeling), which employs the model-based approach for predicting the impact of policies on community dynamics and health. 
Through systematic variation of simulated quality assurance mechanisms we show that our model plausibly captures the influence of policies on content quality and can therefore be exploited for optimization of real online communities."
]
} |
1612.06435 | 2952569380 | This work presents a novel descriptor for texture images based on fractal geometry and its application to image analysis. The descriptors are provided by estimating the triangular prism fractal dimension under different scales with a weight exponential parameter, followed by dimensionality reduction using Karhunen-Loève transform. The efficiency of the proposed descriptors is tested on two well-known texture data sets, that is, Brodatz and Vistex, both for classification and image retrieval. The novel method is also tested concerning invariances in situations when the textures are rotated or affected by Gaussian noise. The obtained results outperform other classical and state-of-the-art descriptors in the literature and demonstrate the power of the triangular descriptors in these tasks, suggesting their use in practical applications of image analysis based on texture features. | Texture analysis is a paradigm where the image is described in terms of statistical patterns formed by spatial arrangements of pixel intensities. The first known systematic study on this topic was carried out by Haralick @cite_24 and his co-occurrence matrices. Since then, a large number of methods on texture analysis have been proposed in the literature. Among the most successful approaches one can mention local binary patterns @cite_32 , bag-of-features @cite_5 , scale-invariant feature transform @cite_26 , spatial pyramid matching @cite_0 , invariants of scattering transforms @cite_31 , fast features invariant to rotation and scale of texture @cite_29 , and others. | {
"cite_N": [
"@cite_26",
"@cite_29",
"@cite_32",
"@cite_24",
"@cite_0",
"@cite_5",
"@cite_31"
],
"mid": [
"",
"812027812",
"1516748092",
"2059432853",
"2162915993",
"",
"2167383966"
],
"abstract": [
"",
"A family of novel texture representations called Ffirst, the Fast Features Invariant to Rotation and Scale of Texture, is introduced. New rotation invariants are proposed, extending the LBP-HF features, improving the recognition accuracy. Using the full set of LBP features, as opposed to uniform only, leads to further improvement. Linear Support Vector Machines with an approximate χ²-kernel map are used for fast and precise classification.",
"The recent emergence of Local Binary Patterns (LBP) has led to significant progress in applying texture methods to various computer vision problems and applications. The focus of this research has broadened from 2D textures to 3D textures and spatiotemporal (dynamic) textures. Also, where texture was once utilized for applications such as remote sensing, industrial inspection and biomedical image analysis, the introduction of LBP-based approaches have provided outstanding results in problems relating to face and activity analysis, with future scope for face and facial expression recognition, biometrics, visual surveillance and video analysis. Computer Vision Using Local Binary Patterns provides a detailed description of the LBP methods and their variants both in spatial and spatiotemporal domains. This comprehensive reference also provides an excellent overview as to how texture methods can be utilized for solving different kinds of computer vision and image analysis problems. Source codes of the basic LBP algorithms, demonstrations, some databases and a comprehensive LBP bibliography can be found from an accompanying web site. Topics include: local binary patterns and their variants in spatial and spatiotemporal domains, texture classification and segmentation, description of interest regions, applications in image retrieval and 3D recognition - Recognition and segmentation of dynamic textures, background subtraction, recognition of actions, face analysis using still images and image sequences, visual speech recognition and LBP in various applications. Written by pioneers of LBP, this book is an essential resource for researchers, professional engineers and graduate students in computer vision, image analysis and pattern recognition. The book will also be of interest to all those who work with specific applications of machine vision.",
"In this survey we review the image processing literature on the various approaches and models investigators have used for texture. These include statistical approaches of autocorrelation function, optical transforms, digital transforms, textural edgeness, structural element, gray tone cooccurrence, run lengths, and autoregressive models. We discuss and generalize some structural approaches to texture based on more complex primitives than gray tone. We conclude with some structural-statistical generalizations which apply the statistical techniques to the structural primitives.",
"This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba's \"gist\" and Lowe's SIFT descriptors.",
"",
"An affine invariant representation is constructed with a cascade of invariants, which preserves information for classification. A joint translation and rotation invariant representation of image patches is calculated with a scattering transform. It is implemented with a deep convolution network, which computes successive wavelet transforms and modulus non-linearities. Invariants to scaling, shearing and small deformations are calculated with linear operators in the scattering domain. State-of-the-art classification results are obtained over texture databases with uncontrolled viewing conditions."
]
} |
1612.06435 | 2952569380 | This work presents a novel descriptor for texture images based on fractal geometry and its application to image analysis. The descriptors are provided by estimating the triangular prism fractal dimension under different scales with a weight exponential parameter, followed by dimensionality reduction using Karhunen-Loève transform. The efficiency of the proposed descriptors is tested on two well-known texture data sets, that is, Brodatz and Vistex, both for classification and image retrieval. The novel method is also tested concerning invariances in situations when the textures are rotated or affected by Gaussian noise. The obtained results outperform other classical and state-of-the-art descriptors in the literature and demonstrate the power of the triangular descriptors in these tasks, suggesting their use in practical applications of image analysis based on texture features. | During the last decades, another branch of methods that have presented interesting results in texture analysis, especially on natural images, are those based on fractal geometry, particularly multifractals @cite_35 , multiscale fractal dimension @cite_34 and fractal descriptors @cite_27 . In this context, this work proposes the study and application of fractal descriptors based on the estimation of the fractal dimension using a tessellation of triangular prisms @cite_13 . | {
"cite_N": [
"@cite_35",
"@cite_27",
"@cite_34",
"@cite_13"
],
"mid": [
"2014801693",
"",
"2169708647",
"1993650430"
],
"abstract": [
"Image texture provides a rich visual description of the surfaces in the scene. Many texture signatures based on various statistical descriptions and various local measurements have been developed. Existing signatures, in general, are not invariant to 3D geometric transformations, which is a serious limitation for many applications. In this paper we introduce a new texture signature, called the multifractal spectrum (MFS). The MFS is invariant under the bi-Lipschitz map, which includes view-point changes and non-rigid deformations of the texture surface, as well as local affine illumination changes. It provides an efficient framework combining global spatial invariance and local robust measurements. Intuitively, the MFS could be viewed as a \"better histogram\" with greater robustness to various environmental changes and the advantage of capturing some geometrical distribution information encoded in the texture. Experiments demonstrate that the MFS codes the essential structure of textures with very low dimension, and thus represents an useful tool for texture classification.",
"",
"This article reports on the application of the recently introduced concept of multiscale fractal dimension (MFD) as a resource for quantifying three-dimensional gene expression patterns in embryonic development. While traditional fractal dimensions provide interesting possibilities for quantifying pattern complexity, as defined by the intensity in which the pattern interacts with its surrounding space, those approaches fail to take into account the important fact that natural objects are not perfectly self-similar. By expressing the fractal behavior explicitly in terms of the spatial scale, the MFD provides a more comprehensive and objective characterization of the complexity of natural data such as gene expression profiles. After presenting the MFD concept as well as a technique for its numerical estimation, the potential of this measure for objectively quantifying gene expression is discussed, and a complete example is provided regarding the three-dimensional expression of the myogenic marker gene Myf5 along successive somites in a mouse embryo. In this specific case, the adopted technique proved itself a useful means for identifying spatial variations of gene expression intensity.",
"Abstract A new method is proposed to calculate the fractional (fractal) dimension of topographic surfaces. The method uses a three-dimensional geometric equivalent of the “walking dividers” method in two dimensions. that is it takes elevation values at the corners of squares, interpolates a center value, divides the square into four triangles, and then computes the surface areas of the tops of the prisms which result from raising the triangles to their given elevations. By repeating this calculation for different size squares, the relationship between the total area of the topographic surface and the spacing of the squares is established, and used to estimate the fractal dimension. A computer program in the C programming language is presented to read digital terrain matrices and compute the fractal dimension of the surface using the method. Some test data sets are provided along with their computed fractal dimensions. Finally, the advantages and disadvantages of the method are discussed."
]
} |
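The @cite_13 abstract above describes the triangular prism algorithm step by step: take the four corner elevations of a square cell, interpolate a centre value, split the cell into four triangles, sum the prism-top areas, and repeat over cell sizes. Below is a minimal Python sketch of that estimator — an illustration only, not the weighted multi-scale descriptor of the main record; the unit grid spacing, the power-of-two step schedule, and the plain log-log fit are assumptions:

```python
import numpy as np

def prism_surface_area(z, step):
    """Sum of triangular-prism top areas over square cells of side `step`.

    Assumes `z` is a square elevation grid with unit horizontal spacing.
    """
    total = 0.0
    n = z.shape[0]
    for i in range(0, n - step, step):
        for j in range(0, n - step, step):
            a, b = z[i, j], z[i, j + step]
            c, d = z[i + step, j], z[i + step, j + step]
            e = (a + b + c + d) / 4.0  # interpolated centre elevation
            half = step / 2.0
            centre = np.array([i + half, j + half, e])
            corners = [np.array([i, j, a]),
                       np.array([i, j + step, b]),
                       np.array([i + step, j + step, d]),
                       np.array([i + step, j, c])]
            for k in range(4):  # four triangles per cell
                p, q = corners[k], corners[(k + 1) % 4]
                total += 0.5 * np.linalg.norm(np.cross(q - p, centre - p))
    return total

def triangular_prism_fd(z, steps=(1, 2, 4, 8)):
    """Fractal dimension from the log-log slope of area against step size."""
    areas = [prism_surface_area(z, s) for s in steps]
    slope, _ = np.polyfit(np.log(steps), np.log(areas), 1)
    return 2.0 - slope  # a flat surface gives slope 0, hence dimension 2

rng = np.random.default_rng(0)
fd = triangular_prism_fd(rng.random((65, 65)))  # rough noise-like surface
```

A convenient sanity check: for a perfectly flat grid the measured areas are identical at every scale, so the slope is zero and the estimate is exactly 2.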
1612.06435 | 2952569380 | This work presents a novel descriptor for texture images based on fractal geometry and its application to image analysis. The descriptors are provided by estimating the triangular prism fractal dimension under different scales with a weight exponential parameter, followed by dimensionality reduction using Karhunen-Loève transform. The efficiency of the proposed descriptors is tested on two well-known texture data sets, that is, Brodatz and Vistex, both for classification and image retrieval. The novel method is also tested concerning invariances in situations when the textures are rotated or affected by Gaussian noise. The obtained results outperform other classical and state-of-the-art descriptors in the literature and demonstrate the power of the triangular descriptors in these tasks, suggesting their use in practical applications of image analysis based on texture features. | Our proposal has some particular characteristics that distinguish it from other approaches in the literature. First, rather than pre-selecting preferable regions in the image as in @cite_26 @cite_0 , here all pixels and scales are equally important , which simplifies the modelling and interpretation of the texture descriptors. Another difference from methods such as those in @cite_26 @cite_31 @cite_29 is that image invariances are not treated explicitly, although the underlying model and multi-scale process ensure that such an effect is attenuated in practice. This is confirmed here in the experimental analysis and avoids the use of cumbersome strategies when in many cases invariances are not a critical issue or even when, for example, a rotated texture should be interpreted as a different object. Finally, an important distinction should be made from approaches such as those in @cite_24 @cite_5 @cite_32 where direct relations are established based on the pixel values.
Here there is a complete and well-defined physical model behind the statistics extracted from the image, making it more precise in most cases and more robust to deformations usually found in natural structures. | {
"cite_N": [
"@cite_26",
"@cite_29",
"@cite_32",
"@cite_0",
"@cite_24",
"@cite_5",
"@cite_31"
],
"mid": [
"",
"812027812",
"1516748092",
"2162915993",
"2059432853",
"",
"2167383966"
],
"abstract": [
"",
"A family of novel texture representations called Ffirst, the Fast Features Invariant to Rotation and Scale of Texture, is introduced. New rotation invariants are proposed, extending the LBP-HF features, improving the recognition accuracy. Using the full set of LBP features, as opposed to uniform only, leads to further improvement. Linear Support Vector Machines with an approximate χ²-kernel map are used for fast and precise classification.",
"The recent emergence of Local Binary Patterns (LBP) has led to significant progress in applying texture methods to various computer vision problems and applications. The focus of this research has broadened from 2D textures to 3D textures and spatiotemporal (dynamic) textures. Also, where texture was once utilized for applications such as remote sensing, industrial inspection and biomedical image analysis, the introduction of LBP-based approaches have provided outstanding results in problems relating to face and activity analysis, with future scope for face and facial expression recognition, biometrics, visual surveillance and video analysis. Computer Vision Using Local Binary Patterns provides a detailed description of the LBP methods and their variants both in spatial and spatiotemporal domains. This comprehensive reference also provides an excellent overview as to how texture methods can be utilized for solving different kinds of computer vision and image analysis problems. Source codes of the basic LBP algorithms, demonstrations, some databases and a comprehensive LBP bibliography can be found from an accompanying web site. Topics include: local binary patterns and their variants in spatial and spatiotemporal domains, texture classification and segmentation, description of interest regions, applications in image retrieval and 3D recognition - Recognition and segmentation of dynamic textures, background subtraction, recognition of actions, face analysis using still images and image sequences, visual speech recognition and LBP in various applications. Written by pioneers of LBP, this book is an essential resource for researchers, professional engineers and graduate students in computer vision, image analysis and pattern recognition. The book will also be of interest to all those who work with specific applications of machine vision.",
"This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba's \"gist\" and Lowe's SIFT descriptors.",
"In this survey we review the image processing literature on the various approaches and models investigators have used for texture. These include statistical approaches of autocorrelation function, optical transforms, digital transforms, textural edgeness, structural element, gray tone cooccurrence, run lengths, and autoregressive models. We discuss and generalize some structural approaches to texture based on more complex primitives than gray tone. We conclude with some structural-statistical generalizations which apply the statistical techniques to the structural primitives.",
"",
"An affine invariant representation is constructed with a cascade of invariants, which preserves information for classification. A joint translation and rotation invariant representation of image patches is calculated with a scattering transform. It is implemented with a deep convolution network, which computes successive wavelet transforms and modulus non-linearities. Invariants to scaling, shearing and small deformations are calculated with linear operators in the scattering domain. State-of-the-art classification results are obtained over texture databases with uncontrolled viewing conditions."
]
} |
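Several of the texture baselines cited in this record admit compact implementations; as one concrete example, the basic local binary pattern descriptor ( @cite_32 ) reduces to a neighbourhood comparison followed by a histogram. The sketch below covers only the plain 3x3, 256-bin variant — the uniform and rotation-invariant mappings from the LBP literature are omitted:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary patterns plus a normalised 256-bin histogram."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    code = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # shifted view of the same interior region, one neighbour per bit
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((neigh >= centre).astype(np.uint8) << bit)
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
h = lbp_histogram(rng.integers(0, 256, size=(64, 64)))
```

Histograms from different images can then be compared with any histogram distance to drive classification or retrieval.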
1612.06370 | 2950341389 | This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as 'pseudo ground truth' to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed 'pretext' tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce. | Unsupervised learning is a broad area with a large volume of work; Bengio et al. @cite_52 provide an excellent survey. Here, we briefly revisit some of the recent work in this area. | {
"cite_N": [
"@cite_52"
],
"mid": [
"2163922914"
],
"abstract": [
"The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning."
]
} |
1612.06370 | 2950341389 | This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as 'pseudo ground truth' to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed 'pretext' tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce. | Classical unsupervised representation learning approaches, such as autoencoders @cite_39 @cite_7 and denoising autoencoders @cite_59 , attempt to learn feature representations from which the original image can be decoded with a low error. An alternative to reconstruction-based objectives is to train generative models of images using generative adversarial networks @cite_18 . These models can be extended to produce good feature representations by training jointly with image encoders @cite_37 @cite_11 . However, to generate realistic images, these models must pay significant attention to low-level details while potentially ignoring higher-level semantics. | {
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_7",
"@cite_39",
"@cite_59",
"@cite_11"
],
"mid": [
"2099471712",
"2412320034",
"",
"2100495367",
"2025768430",
""
],
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.",
"",
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.",
"Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.",
""
]
} |
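The denoising autoencoder objective summarised in the @cite_59 abstract — corrupt the input, then reconstruct the clean version — can be sketched in a few lines. This is a forward-pass illustration with random, untrained tied weights; the masking-noise level, layer sizes, and sigmoid units are assumptions, and the gradient-based training loop is left out:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, p=0.3):
    """Masking noise: zero out roughly a fraction p of the input entries."""
    return x * (rng.random(x.shape) >= p)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_in, d_hid = 64, 16
W = rng.normal(scale=0.1, size=(d_in, d_hid))  # tied weights: decode with W.T
b_h, b_o = np.zeros(d_hid), np.zeros(d_in)

def reconstruct(x):
    h = sigmoid(corrupt(x) @ W + b_h)  # encode the *corrupted* input
    return sigmoid(h @ W.T + b_o)      # decode back to input space

x = rng.random((32, d_in))                  # a batch of clean inputs
loss = np.mean((reconstruct(x) - x) ** 2)   # reconstruction error vs. clean x
```

Training would minimise this loss with SGD; the key point of the denoising variant is that the target is the clean x while the encoder only ever sees the corrupted copy.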
1612.06370 | 2950341389 | This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as 'pseudo ground truth' to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed 'pretext' tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce. | Instead of producing images, several recent studies have focused on providing alternate forms of supervision (often called 'pretext tasks') that do not require manual labeling and can be algorithmically produced. For instance, Doersch et al. @cite_28 task a ConvNet with predicting the relative location of two cropped image patches. Noroozi and Favaro @cite_48 extend this by asking a network to arrange shuffled patches cropped from a 3 @math 3 grid. Pathak et al. @cite_38 train a network to perform an image inpainting task. Other pretext tasks include predicting color channels from luminance @cite_14 @cite_9 or vice versa @cite_45 , and predicting sounds from video frames @cite_57 @cite_50 . The assumption in these works is that to perform these tasks, the network will need to recognize high-level concepts, such as objects, in order to succeed. 
We compare our approach to all of these pretext tasks and show that the proposed natural task of object segmentation leads to a quantitatively better feature representation in many cases. | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_28",
"@cite_48",
"@cite_9",
"@cite_57",
"@cite_45",
"@cite_50"
],
"mid": [
"2963420272",
"2308529009",
"343636949",
"2321533354",
"",
"2511428026",
"2949532563",
""
],
"abstract": [
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders – a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.",
"We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning.",
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework [19] and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"We propose a novel unsupervised learning approach to build features suitable for object detection and classification. The features are pre-trained on a large dataset without human annotation and later transferred via fine-tuning on a different, smaller and labeled dataset. The pre-training consists of solving jigsaw puzzles of natural images. To facilitate the transfer of features to other tasks, we introduce the context-free network (CFN), a siamese-ennead convolutional neural network. The features correspond to the columns of the CFN and they process image tiles independently (i.e., free of context). The later layers of the CFN then use the features to identify their geometric arrangement. Our experimental evaluations show that the learned features capture semantically relevant content. We pre-train the CFN on the training set of the ILSVRC2012 dataset and transfer the features on the combined training and validation set of Pascal VOC 2007 for object detection (via fast RCNN) and classification. These features outperform all current unsupervised features with 51.8% for detection and 68.6% for classification, and reduce the gap with supervised learning (56.5% and 78.2% respectively).",
"",
"The sound of crashing waves, the roar of fast-moving cars – sound conveys important information about the objects in our surroundings. In this work, we show that ambient sounds can be used as a supervisory signal for learning visual models. To demonstrate this, we train a convolutional neural network to predict a statistical summary of the sound associated with a video frame. We show that, through this process, the network learns a representation that conveys information about objects and scenes. We evaluate this representation on several recognition tasks, finding that its performance is comparable to that of other state-of-the-art unsupervised learning methods. Finally, we show through visualizations that the network learns units that are selective to objects that are often associated with characteristic sounds.",
"We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task -- predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.",
""
]
} |
1612.06370 | 2950341389 | This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as 'pseudo ground truth' to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed 'pretext' tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce. | The human visual system does not receive static images; it receives a continuous video stream. The same idea of defining auxiliary pretext tasks can be used in unsupervised learning from videos too. Wang and Gupta @cite_31 train a ConvNet to distinguish between pairs of tracked patches in a single video, and pairs of patches from different videos. Misra et al. @cite_23 ask a network to arrange shuffled frames of a video into a temporally correct order. Another such pretext task is to make predictions about the next few frames: Goroshin et al. @cite_0 predict pixels of future frames and Walker et al. @cite_8 predict dense future trajectories. However, since nearby frames in a video tend to be visually similar (in color or texture), these approaches might learn low-level image statistics instead of more semantic features. Alternatively, Li et al. @cite_29 use motion boundary detection to bootstrap a ConvNet-based contour detector, but find that this does not lead to good feature representations.
Our intuitions are similar, but our approach produces semantically strong representations. | {
"cite_N": [
"@cite_8",
"@cite_29",
"@cite_0",
"@cite_23",
"@cite_31"
],
"mid": [
"2952390294",
"2300779272",
"2951751411",
"",
"219040644"
],
"abstract": [
"In a given scene, humans can often easily predict a set of immediate future events that might happen. However, generalized pixel-level anticipation in computer vision systems is difficult because machine learning struggles with the ambiguity inherent in predicting the future. In this paper, we focus on predicting the dense trajectory of pixels in a scene, specifically what will move in the scene, where it will travel, and how it will deform over the course of one second. We propose a conditional variational autoencoder as a solution to this problem. In this framework, direct inference from the image shapes the distribution of possible trajectories, while latent variables encode any necessary information that is not available in the image. We show that our method is able to successfully predict events in a wide variety of scenes and can produce multiple different predictions when the future is ambiguous. Our algorithm is trained on thousands of diverse, realistic videos and requires absolutely no human labeling. In addition to non-semantic action prediction, we find that our method learns a representation that is applicable to semantic vision tasks.",
"A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground-truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation.",
"Training deep feature hierarchies to solve supervised learning tasks has achieved state of the art performance on many problems in computer vision. However, a principled way in which to train such hierarchies in the unsupervised setting has remained elusive. In this work we suggest a new architecture and loss for training deep feature hierarchies that linearize the transformations observed in unlabeled natural video sequences. This is done by training a generative model to predict video frames. We also address the problem of inherent uncertainty in prediction by introducing latent variables that are non-deterministic functions of the input into the network architecture.",
"",
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation."
]
} |
1612.06093 | 2583496274 | One of the major distinguishing features of the dynamic multiobjective optimization problems (DMOPs) is that optimization objectives will change over time, thus tracking the varying Pareto-optimal front becomes a challenge. One of the promising solutions is reusing “experiences” to construct a prediction model via statistical machine learning approaches. However, most existing methods neglect the nonindependent and identically distributed nature of data to construct the prediction model. In this paper, we propose an algorithmic framework, called transfer learning-based dynamic multiobjective evolutionary algorithm (EA), which integrates transfer learning and population-based EAs to solve the DMOPs. This approach exploits the transfer learning technique as a tool to generate an effective initial population pool via reusing past experience to speed up the evolutionary process, and at the same time any population-based multiobjective algorithms can benefit from this integration without any extensive modifications. To verify this idea, we incorporate the proposed approach into the development of three well-known EAs, nondominated sorting genetic algorithm II, multiobjective particle swarm optimization, and the regularity model-based multiobjective estimation of distribution algorithm. We employ 12 benchmark functions to test these algorithms as well as compare them with some chosen state-of-the-art designs. The experimental results confirm the effectiveness of the proposed design for DMOPs. | Much progress @cite_15 @cite_45 @cite_1 @cite_53 has been made in the DMOPs field in recent years, and most existing algorithms can be classified into the following categories: Increasing Diversity methods, Maintaining Diversity methods, Memory based methods, Multi-population based methods, and Prediction based methods.
"cite_N": [
"@cite_15",
"@cite_45",
"@cite_1",
"@cite_53"
],
"mid": [
"2137130066",
"52041091",
"2529345806",
"95133698"
],
"abstract": [
"Optimization in dynamic environments is a challenging but important task since many real-world optimization problems are changing over time. Evolutionary computation and swarm intelligence are good tools to address optimization problems in dynamic environments due to their inspiration from natural self-organized systems and biological evolution, which have always been subject to changing environments. Evolutionary optimization in dynamic environments, or evolutionary dynamic optimization (EDO), has attracted a lot of research effort during the last 20 years, and has become one of the most active research areas in the field of evolutionary computation. In this paper we carry out an in-depth survey of the state-of-the-art of academic research in the field of EDO and other meta-heuristics in four areas: benchmark problems generators, performance measures, algorithmic approaches, and theoretical studies. The purpose is to for the first time (i) provide detailed explanations of how current approaches work; (ii) review the strengths and weaknesses of each approach; (iii) discuss the current assumptions and coverage of existing EDO research; and (iv) identify current gaps, challenges and opportunities in EDO.",
"Many optimization problems involve multiple objectives, constraints and parameters that change over time. These problems are called dynamic multiobjective optimization problems (DMOPs) and have recently attracted a lot of research. In this chapter, we provide a survey of the state-of-the-art on the field of dynamic multi-objective optimization with regards to the definition and classification of DMOPS, test problems, performance measures and optimization approaches. We provide a comprehensive definition of DMOPs and identify gaps, challenges and future works in dynamic multi-objective optimization.",
"Dynamic Multi-objective Optimization is a challenging research topic since the objective functions, constraints, and problem parameters may change over time. Although dynamic optimization and multi-objective optimization have separately obtained a great interest among many researchers, there are only few studies that have been developed to solve Dynamic Multi-objective Optimisation Problems (DMOPs). Moreover, applying Evolutionary Algorithms (EAs) to solve this category of problems is not yet highly explored although this kind of problems is of significant importance in practice. This paper is devoted to briefly survey EAs that were proposed in the literature to handle DMOPs. In addition, an overview of the most commonly used test functions, performance measures and statistical tests is presented. Actual challenges and future research directions are also discussed.",
"Many real-world systems include time-varying components and, very often, the environment in which they operate is in a constant state of flux. For problems involving such dynamic systems, the fitness landscape changes to reflect the time-varying requirements of the systems. Examples of such problems can be found in the areas of control, scheduling, vehicle routing, and autonomous path planning."
]
} |
1612.06093 | 2583496274 | One of the major distinguishing features of the dynamic multiobjective optimization problems (DMOPs) is that optimization objectives will change over time, thus tracking the varying Pareto-optimal front becomes a challenge. One of the promising solutions is reusing “experiences” to construct a prediction model via statistical machine learning approaches. However, most existing methods neglect the nonindependent and identically distributed nature of data to construct the prediction model. In this paper, we propose an algorithmic framework, called transfer learning-based dynamic multiobjective evolutionary algorithm (EA), which integrates transfer learning and population-based EAs to solve the DMOPs. This approach exploits the transfer learning technique as a tool to generate an effective initial population pool via reusing past experience to speed up the evolutionary process, and at the same time any population-based multiobjective algorithms can benefit from this integration without any extensive modifications. To verify this idea, we incorporate the proposed approach into the development of three well-known EAs, nondominated sorting genetic algorithm II, multiobjective particle swarm optimization, and the regularity model-based multiobjective estimation of distribution algorithm. We employ 12 benchmark functions to test these algorithms as well as compare them with some chosen state-of-the-art designs. The experimental results confirm the effectiveness of the proposed design for DMOPs. | The increasing diversity methods tend to add variety to the population by using a certain type of methodology when the environment change was detected. For example, Cobb proposed the triggered hypermutation method @cite_34 , and the basic idea of this method is that when change is identified, the mutation rate would be increased immediately, and this would make the converged population divergent again. 
One limitation of this approach is that the mutation rate changes in an uncontrolled way throughout the run, which ultimately degrades the algorithm's performance. To address this, Vavak @cite_33 presented a mutation operator, called variable local search (VLS), whose strategy is to increase the mutation rate gradually. Yen @cite_2 proposed a dynamic EA that relocates individuals based on the change in their function values caused by the environmental change and on the average sensitivities of their decision variables to the corresponding change in the objective space. This approach avoids the drawbacks of the earlier methods to a certain extent. | {
"cite_N": [
"@cite_34",
"@cite_33",
"@cite_2"
],
"mid": [
"326774659",
"1519139529",
"2110988848"
],
"abstract": [
"Abstract : Previous studies of Genetic Algorithm (GA) optimization in nonstationary environments focus on discontinuous, Markovian switching environment. This study introduces the problem of GA optimization in continuous, nonstationary environments where the state of the environment is a function of time. The objective of the GA in such an environment is to select a sequence of values over time that minimize, or maximize, the time-average of the environmental evaluations. In this preliminary study, we explore the use of mutation as a control strategy for having the GA increase or maintain the time- average best-of-generation performance. Given this context, the paper presents a set of short experiments using a simple, unimodal function. Each generation, the domain value mapping into the optimum changes so that the movement follows a sinusoidal path. In one of the experiments, we demonstrate the use of a simple adaptive mutation operator. During periods where the time-averaged best performance of the GA worsens, the GA enters hypermutation (a large increase in mutation); otherwise, the GA maintains a low level of mutation. (kr)",
"In this paper we examine a modification to the genetic algorithm — a new adaptive operator was developed for two industrial applications using genetic algorithm based on-line control systems. The aim is to enable the control systems to track optima of a time-varying dynamic system whilst not being detrimental to its ability to provide sound results for the stationary environments. When compared with the hypermutation operator, the new operator matched the level of diversity introduced into the population with the “degree” of the environmental changes better because it increases population diversity only gradually. Although the new technique was developed for the control application domain where real variables are mostly used, a possible generalization of the method is also suggested. It is believed that the technique has the potential to be a further contribution in making genetic algorithm based techniques more readily usable in industrial control applications.",
"Many real-world optimization problems have to be solved under the presence of uncertainties. A significant number of these uncertainty problems falls into the dynamic optimization category in which the fitness function varies through time. For this class of problems, an evolutionary algorithm is expected to perform satisfactorily in spite of different degrees and frequencies of change in the fitness landscape. In addition, the dynamic evolutionary algorithm should warrant an acceptable performance improvement to justify the additional computational cost. Effective reuse of previous evolutionary information is a must as it facilitates a faster convergence after a change has occurred. This paper proposes a new dynamic evolutionary algorithm that uses variable relocation to adapt already converged or currently evolving individuals to the new environmental condition. The proposed algorithm relocates those individuals based on their change in function value due to the change in the environment and the average sensitivities of their decision variables to the corresponding change in the objective space. The relocation occurs during the transient stage of the evolutionary process, and the algorithm reuses as much information as possible from the previous evolutionary history. As a result, the algorithm shows improved adaptation and convergence. The newly adapted population is shown to be fitter to the new environment than the original or most randomly generated population. The algorithm has been tested by several dynamic benchmark problems and has shown competitive results compared to some chosen state-of-the-art dynamic evolutionary approaches."
]
} |
1612.06093 | 2583496274 | One of the major distinguishing features of the dynamic multiobjective optimization problems (DMOPs) is that optimization objectives will change over time, thus tracking the varying Pareto-optimal front becomes a challenge. One of the promising solutions is reusing “experiences” to construct a prediction model via statistical machine learning approaches. However, most existing methods neglect the nonindependent and identically distributed nature of data to construct the prediction model. In this paper, we propose an algorithmic framework, called transfer learning-based dynamic multiobjective evolutionary algorithm (EA), which integrates transfer learning and population-based EAs to solve the DMOPs. This approach exploits the transfer learning technique as a tool to generate an effective initial population pool via reusing past experience to speed up the evolutionary process, and at the same time any population-based multiobjective algorithms can benefit from this integration without any extensive modifications. To verify this idea, we incorporate the proposed approach into the development of three well-known EAs, nondominated sorting genetic algorithm II, multiobjective particle swarm optimization, and the regularity model-based multiobjective estimation of distribution algorithm. We employ 12 benchmark functions to test these algorithms as well as compare them with some chosen state-of-the-art designs. The experimental results confirm the effectiveness of the proposed design for DMOPs. | Most of the methods in the maintaining diversity category assume that avoiding population convergence can help the algorithm track the changing optimum as soon as possible, and maintain diversity as one of the effective means to that end. Grefenstette @cite_42 proposed a Random Immigrants Genetic Algorithm (RIGA), and the method replaces some individuals in the population randomly. 
The idea behind RIGA is that introducing new genetic material into the population prevents the whole population from converging toward a small area during evolution. However, a drawback of this primitive immigrant method is that the fitness values of the introduced individuals are usually low, so most of them are eliminated during the selection stage, and as a result it is difficult to introduce new genes into the population. To solve this problem, Yang @cite_55 @cite_21 proposed the hybrid immigrants scheme, memory-based immigrants @cite_50 , and elitism-based immigrants @cite_50 ; these methods are effective for periodically changing DMOPs. However, when knowledge about the dynamic environment is limited, their efficiency is greatly reduced. | {
"cite_N": [
"@cite_55",
"@cite_42",
"@cite_21",
"@cite_50"
],
"mid": [
"1979565259",
"1555154718",
"",
"2132340807"
],
"abstract": [
"Dynamic optimization problems are a kind of optimization problems that involve changes over time. They pose a serious challenge to traditional optimization methods as well as conventional genetic algorithms since the goal is no longer to search for the optimal solution(s) of a fixed problem but to track the moving optimum over time. Dynamic optimization problems have attracted a growing interest from the genetic algorithm community in recent years. Several approaches have been developed to enhance the performance of genetic algorithms in dynamic environments. One approach is to maintain the diversity of the population via random immigrants. This paper proposes a hybrid immigrants scheme that combines the concepts of elitism, dualism and random immigrants for genetic algorithms to address dynamic optimization problems. In this hybrid scheme, the best individual, i.e., the elite, from the previous generation and its dual individual are retrieved as the bases to create immigrants via traditional mutation scheme. These elitism-based and dualism-based immigrants together with some random immigrants are substituted into the current population, replacing the worst individuals in the population. These three kinds of immigrants aim to address environmental changes of slight, medium and significant degrees respectively and hence efficiently adapt genetic algorithms to dynamic environments that are subject to different severities of changes. Based on a series of systematically constructed dynamic test problems, experiments are carried out to investigate the performance of genetic algorithms with the hybrid immigrants scheme and traditional random immigrants scheme. Experimental results validate the efficiency of the proposed hybrid immigrants scheme for improving the performance of genetic algorithms in dynamic environments.",
"Genetic algorithms perform an adaptive search by maintaining a population of candidate solutions that are allocated dynamically to promising regions of the search space. The distributed nature of the genetic search provides a natural source of power for searching in changing environments. As long as sufficient diversity remains in the population the genetic algorithm can respond to a changing response surface by reallocating future trials. However, the tendency of genetic algorithms to converge rapidly reduces their ability to identify regions of the search space that might suddenly become more attractive as the environment changes. This paper presents a modification of the standard generational genetic algorithm that is designed to maintain the diversity required to track a changing response surface. An experimental study shows some promise for the new technique.",
"",
"In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. This way, not only can diversity be maintained but it is done more efficiently to adapt genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. The sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments."
]
} |
1612.06093 | 2583496274 | One of the major distinguishing features of the dynamic multiobjective optimization problems (DMOPs) is that optimization objectives will change over time, thus tracking the varying Pareto-optimal front becomes a challenge. One of the promising solutions is reusing “experiences” to construct a prediction model via statistical machine learning approaches. However, most existing methods neglect the nonindependent and identically distributed nature of data to construct the prediction model. In this paper, we propose an algorithmic framework, called transfer learning-based dynamic multiobjective evolutionary algorithm (EA), which integrates transfer learning and population-based EAs to solve the DMOPs. This approach exploits the transfer learning technique as a tool to generate an effective initial population pool via reusing past experience to speed up the evolutionary process, and at the same time any population-based multiobjective algorithms can benefit from this integration without any extensive modifications. To verify this idea, we incorporate the proposed approach into the development of three well-known EAs, nondominated sorting genetic algorithm II, multiobjective particle swarm optimization, and the regularity model-based multiobjective estimation of distribution algorithm. We employ 12 benchmark functions to test these algorithms as well as compare them with some chosen state-of-the-art designs. The experimental results confirm the effectiveness of the proposed design for DMOPs. | Dynamic NSGA-II (DNSGA-II) @cite_11 proposed by Deb also shares a similar idea, and this method handles the DMOPs by introducing diversity when change is detected. There are two versions of the proposed DNSGA-II and they are respectively known as DNSGA-II-A and DNSGA-II-B. 
In DNSGA-II-A, a portion of the population is replaced with new, randomly created solutions, while in DNSGA-II-B, diversity is maintained by replacing a percentage of the population with mutated solutions. | {
"cite_N": [
"@cite_11"
],
"mid": [
"1574490530"
],
"abstract": [
"Most real-world optimization problems involve objectives, constraints, and parameters which constantly change with time. Treating such problems as a stationary optimization problem demand the knowledge of the pattern of change a priori and even then the procedure can be computationally expensive. Although dynamic consideration using evolutionary algorithms has been made for single-objective optimization problems, there has been a lukewarm interest in formulating and solving dynamic multi-objective optimization problems. In this paper, we modify the commonly-used NSGA-II procedure in tracking a new Pareto-optimal front, as soon as there is a change in the problem. Introduction of a few random solutions or a few mutated solutions are investigated in detail. The approaches are tested and compared on a test problem and a real-world optimization of a hydro-thermal power scheduling problem. This systematic study is able to find a minimum frequency of change allowed in a problem for two dynamic EMO procedures to adequately track Pareto-optimal frontiers on-line. Based on these results, this paper also suggests an automatic decision-making procedure for arriving at a dynamic single optimal solution on-line."
]
} |
1612.06093 | 2583496274 | One of the major distinguishing features of the dynamic multiobjective optimization problems (DMOPs) is that optimization objectives will change over time, thus tracking the varying Pareto-optimal front becomes a challenge. One of the promising solutions is reusing “experiences” to construct a prediction model via statistical machine learning approaches. However, most existing methods neglect the nonindependent and identically distributed nature of data to construct the prediction model. In this paper, we propose an algorithmic framework, called transfer learning-based dynamic multiobjective evolutionary algorithm (EA), which integrates transfer learning and population-based EAs to solve the DMOPs. This approach exploits the transfer learning technique as a tool to generate an effective initial population pool via reusing past experience to speed up the evolutionary process, and at the same time any population-based multiobjective algorithms can benefit from this integration without any extensive modifications. To verify this idea, we incorporate the proposed approach into the development of three well-known EAs, nondominated sorting genetic algorithm II, multiobjective particle swarm optimization, and the regularity model-based multiobjective estimation of distribution algorithm. We employ 12 benchmark functions to test these algorithms as well as compare them with some chosen state-of-the-art designs. The experimental results confirm the effectiveness of the proposed design for DMOPs. | The Multi-population strategy is considered as one efficient solution for the DMOPs, especially for the multiple peaks and the competing peaks problems. Branke @cite_25 proposed the self-organizing scouts method, and this method splits the population into scout and base populations, and the two populations are responsible for exploitation and exploration respectively. 
In other words, the base population searches for the optimal solution, and whenever it finds a peak, a scout population is generated to track the change of that peak. Li and Yang @cite_22 employed a multi-population particle swarm optimization (PSO) algorithm to solve multiple-peak problems. In their method, one population uses evolutionary programming, which shows better global search ability than other EAs, to explore the most promising areas of the whole search space, while several subpopulations use a fast PSO algorithm to find the local optima. Yang @cite_24 used a hierarchical clustering technique to divide the population into subpopulations; the main advantage of this design is that the initial individuals of the subpopulations can be generated automatically according to the fitness landscape. | {
"cite_N": [
"@cite_24",
"@cite_22",
"@cite_25"
],
"mid": [
"2143503407",
"2143525780",
"1563447057"
],
"abstract": [
"In the real world, many optimization problems are dynamic. This requires an optimization algorithm to not only find the global optimal solution under a specific environment but also to track the trajectory of the changing optima over dynamic environments. To address this requirement, this paper investigates a clustering particle swarm optimizer (PSO) for dynamic optimization problems. This algorithm employs a hierarchical clustering method to locate and track multiple peaks. A fast local search method is also introduced to search optimal solutions in a promising subregion found by the clustering method. Experimental study is conducted based on the moving peaks benchmark to test the performance of the clustering PSO in comparison with several state-of-the-art algorithms from the literature. The experimental results show the efficiency of the clustering PSO for locating and tracking multiple optima in dynamic environments in comparison with other particle swarm optimization models based on the multiswarm method.",
"In the real world, many applications are non-stationary optimization problems. This requires that the optimization algorithms need to not only find the global optimal solution but also track the trajectory of the changing global best solution in a dynamic environment. To achieve this, this paper proposes a multi-swarm algorithm based on fast particle swarm optimization for dynamic optimization problems. The algorithm employs a mechanism to track multiple peaks by preventing overcrowding at a peak and a fast particle swarm optimization algorithm as a local search method to find the near optimal solutions in a local promising region in the search space. The moving peaks benchmark function is used to test the performance of the proposed algorithm. The numerical experimental results show the efficiency of the proposed algorithm for dynamic optimization problems.",
"Time-dependent optimization problems pose a new challenge to evolutionary algorithms, since they not only require a search for the optimum, but also a continuous tracking of the optimum over time. In this paper, we will will use concepts from the ”forking GA” (a multi-population evolutionary algorithm proposed to find multiple peaks in a multi-modal landscape) to enhance search in a dynamic landscape. The algorithm uses a number of smaller populations to track the most promising peaks over time, while a larger parent population is continuously searching for new peaks. We will show that this approach is indeed suitable for dynamic optimization problems by testing it on the recently proposed Moving Peaks Benchmark."
]
} |
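The clustering-based subpopulation idea described in the entry above can be sketched in a few lines. Note this is a deliberately simplified nearest-seed split, not the hierarchical clustering of the cited work, and all names and parameter choices here are illustrative:

```python
# Sketch of clustering-based subpopulation creation: assign each individual to
# the nearest of k seed individuals, yielding subpopulations that each track one
# region (peak) of the search space. The hierarchical clustering used in the
# cited work is replaced here by a simpler nearest-seed split for illustration.

def split_into_subpopulations(population, k):
    """Partition a population of real-valued vectors into k distance-based groups."""
    seeds = population[:k]  # assumed spread-out seeds; real methods cluster properly

    def dist2(a, b):
        # squared Euclidean distance between two decision vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))

    groups = [[] for _ in range(k)]
    for ind in population:
        nearest = min(range(k), key=lambda i: dist2(ind, seeds[i]))
        groups[nearest].append(ind)
    return groups
```

Each resulting group can then be evolved by its own subpopulation, mirroring the scout/base division of labor sketched in the entry.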
1612.06093 | 2583496274 | One of the major distinguishing features of the dynamic multiobjective optimization problems (DMOPs) is that optimization objectives will change over time, thus tracking the varying Pareto-optimal front becomes a challenge. One of the promising solutions is reusing “experiences” to construct a prediction model via statistical machine learning approaches. However, most existing methods neglect the nonindependent and identically distributed nature of data to construct the prediction model. In this paper, we propose an algorithmic framework, called transfer learning-based dynamic multiobjective evolutionary algorithm (EA), which integrates transfer learning and population-based EAs to solve the DMOPs. This approach exploits the transfer learning technique as a tool to generate an effective initial population pool via reusing past experience to speed up the evolutionary process, and at the same time any population-based multiobjective algorithms can benefit from this integration without any extensive modifications. To verify this idea, we incorporate the proposed approach into the development of three well-known EAs, nondominated sorting genetic algorithm II, multiobjective particle swarm optimization, and the regularity model-based multiobjective estimation of distribution algorithm. We employ 12 benchmark functions to test these algorithms as well as compare them with some chosen state-of-the-art designs. The experimental results confirm the effectiveness of the proposed design for DMOPs. | Bosman @cite_41 believed that a decision made at one point would affect the optima obtained in the future, so for dynamic optimization problems he proposed an algorithmic framework that integrates machine learning, statistical learning, and evolutionary computation; this framework can effectively predict what the state of the environment is going to be. 
In @cite_9 , the authors suggested that the state of an optimum should contain both location and speed information, so the Kalman filter technique can be used to estimate the state of the system and its error. The authors proposed an EA to measure the state of the past optimum and then used the Kalman filter to obtain an estimate of the optimum at the next time instant. | {
"cite_N": [
"@cite_41",
"@cite_9"
],
"mid": [
"1696786035",
"2044757078"
],
"abstract": [
"htmlabstractIn this chapter we focus on the importance of the use of learning and anticipation in (online) dynamic optimization. To this end we point out an important source of problem-difficulty that has so far received significantly less attention than the traditional shifting of optima. Intuitively put, decisions taken now (i.e. setting the problem variables to certain values) may influence the score that can be obtained in the future. We indicate how such time-linkage can deceive an optimizer and cause it to find a suboptimal solution trajectory. We then propose a means to address time-linkage: predict the future (i.e. anticipation) by learning from the past. We formalize this means in an algorithmic framework and indicate why evolutionary algorithms (EAs) are specifically of interest in this framework. We have performed experiments with two benchmark problems that feature time-linkage. The results show, as a proof of principle, that in the presence of time-linkage EAs based on this framework can obtain better results than classic EAs that do not predict the future.",
"The dynamic optimization problem concerns finding an optimum in a changing environment. In the field of evolutionary algorithms, this implies dealing with a time-changing fitness landscape. In this paper we compare different techniques for integrating motion information into an evolutionary algorithm, in the case it has to follow a time-changing optimum, under the assumption that the changes follow a nonrandom law. Such a law can be estimated in order to improve the optimum tracking capabilities of the algorithm. In particular, we will focus on first order dynamical laws to track moving objects. A vision-based tracking robotic application is used as testbed for experimental comparison."
]
} |
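The Kalman-filter tracking described in the entry above (state = location plus speed of the optimum) can be illustrated with a minimal 1-D constant-velocity filter. The process and measurement noise values `q` and `r` below are illustrative assumptions, not values from the cited work:

```python
# Minimal 1-D constant-velocity Kalman filter for tracking a moving optimum.
# State x = [position, velocity]; F = [[1, dt], [0, 1]], H = [1, 0].

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Return filtered position estimates for noisy optimum locations."""
    x = [measurements[0], 0.0]          # initial state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]        # initial state covariance
    out = []
    for z in measurements:
        # --- predict: x <- F x, P <- F P F^T + Q (Q added on the diagonal)
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # --- update with a scalar measurement z of the position
        S = P[0][0] + r                      # innovation covariance
        K = [P[0][0] / S, P[1][0] / S]       # Kalman gain
        y = z - x[0]                         # innovation
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x[0])
    return out
```

After a detected environment change, the one-step prediction `x[0] + dt * x[1]` can seed the population around the anticipated location of the optimum, which is the core of the Kalman-assisted strategy the entry describes.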
1612.06093 | 2583496274 | One of the major distinguishing features of the dynamic multiobjective optimization problems (DMOPs) is that optimization objectives will change over time, thus tracking the varying Pareto-optimal front becomes a challenge. One of the promising solutions is reusing “experiences” to construct a prediction model via statistical machine learning approaches. However, most existing methods neglect the nonindependent and identically distributed nature of data to construct the prediction model. In this paper, we propose an algorithmic framework, called transfer learning-based dynamic multiobjective evolutionary algorithm (EA), which integrates transfer learning and population-based EAs to solve the DMOPs. This approach exploits the transfer learning technique as a tool to generate an effective initial population pool via reusing past experience to speed up the evolutionary process, and at the same time any population-based multiobjective algorithms can benefit from this integration without any extensive modifications. To verify this idea, we incorporate the proposed approach into the development of three well-known EAs, nondominated sorting genetic algorithm II, multiobjective particle swarm optimization, and the regularity model-based multiobjective estimation of distribution algorithm. We employ 12 benchmark functions to test these algorithms as well as compare them with some chosen state-of-the-art designs. The experimental results confirm the effectiveness of the proposed design for DMOPs. | Stroud @cite_23 proposed the Kalman-extended Genetic Algorithm (KGA), whose basic idea is that two types of uncertainty surround the estimated value of an individual in a dynamic environment. The first type of uncertainty is produced by the dynamics of the environment, while the second is related to the (noisy) evaluation of individuals. 
For these different situations, the KGA has two different ways of updating the covariances, and it uses the Kalman filter technique to predict the two uncertainties, which allows the algorithm to work well in a dynamic environment. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2148192607"
],
"abstract": [
"In basic genetic algorithm (GA) applications, the fitness of a solution takes a value that is certain and unchanging. This formulation does not work for ongoing searches for better solutions in a nonstationary environment in which expected solution fitness changes with time in unpredictable ways, or for fitness evaluations corrupted by noise. In such cases, the estimated fitness has an associated uncertainty. The uncertainties due to environmental changes (process noise) and to noisy evaluations (observation noise) can be reduced, at least temporarily, by re-evaluating existing solutions. The Kalman formulation provides a formal mechanism for treating uncertainty in GA. It provides the mechanics for determining the estimated fitness and uncertainty when a new solution is generated and evaluated for the first time. It also provides the mechanics for updating the estimated fitness and uncertainty after an existing solution is re-evaluated and for increasing the uncertainty with the passage of time. A Kalman-extended GA (KGA) is developed to determine when to generate a new individual, and when to re-evaluate an existing one and which to re-evaluate. This KGA is applied to the problem of maintaining a network configuration with minimized message loss, with mobile nodes and stochastic transmission. As the nodes move, the optimal network changes, but information contained within the population of solutions allows efficient discovery of better-adapted solutions. The sensitivity of the KGA to several control parameters is explored."
]
} |
1612.06093 | 2583496274 | One of the major distinguishing features of the dynamic multiobjective optimization problems (DMOPs) is that optimization objectives will change over time, thus tracking the varying Pareto-optimal front becomes a challenge. One of the promising solutions is reusing “experiences” to construct a prediction model via statistical machine learning approaches. However, most existing methods neglect the nonindependent and identically distributed nature of data to construct the prediction model. In this paper, we propose an algorithmic framework, called transfer learning-based dynamic multiobjective evolutionary algorithm (EA), which integrates transfer learning and population-based EAs to solve the DMOPs. This approach exploits the transfer learning technique as a tool to generate an effective initial population pool via reusing past experience to speed up the evolutionary process, and at the same time any population-based multiobjective algorithms can benefit from this integration without any extensive modifications. To verify this idea, we incorporate the proposed approach into the development of three well-known EAs, nondominated sorting genetic algorithm II, multiobjective particle swarm optimization, and the regularity model-based multiobjective estimation of distribution algorithm. We employ 12 benchmark functions to test these algorithms as well as compare them with some chosen state-of-the-art designs. The experimental results confirm the effectiveness of the proposed design for DMOPs. | In @cite_30 , Zhou presented an algorithm, called Population Prediction Strategy (PPS), to predict a whole population instead of predicting some isolated points. There are two key concepts here: center point and manifold. Whenever a change is detected, the algorithm uses a sequence of center points obtained from the search progress to predict the next center point, and at the same time, the previous manifolds are used to estimate the next manifold. 
The main problem of this method is that it is difficult to obtain historical information in the early stages, which may lead to poor convergence. | {
"cite_N": [
"@cite_30"
],
"mid": [
"1968173975"
],
"abstract": [
"This paper investigates how to use prediction strategies to improve the performance of multiobjective evolutionary optimization algorithms in dealing with dynamic environments. Prediction-based methods have been applied to predict some isolated points in both dynamic single objective optimization and dynamic multiobjective optimization. We extend this idea to predict a whole population by considering the properties of continuous dynamic multiobjective optimization problems. In our approach, called population prediction strategy (PPS), a Pareto set is divided into two parts: a center point and a manifold. A sequence of center points is maintained to predict the next center, and the previous manifolds are used to estimate the next manifold. Thus, PPS could initialize a whole population by combining the predicted center and estimated manifold when a change is detected. We systematically compare PPS with a random initialization strategy and a hybrid initialization strategy on a variety of test instances with linear or nonlinear correlation between design variables. The statistical results show that PPS is promising for dealing with dynamic environments."
]
} |
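The PPS idea in the entry above (predict the next center from a sequence of centers and reuse the previous manifold) can be sketched with a first-order center predictor. This is an illustrative simplification that keeps only the last two centers and models the manifold as individual offsets from the center, not the full model of the cited strategy:

```python
# Sketch of PPS-style reinitialization: predict the next Pareto-set center from
# the two most recent centers, then rebuild a population by translating the
# current manifold (individual offsets from the current center) to that
# predicted center.

def center(population):
    """Centroid of a population of decision vectors."""
    n, dim = len(population), len(population[0])
    return [sum(ind[d] for ind in population) / n for d in range(dim)]

def predict_population(prev_center, curr_pop):
    """Predict an initial population for the next environment."""
    c_curr = center(curr_pop)
    # first-order (linear) center prediction: c_next = c_curr + (c_curr - c_prev)
    c_next = [2 * c - p for c, p in zip(c_curr, prev_center)]
    # estimate the manifold as offsets from the current center and translate it
    return [[c_next[d] + (ind[d] - c_curr[d]) for d in range(len(ind))]
            for ind in curr_pop]
```

Whenever a change is detected, the predicted population seeds the next search, which is exactly where the entry notes PPS struggles early on: with fewer than two historical centers the prediction cannot be formed.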
1612.06093 | 2583496274 | One of the major distinguishing features of the dynamic multiobjective optimization problems (DMOPs) is that optimization objectives will change over time, thus tracking the varying Pareto-optimal front becomes a challenge. One of the promising solutions is reusing “experiences” to construct a prediction model via statistical machine learning approaches. However, most existing methods neglect the nonindependent and identically distributed nature of data to construct the prediction model. In this paper, we propose an algorithmic framework, called transfer learning-based dynamic multiobjective evolutionary algorithm (EA), which integrates transfer learning and population-based EAs to solve the DMOPs. This approach exploits the transfer learning technique as a tool to generate an effective initial population pool via reusing past experience to speed up the evolutionary process, and at the same time any population-based multiobjective algorithms can benefit from this integration without any extensive modifications. To verify this idea, we incorporate the proposed approach into the development of three well-known EAs, nondominated sorting genetic algorithm II, multiobjective particle swarm optimization, and the regularity model-based multiobjective estimation of distribution algorithm. We employ 12 benchmark functions to test these algorithms as well as compare them with some chosen state-of-the-art designs. The experimental results confirm the effectiveness of the proposed design for DMOPs. | Recently, several works exploiting knowledge reuse techniques or machine learning in evolutionary computation have been proposed. In @cite_44 , the authors propose an approach based on transfer learning and genetic programming to solve complex image classification problems. 
The basic idea of the proposed algorithm is that knowledge learned from a simpler subtask is used to solve a more complex subtask, and reusable knowledge blocks are discovered from similar as well as different image classification tasks during the evolutionary process. In @cite_3 , the authors present a genetic programming-like representation to identify building blocks of knowledge in a learning classifier system, and the proposed method can extract useful building blocks from simpler and smaller problems and reuse them to learn the more complex multiplexer problem. In @cite_56 , the authors present an evolutionary memetic computing paradigm that is capable of learning and evolving knowledge memes that traverse two different but related problem domains, the capacitated vehicle routing problem and the capacitated arc routing problem, for greater search efficiency. Experimental results show that evolutionary optimization can benefit from this approach. | {
"cite_N": [
"@cite_44",
"@cite_3",
"@cite_56"
],
"mid": [
"2581419616",
"2023131636",
"2031801005"
],
"abstract": [
"Genetic programming (GP) is a well-known evolutionary computation technique, which has been successfully used to solve various problems, such as optimization, image analysis, and classification. Transfer learning is a type of machine learning approach that can be used to solve complex tasks. Transfer learning has been introduced to GP to solve complex Boolean and symbolic regression problems with some promise. However, the use of transfer learning with GP has not been investigated to address complex image classification tasks with noise and rotations, where GP cannot achieve satisfactory performance, but GP with transfer learning may improve the performance. In this paper, we propose a novel approach based on transfer learning and GP to solve complex image classification problems by extracting and reusing blocks of knowledge information, which are automatically discovered from similar as well as different image classification tasks during the evolutionary process. The proposed approach is evaluated on three texture data sets and three office data sets of image classification benchmarks, and achieves better classification performance than the state-of-the-art image classification algorithm. Further analysis on the evolved solutions trees shows that the proposed approach with transfer learning can successfully discover and reuse knowledge information extracted from similar or different problems to improve its performance on complex image classification problems.",
"Evolutionary computation techniques have had limited capabilities in solving large-scale problems due to the large search space demanding large memory and much longer training times. In the work presented here, a genetic programming like rich encoding scheme has been constructed to identify building blocks of knowledge in a learning classifier system. The fitter building blocks from the learning system trained against smaller problems have been utilized in a higher complexity problem in the domain to achieve scalable learning. The proposed system has been examined and evaluated on four different Boolean problem domains: 1) multiplexer, 2) majority-on, 3) carry, and 4) even-parity problems. The major contribution of this paper is to successfully extract useful building blocks from smaller problems and reuse them to learn more complex large-scale problems in the domain, e.g., 135-bit multiplexer problem, where the number of possible instances is 2 135 ≈ 4 × 10 40 , is solved by reusing the extracted knowledge from the learned lower level solutions in the domain. Autonomous scaling is, for the first time, shown to be possible in learning classifier systems. It improves effectiveness and reduces the number of training instances required in large problems, but requires more time due to its sequential build-up of knowledge.",
"In recent decades, a plethora of dedicated evolutionary algorithms (EAs) have been crafted to solve domain-specific complex problems more efficiently. Many advanced EAs have relied on the incorporation of domain-specific knowledge as inductive biases that is deemed to fit the problem of interest well. As such, the embedment of domain knowledge about the underlying problem within the search algorithms is becoming an established mode of enhancing evolutionary search performance. In this paper, we present a study on evolutionary memetic computing paradigm that is capable of learning and evolving knowledge meme that traverses different but related problem domains, for greater search efficiency. Focusing on combinatorial optimization as the area of study, a realization of the proposed approach is investigated on two NP-hard problem domains (i.e., capacitated vehicle routing problem and capacitated arc routing problem). Empirical studies on well-established routing problems and their respective state-of-the-art optimization solvers are presented to study the potential benefits of leveraging knowledge memes that are learned from different but related problem domains on future evolutionary search."
]
} |
1612.06114 | 2949442441 | We present a novel open-source framework for visualizing electromagnetic articulography (EMA) data in real-time, with a modular framework and anatomically accurate tongue and palate models derived by multilinear subspace learning. | Badin et al.'s Audiovisual Talking Head (ATH) models speech articulators based on MRI data and video images from one speaker @cite_4 . More recent work has focused on animating the ATH using ultrasound data, though to our knowledge this was an offline method @cite_9 . | {
"cite_N": [
"@cite_9",
"@cite_4"
],
"mid": [
"2295742453",
"1607802755"
],
"abstract": [
"This paper presents a method for automatically animating the articulatory tongue model of a reference speaker from ultrasound images of the tongue of another speaker. This work is developed in the context of speech therapy based on visual biofeedback, where a speaker is provided with visual information about his her own articulation. In our approach, the feedback is delivered via an articulatory talking head, which displays the tongue during speech production using augmented reality (e.g. transparent skin). The user’s tongue movements are captured using ultrasound imaging and parameterized using the PCA-based EigenTongue technique. Extracted features are then converted into control parameters of the articulatory tongue model using Gaussian Mixture Regression. This procedure was evaluated by decoding the converted tongue movements at the phonetic level using an HMM-based decoder trained on the reference speaker's articulatory data. Decoding errors were then manually reassessed in order to take into account possible phonetic idiosyncrasies (i.e. speaker phoneme specific articulatory strategies). With a system trained on a limited set of 88 VCV sequences, the recognition accuracy at the phonetic level was found to be approximately 70 . Index Terms: articulatory tongue model, articulatory talking head, ultrasound imaging, GMM, speech therapy",
"We present a methodology developed to derive three-dimensional models of speech articulators from volume MRI and multiple view video images acquired on one speaker. Linear component analysis is used to model these highly deformable articulators as the weighted sum of a small number of basic shapes corresponding to the articulators' degrees of freedom for speech. These models are assembled into an audiovisual talking head that can produce augmented audiovisual speech, i.e. can display usually non visible articulators such as tongue or velum. The talking head is then animated by recovering its control parameters by inversion from the coordinates of a small number of points of the articulators of the same speaker tracked by Electro-Magnetic Articulography. The augmented speech produced points the way to promising applications in the domain of speech therapy for speech retarded children, perception and production rehabilitation of hearing impaired children, and pronunciation training for second language learners."
]
} |
1612.06305 | 2897922489 | Online signature verification technologies, such as those available in banks and post offices, rely on dedicated digital devices such as tablets or smart pens to capture, analyze and verify signatures. In this paper, we suggest a novel method for online signature verification that relies on the increasingly available hand-worn devices, such as smartwatches or fitness trackers, instead of dedicated ad-hoc devices. Our method uses a set of known genuine and forged signatures, recorded using the motion sensors of a hand-worn device, to train a machine learning classifier. Then, given the recording of an unknown signature and a claimed identity, the classifier can determine whether the signature is genuine or forged. In order to validate our method, it was applied on 1980 recordings of genuine and forged signatures that we collected from 66 subjects in our institution. Using our method, we were able to successfully distinguish between genuine and forged signatures with a high degree of accuracy (0.98 AUC and 0.05 EER). | In contrast, online signature verification methods take the dynamic writing process into account @cite_21 . A signature is represented by a pen tip trajectory measurement that captures the position of the pen over time; depending on the digitizer, this may be accompanied by additional measurements of the pressure and pen inclination. In this case, the signature is represented as a sequence @math ; @math , where @math is the signal sampled at time @math and @math is the sampling interval @cite_10 . Clearly, the additional time dimension captured by online methods provides valuable information about the signature, leading to a higher level of verification performance in general @cite_14 . | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_10"
],
"mid": [
"1916685473",
"2142069714",
"956584773"
],
"abstract": [
"Discrete Cosine Transform. Definitions and General Properties. DCT and Its Relations to the Karhunen-Loeve Transform. Fast Algorithms for DCT-II. Two Dimensional DCT Algorithms. Performance of the DCT. Applications of the DCT. Appendices. References. Index.",
"Handwriting has continued to persist as a means of communication and recording information in day-to-day life even with the introduction of new technologies. Given its ubiquity in human transactions, machine recognition of handwriting has practical significance, as in reading handwritten notes in a PDA, in postal addresses on envelopes, in amounts in bank checks, in handwritten fields in forms, etc. This overview describes the nature of handwritten language, how it is transduced into electronic data, and the basic concepts behind written language recognition algorithms. Both the online case (which pertains to the availability of trajectory data during writing) and the off-line case (which pertains to scanned images) are considered. Algorithms for preprocessing, character and word recognition, and performance with practical systems are indicated. Other fields of application, like signature verification, writer authentification, handwriting learning tools are also considered.",
"The area of Handwritten Signature Verification has been broadly researched in the last decades, but remains an open research problem. The objective of signature verification systems is to discriminate if a given signature is genuine (produced by the claimed individual), or a forgery (produced by an impostor). This has demonstrated to be a challenging task, in particular in the offline (static) scenario, that uses images of scanned signatures, where the dynamic information about the signing process is not available. Many advancements have been proposed in the literature in the last 5-10 years, most notably the application of Deep Learning methods to learn feature representations from signature images. In this paper, we present how the problem has been handled in the past few decades, analyze the recent advancements in the field, and the potential directions for future research."
]
} |
1612.06305 | 2897922489 | Online signature verification technologies, such as those available in banks and post offices, rely on dedicated digital devices such as tablets or smart pens to capture, analyze and verify signatures. In this paper, we suggest a novel method for online signature verification that relies on the increasingly available hand-worn devices, such as smartwatches or fitness trackers, instead of dedicated ad-hoc devices. Our method uses a set of known genuine and forged signatures, recorded using the motion sensors of a hand-worn device, to train a machine learning classifier. Then, given the recording of an unknown signature and a claimed identity, the classifier can determine whether the signature is genuine or forged. In order to validate our method, it was applied on 1980 recordings of genuine and forged signatures that we collected from 66 subjects in our institution. Using our method, we were able to successfully distinguish between genuine and forged signatures with a high degree of accuracy (0.98 AUC and 0.05 EER). | Two approaches to online signature verification can be further distinguished. Feature-based methods represent signatures with feature vectors while function-based methods take the complete time sequence into account @cite_0 . The former provide a data security advantage because the original signatures do not have to be stored in a database; however, the latter tend to achieve better verification performance. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2062219129"
],
"abstract": [
"Abstract This paper presents a survey of the literature on automatic signature verification and writer identification by computer, and an overview of achievements in static and dynamic approaches to solving these problems, with a special focus on preprocessing techniques, feature extraction methods, comparison processes and performance evaluation. In addition, for each type of approache special attention is given to requirement analysis, human factors, practical application environments, and appropriate definitions and terminology. Throughout the paper, new research directions are suggested."
]
} |
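A classic function-based comparison of complete time sequences is dynamic time warping (DTW). The surveyed work does not prescribe DTW specifically, so this is only an illustrative sketch, shown for 1-D signals; real systems compare multi-dimensional pen-tip channels (position, pressure, inclination):

```python
# Dynamic time warping (DTW): a function-based way to compare two sampled
# signature trajectories that may differ in length and local timing.

def dtw_distance(a, b):
    """DTW distance between two sampled 1-D sequences a and b."""
    inf = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Intuitively, a genuine signature re-signed slightly slower still warps onto the reference at low cost, while a forgery with a different trajectory shape does not, which is why function-based methods tend to outperform purely feature-based ones.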
1612.06305 | 2897922489 | Online signature verification technologies, such as those available in banks and post offices, rely on dedicated digital devices such as tablets or smart pens to capture, analyze and verify signatures. In this paper, we suggest a novel method for online signature verification that relies on the increasingly available hand-worn devices, such as smartwatches or fitness trackers, instead of dedicated ad-hoc devices. Our method uses a set of known genuine and forged signatures, recorded using the motion sensors of a hand-worn device, to train a machine learning classifier. Then, given the recording of an unknown signature and a claimed identity, the classifier can determine whether the signature is genuine or forged. In order to validate our method, it was applied on 1980 recordings of genuine and forged signatures that we collected from 66 subjects in our institution. Using our method, we were able to successfully distinguish between genuine and forged signatures with a high degree of accuracy (0.98 AUC and 0.05 EER). | Wrist-worn devices, such as smartwatches and fitness trackers, have become perhaps the most popular category of wearable devices, and many major manufacturers, including Samsung and Apple, have recently released such devices. Since these devices are worn on the wrist, they introduce a unique opportunity to detect and understand a user's arm, hand and finger movements, as shown in @cite_17 . In comparison, forearm devices such as the one presented in @cite_13 are very limited in detecting fine gestures such as finger gestures or writing. Similarly, finger-worn devices such as the one introduced in @cite_20 can be used to understand users' finger gestures and writing. However, this is limited to the gestures of a specific finger, and gestures using other fingers cannot be identified. Wrist-worn devices are less limited, as they facilitate gesture recognition based on the arm, the hand and all of the fingers. | {
"cite_N": [
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"1965337002",
"1986707552"
],
"abstract": [
"",
"This paper presents a remote gesture input solution for interacting indirectly with user interfaces on mobile and wearable devices. The proposed solution uses a wearable ring platform worn on the user's index finger. The ring detects and interprets various gestures performed on any available surface, and wirelessly transmits the gestures to the remote device. The ring opportunistically harvests energy from an NFC-enabled phone for perpetual operation without explicit charging. We use a finger-tendon pressure-based solution to detect touch, and a light-weight audio-based solution for detecting finger motion on a surface. The two-level energy-efficient classification algorithms identify 23 unique gestures that include tapping, swipes, scrolling, and strokes for handwritten text entry. The classification algorithms have an average accuracy of 73% with no explicit user training. Our implementation supports 10 hours of interactions on a surface at 2 Hz gesture frequency. The prototype, built with off-the-shelf components, has a size similar to a large ring.",
"The smartwatch is becoming one of the most popular wearable devices, with many major smartphone manufacturers such as Samsung and Apple releasing their smartwatches recently. Apart from the fitness applications, the smartwatch provides a rich user interface that has enabled many applications like instant messaging and email. Since the smartwatch is worn on the wrist, it introduces a unique opportunity to understand the user's arm, hand and possibly finger movements using its accelerometer and gyroscope sensors. Although the user's arm and hand gestures are likely to be identified with ease using the smartwatch sensors, it is not clear how much of the user's finger gestures can be recognized. In this paper, we show that motion energy measured at the smartwatch is sufficient to uniquely identify the user's hand and finger gestures. We identify essential features of accelerometer and gyroscope data that reflect the movements of tendons (passing through the wrist) when performing a finger or a hand gesture. With these features, we build a classifier that can uniquely identify 37 (13 finger, 14 hand and 10 arm) gestures with an accuracy of 98%. We further extend our gesture recognition to identify the characters written by the user with her index finger on a surface, and show that such finger-writing can also be accurately recognized with nearly 95% accuracy. Our presented results will enable many novel applications like remote control and finger-writing-based input to devices using a smartwatch."
]
} |
1612.06305 | 2897922489 | Online signature verification technologies, such as those available in banks and post offices, rely on dedicated digital devices such as tablets or smart pens to capture, analyze and verify signatures. In this paper, we suggest a novel method for online signature verification that relies on the increasingly available hand-worn devices, such as smartwatches or fitness trackers, instead of dedicated ad-hoc devices. Our method uses a set of known genuine and forged signatures, recorded using the motion sensors of a hand-worn device, to train a machine learning classifier. Then, given the recording of an unknown signature and a claimed identity, the classifier can determine whether the signature is genuine or forged. In order to validate our method, it was applied on 1980 recordings of genuine and forged signatures that we collected from 66 subjects in our institution. Using our method, we were able to successfully distinguish between genuine and forged signatures with a high degree of accuracy (0.98 AUC and 0.05 EER). | While there has been a lot of research in the field of user authentication using smartphone devices, there have been only a few works that aimed to authenticate users using wearable devices. For example, a recent study showed that it is possible to distinguish between users who use the same objects (e.g., a light switch, a refrigerator, etc.) @cite_16 using continuous authentication. Another study @cite_24 aimed at authenticating users from a short recording of their natural walk as captured by their smartwatch. A recent patent @cite_19 filed by Samsung suggests a novel method by which the veins of a smartwatch user are used to authenticate his/her identity. | {
"cite_N": [
"@cite_24",
"@cite_19",
"@cite_16"
],
"mid": [
"2295043364",
"",
"2015306494"
],
"abstract": [
"In this paper, we propose four continuous authentication designs by using the characteristics of arm movements while individuals walk. The first design uses acceleration of arms captured by a smartwatch's accelerometer sensor, the second design uses the rotation of arms captured by a smartwatch's gyroscope sensor, third uses the fusion of both acceleration and rotation at the feature-level and fourth uses the fusion at score-level. Each of these designs is implemented by using four classifiers, namely, k nearest neighbors (k-NN) with Euclidean distance, Logistic Regression, Multilayer Perceptrons, and Random Forest resulting in a total of sixteen authentication mechanisms. These authentication mechanisms are tested under three different environments, namely an intra-session, inter-session on a dataset of 40 users and an inter-phase on a dataset of 12 users. The sessions of data collection were separated by at least ten minutes, whereas the phases of data collection were separated by at least three months. Under the intra-session environment, all of the twelve authentication mechanisms achieve a mean dynamic false accept rate (DFAR) of 0% and dynamic false reject rate (DFRR) of 0%. For the inter-session environment, feature level fusion-based design with classifier k-NN achieves the best error rates, that are a mean DFAR of 2.2% and DFRR of 4.2%. The DFAR and DFRR increased from 5.68% and 4.23% to 15.03% and 14.62% respectively when feature level fusion-based design with classifier k-NN was tested under the inter-phase environment on a dataset of 12 users.",
"",
"In order for objects to perform personalized or contextual functions based on identity, they must solve what we call the object user identification problem: understanding who is actually using them. In this paper, we propose a new technique that uses data from wearable wrist sensors to perform object user identification. We hypothesize that objects have unique hallmarks that are imprinted in the hand gestures of its users. By detecting the presence of an object's hallmark in the wrist sensor data, we can identify who used the object. We evaluate this concept with a smart home application: recognizing who is using an object or appliance in a multi-person home by combining smart meter data and wearables. We conduct three different studies with 10 participants: 1) a study with scripted object use, 2) a study with high-level tasked activities and unscripted object use, and 3) a 5-day in-situ study. These studies indicate that our approach performs object user identification with an average accuracy of 85--90%."
]
} |
1612.06305 | 2897922489 | Online signature verification technologies, such as those available in banks and post offices, rely on dedicated digital devices such as tablets or smart pens to capture, analyze and verify signatures. In this paper, we suggest a novel method for online signature verification that relies on the increasingly available hand-worn devices, such as smartwatches or fitness trackers, instead of dedicated ad-hoc devices. Our method uses a set of known genuine and forged signatures, recorded using the motion sensors of a hand-worn device, to train a machine learning classifier. Then, given the recording of an unknown signature and a claimed identity, the classifier can determine whether the signature is genuine or forged. In order to validate our method, it was applied on 1980 recordings of genuine and forged signatures that we collected from 66 subjects in our institution. Using our method, we were able to successfully distinguish between genuine and forged signatures with a high degree of accuracy (0.98 AUC and 0.05 EER). | Closer to the field of handwriting analysis, several recent studies have tried to use motion data collected from wearable devices to recognize different writing gestures such as inferring the letter written. For example, the authors of @cite_23 investigated the task of writing in the air with a mobile device. In @cite_3 , researchers suggested a platform for recognizing text written on a whiteboard using a smartwatch. In @cite_17 , researchers tried to infer letters written on a sheet of paper. With a totally different purpose in mind, the authors of @cite_12 , tried to detect the letters typed on a keyboard using a smartwatch. Similarly, in @cite_2 , researchers presented a new attack method that allows attackers to extract sensitive information such as credit card or phone access PIN codes from motion sensors in wearable devices. | {
"cite_N": [
"@cite_3",
"@cite_23",
"@cite_2",
"@cite_12",
"@cite_17"
],
"mid": [
"2339997970",
"2080116634",
"2220702836",
"2082629283",
"1986707552"
],
"abstract": [
"In this paper, we show how motion data collected with a smartwatch can be used to infer text written on a whiteboard. All data is gathered by a commercial, off-the-shelf smartwatch, without any additional hardware. We obtain single-letter recognition rates of 94% in the first guess. Using three guesses, we recognize 99% of the letters. This result is achieved by transforming raw sensor measurements into a coordinate system reflecting the orientation of the whiteboard. Therefore, training data recorded on one whiteboard can be used on another even if the user is writing at different heights or body postures. We show that the built-in microphone picks up the sounds caused by the pen, which can help to segment the input into individual letters.",
"Numerous sensors in modern mobile phones enable a range of people-centric applications. This paper envisions a system called PhonePoint Pen that uses the in-built accelerometer in mobile phones to recognize human writing. By holding the phone like a pen, a user should be able to write short messages or draw simple diagrams in the air. The acceleration due to hand gestures can be translated into geometric strokes, and recognized as characters. We prototype the PhonePoint Pen on the Nokia N95 platform, and evaluate it through real users. Results show that English characters can be identified with an average accuracy of 91.9%, if the users conform to a few reasonable constraints. Future work is focused on refining the prototype, with the goal of offering a new user-experience that complements keyboards and touch-screens.",
"Wearable technologies are today on the rise, becoming more common and broadly available to mainstream users. In fact, wristband and armband devices such as smartwatches and fitness trackers already took an important place in the consumer electronics market and are becoming ubiquitous. By their very nature of being wearable, these devices, however, provide a new pervasive attack surface threatening users' privacy, among others. In the meantime, advances in machine learning are providing unprecedented possibilities to process complex data efficiently, allowing patterns to emerge from high-dimensional, unavoidably noisy data. The goal of this work is to raise awareness about the potential risks related to motion sensors built into wearable devices and to demonstrate abuse opportunities leveraged by advanced neural network architectures. The LSTM-based implementation presented in this research can perform touchlogging and keylogging on 12-key keypads with above-average accuracy even when confronted with raw unprocessed data, thus demonstrating that deep neural networks are capable of making keystroke inference attacks based on motion sensors easier to achieve by removing the need for non-trivial pre-processing pipelines and carefully engineered feature extraction strategies. Our results suggest that the complete technological ecosystem of a user can be compromised when a wearable wristband device is worn.",
"Imagine a user typing on a laptop keyboard while wearing a smart watch. This paper asks whether motion sensors from the watch can leak information about what the user is typing. While it's not surprising that some information will be leaked, the question is how much? We find that when motion signal processing is combined with patterns in English language, the leakage is substantial. Reported results show that when a user types a word @math , it is possible to shortlist a median of 24 words, such that @math is in this shortlist. When the word is longer than @math characters, the median shortlist drops to @math . Of course, such leaks happen without requiring any training from the user, and also under the (obvious) condition that the watch is only on the left hand. We believe this is surprising and merits awareness, especially in light of various continuous sensing apps that are emerging in the app market. Moreover, we discover additional \"leaks\" that can further reduce the shortlist -- we leave these exploitations to future work.",
"The smartwatch is becoming one of the most popular wearable devices, with many major smartphone manufacturers such as Samsung and Apple releasing their smartwatches recently. Apart from the fitness applications, the smartwatch provides a rich user interface that has enabled many applications like instant messaging and email. Since the smartwatch is worn on the wrist, it introduces a unique opportunity to understand the user's arm, hand and possibly finger movements using its accelerometer and gyroscope sensors. Although the user's arm and hand gestures are likely to be identified with ease using the smartwatch sensors, it is not clear how much of the user's finger gestures can be recognized. In this paper, we show that motion energy measured at the smartwatch is sufficient to uniquely identify the user's hand and finger gestures. We identify essential features of accelerometer and gyroscope data that reflect the movements of tendons (passing through the wrist) when performing a finger or a hand gesture. With these features, we build a classifier that can uniquely identify 37 (13 finger, 14 hand and 10 arm) gestures with an accuracy of 98%. We further extend our gesture recognition to identify the characters written by the user with her index finger on a surface, and show that such finger-writing can also be accurately recognized with nearly 95% accuracy. Our presented results will enable many novel applications like remote control and finger-writing-based input to devices using a smartwatch."
]
} |
1612.05836 | 2585210858 | Mirror neurons have been observed in the primary motor cortex of primate species, in particular in humans and monkeys. A mirror neuron fires when a person performs a certain action, and also when he observes the same action being performed by another person. A crucial step towards building fully autonomous intelligent systems with human-like learning abilities is the capability of modeling the mirror neuron. On one hand, the abundance of egocentric cameras in the past few years has offered the opportunity to study a lot of vision problems from the first-person perspective. A great deal of interesting research has been done during the past few years, trying to explore various computer vision tasks from the perspective of the self. On the other hand, videos recorded by traditional static cameras capture humans performing different actions from an exocentric third-person perspective. In this work, we take the first step towards relating motion information across these two perspectives. We train models that predict motion in an egocentric view, by observing it from an exocentric view, and vice versa. This allows models to predict how an egocentric motion would look from outside. To do so, we train linear and nonlinear models and evaluate their performance in terms of retrieving the egocentric (exocentric) motion features, while having access to an exocentric (egocentric) motion feature. Our experimental results demonstrate that motion information can be successfully transferred across the two views. | First person vision, also known as egocentric vision, has become increasingly popular in the vision community. A lot of research has been conducted in the past few years @cite_9 @cite_8 , including object detection @cite_3 , activity recognition @cite_13 @cite_18 and video summarization @cite_7 . | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_13"
],
"mid": [
"2149276562",
"2120645068",
"",
"",
"2031688197",
"2212494831"
],
"abstract": [
"We present a method to analyze daily activities, such as meal preparation, using video from an egocentric camera. Our method performs inference about activities, actions, hands, and objects. Daily activities are a challenging domain for activity recognition that is well-suited to an egocentric approach. In contrast to previous activity recognition methods, our approach does not require pre-trained detectors for objects and hands. Instead we demonstrate the ability to learn a hierarchical model of an activity by exploiting the consistent appearance of objects, hands, and actions that results from the egocentric context. We show that joint modeling of activities, actions, and objects leads to superior performance in comparison to the case where they are considered independently. We introduce a novel representation of actions based on object-hand interactions and experimentally demonstrate the superior performance of our representation in comparison to standard activity representations such as bag of words.",
"We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video subshots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between subshots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subshot summary. Whereas traditional methods optimize a summary's diversity or representativeness, ours explicitly accounts for how one sub-event \"leads to\" another, which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.",
"",
"",
"This paper addresses the problem of learning object models from egocentric video of household activities, using extremely weak supervision. For each activity sequence, we know only the names of the objects which are present within it, and have no other knowledge regarding the appearance or location of objects. The key to our approach is a robust, unsupervised bottom up segmentation method, which exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. By using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are refined through transduction and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly-supervised learning.",
"We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze."
]
} |
1612.05836 | 2585210858 | Mirror neurons have been observed in the primary motor cortex of primate species, in particular in humans and monkeys. A mirror neuron fires when a person performs a certain action, and also when he observes the same action being performed by another person. A crucial step towards building fully autonomous intelligent systems with human-like learning abilities is the capability of modeling the mirror neuron. On one hand, the abundance of egocentric cameras in the past few years has offered the opportunity to study a lot of vision problems from the first-person perspective. A great deal of interesting research has been done during the past few years, trying to explore various computer vision tasks from the perspective of the self. On the other hand, videos recorded by traditional static cameras capture humans performing different actions from an exocentric third-person perspective. In this work, we take the first step towards relating motion information across these two perspectives. We train models that predict motion in an egocentric view, by observing it from an exocentric view, and vice versa. This allows models to predict how an egocentric motion would look from outside. To do so, we train linear and nonlinear models and evaluate their performance in terms of retrieving the egocentric (exocentric) motion features, while having access to an exocentric (egocentric) motion feature. Our experimental results demonstrate that motion information can be successfully transferred across the two views. | Motion in egocentric vision, in particular, has been studied as one of the fundamental features of first person video analysis. @cite_17 explore the use of convolutional neural networks (CNNs) to learn the best visual features and predict the camera motion in egocentric videos. Su and Grauman @cite_1 propose a learning-based approach to detect user engagement by using long-term egomotion cues. @cite_2 learn the feature mapping from pixels in a video frame to a space that is equivariant to various motion classes. @cite_6 have proposed a twin-stream network architecture to analyze the appearance information and the motion information from egocentric videos and have used these features to recognize egocentric activities. | {
"cite_N": [
"@cite_2",
"@cite_1",
"@cite_6",
"@cite_17"
],
"mid": [
"1947727767",
"2338803139",
"1990930131",
"2220063164"
],
"abstract": [
"Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance i.e. they respond systematically to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning system significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in a disjoint domain.",
"In a wearable camera video, we see what the camera wearer sees. While this makes it easy to know roughly what the wearer saw, it does not immediately reveal when he was engaged. Specifically, at what moments did his focus linger, as he paused to gather more information about something he saw? Knowing this answer would benefit various applications in video summarization and augmented reality, yet prior work focuses solely on the “what” question (estimating saliency, gaze) without considering the “when” (engagement). We propose a learning-based approach that uses long-term egomotion cues to detect engagement, specifically in browsing scenarios where one frequently takes in new visual information (e.g., shopping, touring). We introduce a large, richly annotated dataset for ego-engagement that is the first of its kind. Our approach outperforms a wide array of existing methods. We show engagement can be detected well independent of both scene appearance and the camera wearer's identity.",
"For understanding the behavior, intent, and environment of a person, the surveillance metaphor is traditional; that is, install cameras and observe the subject, and his/her interaction with other people and the environment. Instead, we argue that first-person vision (FPV), which senses the environment and the subject's activities from a wearable sensor, is more advantageous, with images about the subject's environment taken from his/her viewpoints, and with readily available information about head motion and gaze through eye tracking. In this paper, we review key research challenges that need to be addressed to develop such FPV systems, and describe our ongoing work to address them using examples from our prototype systems.",
"Visual ego-motion estimation, or briefly visual odometry (VO), is one of the key building blocks of modern SLAM systems. In the last decade, impressive results have been demonstrated in the context of visual navigation, reaching very high localization performance. However, all ego-motion estimation systems require careful parameter tuning procedures for the specific environment they have to work in. Furthermore, even in ideal scenarios, most state-of-the-art approaches fail to handle image anomalies and imperfections, which results in less robust estimates. VO systems that rely on geometrical approaches extract sparse or dense features and match them to perform frame-to-frame (F2F) motion estimation. However, images contain much more information that can be used to further improve the F2F estimation. To learn new feature representation, a very successful approach is to use deep convolutional neural networks. Inspired by recent advances in deep networks and by previous work on learning methods applied to VO, we explore the use of convolutional neural networks to learn both the best visual features and the best estimator for the task of visual ego-motion estimation. With experiments on publicly available datasets, we show that our approach is robust with respect to blur, luminance, and contrast anomalies and outperforms most state-of-the-art approaches even in nominal conditions."
]
} |
1612.06141 | 2581863816 | Domain adaptation is a key feature in Machine Translation. It generally encompasses terminology, domain and style adaptation, especially for human post-editing workflows in Computer Assisted Translation (CAT). With Neural Machine Translation (NMT), we introduce a new notion of domain adaptation that we call "specialization" and which is showing promising results both in the learning speed and in adaptation accuracy. In this paper, we propose to explore this approach under several perspectives. | Such pre-processing approaches, like the data selection introduced by @cite_4 and improved by @cite_11 and many others @cite_12 @cite_1 , are effective, and their impact has been studied @cite_15 @cite_0 @cite_13 . However, the main drawback of these approaches is that they require a full retraining to take effect. | {
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_0",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2136477195",
"1905522558",
"2062346908",
"2136156618",
"",
"2117278770",
"2169418696"
],
"abstract": [
"Parallel corpus is an indispensable resource for translation model training in statistical machine translation (SMT). Instead of collecting more and more parallel training corpora, this paper aims to improve SMT performance by exploiting the full potential of the existing parallel corpora. Two kinds of methods are proposed: offline data optimization and online model optimization. The offline method adapts the training data by redistributing the weight of each training sentence pair. The online method adapts the translation model by redistributing the weight of each predefined submodel. Information retrieval model is used for the weighting scheme in both methods. Experimental results show that without using any additional resource, both methods can improve SMT performance significantly.",
"We explore efficient domain adaptation for the task of statistical machine translation based on extracting sentences from a large general-domain parallel corpus that are most relevant to the target domain. These sentences may be selected with simple cross-entropy based methods, of which we present three. As these sentences are not themselves identical to the in-domain data, we call them pseudo in-domain subcorpora. These subcorpora -- 1% the size of the original -- can then be used to train small domain-adapted Statistical Machine Translation (SMT) systems which outperform systems trained on the entire corpus. Performance is further improved when we use these domain-adapted models in combination with a true in-domain model. The results show that more training data is not always better, and that best results are attained via proper domain-relevant data selection, as well as combining in- and general-domain systems during decoding.",
"The effective integration of MT technology into computer-assisted translation tools is a challenging topic both for academic research and the translation industry. In particular, professional translators consider the ability of MT systems to adapt to the feedback provided by them to be crucial. In this paper, we propose an adaptation scheme to tune a statistical MT system to a translation project using small amounts of post-edited texts, like those generated by a single user in even just one day of work. The same scheme can be applied on a larger scale in order to focus general purpose models towards the specific domain of interest. We assess our method on two domains, namely information technology and legal, and four translation directions, from English to French, Italian, Spanish and German. The main outcome is that our adaptation strategy can be very effective provided that the seed data used for adaptation is 'close enough' to the remaining text to be translated; otherwise, MT quality neither improves nor worsens, thus showing the robustness of our method.",
"Most of the freely available parallel data to train the translation model of a statistical machine translation system comes from very specific sources (European parliament, United Nations, etc). Therefore, there is increasing interest in methods to perform an adaptation of the translation model. A popular approach is based on unsupervised training, also called self-enhancing. Both only use monolingual data to adapt the translation model. In this paper we extend the previous work and provide new insight in the existing methods. We report results on the translation between French and English. Improvements of up to 0.5 BLEU were observed with respect to a very competitive baseline trained on more than 280M words of human translated parallel data.",
"",
"We address the problem of selecting non-domain-specific language model training data to build auxiliary language models for use in tasks such as machine translation. Our approach is based on comparing the cross-entropy, according to domain-specific and non-domain-specific language models, for each sentence of the text source used to produce the latter language model. We show that this produces better language models, trained on less data, than both random data selection and two other previously proposed methods.",
"Reducing language model (LM) size is a critical issue when applying a LM to realistic applications which have memory constraints. In this paper, three measures are studied for the purpose of LM pruning. They are probability, rank, and entropy. We evaluated the performance of the three pruning criteria in a real application of Chinese text input in terms of character error rate (CER). We first present an empirical comparison, showing that rank performs the best in most cases. We also show that the high-performance of rank lies in its strong correlation with error rate. We then present a novel method of combining two criteria in model pruning. Experimental results show that the combined criterion consistently leads to smaller models than the models pruned using either of the criteria separately, at the same CER."
]
} |
1612.06209 | 2581757756 | Personal devices (e.g. laptops, tablets, and mobile phones) are conventional in daily life and have the ability to store users' private data. The security problems related to these appliances have become a primary concern for both users and researchers. In this paper, we analyse first-person-view videos to develop a personalized user authentication mechanism. Our proposed algorithm generates provisional image-based passwords which benefit a variety of purposes such as unlocking a mobile device or fallback authentication. First, representative frames are extracted from the egocentric videos. Then, they are split into distinguishable segments before a clustering procedure is applied to discard repetitive scenes. The whole process aims to retain memorable images to form the authentication challenges. We integrate eye tracking data to select informative sequences of video frames and suggest a blurriness-based method if an eye-facing camera is not available. To evaluate our system, we perform experiments in different settings including object-interaction activities and traveling contexts. Even though our mechanism produces variable graphical passwords, the log-in effort for the user is comparable with approaches based on static challenges. We verified the authentication challenges in the presence of a random and an informed attacker who is familiar with the environment and observed that the time required and the number of attempts are significantly higher than for the legitimate user, making it possible to detect attacks on the authentication system. | Pattern-based passwords on touch screens have been comfortably adopted for a long time but they can be compromised, for example, by a camera-based smudge attack @cite_39 . Thus, several user authentication approaches have been proposed as alternative solutions. Most of them exploited users' behavioral data recorded by wearable sensors such as accelerometer @cite_34 @cite_2 or camera @cite_28 .
A couple of innovative authentication forms were built on implicit memory of past information, such as events @cite_11 @cite_18 , images @cite_40 @cite_30 , and installed applications @cite_37 . | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_37",
"@cite_28",
"@cite_39",
"@cite_40",
"@cite_2",
"@cite_34",
"@cite_11"
],
"mid": [
"1921097329",
"2076245464",
"2029838601",
"2487484013",
"1626992774",
"2114605895",
"1483327747",
"2335819971",
"2082164775"
],
"abstract": [
"Current secure systems suffer because they neglect the importance of human factors in security. We address a fundamental weakness of knowledge-based authentication schemes, which is the human limitation to remember secure passwords. Our approach to improve the security of these systems relies on recognition-based, rather than recall-based authentication. We examine the requirements of a recognition-based authentication system and propose Deja Vu, which authenticates a user through her ability to recognize previously seen images. Deja Vu is more reliable and easier to use than traditional recall-based schemes, which require the user to precisely recall passwords or PINs. Furthermore, it has the advantage that it prevents users from choosing weak passwords and makes it difficult to write down or share passwords with others. We develop a prototype of Deja Vu and conduct a user study that compares it to traditional password and PIN authentication. Our user study shows that 90% of all participants succeeded in the authentication tests using Deja Vu while only about 70% succeeded using passwords and PINs. Our findings indicate that Deja Vu has potential applications, especially where text input is hard (e.g., PDAs or ATMs), or in situations where passwords are infrequently used (e.g., web site passwords).",
"In this paper, we present the design and evaluation of dynamic security questions for fallback authentication. In case users lose access to their device, the system asks questions about their usage behavior (e.g. calls, text messages or app usage). We performed two consecutive user studies with real users and real adversaries to identify questions that work well in the sense that they are easy to answer for the genuine user, but hard to guess for an adversary. The results show that app installations and communication are the most promising categories of questions. Using three questions from the evaluated categories was sufficient to get an accuracy of 95.5% - 100%.",
"Existing graphical passwords require users to proactively memorize their secrets and meanwhile these schemes are vulnerable to shoulder surfing attacks. We propose a novel graphical password scheme, PassApp, which utilizes users' everyday memory about installed apps on mobile devices as shared secrets. As the registration stage is no longer needed, PassApp exempts users from additional memory burden and greatly enhances user experience. Additionally, PassApp owns a large password set and only a small part of passwords may be exposed during a login. Therefore, PassApp has a natural advance on effectively resisting guessing attacks and shoulder surfing attacks. Our user studies demonstrate that PassApp performs well with a reasonable login time (7.27s) and a high success rate (95.48%). Our security analysis shows PassApp can effectively withstand one-time shoulder surfing attacks and on average 30 times of shoulder surfing are necessary to expose all passwords.",
"As mobile technology grows rapidly, the smartphone has become indispensable for transmitting private user data, storing the sensitive corporate files, and conducting secure payment transactions. However, with mobile security research lagging, smartphones are extremely vulnerable to unauthenticated access. In this paper, we present, EyeVeri, a novel eye-movement-based authentication system for smartphone security protection. Specifically, EyeVeri tracks human eye movement through the built-in front camera and applies the signal processing and pattern matching techniques to explore volitional and non-volitional gaze patterns for access authentication. Through a comprehensive user study, EyeVeri performs well and is a promising approach for smartphone user authentication. We also discuss the evaluation results in-depth and analyze opportunities for future work.",
"Touch screens are an increasingly common feature on personal computing devices, especially smartphones, where size and user interface advantages accrue from consolidating multiple hardware components (keyboard, number pad, etc.) into a single software definable user interface. Oily residues, or smudges, on the touch screen surface, are one side effect of touches from which frequently used patterns such as a graphical password might be inferred. In this paper we examine the feasibility of such smudge attacks on touch screens for smartphones, and focus our analysis on the Android password pattern. We first investigate the conditions (e.g., lighting and camera orientation) under which smudges are easily extracted. In the vast majority of settings, partial or complete patterns are easily retrieved. We also emulate usage situations that interfere with pattern identification, and show that pattern smudges continue to be recognizable. Finally, we provide a preliminary analysis of applying the information learned in a smudge attack to guessing an Android password pattern.",
"Knowledge-based authentication systems generally rely upon users' explicit recollection of passwords, facts, or personal preferences. These systems impose a cognitive burden that often results in forgotten secrets or secrets with poor entropy. We propose an authentication system that instead draws on implicit memory - that is, the unconscious encoding and usage of information. In such a system, a user is initially presented with images of common objects in a casual familiarization task. When the user later authenticates, she is asked to perform a task involving a set of degraded images, some of which are based upon the images in the familiarization task. The prior exposure to those images influences the user's responses in the task, thereby eliciting authentication information. We ran a user study to investigate the plausibility of our system design. Our results suggest that implicit memory has potential as a basis for low-cognitive-overhead, high-stability, knowledge-based authentication.",
"We introduce hand movement, orientation, and grasp (HMOG), a set of behavioral features to continuously authenticate smartphone users. HMOG features unobtrusively capture subtle micro-movement and orientation dynamics resulting from how a user grasps, holds, and taps on the smartphone. We evaluated authentication and biometric key generation (BKG) performance of HMOG features on data collected from 100 subjects typing on a virtual keyboard. Data were collected under two conditions: 1) sitting and 2) walking. We achieved authentication equal error rates (EERs) as low as 7.16% (walking) and 10.05% (sitting) when we combined HMOG, tap, and keystroke features. We performed experiments to investigate why HMOG features perform well during walking. Our results suggest that this is due to the ability of HMOG features to capture distinctive body movements caused by walking, in addition to the hand-movement dynamics from taps. With BKG, we achieved the EERs of 15.1% using HMOG combined with taps. In comparison, BKG using tap, key hold, and swipe features had EERs between 25.7% and 34.2%. We also analyzed the energy consumption of HMOG feature extraction and computation. Our analysis shows that HMOG features extracted at a 16-Hz sensor sampling rate incurred a minor overhead of 7.9% without sacrificing authentication accuracy. Two points distinguish our work from current literature: 1) we present the results of a comprehensive evaluation of three types of features (HMOG, keystroke, and tap) and their combinations under the same experimental conditions and 2) we analyze the features from three perspectives (authentication, BKG, and energy consumption on smartphones).",
"In this paper, we present the design, implementation and evaluation of a user authentication system, Headbanger, for smart head-worn devices, through monitoring the user's unique head-movement patterns in response to an external audio stimulus. Compared to today's solutions, which primarily rely on indirect authentication mechanisms via the user's smartphone, thus cumbersome and susceptible to adversary intrusions, the proposed head-movement based authentication provides an accurate, robust, light-weight and convenient solution. Through extensive experimental evaluation with 95 participants, we show that our mechanism can accurately authenticate users with an average true acceptance rate of 95.57% while keeping the average false acceptance rate of 4.43%. We also show that even simple head-movement patterns are robust against imitation attacks. Finally, we demonstrate our authentication algorithm is rather light-weight: the overall processing latency on Google Glass is around 1.9 seconds.",
"We explore how well the intersection between our own everyday memories and those captured by our smartphones can be used for what we call autobiographical authentication-a challenge-response authentication system that queries users about day-to-day experiences. Through three studies-two on MTurk and one field study-we found that users are good, but make systematic errors at answering autobiographical questions. Using Bayesian modeling to account for these systematic response errors, we derived a formula for computing a confidence rating that the attempting authenticator is the user from a sequence of question-answer responses. We tested our formula against five simulated adversaries based on plausible real-life counterparts. Our simulations indicate that our model of autobiographical authentication generally performs well in assigning high confidence estimates to the user and low confidence estimates to impersonating adversaries."
]
} |
1612.06209 | 2581757756 | Personal devices (e.g. laptops, tablets, and mobile phones) are conventional in daily life and have the ability to store users' private data. The security problems related to these appliances have become a primary concern for both users and researchers. In this paper, we analyse first-person-view videos to develop a personalized user authentication mechanism. Our proposed algorithm generates provisional image-based passwords which benefit a variety of purposes such as unlocking a mobile device or fallback authentication. First, representative frames are extracted from the egocentric videos. Then, they are split into distinguishable segments before a clustering procedure is applied to discard repetitive scenes. The whole process aims to retain memorable images to form the authentication challenges. We integrate eye tracking data to select informative sequences of video frames and suggest a blurriness-based method if an eye-facing camera is not available. To evaluate our system, we perform experiments in different settings including object-interaction activities and traveling contexts. Even though our mechanism produces variable graphical passwords, the log-in effort for the user is comparable with approaches based on static challenges. We verified the authentication challenges in the presence of a random and an informed attacker who is familiar with the environment and observed that the time required and the number of attempts are significantly higher than for the legitimate user, making it possible to detect attacks on the authentication system. | The unique characteristics of human movements can serve as an information source for user identification. Sitová @cite_2 introduced a behavior-based authentication mechanism on smartphones. They record subtle prehensile movements performed by the user when grasping, holding, and tapping to interact with objects.
Specifically, their proposed features are extracted from information captured by accelerometer, gyroscope, and magnetometer during or close to tap events. The authentication process was implemented and evaluated with scaled Manhattan distance, scaled Euclidean distance, and a single-class Support Vector Machine classifier. Another behavioral feature is how human eyes react to stimuli. Using the built-in front camera on a smartphone, Song @cite_28 introduced a novel smartphone authentication mechanism based on eye movements. They track human eye movement and extract gaze patterns when the user is exposed to visual stimuli on the phone screen. | {
"cite_N": [
"@cite_28",
"@cite_2"
],
"mid": [
"2487484013",
"1483327747"
],
"abstract": [
"As mobile technology grows rapidly, the smartphone has become indispensable for transmitting private user data, storing the sensitive corporate files, and conducting secure payment transactions. However, with mobile security research lagging, smartphones are extremely vulnerable to unauthenticated access. In this paper, we present, EyeVeri, a novel eye-movement-based authentication system for smartphone security protection. Specifically, EyeVeri tracks human eye movement through the built-in front camera and applies the signal processing and pattern matching techniques to explore volitional and non-volitional gaze patterns for access authentication. Through a comprehensive user study, EyeVeri performs well and is a promising approach for smartphone user authentication. We also discuss the evaluation results in-depth and analyze opportunities for future work.",
"We introduce hand movement, orientation, and grasp (HMOG), a set of behavioral features to continuously authenticate smartphone users. HMOG features unobtrusively capture subtle micro-movement and orientation dynamics resulting from how a user grasps, holds, and taps on the smartphone. We evaluated authentication and biometric key generation (BKG) performance of HMOG features on data collected from 100 subjects typing on a virtual keyboard. Data were collected under two conditions: 1) sitting and 2) walking. We achieved authentication equal error rates (EERs) as low as 7.16% (walking) and 10.05% (sitting) when we combined HMOG, tap, and keystroke features. We performed experiments to investigate why HMOG features perform well during walking. Our results suggest that this is due to the ability of HMOG features to capture distinctive body movements caused by walking, in addition to the hand-movement dynamics from taps. With BKG, we achieved the EERs of 15.1% using HMOG combined with taps. In comparison, BKG using tap, key hold, and swipe features had EERs between 25.7% and 34.2%. We also analyzed the energy consumption of HMOG feature extraction and computation. Our analysis shows that HMOG features extracted at a 16-Hz sensor sampling rate incurred a minor overhead of 7.9% without sacrificing authentication accuracy. Two points distinguish our work from current literature: 1) we present the results of a comprehensive evaluation of three types of features (HMOG, keystroke, and tap) and their combinations under the same experimental conditions and 2) we analyze the features from three perspectives (authentication, BKG, and energy consumption on smartphones)."
]
} |
1612.06209 | 2581757756 | Personal devices (e.g. laptops, tablets, and mobile phones) are conventional in daily life and have the ability to store users' private data. The security problems related to these appliances have become a primary concern for both users and researchers. In this paper, we analyse first-person-view videos to develop a personalized user authentication mechanism. Our proposed algorithm generates provisional image-based passwords which benefit a variety of purposes such as unlocking a mobile device or fallback authentication. First, representative frames are extracted from the egocentric videos. Then, they are split into distinguishable segments before a clustering procedure is applied to discard repetitive scenes. The whole process aims to retain memorable images to form the authentication challenges. We integrate eye tracking data to select informative sequences of video frames and suggest a blurriness-based method if an eye-facing camera is not available. To evaluate our system, we perform experiments in different settings including object-interaction activities and traveling contexts. Even though our mechanism produces variable graphical passwords, the log-in effort for the user is comparable with approaches based on static challenges. We verified the authentication challenges in the presence of a random and an informed attacker who is familiar with the environment and observed that the time required and the number of attempts are significantly higher than for the legitimate user, making it possible to detect attacks on the authentication system. | The release of smartglasses such as JINS MEME @cite_7 or Google Glass has introduced new opportunities and challenges in user authentication. Li @cite_34 takes advantage of head motions when the user wears smartglasses to recognize the wearer. We usually move our heads in specific patterns when listening to music. Thus, the authors analysed these distinct movements to identify the wearers. | {
"cite_N": [
"@cite_34",
"@cite_7"
],
"mid": [
"2335819971",
"2295333746"
],
"abstract": [
"In this paper, we present the design, implementation and evaluation of a user authentication system, Headbanger, for smart head-worn devices, through monitoring the user's unique head-movement patterns in response to an external audio stimulus. Compared to today's solutions, which primarily rely on indirect authentication mechanisms via the user's smartphone, thus cumbersome and susceptible to adversary intrusions, the proposed head-movement based authentication provides an accurate, robust, light-weight and convenient solution. Through extensive experimental evaluation with 95 participants, we show that our mechanism can accurately authenticate users with an average true acceptance rate of 95.57% while keeping the average false acceptance rate of 4.43%. We also show that even simple head-movement patterns are robust against imitation attacks. Finally, we demonstrate our authentication algorithm is rather light-weight: the overall processing latency on Google Glass is around 1.9 seconds.",
"To enable precise detection of mental and physical states of users in a daily life, we have been developing an eyewear to measure eye and body movement in an unrestricted way. The horizontal and vertical EOG (electrooculogram) signals are measured and amplified with three metal dry electrodes placed near nasion and both sides of rhinion, of which positions correspond to the bridge and nose pads of eyewear, respectively. The user's mental states like drowsiness, sleepiness, fatigue, or interest to objects can be identified by the movements and blinking of the eyes extracted from the measured EOG. And the six-axis motion sensor (three-axis accelerometer and three-axis gyroscope) mounted in the eyewear measures the body motion. As the sensor located near the head is on the body axis, this eyewear is suitable to measure user's movement or shift of center of gravity during physical exercise with a high precision. The measured signals are used to extract various events of eye and body movement by the mounted microcontroller chip, or can be transmitted to the external devices via Bluetooth communication. This device can enable you to look into \"yourself\", as well as outer scenes. In this presentation, the outline of the eyewear is introduced and some possible applications are shown."
]
} |
1612.06209 | 2581757756 | Personal devices (e.g. laptops, tablets, and mobile phones) are conventional in daily life and have the ability to store users' private data. The security problems related to these appliances have become a primary concern for both users and researchers. In this paper, we analyse first-person-view videos to develop a personalized user authentication mechanism. Our proposed algorithm generates provisional image-based passwords which benefit a variety of purposes such as unlocking a mobile device or fallback authentication. First, representative frames are extracted from the egocentric videos. Then, they are split into distinguishable segments before a clustering procedure is applied to discard repetitive scenes. The whole process aims to retain memorable images to form the authentication challenges. We integrate eye tracking data to select informative sequences of video frames and suggest a blurriness-based method if an eye-facing camera is not available. To evaluate our system, we perform experiments in different settings including object-interaction activities and traveling contexts. Even though our mechanism produces variable graphical passwords, the log-in effort for the user is comparable with approaches based on static challenges. We verified the authentication challenges in the presence of a random and an informed attacker who is familiar with the environment and observed that the time required and the number of attempts are significantly higher than for the legitimate user, making it possible to detect attacks on the authentication system. | On the other hand, a pair of smart-glasses equipped with a near-eye screen is able to extend the interface of other devices. Winkler @cite_5 suggested to employ a numpad layout displayed only on a private near-eye display during the authentication phase. 
This approach is a countermeasure against smudge attacks, shoulder-surfing, and camera attacks because the user inputs the password on an almost empty phone interface. Our proposed approach provides this feature because it shows a new challenge for each log-in attempt. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2076463985"
],
"abstract": [
"This paper presents Glass Unlock, a novel concept using smart glasses for smartphone unlocking, which is theoretically secure against smudge attacks, shoulder-surfing, and camera attacks. By introducing an additional temporary secret like the layout of digits that is only shown on the private near-eye display, attackers cannot make sense of the observed input on the almost empty phone screen. We report a user study with three alternative input methods and compare them to current state-of-the-art systems. Our findings show that Glass Unlock only moderately increases authentication times and that users favor the input method yielding the slowest input times as it avoids focus switches between displays."
]
} |
1612.06209 | 2581757756 | Personal devices (e.g. laptops, tablets, and mobile phones) are conventional in daily life and have the ability to store users' private data. The security problems related to these appliances have become a primary concern for both users and researchers. In this paper, we analyse first-person-view videos to develop a personalized user authentication mechanism. Our proposed algorithm generates provisional image-based passwords which benefit a variety of purposes such as unlocking a mobile device or fallback authentication. First, representative frames are extracted from the egocentric videos. Then, they are split into distinguishable segments before a clustering procedure is applied to discard repetitive scenes. The whole process aims to retain memorable images to form the authentication challenges. We integrate eye tracking data to select informative sequences of video frames and suggest a blurriness-based method if an eye-facing camera is not available. To evaluate our system, we perform experiments in different settings including object-interaction activities and traveling contexts. Even though our mechanism produces variable graphical passwords, the log-in effort for the user is comparable with approaches based on static challenges. We verified the authentication challenges in the presence of a random and an informed attacker who is familiar with the environment and observed that the time required and the number of attempts are significantly higher than for the legitimate user, making it possible to detect attacks on the authentication system. | Visual context recognition is a further area closely related to our research. Furnari @cite_25 analysed egocentric videos to facilitate the recognition of personal contexts. They recorded the videos with different camera models in various locations. 
Each image is represented with a holistic descriptor (or the ``gist'' of the scene), local features (Scale-invariant feature transform or SIFT) encoded by Improved Fisher Vectors, and values from the penultimate layer of Convolutional Neural Networks. Castro @cite_6 combined Convolutional Neural Networks and Random Decision Forests to identify human activities from egocentric images. Even though our work focuses on personal surroundings, we do not define a fixed set of contexts to be identified. In particular, our image selection method, which relies on segmentation and clustering, aims to group video frames based on their visual characteristics without prior knowledge of the contexts and contents. | {
"cite_N": [
"@cite_25",
"@cite_6"
],
"mid": [
"2240290602",
"1986136466"
],
"abstract": [
"Wearable cameras can gather first-person images of the environment, opening new opportunities for the development of systems able to assist the users in their daily life. This paper studies the problem of recognizing personal contexts from images acquired by wearable devices, which finds useful applications in daily routine analysis and stress monitoring. To assess the influence of different device-specific features, such as the Field Of View and the wearing modality, a dataset of five personal contexts is acquired using four different devices. We propose a benchmark classification pipeline which combines a one-class classifier to detect the negative samples (i.e., images not representing any of the personal contexts under analysis) with a classic one-vs-one multi-class classifier to discriminate among the contexts. Several experiments are designed to compare the performances of many state-of-the-art representations for object and scene classification when used with data acquired by different wearable devices.",
"We present a method to analyze images taken from a passive egocentric wearable camera along with the contextual information, such as time and day of week, to learn and predict everyday activities of an individual. We collected a dataset of 40,103 egocentric images over a 6 month period with 19 activity classes and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities. Classification is conducted using a Convolutional Neural Network (CNN) with a classification method we introduce called a late fusion ensemble. This late fusion ensemble incorporates relevant contextual information and increases our classification accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a person's activity across the 19 activity classes. We also demonstrate some promising results from two additional users by fine-tuning the classifier with one day of training data."
]
} |
1612.06062 | 2949341133 | In this work we propose a novel representation learning model which computes semantic representations for tweets accurately. Our model systematically exploits the chronologically adjacent tweets ('context') from users' Twitter timelines for this task. Further, we make our model user-aware so that it can do well in modeling the target tweet by exploiting the rich knowledge about the user such as the way the user writes the post and also summarizing the topics on which the user writes. We empirically demonstrate that the proposed models outperform the state-of-the-art models in predicting the user profile attributes like spouse, education and job by 19.66%, 2.27% and 2.22% respectively. | @cite_1 adapt Word2Vec to learn document representations which are good at predicting the words present in the document. As seen in , for short documents like tweets, the model tends to learn poor document representations as the vector relies too much on the document content, resulting in overfitting. @cite_5 learn document representations using word context (same as @cite_1 ) along with document stream context in a hierarchical fashion. This work inspired us to learn tweet representations using user specific Twitter streams. | {
"cite_N": [
"@cite_5",
"@cite_1"
],
"mid": [
"2250460709",
"2949547296"
],
"abstract": [
"We consider the problem of learning distributed representations for documents in data streams. The documents are represented as low-dimensional vectors and are jointly learned with distributed vector representations of word tokens using a hierarchical framework with two embedded neural language models. In particular, we exploit the context of documents in streams and use one of the language models to model the document sequences, and the other to model word sequences within them. The models learn continuous vector representations for both word tokens and documents such that semantically similar documents and words are close in a common vector space. We discuss extensions to our model, which can be applied to personalized recommendation and social relationship mining by adding further user layers to the hierarchy, thus learning user-specific vectors to represent individual preferences. We validated the learned representations on a public movie rating data set from MovieLens, as well as on a large-scale Yahoo News data comprising three months of user activity logs collected on Yahoo servers. The results indicate that the proposed model can learn useful representations of both documents and word tokens, outperforming the current state-of-the-art by a large margin.",
"Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks."
]
} |
1612.06140 | 2580723344 | Machine translation systems are very sensitive to the domains they were trained on. Several domain adaptation techniques have been deeply studied. We propose a new technique for neural machine translation (NMT) that we call domain control which is performed at runtime using a unique neural network covering multiple domains. The presented approach shows quality improvements when compared to dedicated domains translating on any of the covered domains and even on out-of-domain data. In addition, model parameters do not need to be re-estimated for each domain, making this effective to real use cases. Evaluation is carried out on English-to-French translation for two different testing scenarios. We first consider the case where an end-user performs translations on a known domain. Secondly, we consider the scenario where the domain is not known and predicted at the sentence level before translating. Results show consistent accuracy improvements for both conditions. | A lot of work has already been done for domain adaptation in Statistical Machine Translation. The approaches vary from in-domain data selection based methods @cite_11 @cite_3 @cite_15 to in-domain models mixture-based methods @cite_1 @cite_4 @cite_0 . | {
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_15",
"@cite_11"
],
"mid": [
"2137387514",
"2132001515",
"2117278770",
"2408503330",
"2057589672",
"2356613612"
],
"abstract": [
"The special challenge of the WMT 2007 shared task was domain adaptation. We took this opportunity to experiment with various ways of adapting a statistical machine translation systems to a special domain (here: news commentary), when most of the training data is from a different domain (here: European Parliament speeches). This paper also gives a description of the submission of the University of Edinburgh to the shared task.",
"We describe a mixture-model approach to adapting a Statistical Machine Translation System for new domains, using weights that depend on text distances to mixture components. We investigate a number of variants on this approach, including cross-domain versus dynamic adaptation; linear versus loglinear mixtures; language and translation model adaptation; different methods of assigning weights; and granularity of the source unit being adapted to. The best methods achieve gains of approximately one BLEU percentage point over a state-of-the art non-adapted baseline system.",
"We address the problem of selecting non-domain-specific language model training data to build auxiliary language models for use in tasks such as machine translation. Our approach is based on comparing the cross-entropy, according to domain-specific and non-domain-specific language models, for each sentence of the text source used to produce the latter language model. We show that this produces better language models, trained on less data, than both random data selection and two other previously proposed methods.",
"This paper presents methods to combine large language models trained from diverse text sources and applies them to a state-of-the-art French–English and Arabic–English machine translation system. We show gains of over 2 BLEU points over a strong baseline by using continuous space language models in re-ranking.",
"In this paper we present a scheme to select relevant subsets of sentences from a large generic corpus such as text acquired from the web. A relative entropy (R.E) based criterion is used to incrementally select sentences whose distribution matches the domain of interest. Experimental results show that by using the proposed subset selection scheme we can get significant performance improvement in both Word Error Rate (WER) and Perplexity (PPL) over the models built from the entire web-corpus by using just 10% of the data. In addition incremental data selection enables us to achieve significant reduction in the vocabulary size as well as number of n-grams in the adapted language model. To demonstrate the gains from our method we provide a comparative analysis with a number of methods proposed in recent language modeling literature for cleaning up text.",
"In this paper we present experiments concerning translation model adaptation for statistical machine translation. We develop a method to adapt translation models using information retrieval. The approach selects sentences similar to the test set to form an adapted training corpus. The method allows a better use of additionally available out-of-domain training data or finds in-domain data in a mixed corpus. The adapted translation models significantly improve the translation performance compared to competitive baseline systems."
]
} |
1612.05534 | 2925483267 | We tackle the problem of a combinatorial classification of finite metric spaces via their fundamental polytopes, as suggested by Vershik in 2010. In this paper we consider a hyperplane arrangement associated to every split pseudometric and, for tree-like metrics, we study the combinatorics of its underlying matroid. We give explicit formulas for the face numbers of fundamental polytopes and Lipschitz polytopes of all tree-like metrics, and we characterize the metric trees for which the fundamental polytope is simplicial. | Lipschitz polytopes of finite metric spaces are weighted digraph polyhedra in the sense of Joswig and Loho @cite_20, who give some general results about dimension, face structure and projections [JL, 2.1, 2.2, 2.6] but mostly focus on the case of "braid cones", which does not apply to our context. We close by mentioning that the polyhedra considered, e.g., in the above-mentioned work of Hirai [Hirai, Formula (4.1)] are different from the Lipschitz polytopes we consider here: in fact, such polyhedra are (translated) zonotopes for all split-decomposable metrics [Hirai, Remark 4.10], while -- for instance -- the Lipschitz polytope of any split-decomposable metric on @math points is only a zonotope if the associated split system is compatible. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2964067385"
],
"abstract": [
"This paper is about the combinatorics of finite point configurations in the tropical projective space or, dually, of arrangements of finitely many tropical hyperplanes. Moreover, arrangements of finitely many tropical halfspaces can be considered via coarsenings of the resulting polyhedral decompositions of R^d. This leads to natural cell decompositions of the tropical projective space TP_min^{d-1}. Our method is to employ a known class of ordinary convex polyhedra naturally associated with weighted digraphs. This way we can relate to and use results from combinatorics and optimization. One outcome is the solution of a conjecture of Develin and Yu (2007)."
]
} |
1612.05675 | 2586000404 | Software deobfuscation is a crucial activity in security analysis and especially, in malware analysis. While standard static and dynamic approaches suffer from well-known shortcomings, Dynamic Symbolic Execution (DSE) has recently been proposed as an interesting alternative, more robust than static analysis and more complete than dynamic analysis. Yet, DSE addresses certain kinds of questions encountered by a reverser, namely feasibility questions. Many issues arising during reverse, e.g. detecting protection schemes such as opaque predicates, fall into the category of infeasibility questions. In this article, we present the Backward-Bounded DSE, a generic, precise, efficient and robust method for solving infeasibility questions. We demonstrate the benefit of the method for opaque predicates and call stack tampering, and give some insight for its usage for some other protection schemes. Especially, the technique has successfully been used on state-of-the-art packers as well as on the government-grade X-Tunnel malware -- allowing its entire deobfuscation. Backward-Bounded DSE does not supersede existing DSE approaches, but rather complements them by addressing infeasibility questions in a scalable and precise manner. Following this line, we propose sparse disassembly, a combination of Backward-Bounded DSE and static disassembly able to enlarge dynamic disassembly in a guaranteed way, hence getting the best of dynamic and static disassembly. This work paves the way for robust, efficient and precise disassembly tools for heavily-obfuscated binaries. | DSE and deobfuscation. Dynamic Symbolic Execution has been used in multiple situations to address obfuscation, generally for discovering new paths in the code to analyze. Recently, Debray et al. @cite_49 @cite_16 used DSE against conditional and indirect jumps, VM and return-oriented programming on various packers and malware in order to prune the obfuscation from the CFG. 
Mizuhito also addressed exception-based obfuscation using such techniques @cite_34 . Recent work from Ming @cite_4 used (forward) DSE to detect different classes of opaque predicates. Yet, their technique has difficulties scaling due to the trace length (this is consistent with experiments in Section ). Indeed, by doing it in a forward manner they needlessly have to deal with the whole path predicate for each predicate to check. As a consequence, they make use of taint analysis to counterbalance this, which, being far from perfect, brings additional problems (under-tainting/over-tainting). | {
"cite_N": [
"@cite_34",
"@cite_16",
"@cite_4",
"@cite_49"
],
"mid": [
"",
"1538186256",
"2008453980",
"2010417554"
],
"abstract": [
"",
"Malicious software are usually obfuscated to avoid detection and resist analysis. When new malware is encountered, such obfuscations have to be penetrated or removed (\"deobfuscated\") in order to understand the internal logic of the code and devise countermeasures. This paper discusses a generic approach for deobfuscation of obfuscated executable code. Our approach does not make any assumptions about the nature of the obfuscations used, but instead uses semantics-preserving program transformations to simplify away obfuscation code. We have applied a prototype implementation of our ideas to a variety of different kinds of obfuscation, including emulation-based obfuscation, emulation-based obfuscation with runtime code unpacking, and return-oriented programming. Our experimental results are encouraging and suggest that this approach can be effective in extracting the internal logic from code obfuscated using a variety of obfuscation techniques, including tools such as Themida that previous approaches could not handle.",
"Opaque predicates have been widely used to insert superfluous branches for control flow obfuscation. Opaque predicates can be seamlessly applied together with other obfuscation methods such as junk code to turn reverse engineering attempts into arduous work. Previous efforts in detecting opaque predicates are far from mature. They are either ad hoc, designed for a specific problem, or have a considerably high error rate. This paper introduces LOOP, a Logic Oriented Opaque Predicate detection tool for obfuscated binary code. Being different from previous work, we do not rely on any heuristics; instead we construct general logical formulas, which represent the intrinsic characteristics of opaque predicates, by symbolic execution along a trace. We then solve these formulas with a constraint solver. The result accurately answers whether the predicate under examination is opaque or not. In addition, LOOP is obfuscation resilient and able to detect previously unknown opaque predicates. We have developed a prototype of LOOP and evaluated it with a range of common utilities and obfuscated malicious programs. Our experimental results demonstrate the efficacy and generality of LOOP. By integrating LOOP with code normalization for matching metamorphic malware variants, we show that LOOP is an appealing complement to existing malware defenses.",
"Symbolic and concolic execution find important applications in a number of security-related program analyses, including analysis of malicious code. However, malicious code tend to very often be obfuscated, and current concolic analysis techniques have trouble dealing with some of these obfuscations, leading to imprecision and/or excessive resource usage. This paper discusses three such obfuscations: two of these are already found in obfuscation tools used by malware, while the third is a simple variation on an existing obfuscation technique. We show empirically that existing symbolic analyses are not robust against such obfuscations, and propose ways in which the problems can be mitigated using a combination of fine-grained bit-level taint analysis and architecture-aware constraint generations. Experimental results indicate that our approach is effective in allowing symbolic and concolic execution to handle such obfuscations."
]
} |
1612.05675 | 2586000404 | Software deobfuscation is a crucial activity in security analysis and especially, in malware analysis. While standard static and dynamic approaches suffer from well-known shortcomings, Dynamic Symbolic Execution (DSE) has recently been proposed as an interesting alternative, more robust than static analysis and more complete than dynamic analysis. Yet, DSE addresses certain kinds of questions encountered by a reverser, namely feasibility questions. Many issues arising during reverse, e.g. detecting protection schemes such as opaque predicates, fall into the category of infeasibility questions. In this article, we present the Backward-Bounded DSE, a generic, precise, efficient and robust method for solving infeasibility questions. We demonstrate the benefit of the method for opaque predicates and call stack tampering, and give some insight for its usage for some other protection schemes. Especially, the technique has successfully been used on state-of-the-art packers as well as on the government-grade X-Tunnel malware -- allowing its entire deobfuscation. Backward-Bounded DSE does not supersede existing DSE approaches, but rather complements them by addressing infeasibility questions in a scalable and precise manner. Following this line, we propose sparse disassembly, a combination of Backward-Bounded DSE and static disassembly able to enlarge dynamic disassembly in a guaranteed way, hence getting the best of dynamic and static disassembly. This work paves the way for robust, efficient and precise disassembly tools for heavily-obfuscated binaries. | Disassembly. Standard disassembly techniques have already been discussed in Section . Advanced static techniques include recursive-like approaches extended with patterns dedicated to difficult constructs @cite_12 . Advanced dynamic techniques take advantage of DSE in order to discover more parts of the code @cite_38 @cite_52 . 
Binary-level semantic program analysis methods @cite_5 @cite_50 @cite_46 @cite_21 @cite_30 do allow, in principle, a guaranteed exhaustive disassembly. Even if some interesting case-studies have been conducted, these methods still face big issues in terms of scaling and robustness. Especially, self-modification is very hard to deal with. The domain is recent, and only very little work exists in that direction @cite_33 @cite_37 . Several works attempt to combine static analysis and dynamic analysis in order to get better disassembly. Especially, @cite_45 take advantage of the dynamic trace to perform syntactic static disassembly of self-modifying programs. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_37",
"@cite_33",
"@cite_21",
"@cite_52",
"@cite_45",
"@cite_50",
"@cite_5",
"@cite_46",
"@cite_12"
],
"mid": [
"",
"2145083214",
"",
"",
"",
"2095997776",
"",
"2118246192",
"2030906223",
"2160248455",
""
],
"abstract": [
"",
"Automatic analysis of malicious binaries is necessary in order to scale with the rapid development and recovery of malware found in the wild. The results of automatic analysis are useful for creating defense systems and understanding the current capabilities of attackers. We propose an approach for automatic dissection of malicious binaries which can answer fundamental questions such as what behavior they exhibit, what are the relationships between their inputs and outputs, and how an attacker may be using the binary. We implement our approach in a system called BitScope. At the core of BitScope is a system which allows us to execute binaries with symbolic inputs. Executing with symbolic inputs allows us to reason about code paths without constraining the analysis to a particular input value.",
"",
"",
"",
"Verification is usually performed on a high-level view of the software, either specification or program source code. However, in certain circumstances verification is more relevant when performed at the machine-code level. This paper focuses on automatic test data generation from a stand-alone executable. Low-level analysis is much more difficult than high-level analysis since even the control-flow graph is not available and bit-level instructions have to be modelled faithfully. The paper shows how ‘path-based’ structural test data generation can be adapted from structured language to machine code, using both state-of-the-art technologies and innovative techniques. The results have been implemented in a tool named OSMOSE and encouraging experiments have been conducted. Copyright © 2010 John Wiley & Sons, Ltd. (This paper is an extended version of results presented at ICST 2008 [1].)",
"",
"Most closed source drivers installed on desktop systems today have never been exposed to formal analysis. Without vendor support, the only way to make these often hastily written, yet critical programs accessible to static analysis is to directly work at the binary level. In this paper, we describe a full architecture to perform static analysis on binaries that does not rely on unsound external components such as disassemblers. To precisely calculate data and function pointers without any type information, we introduce Bounded Address Tracking, an abstract domain that is tailored towards machine code and is path sensitive up to a tunable bound assuring termination. We implemented Bounded Address Tracking in our binary analysis platform Jakstab and used it to verify API specifications on several Windows device drivers. Even without assumptions about executable layout and procedures as made by state of the art approaches [1], we achieve more precise results on a set of drivers from the Windows DDK. Since our technique does not require us to compile drivers ourselves, we also present results from analyzing over 300 closed source drivers.",
"Over the last seven years, we have developed static-analysis methods to recover a good approximation to the variables and dynamically allocated memory objects of a stripped executable, and to track the flow of values through them. The article presents the algorithms that we developed, explains how they are used to recover Intermediate Representations (IRs) from executables that are similar to the IRs that would be available if one started from source code, and describes their application in the context of program understanding and automated bug hunting. Unlike algorithms for analyzing executables that existed prior to our work, the ones presented in this article provide useful information about memory accesses, even in the absence of debugging information. The ideas described in the article are incorporated in a tool for analyzing Intel x86 executables, called CodeSurfer x86. CodeSurfer x86 builds a system dependence graph for the program, and provides a GUI for exploring the graph by (i) navigating its edges, and (ii) invoking operations, such as forward slicing, backward slicing, and chopping, to discover how parts of the program can impact other parts. To assess the usefulness of the IRs recovered by CodeSurfer x86 in the context of automated bug hunting, we built a tool on top of CodeSurfer x86, called Device-Driver Analyzer for x86 (DDA x86), which analyzes device-driver executables for bugs. Without the benefit of either source code or symbol-table debugging information, DDA x86 was able to find known bugs (that had been discovered previously by source-code analysis tools), along with useful error traces, while having a low false-positive rate. DDA x86 is the first known application of program analysis verification techniques to industrial executables.",
"While the reconstruction of the control-flow graph of a binary has received wide attention, the challenge of categorizing code into defect-free and possibly incorrect remains a challenge for current static analyses. We present the intermediate language RREIL and a corresponding analysis framework that is able to infer precise numeric information on variables without resorting to an expensive analysis at the bit-level. Specifically, we propose a hierarchy of three interfaces to abstract domains, namely for inferring memory layout, bit-level information and numeric information. Our framework can be easily enriched with new abstract domains at each level. We demonstrate the extensibility of our framework by detailing a novel acceleration technique (a so-called widening) as an abstract domain that helps to find precise fix points of loops.",
""
]
} |
1612.05568 | 2565075116 | We examine a generalised randomised response (RR) technique in the context of differential privacy and examine the optimality of such mechanisms. Strict and relaxed differential privacy are considered for binary outputs. By examining the error of a statistical estimator, we present closed solutions for the optimal mechanism(s) in both cases. The optimal mechanism is also given for the specific case of the original RR technique as introduced by Warner in 1965. | The application of differential privacy to randomised response has been limited to date. @cite_16 examined using randomised response to differentially privately collect data, although their analysis only considered strict @math -differential privacy and a comparison of its efficiency with respect to the Laplace mechanism, a mechanism popular in the differential privacy literature. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2398203045"
],
"abstract": [
"This paper studies how to enforce differential privacy by using the randomized response in the data collection scenario. Given a client’s value, the randomized algorithm executed by the client reports to the untrusted server a perturbed value. The use of randomized response in surveys enables easy estimations of accurate population statistics while preserving the privacy of the individual respondents. We compare the randomized response with the standard Laplace mechanism which is based on query-output independent adding of Laplace noise. Our research starts from the simple case with one single binary attribute and extends to the general case with multiple polychotomous attributes. We measure utility preservation in terms of the mean squared error of the estimate for various calculations including individual value estimate, proportion estimate, and various derived statistics. We theoretically derive the explicit formula of the mean squared error of various derived statistics based on the randomized response theory and prove the randomized response outperforms the Laplace mechanism. We evaluate our algorithms on YesiWell database including sensitive biomarker data and social network relationships of patients. Empirical evaluation results show effectiveness of our proposed techniques. Especially the use of the randomized response for collecting data incurs less utility loss than the output perturbation when the sensitivity of functions is high."
]
} |
1612.05028 | 2585619359 | Ontohub is a repository engine for managing distributed heterogeneous ontologies. The distributed nature enables communities to share and exchange their contributions easily. The heterogeneous nature makes it possible to integrate ontologies written in various ontology languages. Ontohub supports a wide range of formal logical and ontology languages, as well as various structuring and modularity constructs and inter-theory (concept) mappings, building on the OMG-standardized DOL language. Ontohub repositories are organised as Git repositories, thus inheriting all features of this popular version control system. Moreover, Ontohub is the first repository engine meeting a substantial amount of the requirements formulated in the context of the Open Ontology Repository (OOR) initiative, including an API for federation as well as support for logical inference and axiom selection. | Existing ontology resources on the web include search engines like Swoogle, Watson, and Sindice. They concentrate on (full-text and structured) search and querying. Ontology repositories also provide persistent storage and maintenance. TONES @cite_1 is a repository for OWL @cite_2 ontologies that provides some metrics, as well as an OWL sublanguage analysis. BioPortal @cite_13 is a repository that originates in the biomedical domain, but now has instances for various domains. Beyond browsing and searching, it provides means for commenting and aligning ontologies. Besides OWL, also related languages like OBO @cite_11 are supported. The NeOn Toolkit @cite_12 supports searching, selecting, comparing, transforming, aligning and integrating ontologies. It is based on the OWL API and is no longer actively maintained. | {
"cite_N": [
"@cite_1",
"@cite_2",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"1678959094",
"2131660156",
"2553692388",
"2113142309"
],
"abstract": [
"",
"We describe an extension of the description logic underlying OWL-DL, SHOIN, with a number of expressive means that we believe will make it more useful in practice. Roughly speaking, we extend SHOIN with all expressive means that were suggested to us by ontology developers as useful additions to OWL-DL, and which, additionally, do not affect its decidability and practicability. We consider complex role inclusion axioms of the form R o S ⊑ R or S o R ⊑ R to express propagation of one property along another one, which have proven useful in medical terminologies. Furthermore, we extend SHOIN with reflexive, antisymmetric, and irreflexive roles, disjoint roles, a universal role, and constructs ∃R. Self, allowing, for instance, the definition of concepts such as a \"narcist\". Finally, we consider negated role assertions in Aboxes and qualified number restrictions. The resulting logic is called SROIQ. We present a rather elegant tableau-based reasoning algorithm: it combines the use of automata to keep track of universal value restrictions with the techniques developed for SHOIQ. The logic SROIQ has been adopted as the logical basis for the next iteration of OWL, OWL 1.1.",
"Biomedical ontologies provide essential domain knowledge to drive data integration, information retrieval, data annotation, natural-language processing and decision support. BioPortal (http://bioportal.bioontology.org) is an open repository of biomedical ontologies that provides access via Web services and Web browsers to ontologies developed in OWL, RDF, OBO format and Protege frames. BioPortal functionality includes the ability to browse, search and visualize ontologies. The Web interface also facilitates community-based participation in the evaluation and evolution of ontology content by providing features to add notes to ontology terms, mappings between terms and ontology reviews based on criteria such as usability, domain coverage, quality of content, and documentation and support. BioPortal also enables integrated search of biomedical data resources such as the Gene Expression Omnibus (GEO), ClinicalTrials.gov, and ArrayExpress, through the annotation and indexing of these resources with ontologies in BioPortal. Thus, BioPortal not only provides investigators, clinicians, and developers ‘one-stop shopping’ to programmatically access biomedical ontologies, but also provides support to integrate data from a variety of biomedical resources.",
"Ontologies are considered a key technology enabling semantic interoperability and integration of data and processes. We are now entering a phase of knowledge system development, in which ontologies are produced in larger numbers and exhibit greater complexity. Also we have witnessed an increasing need for technologies that enable the reuse of existing (Semantic) Web resources from within the ontology engineering environment itself. In the NeOn project we aim at advancing the state of the art in using ontologies for large-scale semantic applications. This is realized by providing an infrastructure for networked ontology management and engineering capable of suiting the community’s needs [1]. The heart of this infrastructure is the NeOn Toolkit for engineering contextualized networked ontologies and semantic applications. The shift from closed semantic applications, characterized by a monolithic corporate ontology, to open semantic applications, characterized by networks of ontologies, implies that a number of other aspects like dealing with context, collaboration or data and web integration become crucial. The NeOn Toolkit was built with the web-centric elements of semantic technologies in mind: much like Web 2.0 environments emphasize distributed content production, the NeOn Toolkit features methods and tools for managing knowledge that is distributed, heterogeneous, contextualized, and developed collaboratively.",
"The value of any kind of data is greatly enhanced when it exists in a form that allows it to be integrated with other data. One approach to integration is through the annotation of multiple bodies of data using common controlled vocabularies or 'ontologies'. Unfortunately, the very success of this approach has led to a proliferation of ontologies, which itself creates obstacles to integration. The Open Biomedical Ontologies (OBO) consortium is pursuing a strategy to overcome this problem. Existing OBO ontologies, including the Gene Ontology, are undergoing coordinated reform, and new ontologies are being created on the basis of an evolving set of shared principles governing ontology development. The result is an expanding family of ontologies designed to be interoperable and logically well formed and to incorporate accurate representations of biological reality. We describe this OBO Foundry initiative and provide guidelines for those who might wish to become involved."
]
} |
1612.05086 | 2574140004 | Mini-batch stochastic gradient descent and variants thereof have become standard for large-scale empirical risk minimization like the training of neural networks. These methods are usually used with a constant batch size chosen by simple empirical inspection. The batch size significantly influences the behavior of the stochastic optimization algorithm, though, since it determines the variance of the gradient estimates. This variance also changes over the optimization process; when using a constant batch size, stability and convergence is thus often enforced by means of a (manually tuned) decreasing learning rate schedule. We propose a practical method for dynamic batch size adaptation. It estimates the variance of the stochastic gradients and adapts the batch size to decrease the variance proportionally to the value of the objective function, removing the need for the aforementioned learning rate decrease. In contrast to recent related work, our algorithm couples the batch size to the learning rate, directly reflecting the known relationship between the two. On popular image classification benchmarks, our batch size adaptation yields faster optimization convergence, while simultaneously simplifying learning rate tuning. A TensorFlow implementation is available. | The dynamic adaption of batch sizes has already attracted attention in other recent works. @cite_2 derive decreasing series of bounds on the gradient variance that provably yield fast convergence rates with a learning rate, showing that an increasing batch size can replace a decreasing learning rate. To realize these bounds in practice, they propose to increase the batch size by a pre-specified constant factor in each iteration, without adaptation to (an estimate of) the gradient variance. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2107438106"
],
"abstract": [
"Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning."
]
} |
1612.05234 | 2585343022 | We introduce the concept of a Visual Compiler that generates a scene specific pedestrian detector and pose estimator without any pedestrian observations. Given a single image and auxiliary scene information in the form of camera parameters and geometric layout of the scene, the Visual Compiler first infers geometrically and photometrically accurate images of humans in that scene through the use of computer graphics rendering. Using these renders we learn a scene-and-region specific spatially-varying fully convolutional neural network, for simultaneous detection, pose estimation and segmentation of pedestrians. We demonstrate that when real human annotated data is scarce or non-existent, our data generation strategy can provide an excellent solution for bootstrapping human detection and pose estimation. Experimental results show that our approach outperforms off-the-shelf state-of-the-art pedestrian detectors and pose estimators that are trained on real data. | The use of synthetic models has been explored for a variety of computer vision tasks, typically in the context of data augmentation or domain adaptation for object classification. Aubry @cite_32 posed object detection as a 2D-3D alignment problem and learned exemplar classifiers from 3D models to align and retrieve the models that best matches the viewpoint of 2D objects in images. Vasquez @cite_16 combined synthetic pedestrian data with real pedestrian data to generate robust real world detectors. Pishchulin @cite_12 generated pedestrian samples with realistic appearance and backgrounds while modifying body shape and pose using 3D models to augment their real training data for pose estimation. These techniques demonstrated that the performance of visual classifiers can be improved by augmenting real data with a large amount of synthetic data. We emphasize here that we operate in a different regime where for augmentation or adaptation. | {
"cite_N": [
"@cite_16",
"@cite_32",
"@cite_12"
],
"mid": [
"2033547469",
"2010625607",
"2073246097"
],
"abstract": [
"Pedestrian detection is of paramount interest for many applications. Most promising detectors rely on discriminatively learnt classifiers, i.e., trained with annotated samples. However, the annotation step is a human intensive and subjective task worth to be minimized. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? Conducted experiments show that virtual-world based training can provide excellent testing accuracy in real world, but it can also suffer the data set shift problem as real-world based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy than when training with many human-provided pedestrian annotations and testing with real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector.",
"This paper poses object category detection in images as a type of 2D-to-3D alignment problem, utilizing the large quantities of 3D CAD models that have been made publicly available online. Using the \"chair\" class as a running example, we propose an exemplar-based 3D category representation, which can explicitly model chairs of different styles as well as the large variation in viewpoint. We develop an approach to establish part-based correspondences between 3D CAD models and real photographs. This is achieved by (i) representing each 3D model using a set of view-dependent mid-level visual elements learned from synthesized views in a discriminative fashion, (ii) carefully calibrating the individual element detectors on a common dataset of negative images, and (iii) matching visual elements to the test image allowing for small mutual deformations but preserving the viewpoint and style constraints. We demonstrate the ability of our system to align 3D models with 2D objects in the challenging PASCAL VOC images, which depict a wide variety of chairs in complex scenes.",
"State-of-the-art methods for human detection and pose estimation require many training samples for best performance. While large, manually collected datasets exist, the captured variations w.r.t. appearance, shape and pose are often uncontrolled thus limiting the overall performance. In order to overcome this limitation we propose a new technique to extend an existing training set that allows to explicitly control pose and shape variations. For this we build on recent advances in computer graphics to generate samples with realistic appearance and background while modifying body shape and pose. We validate the effectiveness of our approach on the task of articulated human detection and articulated pose estimation. We report close to state of the art results on the popular Image Parsing [25] human pose estimation benchmark and demonstrate superior performance for articulated human detection. In addition we define a new challenge of combined articulated human detection and pose estimation in real-world scenes."
]
} |
1612.05234 | 2585343022 | We introduce the concept of a Visual Compiler that generates a scene specific pedestrian detector and pose estimator without any pedestrian observations. Given a single image and auxiliary scene information in the form of camera parameters and geometric layout of the scene, the Visual Compiler first infers geometrically and photometrically accurate images of humans in that scene through the use of computer graphics rendering. Using these renders we learn a scene-and-region specific spatially-varying fully convolutional neural network, for simultaneous detection, pose estimation and segmentation of pedestrians. We demonstrate that when real human annotated data is scarce or non-existent, our data generation strategy can provide an excellent solution for bootstrapping human detection and pose estimation. Experimental results show that our approach outperforms off-the-shelf state-of-the-art pedestrian detectors and pose estimators that are trained on real data. | The use of synthetic models has been explored for a variety of computer vision tasks, typically in the context of data augmentation or domain adaptation for object classification. Aubry @cite_32 posed object detection as a 2D-3D alignment problem and learned exemplar classifiers from 3D models to align and retrieve the models that best match the viewpoint of 2D objects in images. Vasquez @cite_16 combined synthetic pedestrian data with real pedestrian data to generate robust real world detectors. Pishchulin @cite_12 generated pedestrian samples with realistic appearance and backgrounds while modifying body shape and pose using 3D models to augment their real training data for pose estimation. These techniques demonstrated that the performance of visual classifiers can be improved by augmenting real data with a large amount of synthetic data. We emphasize here that we operate in a different regime where no real data is available for augmentation or adaptation. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_7",
"@cite_17",
"@cite_6",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_11"
],
"mid": [
"",
"2953106684",
"2156547346",
"",
"2153185908",
"2159386181",
"2161969291",
"2950561226",
"2950167387",
"2200528286"
],
"abstract": [
"",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"Feature extraction, deformation handling, occlusion handling, and classification are four important components in pedestrian detection. Existing methods learn or design these components either individually or sequentially. The interaction among these components is not yet well explored. This paper proposes that they should be jointly learned in order to maximize their strengths through cooperation. We formulate these four components into a joint deep learning framework and propose a new deep network architecture. By establishing automatic, mutual interaction among components, the deep model achieves a 9% reduction in the average miss rate compared with the current best-performing pedestrian detection approaches on the largest Caltech benchmark dataset.",
"",
"Compositional models provide an elegant formalism for representing the visual appearance of highly variable objects. While such models are appealing from a theoretical point of view, it has been difficult to demonstrate that they lead to performance advantages on challenging datasets. Here we develop a grammar model for person detection and show that it outperforms previous high-performance systems on the PASCAL benchmark. Our model represents people using a hierarchy of deformable parts, variable structure and an explicit model of occlusion for partially visible objects. To train the model, we introduce a new discriminative framework for learning structured prediction models from weakly-labeled data.",
"We study the performance of ‘integral channel features’ for image classification tasks, focusing in particular on pedestrian detection. The general idea behind integral channel features is that multiple registered image channels are computed using linear and non-linear transformations of the input image, and then features such as local sums, histograms, and Haar features and their various generalizations are efficiently computed using integral images. Such features have been used in recent literature for a variety of tasks – indeed, variations appear to have been invented independently multiple times. Although integral channel features have proven effective, little effort has been devoted to analyzing or optimizing the features themselves. In this work we present a unified view of the relevant work in this area and perform a detailed experimental evaluation. We demonstrate that when designed properly, integral channel features not only outperform other features including histogram of oriented gradient (HOG), they also (1) naturally integrate heterogeneous sources of information, (2) have few parameters and are insensitive to exact parameter settings, (3) allow for more accurate spatial localization during detection, and (4) result in fast detectors when coupled with cascade classifiers.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"This paper starts from the observation that multiple top performing pedestrian detectors can be modelled by using an intermediate layer filtering low-level features in combination with a boosted decision forest. Based on this observation we propose a unifying framework and experimentally explore different filter families. We report extensive results enabling a systematic analysis. Using filtered channel features we obtain top performance on the challenging Caltech and KITTI datasets, while using only HOG+LUV as low-level features. When adding optical flow features we further improve detection quality and report the best known results on the Caltech dataset, reaching 93% recall at 1 FPPI.",
"The design of complexity-aware cascaded detectors, combining features of very different complexities, is considered. A new cascade design procedure is introduced, by formulating cascade learning as the Lagrangian optimization of a risk that accounts for both accuracy and complexity. A boosting algorithm, denoted as complexity aware cascade training (CompACT), is then derived to solve this optimization. CompACT cascades are shown to seek an optimal trade-off between accuracy and complexity by pushing features of higher complexity to the later cascade stages, where only a few difficult candidate patches remain to be classified. This enables the use of features of vastly different complexities in a single detector. In result, the feature pool can be expanded to features previously impractical for cascade design, such as the responses of a deep convolutional neural network (CNN). This is demonstrated through the design of a pedestrian detector with a pool of features whose complexities span orders of magnitude. The resulting cascade generalizes the combination of a CNN with an object proposal mechanism: rather than a pre-processing stage, CompACT cascades seamlessly integrate CNNs in their stages. This enables state of the art performance on the Caltech and KITTI datasets, at fairly fast speeds.",
"Recent advances in pedestrian detection are attained by transferring the learned features of Convolutional Neural Network (ConvNet) to pedestrians. This ConvNet is typically pre-trained with massive general object categories (e.g. ImageNet). Although these features are able to handle variations such as poses, viewpoints, and lightings, they may fail when pedestrian images with complex occlusions are present. Occlusion handling is one of the most important problem in pedestrian detection. Unlike previous deep models that directly learned a single detector for pedestrian detection, we propose DeepParts, which consists of extensive part detectors. DeepParts has several appealing properties. First, DeepParts can be trained on weakly labeled data, i.e. only pedestrian bounding boxes without part annotations are provided. Second, DeepParts is able to handle low IoU positive proposals that shift away from ground truth. Third, each part detector in DeepParts is a strong detector that can detect pedestrian by observing only a part of a proposal. Extensive experiments in Caltech dataset demonstrate the effectiveness of DeepParts, which yields a new state-of-the-art miss rate of 11.89%, outperforming the second best method by 10%."
]
} |
1612.05234 | 2585343022 | We introduce the concept of a Visual Compiler that generates a scene specific pedestrian detector and pose estimator without any pedestrian observations. Given a single image and auxiliary scene information in the form of camera parameters and geometric layout of the scene, the Visual Compiler first infers geometrically and photometrically accurate images of humans in that scene through the use of computer graphics rendering. Using these renders we learn a scene-and-region specific spatially-varying fully convolutional neural network, for simultaneous detection, pose estimation and segmentation of pedestrians. We demonstrate that when real human annotated data is scarce or non-existent, our data generation strategy can provide an excellent solution for bootstrapping human detection and pose estimation. Experimental results show that our approach outperforms off-the-shelf state-of-the-art pedestrian detectors and pose estimators that are trained on real data. | Interestingly, techniques for human pose estimation have been developed independently from human detection, where it is often assumed that a rough or the ground-truth location of the target is available prior to pose estimation. Techniques for human pose estimation can be largely categorized into deformable part-based models @cite_21 @cite_30 @cite_3 @cite_28 , deep convolutional networks that regress from the image to the keypoint locations @cite_22 @cite_29 and methods that regress from the image to the ideal localization heat-maps @cite_18 @cite_8 @cite_27 of body parts. Toshev @cite_22 introduced one of the earliest deep learning based approaches for pose estimation, learning a regression function from the image to the part coordinates. Carreira @cite_29 introduced a similar approach that iteratively refines the prediction of part locations. 
Current state-of-the-art approaches for human pose estimation, Convolutional Pose Machines (CPM) @cite_8 and Stacked Hourglass Networks @cite_27 , directly regress part localization heat maps from the input image. These approaches, 1) assume that humans have been detected, at least coarsely, and 2) are trained on real annotated images spanning a range of human pose and appearance. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_22",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_21",
"@cite_3",
"@cite_27"
],
"mid": [
"2013640163",
"2174722029",
"2113325037",
"2255781698",
"2464305746",
"1537698211",
"2030536784",
"2143487029",
"2950762923"
],
"abstract": [
"We describe a method for articulated human detection and human pose estimation in static images based on a new representation of deformable part models. Rather than modeling articulation using a family of warped (rotated and foreshortened) templates, we use a mixture of small, nonoriented parts. We describe a general, flexible mixture model that jointly captures spatial relations between part locations and co-occurrence relations between part mixtures, augmenting standard pictorial structure models that encode just spatial relations. Our models have several notable properties: 1) They efficiently model articulation by sharing computation across similar warps, 2) they efficiently model an exponentially large set of global mixtures through composition of local mixtures, and 3) they capture the dependency of global geometry on local appearance (parts look different at different locations). When relations are tree structured, our models can be efficiently optimized with dynamic programming. We learn all parameters, including local appearances, spatial relations, and co-occurrence relations (which encode local rigidity) with a structured SVM solver. Because our model is efficient enough to be used as a detector that searches over scales and image locations, we introduce novel criteria for evaluating pose estimation and human detection, both separately and jointly. We show that currently used evaluation criteria may conflate these two issues. Most previous approaches model limbs with rigid and articulated templates that are trained independently of each other, while we present an extensive diagnostic evaluation that suggests that flexible structure and joint training are crucial for strong performance. We present experimental results on standard benchmarks that suggest our approach is the state-of-the-art system for pose estimation, improving past work on the challenging Parse and Buffy datasets while being orders of magnitude faster.",
"State-of-the-art approaches for articulated human pose estimation are rooted in parts-based graphical models. These models are often restricted to tree-structured representations and simple parametric potentials in order to enable tractable inference. However, these simple dependencies fail to capture all the interactions between body parts. While models with more complex interactions can be defined, learning the parameters of these models remains challenging with intractable or approximate inference. In this paper, instead of performing inference on a learned graphical model, we build upon the inference machine framework and present a method for articulated human pose estimation. Our approach incorporates rich spatial interactions among multiple parts and information across parts of different scales. Additionally, the modular framework of our approach enables both ease of implementation without specialized optimization solvers, and efficient inference. We analyze our approach on two challenging datasets with large pose variation and outperform the state-of-the-art on these benchmarks.",
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-art or better performance on four academic benchmarks of diverse real-world images.",
"Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.",
"Recently, Deep Convolutional Neural Networks (DCNNs) have been applied to the task of human pose estimation, and have shown its potential of learning better feature representations and capturing contextual relationships. However, it is difficult to incorporate domain prior knowledge such as geometric relationships among body parts into DCNNs. In addition, training DCNN-based body part detectors without consideration of global body joint consistency introduces ambiguities, which increases the complexity of training. In this paper, we propose a novel end-to-end framework for human pose estimation that combines DCNNs with the expressive deformable mixture of parts. We explicitly incorporate domain prior knowledge into the framework, which greatly regularizes the learning process and enables the flexibility of our framework for loopy models or tree-structured models. The effectiveness of jointly learning a DCNN with a deformable mixture of parts model is evaluated through intensive experiments on several widely used benchmarks. The proposed approach significantly improves the performance compared with state-of-the-art approaches, especially on benchmarks with challenging articulations.",
"Hierarchical feature extractors such as Convolutional Networks (ConvNets) have achieved impressive performance on a variety of classification tasks using purely feedforward processing. Feedforward architectures can learn rich representations of the input space but do not explicitly model dependencies in the output spaces, that are quite structured for tasks such as articulated human pose estimation or object segmentation. Here we propose a framework that expands the expressive power of hierarchical feature extractors to encompass both input and output spaces, by introducing top-down feedback. Instead of directly predicting the outputs in one go, we use a self-correcting model that progressively changes an initial solution by feeding back error predictions, in a process we call Iterative Error Feedback (IEF). IEF shows excellent performance on the task of articulated pose estimation in the challenging MPII and LSP benchmarks, matching the state-of-the-art without requiring ground truth scale annotation.",
"In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.",
"Typical approaches to articulated pose estimation combine spatial modelling of the human body with appearance modelling of body parts. This paper aims to push the state-of-the-art in articulated pose estimation in two ways. First we explore various types of appearance representations aiming to substantially improve the body part hypotheses. And second, we draw on and combine several recently proposed powerful ideas such as more flexible spatial models as well as image-conditioned spatial models. In a series of experiments we draw several important conclusions: (1) we show that the proposed appearance representations are complementary, (2) we demonstrate that even a basic tree-structure spatial human body model achieves state-of-the-art performance when augmented with the proper appearance representation, and (3) we show that the combination of the best performing appearance model with a flexible image-conditioned spatial model achieves the best result, significantly improving over the state of the art, on the \"Leeds Sports Poses\" and \"Parse\" benchmarks.",
"This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a \"stacked hourglass\" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods."
]
} |
1612.05065 | 2577760017 | We explore frame-level audio feature learning for chord recognition using artificial neural networks. We present the argument that chroma vectors potentially hold enough information to model harmonic content of audio for chord recognition, but that standard chroma extractors compute too noisy features. This leads us to propose a learned chroma feature extractor based on artificial neural networks. It is trained to compute chroma features that encode harmonic information important for chord recognition, while being robust to irrelevant interferences. We achieve this by feeding the network an audio spectrum with context instead of a single frame as input. This way, the network can learn to selectively compensate noise and resolve harmonic ambiguities. We compare the resulting features to hand-crafted ones by using a simple linear frame-wise classifier for chord recognition on various data sets. The results show that the learned feature extractor produces superior chroma vectors for chord recognition. | A number of works used neural networks in the context of chord recognition. Humphrey and Bello @cite_11 applied Convolutional Neural Networks to classify major and minor chords end-to-end. Boulanger-Lewandowski @cite_14 , and @cite_0 explored Recurrent Neural Networks as a post-filtering method, where the former used a deep belief net, the latter a deep neural network as underlying feature extractor. All these approaches train their models to directly predict major and minor chords, and following @cite_4 , the hidden layers of these models learn a hierarchical, discriminative feature representation. However, since the models are trained to distinguish major/minor chords only, they consider other chord types (such as seventh, augmented, or suspended) mapped to major/minor as intra-class variation to be robust against, which will be reflected by the extracted internal features. These features might thus not be useful to recognise other chords. | {
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_4",
"@cite_11"
],
"mid": [
"2398597216",
"2395935897",
"2163922914",
"1982446897"
],
"abstract": [
"In this paper, we present a novel architecture for audio chord estimation using a hybrid recurrent neural network. The architecture replaces hidden Markov models (HMMs) with recurrent neural network (RNN) based language models for modelling temporal dependencies between chords. We demonstrate the ability of feed forward deep neural networks (DNNs) to learn discriminative features directly from a time-frequency representation of the acoustic signal, eliminating the need for a complex feature extraction stage. For the hybrid RNN architecture, inference over the output variables of interest is performed using beam search. In addition to the hybrid model, we propose a modification to beam search using a hash table which yields improved results while reducing memory requirements by an order of magnitude, thus making the proposed model suitable for real-time applications. We evaluate our model's performance on a dataset with publicly available annotations and demonstrate that the performance is comparable to existing state of the art approaches for chord recognition.",
"In this paper, we present an audio chord recognition system based on a recurrent neural network. The audio features are obtained from a deep neural network optimized with a combination of chromagram targets and chord information, and aggregated over different time scales. Contrarily to other existing approaches, our system incorporates acoustic and musicological models under a single training objective. We devise an efficient algorithm to search for the global mode of the output distribution while taking long-term dependencies into account. The resulting method is competitive with state-of-the-art approaches on the MIREX dataset in the major minor prediction task.",
"The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.",
"Despite early success in automatic chord recognition, recent efforts are yielding diminishing returns while basically iterating over the same fundamental approach. Here, we abandon typical conventions and adopt a different perspective of the problem, where several seconds of pitch spectra are classified directly by a convolutional neural network. Using labeled data to train the system in a supervised manner, we achieve state of the art performance through this initial effort in an otherwise unexplored area. Subsequent error analysis provides insight into potential areas of improvement, and this approach to chord recognition shows promise for future harmonic analysis systems."
]
} |
1612.05065 | 2577760017 | We explore frame-level audio feature learning for chord recognition using artificial neural networks. We present the argument that chroma vectors potentially hold enough information to model harmonic content of audio for chord recognition, but that standard chroma extractors compute too noisy features. This leads us to propose a learned chroma feature extractor based on artificial neural networks. It is trained to compute chroma features that encode harmonic information important for chord recognition, while being robust to irrelevant interferences. We achieve this by feeding the network an audio spectrum with context instead of a single frame as input. This way, the network can learn to selectively compensate noise and resolve harmonic ambiguities. We compare the resulting features to hand-crafted ones by using a simple linear frame-wise classifier for chord recognition on various data sets. The results show that the learned feature extractor produces superior chroma vectors for chord recognition. | Approaches to directly learn a mapping from spectrogram to chroma include those by İzmirli and Dannenberg @cite_30 and @cite_25 . However, both learn only a linear transformation of the time-frequency representation, which limits the mapping's expressivity. Additionally, both base their mapping on a single frame, which comes with the disadvantages we outlined in the previous section. | {
"cite_N": [
"@cite_30",
"@cite_25"
],
"mid": [
"2294702833",
"1880726444"
],
"abstract": [
"We investigate the problem of matching symbolic representations directly to audio based representations for applications that use data from both domains. One such application is score alignment, which aligns a sequence of frames based on features such as chroma vectors and distance functions such as Euclidean distance. Good representations are critical, yet current systems use ad hoc constructions such as the chromagram that have been shown to work quite well. We investigate ways to learn chromagram-like representations that optimize the classification of “matching” vs. “non-matching” frame pairs of audio and MIDI. New representations learned automatically from examples not only perform better than the chromagram representation but they also reveal interesting projection structures that differ distinctly from the traditional chromagram.",
"We present an audio chord recognition system based on a generalization of the Hidden Markov Model (HMM) in which the duration of chords is explicitly considered - a type of HMM referred to as a hidden semi-Markov model, or duration-explicit HMM (DHMM). We find that such a system recognizes chords at a level consistent with the state-of-the-art systems – 84.23% on the Uspop dataset at the major/minor level. The duration distribution is estimated from chord duration histograms on the training data. It is found that the state-of-the-art recognition result can be improved upon by using several duration distributions, which are found automatically by clustering song-level duration histograms. The paper further describes experiments which shed light on the extent to which context information, in the sense of transition matrices, is useful for the audio chord recognition task. We present evidence that the context provides surprisingly little improvement in performance, compared to isolated frame-wise recognition with simple smoothing. We discuss possible reasons for this, such as the inherent entropy of chord sequences in our training database."
]
} |
1612.05065 | 2577760017 | We explore frame-level audio feature learning for chord recognition using artificial neural networks. We present the argument that chroma vectors potentially hold enough information to model harmonic content of audio for chord recognition, but that standard chroma extractors compute too noisy features. This leads us to propose a learned chroma feature extractor based on artificial neural networks. It is trained to compute chroma features that encode harmonic information important for chord recognition, while being robust to irrelevant interferences. We achieve this by feeding the network an audio spectrum with context instead of a single frame as input. This way, the network can learn to selectively compensate noise and resolve harmonic ambiguities. We compare the resulting features to hand-crafted ones by using a simple linear frame-wise classifier for chord recognition on various data sets. The results show that the learned feature extractor produces superior chroma vectors for chord recognition. | An alternative approach applies deep learning methods to produce Tonnetz features from a spectrogram @cite_7 . Using features other than the chromagram is a promising direction, and was also explored in @cite_25 for bass notes. Most chord recognition systems, however, still use chromas, and more research is necessary to explore to which degree and under which circumstances Tonnetz features are favourable. | {
"cite_N": [
"@cite_25",
"@cite_7"
],
"mid": [
"1880726444",
"2137519524"
],
"abstract": [
"We present an audio chord recognition system based on a generalization of the Hidden Markov Model (HMM) in which the duration of chords is explicitly considered - a type of HMM referred to as a hidden semi-Markov model, or duration-explicit HMM (DHMM). We find that such a system recognizes chords at a level consistent with the state-of-the-art systems – 84.23% on the Uspop dataset at the major/minor level. The duration distribution is estimated from chord duration histograms on the training data. It is found that the state-of-the-art recognition result can be improved upon by using several duration distributions, which are found automatically by clustering song-level duration histograms. The paper further describes experiments which shed light on the extent to which context information, in the sense of transition matrices, is useful for the audio chord recognition task. We present evidence that the context provides surprisingly little improvement in performance, compared to isolated frame-wise recognition with simple smoothing. We discuss possible reasons for this, such as the inherent entropy of chord sequences in our training database.",
"Temporal pitch class profiles - commonly referred to as a chromagrams - are the de facto standard signal representation for content-based methods of musical harmonic analysis, despite exhibiting a set of practical difficulties. Here, we present a novel, data-driven approach to learning a robust function that projects audio data into Tonnetz-space, a geometric representation of equal-tempered pitch intervals grounded in music theory. We apply this representation to automatic chord recognition and show that our approach out-performs the classification accuracy of previous chroma representations, while providing a mid-level feature space that circumvents challenges inherent to chroma."
]
} |
1612.05062 | 2949233461 | Separating an image into reflectance and shading layers poses a challenge for learning approaches because no large corpus of precise and realistic ground truth decompositions exists. The Intrinsic Images in the Wild (IIW) dataset provides a sparse set of relative human reflectance judgments, which serves as a standard benchmark for intrinsic images. A number of methods use IIW to learn statistical dependencies between the images and their reflectance layer. Although learning plays an important role for high performance, we show that a standard signal processing technique achieves performance on par with current state-of-the-art. We propose a loss function for CNN learning of dense reflectance predictions. Our results show a simple pixel-wise decision, without any context or prior knowledge, is sufficient to provide a strong baseline on IIW. This sets a competitive baseline which only two other approaches surpass. We then develop a joint bilateral filtering method that implements strong prior knowledge about reflectance constancy. This filtering operation can be applied to any intrinsic image algorithm and we improve several previous results achieving a new state-of-the-art on IIW. Our findings suggest that the effect of learning-based approaches may have been over-estimated so far. Explicit prior knowledge is still at least as important to obtain high performance in intrinsic image decompositions. | The work of @cite_28 approaches the problem from a filtering perspective. After a filtering step followed by clustering, the pixels are grouped into regions of same reflectance, such that a simple shading term suffices to recover the full intrinsic decomposition. This method produces the best results on the IIW dataset but takes several minutes of processing time. In sec:filtering we build on this work and propose a filtering technique that can be applied to any other intrinsic image estimation as well. 
This implements the idea of grouping pixels into sets of constant reflectance. Other works consider additional knowledge in order to recover reflectance and shading, e.g., multiple images of the same scene with different lighting @cite_21 @cite_5 , an interactive setting with user annotations @cite_19 @cite_37 , or an additional depth layer as input @cite_6 . | {
"cite_N": [
"@cite_37",
"@cite_28",
"@cite_21",
"@cite_6",
"@cite_19",
"@cite_5"
],
"mid": [
"2146721395",
"",
"2136748901",
"2101856619",
"",
"2220470871"
],
"abstract": [
"Separating a photograph into its reflectance and illumination intrinsic images is a fundamentally ambiguous problem, and state-of-the-art algorithms combine sophisticated reflectance and illumination priors with user annotations to create plausible results. However, these algorithms cannot be easily extended to videos for two reasons: first, naively applying algorithms designed for single images to videos produce results that are temporally incoherent; second, effectively specifying user annotations for a video requires interactive feedback, and current approaches are orders of magnitudes too slow to support this. We introduce a fast and temporally consistent algorithm to decompose video sequences into their reflectance and illumination components. Our algorithm uses a hybrid e2ep formulation that separates image gradients into smooth illumination and sparse reflectance gradients using look-up tables. We use a multi-scale parallelized solver to reconstruct the reflectance and illumination from these gradients while enforcing spatial and temporal reflectance constraints and user annotations. We demonstrate that our algorithm automatically produces reasonable results, that can be interactively refined by users, at rates that are two orders of magnitude faster than existing tools, to produce high-quality decompositions for challenging real-world video sequences. We also show how these decompositions can be used for a number of video editing applications including recoloring, retexturing, illumination editing, and lighting-aware compositing.",
"",
"Intrinsic images are a useful midlevel description of scenes proposed by H.G. Barrow and J.M. Tenenbaum (1978). An image is de-composed into two images: a reflectance image and an illumination image. Finding such a decomposition remains a difficult problem in computer vision. We focus on a slightly, easier problem: given a sequence of T images where the reflectance is constant and the illumination changes, can we recover T illumination images and a single reflectance image? We show that this problem is still imposed and suggest approaching it as a maximum-likelihood estimation problem. Following recent work on the statistics of natural images, we use a prior that assumes that illumination images will give rise to sparse filter outputs. We show that this leads to a simple, novel algorithm for recovering reflectance images. We illustrate the algorithm's performance on real and synthetic image sequences.",
"We present a model for intrinsic decomposition of RGB-D images. Our approach analyzes a single RGB-D image and estimates albedo and shading fields that explain the input. To disambiguate the problem, our model estimates a number of components that jointly account for the reconstructed shading. By decomposing the shading field, we can build in assumptions about image formation that help distinguish reflectance variation from shading. These assumptions are expressed as simple nonlocal regularizers. We evaluate the model on real-world images and on a challenging synthetic dataset. The experimental results demonstrate that the presented approach outperforms prior models for intrinsic decomposition of RGB-D images.",
"",
"We present a method for intrinsic image decomposition, which aims to decompose images into reflectance and shading layers. Our input is a sequence of images with varying illumination acquired by a static camera, e.g. an indoor scene with a moving light source or an outdoor timelapse. We leverage the local color variations observed over time to infer constraints on the reflectance and solve the ill-posed image decomposition problem. In particular, we derive an adaptive local energy from the observations of each local neighborhood over time, and integrate distant pairwise constraints to enforce coherent decomposition across all surfaces with consistent shading changes. Our method is solely based on multiple observations of a Lambertian scene under varying illumination and does not require user interaction, scene geometry, or an explicit lighting model. We compare our results with several intrinsic decomposition methods on a number of synthetic and captured datasets."
]
} |
1612.05062 | 2949233461 | Separating an image into reflectance and shading layers poses a challenge for learning approaches because no large corpus of precise and realistic ground truth decompositions exists. The Intrinsic Images in the Wild (IIW) dataset provides a sparse set of relative human reflectance judgments, which serves as a standard benchmark for intrinsic images. A number of methods use IIW to learn statistical dependencies between the images and their reflectance layer. Although learning plays an important role for high performance, we show that a standard signal processing technique achieves performance on par with current state-of-the-art. We propose a loss function for CNN learning of dense reflectance predictions. Our results show a simple pixel-wise decision, without any context or prior knowledge, is sufficient to provide a strong baseline on IIW. This sets a competitive baseline which only two other approaches surpass. We then develop a joint bilateral filtering method that implements strong prior knowledge about reflectance constancy. This filtering operation can be applied to any intrinsic image algorithm and we improve several previous results achieving a new state-of-the-art on IIW. Our findings suggest that the effect of learning-based approaches may have been over-estimated so far. Explicit prior knowledge is still at least as important to obtain high performance in intrinsic image decompositions. | The paper of @cite_27 introduced the Intrinsic Images in the Wild dataset with human annotations giving relative reflectance judgments that served as the training and test set for different learning based methods. Using this data, the work of @cite_27 was the first to compare different algorithms on a large corpus of real world scenes. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2076491823"
],
"abstract": [
"Intrinsic image decomposition separates an image into a reflectance layer and a shading layer. Automatic intrinsic image decomposition remains a significant challenge, particularly for real-world scenes. Advances on this longstanding problem have been spurred by public datasets of ground truth data, such as the MIT Intrinsic Images dataset. However, the difficulty of acquiring ground truth data has meant that such datasets cover a small range of materials and objects. In contrast, real-world scenes contain a rich range of shapes and materials, lit by complex illumination. In this paper we introduce Intrinsic Images in the Wild, a large-scale, public dataset for evaluating intrinsic image decompositions of indoor scenes. We create this benchmark through millions of crowdsourced annotations of relative comparisons of material properties at pairs of points in each scene. Crowdsourcing enables a scalable approach to acquiring a large database, and uses the ability of humans to judge material comparisons, despite variations in illumination. Given our database, we develop a dense CRF-based intrinsic image algorithm for images in the wild that outperforms a range of state-of-the-art intrinsic image algorithms. Intrinsic image decomposition remains a challenging problem; we release our code and database publicly to support future research on this problem, available online at http: intrinsic.cs.cornell.edu ."
]
} |
1612.05062 | 2949233461 | Separating an image into reflectance and shading layers poses a challenge for learning approaches because no large corpus of precise and realistic ground truth decompositions exists. The Intrinsic Images in the Wild (IIW) dataset provides a sparse set of relative human reflectance judgments, which serves as a standard benchmark for intrinsic images. A number of methods use IIW to learn statistical dependencies between the images and their reflectance layer. Although learning plays an important role for high performance, we show that a standard signal processing technique achieves performance on par with current state-of-the-art. We propose a loss function for CNN learning of dense reflectance predictions. Our results show a simple pixel-wise decision, without any context or prior knowledge, is sufficient to provide a strong baseline on IIW. This sets a competitive baseline which only two other approaches surpass. We then develop a joint bilateral filtering method that implements strong prior knowledge about reflectance constancy. This filtering operation can be applied to any intrinsic image algorithm and we improve several previous results achieving a new state-of-the-art on IIW. Our findings suggest that the effect of learning-based approaches may have been over-estimated so far. Explicit prior knowledge is still at least as important to obtain high performance in intrinsic image decompositions. | Similar to our work, in the sense that a dense intrinsic decomposition is predicted, is the work of @cite_2 . A CNN is used to directly predict reflectance and shading with the objective function being the difference from ground truth decompositions. Since those are only available for the rendered MPI Sintel dataset, the authors report that the learned model does not generalize well to the real-world images of IIW. An additional data term in the gradient domain is used by @cite_23 .
They also propose to use an adversary in order to remove typical generative CNN artifacts by discriminating between generated and ground truth decompositions. Therefore, this approach has the same limitation of requiring dense ground truth decompositions, and no results on IIW are available. To our knowledge, there is no CNN-based method that predicts a dense intrinsic decomposition and works well for images from IIW. | {
"cite_N": [
"@cite_23",
"@cite_2"
],
"mid": [
"2584900867",
"2951548216"
],
"abstract": [
"We present a new deep supervised learning method for intrinsic decomposition of a single image into its albedo and shading components. Our contributions are based on a new fully convolutional neural network that estimates absolute albedo and shading jointly. Our solution relies on a single end-to-end deep sequence of residual blocks and a perceptually-motivated metric formed by two adversarially trained discriminators. As opposed to classical intrinsic image decomposition work, it is fully data-driven, hence does not require any physical priors like shading smoothness or albedo sparsity, nor does it rely on geometric information such as depth. Compared to recent deep learning techniques, we simplify the architecture, making it easier to build and train, and constrain it to generate a valid and reversible decomposition. We rediscuss and augment the set of quantitative metrics so as to account for the more challenging recovery of non scale-invariant quantities. We train and demonstrate our architecture on the publicly available MPI Sintel dataset and its intrinsic image decomposition, show attenuated overfitting issues and discuss generalizability to other data. Results show that our work outperforms the state of the art deep algorithms both on the qualitative and quantitative aspect.",
"We introduce a new approach to intrinsic image decomposition, the task of decomposing a single image into albedo and shading components. Our strategy, which we term direct intrinsics, is to learn a convolutional neural network (CNN) that directly predicts output albedo and shading channels from an input RGB image patch. Direct intrinsics is a departure from classical techniques for intrinsic image decomposition, which typically rely on physically-motivated priors and graph-based inference algorithms. The large-scale synthetic ground-truth of the MPI Sintel dataset plays a key role in training direct intrinsics. We demonstrate results on both the synthetic images of Sintel and the real images of the classic MIT intrinsic image dataset. On Sintel, direct intrinsics, using only RGB input, outperforms all prior work, including methods that rely on RGB+Depth input. Direct intrinsics also generalizes across modalities; it produces quite reasonable decompositions on the real images of the MIT dataset. Our results indicate that the marriage of CNNs with synthetic training data may be a powerful new technique for tackling classic problems in computer vision."
]
} |
1612.05062 | 2949233461 | Separating an image into reflectance and shading layers poses a challenge for learning approaches because no large corpus of precise and realistic ground truth decompositions exists. The Intrinsic Images in the Wild (IIW) dataset provides a sparse set of relative human reflectance judgments, which serves as a standard benchmark for intrinsic images. A number of methods use IIW to learn statistical dependencies between the images and their reflectance layer. Although learning plays an important role for high performance, we show that a standard signal processing technique achieves performance on par with current state-of-the-art. We propose a loss function for CNN learning of dense reflectance predictions. Our results show a simple pixel-wise decision, without any context or prior knowledge, is sufficient to provide a strong baseline on IIW. This sets a competitive baseline which only two other approaches surpass. We then develop a joint bilateral filtering method that implements strong prior knowledge about reflectance constancy. This filtering operation can be applied to any intrinsic image algorithm and we improve several previous results achieving a new state-of-the-art on IIW. Our findings suggest that the effect of learning-based approaches may have been over-estimated so far. Explicit prior knowledge is still at least as important to obtain high performance in intrinsic image decompositions. | The work of @cite_35 also trains a CNN from relative judgments with a ranking loss to predict pixel-wise labels, but for the application of recovering dense depth estimates. This involved the creation of a dataset with relative depth judgments in the spirit of IIW. However, in contrast to intrinsic images, it is possible to capture accurate ground truth depth for training and testing, making reflectance and shading estimation a more relevant target of learning from sparse pairwise comparisons. | {
"cite_N": [
"@cite_35"
],
"mid": [
"2339763956"
],
"abstract": [
"This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset \"Depth in the Wild\" consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild."
]
} |
1612.05054 | 2566956471 | Many time series are generated by a set of entities that interact with one another over time. This paper introduces a broad, flexible framework to learn from multiple inter-dependent time series generated by such entities. Our framework explicitly models the entities and their interactions through time. It achieves this by building on the capabilities of Recurrent Neural Networks, while also offering several ways to incorporate domain knowledge constraints into the model architecture. The capabilities of our approach are showcased through an application to weather prediction, which shows gains over strong baselines. | A similar model has been proposed for learning on spatio-temporal graphs ( @cite_11 ). Unlike our models, however, the RNNs in their model cannot affect each other. In their setting, each node and each edge has an input feature vector at every time step. Since the RNNs on the edges do not take input from RNNs at nodes, the forward pass through their model is equivalent to running independent siloed RNNs, with one set of RNNs for each node. In contrast, we do not have any data input on edges. Additionally, in our case, summaries computed from the hidden states of nearby RNNs flow along each edge, and thus all RNNs evolve together. Another assumption in their approach is that the same RNN on the edge provides input to RNNs on both sides. In our case, the edges can be directed and hence the interactions can be modeled differently. Finally, we note that this model is a special case of our model, with each edge replaced by a node, creating a disconnected graph with one connected component per node in the original graph. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2176353499"
],
"abstract": [
"Deep Recurrent Neural Network architectures, though remarkably capable at modeling sequences, lack an intuitive high-level spatio-temporal structure. That is while many problems in computer vision inherently have an underlying high-level structure and can benefit from it. Spatio-temporal graphs are a popular tool for imposing such high-level intuitions in the formulation of real world problems. In this paper, we propose an approach for combining the power of high-level spatio-temporal graphs and sequence learning success of Recurrent Neural Networks (RNNs). We develop a scalable method for casting an arbitrary spatio-temporal graph as a rich RNN mixture that is feedforward, fully differentiable, and jointly trainable. The proposed method is generic and principled as it can be used for transforming any spatio-temporal graph through employing a certain set of well defined steps. The evaluations of the proposed approach on a diverse set of problems, ranging from modeling human motion to object interactions, shows improvement over the state-of-the-art with a large margin. We expect this method to empower new approaches to problem formulation through high-level spatio-temporal graphs and Recurrent Neural Networks."
]
} |
1612.05054 | 2566956471 | Many time series are generated by a set of entities that interact with one another over time. This paper introduces a broad, flexible framework to learn from multiple inter-dependent time series generated by such entities. Our framework explicitly models the entities and their interactions through time. It achieves this by building on the capabilities of Recurrent Neural Networks, while also offering several ways to incorporate domain knowledge constraints into the model architecture. The capabilities of our approach are showcased through an application to weather prediction, which shows gains over strong baselines. | GraphLSTMs have been proposed for semantic object parsing ( @cite_1 ). In this work, the pixels in an image are clustered into super-pixels and then an LSTM is applied at each super-pixel one by one. Unlike our case, the data being modeled is not sequential, but the summary computation has some similarities to our proposal. In their model, the LSTM at every node also takes the averaged hidden states of neighbors if those states have already been computed, ignoring them otherwise. Our proposal differs from this work in terms of summary computations by incorporating (1) equivalence classes of nodes, (2) multiple summary types, and (3) generic permutation-invariant set transformations for summary computations. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2951729963"
],
"abstract": [
"By taking the semantic object parsing task as an exemplar application scenario, we propose the Graph Long Short-Term Memory (Graph LSTM) network, which is the generalization of LSTM from sequential data or multi-dimensional data to general graph-structured data. Particularly, instead of evenly and fixedly dividing an image to pixels or patches in existing multi-dimensional LSTM structures (e.g., Row, Grid and Diagonal LSTMs), we take each arbitrary-shaped superpixel as a semantically consistent node, and adaptively construct an undirected graph for each image, where the spatial relations of the superpixels are naturally used as edges. Constructed on such an adaptive graph topology, the Graph LSTM is more naturally aligned with the visual patterns in the image (e.g., object boundaries or appearance similarities) and provides a more economical information propagation route. Furthermore, for each optimization step over Graph LSTM, we propose to use a confidence-driven scheme to update the hidden and memory states of nodes progressively till all nodes are updated. In addition, for each node, the forget gates are adaptively learned to capture different degrees of semantic correlation with neighboring nodes. Comprehensive evaluations on four diverse semantic object parsing datasets well demonstrate the significant superiority of our Graph LSTM over other state-of-the-art solutions."
]
} |
1612.05030 | 2949954492 | Synchronous programming is a paradigm of choice for the design of safety-critical reactive systems. Runtime enforcement is a technique to ensure that the output of a black-box system satisfies some desired properties. This paper deals with the problem of runtime enforcement in the context of synchronous programs. We propose a framework where an enforcer monitors both the inputs and the outputs of a synchronous program and (minimally) edits erroneous inputs/outputs in order to guarantee that a given property holds. We define enforceability conditions, develop an online enforcement algorithm, and prove its correctness. We also report on an implementation of the algorithm on top of the KIELER framework for the SCCharts synchronous language. Experimental results show that enforcement has minimal execution time overhead, which decreases proportionally with larger benchmarks. | Synthesizing enforcers from properties is an active area of research. According to how an enforcer is allowed to correct the input sequence, several RE models have been proposed. Security automata proposed by Schneider @cite_4 focus on the enforcement of safety properties, where the enforcer blocks the execution when it recognizes a sequence of actions that does not satisfy the desired property. Edit automata @cite_0 allow the enforcer to correct the input sequence by suppressing and/or inserting events, and the RE mechanisms proposed in @cite_6 @cite_12 allow buffering events and releasing them upon observing a sequence that satisfies the desired property. Recently, the compositionality of enforcers has been studied in @cite_5 . Given a set of properties over the same alphabet, the problem studied in @cite_5 addresses whether it is possible to synthesize multiple enforcers, one for each property, and whether composing enforcers (in series or in parallel) can enforce all the properties. Moreover, the enforcement framework in @cite_5 allows buffering (delaying) events.
These approaches focus on uni-directional RE. | {
"cite_N": [
"@cite_4",
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_12"
],
"mid": [
"",
"2028720147",
"1985929316",
"2505158018",
"1973223152"
],
"abstract": [
"",
"Runtime enforcement is a powerful technique to ensure that a program will respect a given set of properties. We extend previous work on this topic in several directions. Firstly, we propose a generic notion of enforcement monitors based on a memory device and finite sets of control states and enforcement operations. Moreover, we specify their enforcement abilities w.r.t. the general Safety-Progress classification of properties. Furthermore, we propose a systematic technique to produce a monitor from the automaton recognizing a given safety, guarantee, obligation or response property. Finally, we show that this notion of enforcement monitors is more amenable to implementation and encompasses previous runtime enforcement mechanisms.",
"A common mechanism for ensuring that software behaves securely is to monitor programs at run time and check that they dynamically adhere to constraints specified by a security policy. Whenever a program monitor detects that untrusted software is attempting to execute a dangerous action, it takes remedial steps to ensure that only safe code actually gets executed. This article improves our understanding of the space of policies enforceable by monitoring the run-time behaviors of programs. We begin by building a formal framework for analyzing policy enforcement: we precisely define policies, monitors, and enforcement. This framework allows us to prove that monitors enforce an interesting set of policies that we call the infinite renewal properties. We show how to construct a program monitor that provably enforces any reasonable infinite renewal property. We also show that the set of infinite renewal properties includes some nonsafety policies, that is, that monitors can enforce some nonsafety (including some purely liveness) policies. Finally, we demonstrate concrete examples of nonsafety policies enforceable by practical run-time monitors.",
"Runtime enforcement is a methodology used to enforce that the output of a running system satisfies a desired property. Given a property, an enforcement monitor modifies an untrusted sequence of events into a sequence that complies to that property. In practice, we may have not one, but many properties to enforce. Moreover, new properties may arise as new capabilities are added to the system. It then becomes interesting to be able to build not a single, monolithic monitor that enforces all the properties, but rather several monitors, one for each property. The question is to what extent such monitors can be composed, and how. This is the topic of this paper. We study two monitor composition schemes, serial and parallel composition, and show that, while enforcement under these schemes is generally not compositional, it is for certain subclasses of regular properties.",
"Runtime enforcement is a powerful technique to ensure that a running system satisfies some desired properties. Using an enforcement monitor, an (untrustworthy) input execution (in the form of a sequence of events) is modified into an output sequence that complies with a property. Over the last decade, runtime enforcement has been mainly studied in the context of untimed properties. This paper deals with runtime enforcement of timed properties by revisiting the foundations of runtime enforcement when time between events matters. We propose a new enforcement paradigm where enforcement mechanisms are time retardants: to produce a correct output sequence, additional delays are introduced between the events of the input sequence. We consider runtime enforcement of any regular timed property defined by a timed automaton. We prove the correctness of enforcement mechanisms and prove that they enjoy two usually expected features, revisited here in the context of timed properties. The first one is soundness meaning that the output sequences (eventually) satisfy the required property. The second one is transparency, meaning that input sequences are modified in a minimal way. We also introduce two new features, (i) physical constraints that describe how a time retardant is physically constrained when delaying a sequence of timed events, and (ii) optimality, meaning that output sequences are produced as soon as possible. To facilitate the adoption and implementation of enforcement mechanisms, we describe them at several complementary abstraction levels. Our enforcement mechanisms have been implemented and our experimental results demonstrate the feasibility of runtime enforcement in a timed context and the effectiveness of the mechanisms."
]
} |
1612.05030 | 2949954492 | Synchronous programming is a paradigm of choice for the design of safety-critical reactive systems. Runtime enforcement is a technique to ensure that the output of a black-box system satisfies some desired properties. This paper deals with the problem of runtime enforcement in the context of synchronous programs. We propose a framework where an enforcer monitors both the inputs and the outputs of a synchronous program and (minimally) edits erroneous inputs/outputs in order to guarantee that a given property holds. We define enforceability conditions, develop an online enforcement algorithm, and prove its correctness. We also report on an implementation of the algorithm on top of the KIELER framework for the SCCharts synchronous language. Experimental results show that enforcement has minimal execution time overhead, which decreases proportionally with larger benchmarks. | Mandatory Result Automata (MRAs) @cite_14 extended edit automata @cite_0 by considering bi-directional runtime enforcement. Compared to the other RE frameworks such as @cite_4 @cite_6 @cite_0 @cite_12 , the focus of MRAs is on handling the communication between two parties. However, none of the above approaches is suitable for reactive systems, since halting the program or delaying actions is not an option: the enforcer has to react instantaneously. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_6",
"@cite_0",
"@cite_12"
],
"mid": [
"1994571662",
"",
"2028720147",
"1985929316",
"1973223152"
],
"abstract": [
"This paper presents a theory of runtime enforcement based on mechanism models called mandatory results automata (MRAs). MRAs can monitor and transform security-relevant actions and their results. The operational semantics of MRAs is simple and enables straightforward definitions of concrete MRAs. Moreover, the definitions of policies and enforcement with MRAs are simple and expressive. Putting all of these features together, we argue that MRAs make good general models of runtime mechanisms, upon which a theory of runtime enforcement can be based. We develop some enforceability theory by characterizing the policies deterministic and non-deterministic MRAs can and cannot enforce.",
"",
"Runtime enforcement is a powerful technique to ensure that a program will respect a given set of properties. We extend previous work on this topic in several directions. Firstly, we propose a generic notion of enforcement monitors based on a memory device and finite sets of control states and enforcement operations. Moreover, we specify their enforcement abilities w.r.t. the general Safety-Progress classification of properties. Furthermore, we propose a systematic technique to produce a monitor from the automaton recognizing a given safety, guarantee, obligation or response property. Finally, we show that this notion of enforcement monitors is more amenable to implementation and encompasses previous runtime enforcement mechanisms.",
"A common mechanism for ensuring that software behaves securely is to monitor programs at run time and check that they dynamically adhere to constraints specified by a security policy. Whenever a program monitor detects that untrusted software is attempting to execute a dangerous action, it takes remedial steps to ensure that only safe code actually gets executed. This article improves our understanding of the space of policies enforceable by monitoring the run-time behaviors of programs. We begin by building a formal framework for analyzing policy enforcement: we precisely define policies, monitors, and enforcement. This framework allows us to prove that monitors enforce an interesting set of policies that we call the infinite renewal properties. We show how to construct a program monitor that provably enforces any reasonable infinite renewal property. We also show that the set of infinite renewal properties includes some nonsafety policies, that is, that monitors can enforce some nonsafety (including some purely liveness) policies. Finally, we demonstrate concrete examples of nonsafety policies enforceable by practical run-time monitors.",
"Runtime enforcement is a powerful technique to ensure that a running system satisfies some desired properties. Using an enforcement monitor, an (untrustworthy) input execution (in the form of a sequence of events) is modified into an output sequence that complies with a property. Over the last decade, runtime enforcement has been mainly studied in the context of untimed properties. This paper deals with runtime enforcement of timed properties by revisiting the foundations of runtime enforcement when time between events matters. We propose a new enforcement paradigm where enforcement mechanisms are time retardants: to produce a correct output sequence, additional delays are introduced between the events of the input sequence. We consider runtime enforcement of any regular timed property defined by a timed automaton. We prove the correctness of enforcement mechanisms and prove that they enjoy two usually expected features, revisited here in the context of timed properties. The first one is soundness meaning that the output sequences (eventually) satisfy the required property. The second one is transparency, meaning that input sequences are modified in a minimal way. We also introduce two new features, (i) physical constraints that describe how a time retardant is physically constrained when delaying a sequence of timed events, and (ii) optimality, meaning that output sequences are produced as soon as possible. To facilitate the adoption and implementation of enforcement mechanisms, we describe them at several complementary abstraction levels. Our enforcement mechanisms have been implemented and our experimental results demonstrate the feasibility of runtime enforcement in a timed context and the effectiveness of the mechanisms."
]
} |
1612.05363 | 2950428484 | Face attributes are interesting due to their detailed description of human faces. Unlike prior researches working on attribute prediction, we address an inverse and more challenging problem called face attribute manipulation which aims at modifying a face image according to a given attribute value. Instead of manipulating the whole image, we propose to learn the corresponding residual image defined as the difference between images before and after the manipulation. In this way, the manipulation can be operated efficiently with modest pixel modification. The framework of our approach is based on the Generative Adversarial Network. It consists of two image transformation networks and a discriminative network. The transformation networks are responsible for the attribute manipulation and its dual operation and the discriminative network is used to distinguish the generated images from real images. We also apply dual learning to allow transformation networks to learn from each other. Experiments show that residual images can be effectively learned and used for attribute manipulations. The generated images remain most of the details in attribute-irrelevant areas. | There are many techniques for image generation in recent years @cite_20 @cite_26 @cite_15 @cite_14 @cite_2 @cite_21 . Radford et al. @cite_20 applied deep convolutional generative adversarial networks (DCGANs) to learn a hierarchy of representations from object parts to scenes for general image generation. Chen et al. @cite_26 introduced an information-theoretic extension to the GAN that was able to learn disentangled representations. Larsen et al. @cite_15 combined the VAE with the GAN to learn an embedding in which high-level abstract visual features could be modified using simple arithmetic. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_21",
"@cite_2",
"@cite_15",
"@cite_20"
],
"mid": [
"1850742715",
"2434741482",
"",
"2951523806",
"2202109488",
"2173520492"
],
"abstract": [
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.",
"This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.",
"",
"In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (). Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40 of the time, compared to 10 for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.",
"We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations."
]
} |
1612.05363 | 2950428484 | Face attributes are interesting due to their detailed description of human faces. Unlike prior researches working on attribute prediction, we address an inverse and more challenging problem called face attribute manipulation which aims at modifying a face image according to a given attribute value. Instead of manipulating the whole image, we propose to learn the corresponding residual image defined as the difference between images before and after the manipulation. In this way, the manipulation can be operated efficiently with modest pixel modification. The framework of our approach is based on the Generative Adversarial Network. It consists of two image transformation networks and a discriminative network. The transformation networks are responsible for the attribute manipulation and its dual operation and the discriminative network is used to distinguish the generated images from real images. We also apply dual learning to allow transformation networks to learn from each other. Experiments show that residual images can be effectively learned and used for attribute manipulations. The generated images remain most of the details in attribute-irrelevant areas. | Our work is an independent work along with @cite_4 . In @cite_4 , Li et al. proposed a deep convolutional network model for identity-aware transfer of facial attributes. The differences between our work and @cite_4 are noticeable in three aspects. (1) Our method generates manipulated images using residual images, which is different from @cite_4 . (2) Our method models two inverse manipulations within one single architecture by sharing the same discriminator, while the work in @cite_4 treats each manipulation independently. (3) Our method does not require post-processing, which is essential in @cite_4 . | {
"cite_N": [
"@cite_4"
],
"mid": [
"2536132686"
],
"abstract": [
"This paper presents a Deep convolutional network model for Identity-Aware Transfer (DIAT) of facial attributes. Given the source input image and the reference attribute, DIAT aims to generate a facial image that owns the reference attribute as well as keeps the same or similar identity to the input image. In general, our model consists of a mask network and an attribute transform network which work in synergy to generate a photo-realistic facial image with the reference attribute. Considering that the reference attribute may be only related to some parts of the image, the mask network is introduced to avoid the incorrect editing on attribute irrelevant region. Then the estimated mask is adopted to combine the input and transformed image for producing the transfer result. For joint training of transform network and mask network, we incorporate the adversarial attribute loss, identity-aware adaptive perceptual loss, and VGG-FACE based identity loss. Furthermore, a denoising network is presented to serve for perceptual regularization to suppress the artifacts in transfer result, while an attribute ratio regularization is introduced to constrain the size of attribute relevant region. Our DIAT can provide a unified solution for several representative facial attribute transfer tasks, e.g., expression transfer, accessory removal, age progression, and gender transfer, and can be extended for other face enhancement tasks such as face hallucination. The experimental results validate the effectiveness of the proposed method. Even for the identity-related attribute (e.g., gender), our DIAT can obtain visually impressive results by changing the attribute while retaining most identity-aware features."
]
} |
1612.04609 | 2584067658 | Emoji is an essential component in dialogues which has been broadly utilized on almost all social platforms. It could express more delicate feelings beyond plain texts and thus smooth the communications between users, making dialogue systems more anthropomorphic and vivid. In this paper, we focus on automatically recommending appropriate emojis given the contextual information in multi-turn dialogue systems, where the challenges locate in understanding the whole conversations. More specifically, we propose the hierarchical long short-term memory model (H-LSTM) to construct dialogue representations, followed by a softmax classifier for emoji classification. We evaluate our models on the task of emoji classification in a real-world dataset, with some further explorations on parameter sensitivity and case study. Experimental results demonstrate that our method achieves the best performances on all evaluation metrics. It indicates that our method could well capture the contextual information and emotion flow in dialogues, which is significant for emoji recommendation. | We attempt to utilize neural networks to learn dialogue representations, among which the recurrent neural network (RNN) is a natural fit for encoding sequential inputs. @cite_1 proposes the long short-term memory network (LSTM), which aims to address the problem of gradient vanishing in RNNs, while @cite_14 refines the original LSTM model by introducing the forget gate. Recently, with the rapid advances in deep learning, LSTM models have been widely utilized in various fields such as machine translation @cite_3 , natural language generation @cite_20 and machine reading @cite_4 . Inspired by these successes, we introduce the hierarchical LSTM model to emoji classification to better understand multi-turn dialogues. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_1",
"@cite_3",
"@cite_20"
],
"mid": [
"2136848157",
"2515283120",
"2064675550",
"2949888546",
"2950752421"
],
"abstract": [
"Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive \"forget gate\" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way.",
"",
"Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O. 1. Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.",
"Natural language generation of coherent long texts like paragraphs or longer documents is a challenging problem for recurrent networks models. In this paper, we explore an important step toward this generation task: training an LSTM (Long-short term memory) auto-encoder to preserve and reconstruct multi-sentence paragraphs. We introduce an LSTM model that hierarchically builds an embedding for a paragraph from embeddings for sentences and words, then decodes this embedding to reconstruct the original paragraph. We evaluate the reconstructed paragraph using standard metrics like ROUGE and Entity Grid, showing that neural models are able to encode texts in a way that preserve syntactic, semantic, and discourse coherence. While only a first step toward generating coherent text units from neural models, our work has the potential to significantly impact natural language generation and summarization Code for the three models described in this paper can be found at www.stanford.edu jiweil ."
]
} |
1612.04598 | 30955769 | Usability is a key quality attribute of successful software systems. Unfortunately, there is no common understanding of the factors influencing usability and their interrelations. Hence, the lack of a comprehensive basis for designing, analyzing, and improving user interfaces. This paper proposes a 2-dimensional model of usability that associates system properties with the activities carried out by the user. By separating activities and properties, sound quality criteria can be identified, thus facilitating statements concerning their interdependencies. This model is based on a tested quality meta-model that fosters preciseness and completeness. A case study demonstrates the manner by which such a model aids in revealing contradictions and omissions in existing usability standards. Furthermore, the model serves as a central and structured knowledge base for the entire quality assurance process, e.g. the automatic generation of guideline documents. | There are approaches which aim to consolidate the different models. Seffah et al. @cite_22 applied the FCM model to the quality attribute usability. The developed model contains 10 factors which are subdivided into 26 criteria. For the measurement of the criteria, the model provides 127 metrics. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2153186534"
],
"abstract": [
"Usability is increasingly recognized as an important quality factor for interactive software systems, including traditional GUIs-style applications, Web sites, and the large variety of mobile and PDA interactive services. Unusable user interfaces are probably the single largest reasons why encompassing interactive systems --- computers plus people, fail in actual use. The design of this diversity of applications so that they actually achieve their intended purposes in term of ease of use is not an easy task. Although there are many individual methods for evaluating usability; they are not well integrated into a single conceptual framework that facilitate their usage by developers who are not trained in the filed of HCI. This is true in part because there are now several different standards (e.g., ISO 9241, ISO IEC 9126, IEEE Std.610.12) or conceptual models (e.g., Metrics for Usability Standards in Computing [MUSiC]) for usability, and not all of these standards or models describe the same operational definitions and measures. This paper first reviews existing usability standards and models while highlighted the limitations and complementarities of the various standards. It then explains how these various models can be unified into a single consolidated, hierarchical model of usability measurement. This consolidated model is called Quality in Use Integrated Measurement (QUIM). Included in the QUIM model are 10 factors each of which corresponds to a specific facet of usability that is identified in an existing standard or model. These 10 factors are decomposed into a total of 26 sub-factors or measurable criteria that are furtherdecomposed into 127 specific metrics. The paper explains also how a consolidated model, such as QUIM, can help in developing a usability measurement theory."
]
} |
1612.04598 | 30955769 | Usability is a key quality attribute of successful software systems. Unfortunately, there is no common understanding of the factors influencing usability and their interrelations. Hence, a comprehensive basis for designing, analyzing, and improving user interfaces is lacking. This paper proposes a 2-dimensional model of usability that associates system properties with the activities carried out by the user. By separating activities and properties, sound quality criteria can be identified, thus facilitating statements concerning their interdependencies. This model is based on a tested quality meta-model that fosters preciseness and completeness. A case study demonstrates the manner by which such a model aids in revealing contradictions and omissions in existing usability standards. Furthermore, the model serves as a central and structured knowledge base for the entire quality assurance process, e.g. the automatic generation of guideline documents. | The motivation behind this model is the high level of abstraction and the lack of aids for interpreting metrics in the existing hierarchical models. Put somewhat differently, the description of how metrics relate to high-level factors is missing. In addition, the relations between factors are not described in the existing models. Seffah et al. also criticize the difficulty of determining how factors relate to each other if a project uses different models. This complicates the selection of factors for defining high-level management goals. Therefore, in @cite_22 a consolidated model called Quality in Use Integrated Measurement (QUIM) is developed. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2153186534"
],
"abstract": [
"Usability is increasingly recognized as an important quality factor for interactive software systems, including traditional GUIs-style applications, Web sites, and the large variety of mobile and PDA interactive services. Unusable user interfaces are probably the single largest reasons why encompassing interactive systems --- computers plus people, fail in actual use. The design of this diversity of applications so that they actually achieve their intended purposes in term of ease of use is not an easy task. Although there are many individual methods for evaluating usability; they are not well integrated into a single conceptual framework that facilitate their usage by developers who are not trained in the filed of HCI. This is true in part because there are now several different standards (e.g., ISO 9241, ISO IEC 9126, IEEE Std.610.12) or conceptual models (e.g., Metrics for Usability Standards in Computing [MUSiC]) for usability, and not all of these standards or models describe the same operational definitions and measures. This paper first reviews existing usability standards and models while highlighted the limitations and complementarities of the various standards. It then explains how these various models can be unified into a single consolidated, hierarchical model of usability measurement. This consolidated model is called Quality in Use Integrated Measurement (QUIM). Included in the QUIM model are 10 factors each of which corresponds to a specific facet of usability that is identified in an existing standard or model. These 10 factors are decomposed into a total of 26 sub-factors or measurable criteria that are furtherdecomposed into 127 specific metrics. The paper explains also how a consolidated model, such as QUIM, can help in developing a usability measurement theory."
]
} |
1612.04598 | 30955769 | Usability is a key quality attribute of successful software systems. Unfortunately, there is no common understanding of the factors influencing usability and their interrelations. Hence, a comprehensive basis for designing, analyzing, and improving user interfaces is lacking. This paper proposes a 2-dimensional model of usability that associates system properties with the activities carried out by the user. By separating activities and properties, sound quality criteria can be identified, thus facilitating statements concerning their interdependencies. This model is based on a tested quality meta-model that fosters preciseness and completeness. A case study demonstrates the manner by which such a model aids in revealing contradictions and omissions in existing usability standards. Furthermore, the model serves as a central and structured knowledge base for the entire quality assurance process, e.g. the automatic generation of guideline documents. | Additionally, most existing quality models fail to give a detailed account of the impact that specific criteria (or metrics) have on the user interaction. Again, the ISO standard cited above is a good example of this problem, since it does not provide any explanation of the presented metrics. Although consolidated models advance on this by providing a more detailed presentation of the relations between criteria and factors, they still lack the desired degree of detail. An example is the relationship between the criterion and the factor presented in @cite_22 . Although these two items are certainly related, the precise nature of the relation is unclear. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2153186534"
],
"abstract": [
"Usability is increasingly recognized as an important quality factor for interactive software systems, including traditional GUIs-style applications, Web sites, and the large variety of mobile and PDA interactive services. Unusable user interfaces are probably the single largest reasons why encompassing interactive systems --- computers plus people, fail in actual use. The design of this diversity of applications so that they actually achieve their intended purposes in term of ease of use is not an easy task. Although there are many individual methods for evaluating usability; they are not well integrated into a single conceptual framework that facilitate their usage by developers who are not trained in the filed of HCI. This is true in part because there are now several different standards (e.g., ISO 9241, ISO IEC 9126, IEEE Std.610.12) or conceptual models (e.g., Metrics for Usability Standards in Computing [MUSiC]) for usability, and not all of these standards or models describe the same operational definitions and measures. This paper first reviews existing usability standards and models while highlighted the limitations and complementarities of the various standards. It then explains how these various models can be unified into a single consolidated, hierarchical model of usability measurement. This consolidated model is called Quality in Use Integrated Measurement (QUIM). Included in the QUIM model are 10 factors each of which corresponds to a specific facet of usability that is identified in an existing standard or model. These 10 factors are decomposed into a total of 26 sub-factors or measurable criteria that are furtherdecomposed into 127 specific metrics. The paper explains also how a consolidated model, such as QUIM, can help in developing a usability measurement theory."
]
} |
1612.04598 | 30955769 | Usability is a key quality attribute of successful software systems. Unfortunately, there is no common understanding of the factors influencing usability and their interrelations. Hence, a comprehensive basis for designing, analyzing, and improving user interfaces is lacking. This paper proposes a 2-dimensional model of usability that associates system properties with the activities carried out by the user. By separating activities and properties, sound quality criteria can be identified, thus facilitating statements concerning their interdependencies. This model is based on a tested quality meta-model that fosters preciseness and completeness. A case study demonstrates the manner by which such a model aids in revealing contradictions and omissions in existing usability standards. Furthermore, the model serves as a central and structured knowledge base for the entire quality assurance process, e.g. the automatic generation of guideline documents. | Due to a lack of clear separation of the different aspects of quality, most existing models exhibit inhomogeneous sets of quality criteria. An example is the set of criteria presented in @cite_22 as it mixes attributes like with mechanisms like and principles like . | {
"cite_N": [
"@cite_22"
],
"mid": [
"2153186534"
],
"abstract": [
"Usability is increasingly recognized as an important quality factor for interactive software systems, including traditional GUIs-style applications, Web sites, and the large variety of mobile and PDA interactive services. Unusable user interfaces are probably the single largest reasons why encompassing interactive systems --- computers plus people, fail in actual use. The design of this diversity of applications so that they actually achieve their intended purposes in term of ease of use is not an easy task. Although there are many individual methods for evaluating usability; they are not well integrated into a single conceptual framework that facilitate their usage by developers who are not trained in the filed of HCI. This is true in part because there are now several different standards (e.g., ISO 9241, ISO IEC 9126, IEEE Std.610.12) or conceptual models (e.g., Metrics for Usability Standards in Computing [MUSiC]) for usability, and not all of these standards or models describe the same operational definitions and measures. This paper first reviews existing usability standards and models while highlighted the limitations and complementarities of the various standards. It then explains how these various models can be unified into a single consolidated, hierarchical model of usability measurement. This consolidated model is called Quality in Use Integrated Measurement (QUIM). Included in the QUIM model are 10 factors each of which corresponds to a specific facet of usability that is identified in an existing standard or model. These 10 factors are decomposed into a total of 26 sub-factors or measurable criteria that are furtherdecomposed into 127 specific metrics. The paper explains also how a consolidated model, such as QUIM, can help in developing a usability measurement theory."
]
} |