| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1905.03820 | 2951270715 | We devise a cascade GAN approach to generate talking face video, which is robust to different face shapes, view angles, facial characteristics, and noisy audio conditions. Instead of learning a direct mapping from audio to video frames, we propose first to transfer audio to high-level structure, i.e., the facial landmarks, and then to generate video frames conditioned on the landmarks. Compared to a direct audio-to-image approach, our cascade approach avoids fitting spurious correlations between audiovisual signals that are irrelevant to the speech content. We, humans, are sensitive to temporal discontinuities and subtle artifacts in video. To avoid those pixel jittering problems and to enforce the network to focus on audiovisual-correlated regions, we propose a novel dynamically adjustable pixel-wise loss with an attention mechanism. Furthermore, to generate a sharper image with well-synchronized facial movements, we propose a novel regression-based discriminator structure, which considers sequence-level information along with frame-level information. Thoughtful experiments on several datasets and real-world samples demonstrate significantly better results obtained by our method than the state-of-the-art methods in both quantitative and qualitative comparisons. | The success of traditional approaches has been mainly limited to synthesizing a talking face from speech audio of a specific person @cite_18 @cite_14 @cite_37 . For example, @cite_37 synthesized a talking face of President Obama with accurate lip synchronization, given his speech audio. The mechanism is to first retrieve the best-matched lip region image from a database through audiovisual feature correlation and then compose the retrieved lip region with the original face. However, this method requires a large amount of video footage of the target person. 
More recently, by combining a GAN encoder-decoder structure with a data-driven training strategy, methods such as @cite_25 @cite_12 @cite_34 @cite_11 can generate arbitrary faces from arbitrary input audio. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_37",
"@cite_34",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"2146991130",
"1569907127",
"2738406145",
"2963081548",
"2796931171",
"2594690981",
"2883082281"
],
"abstract": [
"In many countries, foreign movies and TV productions are dubbed, i.e., the original voice of an actor is replaced with a translation that is spoken by a dubbing actor in the country's own language. Dubbing is a complex process that requires specific translations and accurately timed recitations such that the new audio at least coarsely adheres to the mouth motion in the video. However, since the sequence of phonemes and visemes in the original and the dubbing language are different, the video-to-audio match is never perfect, which is a major source of visual discomfort. In this paper, we propose a system to alter the mouth motion of an actor in a video, so that it matches the new audio track. Our paper builds on high-quality monocular capture of 3D facial performance, lighting and albedo of the dubbing and target actors, and uses audio analysis in combination with a space-time retrieval method to synthesize a new photo-realistically rendered and highly detailed 3D shape model of the mouth region to replace the target performance. We demonstrate plausible visual quality of our results compared to footage that has been professionally dubbed in the traditional way, both qualitatively and through a user study.",
"Long short-term memory (LSTM) is a specific recurrent neural network (RNN) architecture that is designed to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we propose to use deep bidirectional LSTM (BLSTM) for audio visual modeling in our photo-real talking head system. An audio visual database of a subject's talking is firstly recorded as our training data. The audio visual stereo data are converted into two parallel temporal sequences, i.e., contextual label sequences obtained by forced aligning audio against text, and visual feature sequences by applying active-appearance-model (AAM) on the lower face region among all the training image samples. The deep BLSTM is then trained to learn the regression model by minimizing the sum of square error (SSE) of predicting visual sequence from label sequence. After testing different network topologies, we interestingly found the best network is two BLSTM layers sitting on top of one feed-forward layer on our datasets. Compared with our previous HMM-based system, the newly proposed deep BLSTM-based one is better on both objective measurement and subjective A B test.",
"Given audio of President Barack Obama, we synthesize a high quality video of him speaking with accurate lip sync, composited into a target video clip. Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes. Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what he appears to be saying in a target video to match the input audio track. Our approach produces photorealistic results.",
"In this paper, we consider the task: given an arbitrary audio speech and one lip image of arbitrary target identity, generate synthesized lip movements of the target identity saying the speech. To perform well, a model needs to not only consider the retention of target identity, photo-realistic of synthesized images, consistency and smoothness of lip images in a sequence, but more importantly, learn the correlations between audio speech and lip movements. To solve the collective problems, we devise a network to synthesize lip movements and propose a novel correlation loss to synchronize lip changes and speech changes. Our full model utilizes four losses for a comprehensive consideration; it is trained end-to-end and is robust to lip shapes, view angles and different facial characteristics. Thoughtful experiments on three datasets ranging from lab-recorded to lips in-the-wild show that our model significantly outperforms other state-of-the-art methods extended to this task.",
"Given an arbitrary face image and an arbitrary speech clip, the proposed work attempts to generating the talking face video with accurate lip synchronization while maintaining smooth transition of both lip and facial movement over the entire video clip. Existing works either do not consider temporal dependency on face images across different video frames thus easily yielding noticeable abrupt facial and lip movement or are only limited to the generation of talking face video for a specific person thus lacking generalization capacity. We propose a novel conditional video generation network where the audio input is treated as a condition for the recurrent adversarial network such that temporal dependency is incorporated to realize smooth transition for the lip and facial movement. In addition, we deploy a multi-task adversarial training scheme in the context of video generation to improve both photo-realism and the accuracy for lip synchronization. Finally, based on the phoneme distribution information extracted from the audio clip, we develop a sample selection method that effectively reduces the size of the training dataset without sacrificing the quality of the generated video. Extensive experiments on both controlled and uncontrolled datasets demonstrate the superiority of the proposed approach in terms of visual quality, lip sync accuracy, and smooth transition of lip and facial movement, as compared to the state-of-the-art.",
"Our aim is to recognise the words being spoken by a talking face, given only the video but not the audio. Existing works in this area have focussed on trying to recognise a small number of utterances in controlled environments (e.g. digits and alphabets), partially due to the shortage of suitable datasets.",
"Talking face generation aims to synthesize a sequence of face images that correspond to a clip of speech. This is a challenging task because face appearance variation and semantics of speech are coupled together in the subtle movements of the talking face regions. Existing works either construct specific face appearance model on specific subjects or model the transformation between lip motion and speech. In this work, we integrate both aspects and enable arbitrary-subject talking face generation by learning disentangled audio-visual representation. We find that the talking face sequence is actually a composition of both subject-related information and speech-related information. These two spaces are then explicitly disentangled through a novel associative-and-adversarial training process. This disentangled representation has an advantage where both audio and video can serve as inputs for generation. Extensive experiments show that the proposed approach generates realistic talking face sequences on arbitrary subjects with much clearer lip motion patterns than previous work. We also demonstrate the learned audio-visual representation is extremely useful for the tasks of automatic lip reading and audio-video retrieval."
]
} |
1905.03820 | 2951270715 | We devise a cascade GAN approach to generate talking face video, which is robust to different face shapes, view angles, facial characteristics, and noisy audio conditions. Instead of learning a direct mapping from audio to video frames, we propose first to transfer audio to high-level structure, i.e., the facial landmarks, and then to generate video frames conditioned on the landmarks. Compared to a direct audio-to-image approach, our cascade approach avoids fitting spurious correlations between audiovisual signals that are irrelevant to the speech content. We, humans, are sensitive to temporal discontinuities and subtle artifacts in video. To avoid those pixel jittering problems and to enforce the network to focus on audiovisual-correlated regions, we propose a novel dynamically adjustable pixel-wise loss with an attention mechanism. Furthermore, to generate a sharper image with well-synchronized facial movements, we propose a novel regression-based discriminator structure, which considers sequence-level information along with frame-level information. Thoughtful experiments on several datasets and real-world samples demonstrate significantly better results obtained by our method than the state-of-the-art methods in both quantitative and qualitative comparisons. | The attention mechanism is an emerging topic in natural language tasks @cite_15 and image/video generation tasks @cite_7 @cite_17 @cite_16 @cite_30 . @cite_7 generated facial expressions conditioned on action unit annotations. Instead of using a basic GAN structure, they exploited a generator that regresses an attention mask and an RGB color transformation over the entire image. The attention mask defines a per-pixel intensity specifying to what extent each pixel of the original image will contribute to the final rendered image. We adopt this attention mechanism to make our network robust to visual variations and noisy audio conditions. 
@cite_3 observed that integrating a weighted mask into the loss function during training can improve the performance of the reconstruction network. Based on this observation, rather than using fixed loss weights, we propose a dynamically adjustable loss that leverages the attention mechanism to emphasize the audiovisual-correlated regions. | {
"cite_N": [
"@cite_30",
"@cite_7",
"@cite_3",
"@cite_15",
"@cite_16",
"@cite_17"
],
"mid": [
"2963966654",
"2883861033",
"2963342110",
"2949335953",
"",
"2950893734"
],
"abstract": [
"In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details at different sub-regions of the image by paying attentions to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14% on the CUB dataset and 170.25% on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. It for the first time shows that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image.",
"Recent advances in Generative Adversarial Networks (GANs) have shown impressive results for task of facial expression synthesis. The most successful architecture is StarGAN, that conditions GANs’ generation process with images of a specific domain, namely a set of images of persons sharing the same expression. While effective, this approach can only generate a discrete number of expressions, determined by the content of the dataset. To address this limitation, in this paper, we introduce a novel GAN conditioning scheme based on Action Units (AU) annotations, which describes in a continuous manifold the anatomical facial movements defining a human expression. Our approach allows controlling the magnitude of activation of each AU and combine several of them. Additionally, we propose a fully unsupervised strategy to train the model, that only requires images annotated with their activated AUs, and exploit attention mechanisms that make our network robust to changing backgrounds and lighting conditions. Extensive evaluation show that our approach goes beyond competing conditional generators both in the capability to synthesize a much wider range of expressions ruled by anatomically feasible muscle movements, as in the capacity of dealing with images in the wild.",
"We propose a straightforward method that simultaneously reconstructs the 3D facial structure and provides dense alignment. To achieve this, we design a 2D representation called UV position map which records the 3D shape of a complete face in UV space, then train a simple Convolutional Neural Network to regress it from a single 2D image. We also integrate a weight mask into the loss function during training to improve the performance of the network. Our method does not rely on any prior face model, and can reconstruct full facial geometry along with semantic meaning. Meanwhile, our network is very light-weighted and spends only 9.8 ms to process an image, which is extremely faster than previous works. Experiments on multiple challenging datasets show that our method surpasses other state-of-the-art methods on both reconstruction and alignment tasks by a large margin. Code is available at https://github.com/YadiraF/PRNet.",
"An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.",
"",
"In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape."
]
} |
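The per-pixel attention blending described above can be sketched in a few lines. This is a minimal illustration on nested-list grayscale images, not the cited papers' implementation; the function and variable names are hypothetical:

```python
def blend_with_attention(original, color, attention):
    """Compose the rendered image from a regressed color image and a
    per-pixel attention mask: where attention is close to 1 the original
    pixel is kept, where it is close to 0 the regressed color pixel is used."""
    return [
        [a * o + (1.0 - a) * c for o, c, a in zip(row_o, row_c, row_a)]
        for row_o, row_c, row_a in zip(original, color, attention)
    ]

# A 1x2 grayscale "image": the first pixel is fully preserved,
# the second is fully replaced by the regressed color value.
rendered = blend_with_attention(
    original=[[0.8, 0.8]],
    color=[[0.1, 0.1]],
    attention=[[1.0, 0.0]],
)
print(rendered)  # [[0.8, 0.1]]
```

In the GANimation-style setup the generator outputs both `color` and `attention`, so the network only needs to synthesize pixels in regions that actually move with the audio.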
1905.03670 | 2946856970 | This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that our approach and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels. | The standard protocol for evaluating semi-supervised learning algorithms works as such: (1) Start with a standard labeled dataset; (2) Keep only a portion of the labels (say, 10%) on that dataset; (3) Treat the rest as unlabeled data. While this approach may not reflect realistic settings for semi-supervised learning @cite_13 , it remains the standard evaluation protocol, which we follow in this work. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2963956526"
],
"abstract": [
"Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that these algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, that SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-class examples. To help guide SSL research towards real-world applicability, we make our unified reimplemention and evaluation platform publicly available."
]
} |
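The three-step protocol above can be sketched as a label-discarding split. This is a hypothetical helper (names and the fixed seed are illustrative, not from the paper):

```python
import random

def make_semi_supervised_split(examples, labels, keep_fraction=0.1, seed=0):
    """Keep labels for only `keep_fraction` of the dataset and treat the
    remaining examples as unlabeled, per the standard evaluation protocol."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    indices = list(range(len(examples)))
    rng.shuffle(indices)
    n_labeled = int(len(examples) * keep_fraction)
    labeled = [(examples[i], labels[i]) for i in indices[:n_labeled]]
    unlabeled = [examples[i] for i in indices[n_labeled:]]
    return labeled, unlabeled

labeled, unlabeled = make_semi_supervised_split(list(range(100)), list(range(100)))
print(len(labeled), len(unlabeled))  # 10 90
```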
1905.03670 | 2946856970 | This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that our approach and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels. | Many of the initial results on semi-supervised learning with deep neural networks were based on generative models such as denoising autoencoders @cite_30 , variational autoencoders @cite_0 and generative adversarial networks @cite_37 @cite_18 . More recently, a line of research showed improved results on standard baselines by adding consistency regularization losses computed on unlabeled data. These consistency regularization losses measure discrepancy between predictions made on perturbed unlabeled data points. Additional improvements have been shown by smoothing predictions before measuring these perturbations. Approaches of this kind include @math -Model @cite_40 , Temporal Ensembling @cite_40 , Mean Teacher @cite_23 and Virtual Adversarial Training @cite_25 . Recently, fast-SWA @cite_5 showed improved results by training with cyclic learning rates and measuring discrepancy with an ensemble of predictions from multiple checkpoints. By minimizing consistency losses, these models implicitly push the decision boundary away from high-density parts of the unlabeled data. This may explain their success on typical image classification datasets, where points in each cluster typically share the same class. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_18",
"@cite_0",
"@cite_40",
"@cite_23",
"@cite_5",
"@cite_25"
],
"mid": [
"830076066",
"2412510955",
"2432004435",
"2949416428",
"2921087533",
"2592691248",
"2909869271",
"2606711863"
],
"abstract": [
"We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on top of the Ladder network proposed by Valpola [1] which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification in addition to permutation-invariant MNIST classification with all labels.",
"We extend Generative Adversarial Networks (GANs) to the semi-supervised context by forcing the discriminator network to output class labels. We train a generative model G and a discriminator D on a dataset with inputs belonging to one of N classes. At training time, D is made to predict which of N+1 classes the input belongs to, where an extra class is added to correspond to the outputs of G. We show that this method can be used to create a more data-efficient classifier and that it allows for generating higher quality samples than a regular GAN.",
"We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3 . We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.",
"The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation from small labelled data sets to large unlabelled ones. Generative approaches have thus far been either inflexible, inefficient or non-scalable. We show that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.",
"We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets.",
"The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%.",
"",
"We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only \"virtually\" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10."
]
} |
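The consistency-regularization idea above can be sketched as a discrepancy between predictions on two independently perturbed views of an unlabeled input (Π-model style). The model, perturbation, and loss shape here are hypothetical stand-ins, not any cited method's exact formulation:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def consistency_loss(model, perturb, x):
    """Mean squared discrepancy between class predictions made on two
    independently perturbed views of the same unlabeled example."""
    p1 = softmax(model(perturb(x)))
    p2 = softmax(model(perturb(x)))
    return sum((a - b) ** 2 for a, b in zip(p1, p2)) / len(p1)

# With no perturbation and a deterministic model the views agree,
# so the consistency loss is exactly zero.
toy_model = lambda x: [x, -x]
identity = lambda x: x
print(consistency_loss(toy_model, identity, 1.0))  # 0.0
```

Minimizing this term on unlabeled data is what pushes the decision boundary away from high-density regions: any boundary crossing a cluster makes perturbed views of nearby points disagree.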
1905.03670 | 2946856970 | This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that our approach and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels. | Two additional important approaches for semi-supervised learning, which have shown success both in the context of deep neural networks and other types of models, are Pseudo-Labeling @cite_41 , where one imputes approximate classes on unlabeled data by making predictions from a model trained only on labeled data, and conditional entropy minimization @cite_42 , where all unlabeled examples are encouraged to make confident predictions on some class. | {
"cite_N": [
"@cite_41",
"@cite_42"
],
"mid": [
"2903787679",
"2145494108"
],
"abstract": [
"We study object recognition under the constraint that each object class is only represented by very few observations. Semi-supervised learning, transfer learning, and few-shot recognition all concern with achieving fast generalization with few labeled data. In this paper, we propose a generic framework that utilizes unlabeled data to aid generalization for all three tasks. Our approach is to create much more training data through label propagation from the few labeled examples to a vast collection of unannotated images. The main contribution of the paper is that we show such a label propagation scheme can be highly effective when the similarity metric used for propagation is transferred from other related domains. We test various combinations of supervised and unsupervised metric learning methods with various label propagation algorithms. We find that our framework is very generic without being sensitive to any specific techniques. By taking advantage of unlabeled data in this way, we achieve significant improvements on all three tasks.",
"We consider the semi-supervised learning problem, where a decision rule is to be learned from labeled and unlabeled data. In this framework, we motivate minimum entropy regularization, which enables to incorporate unlabeled data in the standard supervised learning. Our approach includes other approaches to the semi-supervised problem as particular or limiting cases. A series of experiments illustrates that the proposed solution benefits from unlabeled data. The method challenges mixture models when the data are sampled from the distribution class spanned by the generative model. The performances are definitely in favor of minimum entropy regularization when generative models are misspecified, and the weighting of unlabeled data provides robustness to the violation of the \"cluster assumption\". Finally, we also illustrate that the method can also be far superior to manifold learning in high dimension spaces."
]
} |
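The two approaches above can be sketched in a few lines (hypothetical helpers, not code from the cited works): pseudo-labeling imputes a hard class only when the model is confident, and conditional entropy minimization penalizes uncertain predictions on unlabeled data.

```python
import math

def pseudo_label(probs, threshold=0.95):
    """Impute a hard label if the most confident class clears the
    threshold; otherwise return None and leave the example unlabeled."""
    confidence = max(probs)
    return probs.index(confidence) if confidence >= threshold else None

def conditional_entropy(probs):
    """Entropy of a predicted class distribution; minimizing it on
    unlabeled data pushes predictions toward one confident class."""
    return sum(-p * math.log(p) for p in probs if p > 0.0)

print(pseudo_label([0.98, 0.02]))       # 0
print(pseudo_label([0.60, 0.40]))       # None
print(conditional_entropy([1.0, 0.0]))  # 0.0 (fully confident prediction)
```

The confidence threshold is an illustrative choice; in practice it is a tuned hyperparameter, and entropy minimization is usually added as a weighted term to the supervised loss.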
1905.03670 | 2946856970 | This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that our approach and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels. | Semi-supervised learning algorithms are typically @cite_13 @cite_25 @cite_20 @cite_16 @cite_1 @cite_14 evaluated on small-scale datasets such as CIFAR-10 @cite_33 and SVHN @cite_19 . We are aware of very few examples in the literature where semi-supervised learning algorithms are evaluated on larger, more challenging datasets such as ILSVRC-2012 @cite_35 . To our knowledge, Mean Teacher @cite_23 currently holds the state-of-the-art result on ILSVRC-2012 when using only 10% of the labels. Recent concurrent work @cite_4 @cite_26 presents competitive results on ILSVRC-2012. | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_4",
"@cite_33",
"@cite_26",
"@cite_1",
"@cite_19",
"@cite_23",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_20"
],
"mid": [
"2117539524",
"",
"2942203175",
"",
"",
"",
"2335728318",
"2592691248",
"",
"2963956526",
"2606711863",
"2943865428"
],
"abstract": [
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.",
"",
"",
"",
"",
"",
"Detecting and reading text from natural images is a hard computer vision task that is central to a variety of emerging applications. Related problems like document character recognition have been widely studied by computer vision and machine learning researchers and are virtually solved for practical applications like reading handwritten digits. Reliably recognizing characters in more complex scenes like photographs, however, is far more difficult: the best existing methods lag well behind human performance on the same tasks. In this paper we attack the problem of recognizing digits in a real application using unsupervised feature learning methods: reading house numbers from street level photos. To this end, we introduce a new benchmark dataset for research use containing over 600,000 labeled digits cropped from Street View images. We then demonstrate the difficulty of recognizing these digits when the problem is approached with hand-designed features. Finally, we employ variants of two recently proposed unsupervised feature learning methods and find that they are convincingly superior on our benchmarks.",
"The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%.",
"",
"Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that these algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, that SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-class examples. To help guide SSL research towards real-world applicability, we make our unified reimplementation and evaluation platform publicly available.",
"We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only \"virtually\" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.",
"Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp. We show that MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38% to 11%) and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success."
]
} |
1905.03670 | 2946856970 | This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that our approach and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels. | propose to train a CNN model that predicts relative location of two randomly sampled non-overlapping image patches @cite_27 . Follow-up papers @cite_11 @cite_31 generalize this idea for predicting a permutation of multiple randomly sampled and permuted patches. | {
"cite_N": [
"@cite_27",
"@cite_31",
"@cite_11"
],
"mid": [
"343636949",
"2963465221",
"2321533354"
],
"abstract": [
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework [19] and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"In self-supervised learning, one trains a model to solve a so-called pretext task on a dataset without the need for human annotation. The main objective, however, is to transfer this model to a target domain and task. Currently, the most effective transfer strategy is fine-tuning, which restricts one to use the same model or parts thereof for both pretext and target tasks. In this paper, we present a novel framework for self-supervised learning that overcomes limitations in designing and comparing different tasks, models, and data domains. In particular, our framework decouples the structure of the self-supervised model from the final task-specific fine-tuned model. This allows us to: 1) quantitatively assess previously incompatible models including handcrafted features; 2) show that deeper neural network models can learn better representations from the same pretext task; 3) transfer knowledge learned with a deep model to a shallower one and thus boost its learning. We use this framework to design a novel self-supervised task, which achieves state-of-the-art performance on the common benchmarks in PASCAL VOC 2007, ILSVRC12 and Places by a significant margin. Our learned features shrink the mAP gap between models trained via self-supervised learning and supervised learning from 5.9% to 2.6% in object detection on PASCAL VOC 2007.",
"We propose a novel unsupervised learning approach to build features suitable for object detection and classification. The features are pre-trained on a large dataset without human annotation and later transferred via fine-tuning on a different, smaller and labeled dataset. The pre-training consists of solving jigsaw puzzles of natural images. To facilitate the transfer of features to other tasks, we introduce the context-free network (CFN), a siamese-ennead convolutional neural network. The features correspond to the columns of the CFN and they process image tiles independently (i.e., free of context). The later layers of the CFN then use the features to identify their geometric arrangement. Our experimental evaluations show that the learned features capture semantically relevant content. We pre-train the CFN on the training set of the ILSVRC2012 dataset and transfer the features on the combined training and validation set of Pascal VOC 2007 for object detection (via fast RCNN) and classification. These features outperform all current unsupervised features with 51.8% for detection and 68.6% for classification, and reduce the gap with supervised learning (56.5% and 78.2% respectively)."
]
} |
1905.03670 | 2946856970 | This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that our approach and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels. | Beside the above patch-based methods, there are self-supervised techniques that employ image-level losses. Among those, in @cite_15 the authors propose to use grayscale image colorization as a pretext task. Another example is a pretext task @cite_9 that predicts an angle of the rotation transformation that was applied to an input image. | {
"cite_N": [
"@cite_9",
"@cite_15"
],
"mid": [
"2962742544",
"2326925005"
],
"abstract": [
"Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4% that is only 2.4 points lower from the supervised case. We get similar striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification.",
"Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32% of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks."
]
} |
1905.03670 | 2946856970 | This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that our approach and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels. | Some techniques go beyond solving surrogate classification tasks and enforce constraints on the representation space. A prominent example is the loss from @cite_28 that encourages the model to learn a representation that is invariant to heavy image augmentations. Another example is @cite_8 , that enforces additivity constraint on visual representation: the sum of representations of all image patches should be close to representation of the whole image. Finally, @cite_36 proposes a learning procedure that alternates between clustering images in the representation space and learning a model that assigns images to their clusters. | {
"cite_N": [
"@cite_28",
"@cite_36",
"@cite_8"
],
"mid": [
"2148349024",
"2883725317",
"2750549109"
],
"abstract": [
"Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101).",
"Clustering is a class of unsupervised learning methods that has been extensively applied and studied in computer vision. Little work has been done to adapt it to the end-to-end training of visual features on large-scale datasets. In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. DeepCluster iteratively groups the features with a standard clustering algorithm, k-means, and uses the subsequent assignments as supervision to update the weights of the network. We apply DeepCluster to the unsupervised training of convolutional neural networks on large datasets like ImageNet and YFCC100M. The resulting model outperforms the current state of the art by a significant margin on all the standard benchmarks.",
"We introduce a novel method for representation learning that uses an artificial supervision signal based on counting visual primitives. This supervision signal is obtained from an equivariance relation, which does not require any manual annotation. We relate transformations of images to transformations of the representations. More specifically, we look for the representation that satisfies such relation rather than the transformations that match a given representation. In this paper, we use two image transformations in the context of counting: scaling and tiling. The first transformation exploits the fact that the number of visual primitives should be invariant to scale. The second transformation allows us to equate the total number of visual primitives in each tile to that in the whole image. These two transformations are combined in one constraint and used to train a neural network with a contrastive loss. The proposed task produces representations that perform on par or exceed the state of the art in transfer learning benchmarks."
]
} |
1905.03711 | 2952979850 | Existing deep architectures cannot operate on very large signals such as megapixel images due to computational and memory constraints. To tackle this limitation, we propose a fully differentiable end-to-end trainable model that samples and processes only a fraction of the full resolution input image. The locations to process are sampled from an attention distribution computed from a low resolution view of the input. We refer to our method as attention sampling and it can process images of several megapixels with a standard single GPU setup. We show that sampling from the attention distribution results in an unbiased estimator of the full model with minimal variance, and we derive an unbiased estimator of the gradient that we use to train our model end-to-end with a normal SGD procedure. This new method is evaluated on three classification tasks, where we show that it allows to reduce computation and memory footprint by an order of magnitude for the same accuracy as classical architectures. We also show the consistency of the sampling that indeed focuses on informative parts of the input images. | This line of work includes models that learn to extract a sequence of regions from the original high resolution image and only process these at high resolution. The regions are processed in a sequential manner, namely the distribution to sample the @math -th region depends on the previous @math regions. were the first to employ a recurrent neural network to predict regions of interest on the high resolution image and process them sequentially. In order to train their model, which is not differentiable, they use reinforcement learning. In parallel, proposed to additionally downsample the input image and use it to provide spatial context to the recurrent network. improved upon the previous works by using variational inference and Spatial Transformer Networks @cite_17 to solve the same optimization problem. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2951005624"
],
"abstract": [
"Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations."
]
} |
1905.03704 | 2943909635 | Lane detection is an important yet challenging task in autonomous driving, which is affected by many factors, e.g., light conditions, occlusions caused by other vehicles, irrelevant markings on the road and the inherent long and thin property of lanes. Conventional methods typically treat lane detection as a semantic segmentation task, which assigns a class label to each pixel of the image. This formulation heavily depends on the assumption that the number of lanes is pre-defined and fixed and no lane changing occurs, which does not always hold. To make the lane detection model applicable to an arbitrary number of lanes and lane changing scenarios, we adopt an instance segmentation approach, which first differentiates lanes and background and then classify each lane pixel into each lane instance. Besides, a multi-task learning paradigm is utilized to better exploit the structural information and the feature pyramid architecture is used to detect extremely thin lanes. Three popular lane detection benchmarks, i.e., TuSimple, CULane and BDD100K, are used to validate the effectiveness of our proposed algorithm. | To overcome these shortcomings, we follow @cite_13 and model lane detection as an instance segmentation task. More specifically, the lane detection task is divided into two sub-tasks. The first sub-task is generating a binary segmentation map which differentiates lanes and the background. The second sub-task is classifying each lane pixel into a lane instance. The light-weight network, i.e., ENet @cite_16 is used as our backbone to achieve real-time performance. What's more, to utilize the structural and contextual information, we adopt a multi-task learning paradigm in which drivable area detection and lane point regression are incorporated into the original lane detection model. Moreover, the feature pyramid architecture is utilized to detect extremely thin lanes. | {
"cite_N": [
"@cite_16",
"@cite_13"
],
"mid": [
"2419448466",
"2785872028"
],
"abstract": [
"The ability to perform pixel-wise semantic segmentation in real-time is of paramount importance in practical mobile applications. Recent deep neural networks aimed at this task have the disadvantage of requiring a large number of floating point operations and have long run-times that hinder their usability. In this paper, we propose a novel deep neural network architecture named ENet (efficient neural network), created specifically for tasks requiring low latency operation. ENet is up to 18x faster, requires 75x less FLOPs, has 79x less parameters, and provides similar or better accuracy to existing models. We have tested it on CamVid, Cityscapes and SUN datasets and report on comparisons with existing state-of-the-art methods, and the trade-offs between accuracy and processing time of a network. We present performance measurements of the proposed architecture on embedded systems and suggest possible software improvements that could make ENet even faster.",
"Modern cars are incorporating an increasing number of driver assist features, among which automatic lane keeping. The latter allows the car to properly position itself within the road lanes, which is also crucial for any subsequent lane departure or trajectory planning decision in fully autonomous cars. Traditional lane detection methods rely on a combination of highly-specialized, hand-crafted features and heuristics, usually followed by post-processing techniques, that are computationally expensive and prone to scalability due to road scene variations. More recent approaches leverage deep learning models, trained for pixel-wise lane segmentation, even when no markings are present in the image due to their big receptive field. Despite their advantages, these methods are limited to detecting a pre-defined, fixed number of lanes, e.g. ego-lanes, and can not cope with lane changes. In this paper, we go beyond the aforementioned limitations and propose to cast the lane detection problem as an instance segmentation problem - in which each lane forms its own instance - that can be trained end-to-end. To parametrize the segmented lane instances before fitting the lane, we further propose to apply a learned perspective transformation, conditioned on the image, in contrast to a fixed \"bird's-eye view\" transformation. By doing so, we ensure a lane fitting which is robust against road plane changes, unlike existing approaches that rely on a fixed, pre-defined transformation. In summary, we propose a fast lane detection algorithm, running at 50 fps, which can handle a variable number of lanes and cope with lane changes. We verify our method on the tuSimple dataset and achieve competitive results."
]
} |
1905.03375 | 2912745432 | Combining simple elements from the literature, we define a linear model that is geared toward sparse data, in particular implicit feedback data for recommender systems. We show that its training objective has a closed-form solution, and discuss the resulting conceptual insights. Surprisingly, this simple model achieves better ranking accuracy than various state-of-the-art collaborative-filtering approaches, including deep non-linear models, on most of the publicly available data-sets used in our experiments. | While the area of collaborative filtering has long been dominated by matrix factorization approaches, recent years have witnessed a surge in deep learning approaches @cite_23 @cite_2 @cite_17 @cite_7 @cite_13 @cite_25 @cite_12 @cite_22 , spurred by their great successes in other fields. Autoencoders provide the model architecture that fits exactly the (plain-vanilla) collaborative filtering problem. While various network architectures have been explored, it was found that deep models with a large number of hidden layers typically do not obtain a notable improvement in ranking accuracy in collaborative filtering, compared to 'deep' models with only one, two or three hidden layers, e.g., @cite_2 @cite_17 @cite_13 @cite_28 , which is in stark contrast to other areas, like computer vision. A combination of deep and shallow elements in a single model was proposed in @cite_26 . | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_7",
"@cite_28",
"@cite_23",
"@cite_2",
"@cite_13",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"2475334473",
"",
"2253995343",
"",
"2963085847",
"1720514416",
"2964273061",
"",
"",
""
],
"abstract": [
"Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations are effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks can generalize better to unseen feature combinations through low-dimensional dense embeddings learned for the sparse features. However, deep neural networks with embeddings can over-generalize and recommend less relevant items when the user-item interactions are sparse and high-rank. In this paper, we present Wide & Deep learning---jointly trained wide linear models and deep neural networks---to combine the benefits of memorization and generalization for recommender systems. We productionized and evaluated the system on Google Play, a commercial mobile app store with over one billion active users and over one million apps. Online experiment results show that Wide & Deep significantly increased app acquisitions compared with wide-only and deep-only models. We have also open-sourced our implementation in TensorFlow.",
"",
"Most real-world recommender services measure their performance based on the top-N results shown to the end users. Thus, advances in top-N recommendation have far-ranging consequences in practical applications. In this paper, we present a novel method, called Collaborative Denoising Auto-Encoder (CDAE), for top-N recommendation that utilizes the idea of Denoising Auto-Encoders. We demonstrate that the proposed model is a generalization of several well-known collaborative filtering models but with more flexible components. Thorough experiments are conducted to understand the performance of CDAE under various component settings. Furthermore, experimental results on several public datasets demonstrate that CDAE consistently outperforms state-of-the-art top-N recommendation methods on a variety of common evaluation metrics.",
"",
"We extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback. This non-linear probabilistic model enables us to go beyond the limited modeling capacity of linear factor models which still largely dominate collaborative filtering research. We introduce a generative model with multinomial likelihood and use Bayesian inference to learn this powerful generative model. Despite widespread use in language modeling and economics, the multinomial likelihood receives less attention in the recommender systems literature. We introduce a different regularization parameter for the learning objective, which proves to be crucial for achieving competitive performance. Remarkably, there is an efficient way to tune the parameter using annealing. The resulting model and learning algorithm has information-theoretic connections to maximum entropy discrimination and the information bottleneck principle. Empirically, we show that the proposed approach significantly outperforms several state-of-the-art baselines, including two recently-proposed neural network approaches, on several real-world datasets. We also provide extended experiments comparing the multinomial likelihood with other commonly used likelihood functions in the latent factor collaborative filtering literature and show favorable results. Finally, we identify the pros and cons of employing a principled Bayesian inference approach and characterize settings where it provides the most significant improvements.",
"This paper proposes AutoRec, a novel autoencoder framework for collaborative filtering (CF). Empirically, AutoRec's compact and efficiently trainable model outperforms state-of-the-art CF techniques (biased matrix factorization, RBM-CF and LLORMA) on the Movielens and Netflix datasets.",
"This paper proposes CF-NADE, a neural autoregressive architecture for collaborative filtering (CF) tasks, which is inspired by the Restricted Boltzmann Machine (RBM) based CF model and the Neural Autoregressive Distribution Estimator (NADE). We first describe the basic CF-NADE model for CF tasks. Then we propose to improve the model by sharing parameters between different ratings. A factored version of CF-NADE is also proposed for better scalability. Furthermore, we take the ordinal nature of the preferences into consideration and propose an ordinal cost to optimize CF-NADE, which shows superior performance. Finally, CF-NADE can be extended to a deep model, with only moderately increased computational complexity. Experimental results show that CF-NADE with a single hidden layer beats all previous state-of-the-art methods on MovieLens 1M, MovieLens 10M, and Netflix datasets, and adding more hidden layers can further improve the performance.",
"",
"",
""
]
} |
1905.03375 | 2912745432 | Combining simple elements from the literature, we define a linear model that is geared toward sparse data, in particular implicit feedback data for recommender systems. We show that its training objective has a closed-form solution, and discuss the resulting conceptual insights. Surprisingly, this simple model achieves better ranking accuracy than various state-of-the-art collaborative-filtering approaches, including deep non-linear models, on most of the publicly available data-sets used in our experiments. | SLIM and Variants While the SLIM model @cite_15 has shown competitive empirical results in numerous papers, it is computationally expensive to train, e.g., see @cite_15 @cite_23 and Section . This has sparked follow-up work proposing various modifications. In @cite_21, both constraints on the weight matrix (non-negativity and zero diagonal) were dropped, resulting in a regression problem with elastic-net regularization. While competitive ranking results were obtained in @cite_21, the experiments in @cite_23 found its performance to be considerably below par. The square loss in the training objective was replaced by the logistic loss in @cite_19, which entailed that both constraints on the weight matrix could be dropped, as argued by the authors. Moreover, the L1-norm regularization was dropped, and a user-user weight matrix was learned instead of an item-item matrix. | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_21",
"@cite_23"
],
"mid": [
"2463645429",
"1987431925",
"2125473483",
"2963085847"
],
"abstract": [
"In many personalised recommendation problems, there are examples of items users prefer or like, but no examples of items they dislike. A state-of-the-art method for such implicit feedback, or one-class collaborative filtering (OC-CF), problems is SLIM, which makes recommendations based on a learned item-item similarity matrix. While SLIM has been shown to perform well on implicit feedback tasks, we argue that it is hindered by two limitations: first, it does not produce user-personalised predictions, which hampers recommendation performance; second, it involves solving a constrained optimisation problem, which impedes fast training. In this paper, we propose LRec, a variant of SLIM that overcomes these limitations without sacrificing any of SLIM's strengths. At its core, LRec employs linear logistic regression; despite this simplicity, LRec consistently and significantly outperforms all existing methods on a range of datasets. Our results thus illustrate that the OC-CF problem can be effectively tackled via linear classification models.",
"This paper focuses on developing effective and efficient algorithms for top-N recommender systems. A novel Sparse Linear Method (SLIM) is proposed, which generates top-N recommendations by aggregating from user purchase rating profiles. A sparse aggregation coefficient matrix W is learned from SLIM by solving an 1-norm and 2-norm regularized optimization problem. W is demonstrated to produce high quality recommendations and its sparsity allows SLIM to generate recommendations very fast. A comprehensive set of experiments is conducted by comparing the SLIM method and other state-of-the-art top-N recommendation methods. The experiments show that SLIM achieves significant improvements both in run time performance and recommendation quality over the best existing methods.",
"The sparse inverse covariance estimation problem arises in many statistical applications in machine learning and signal processing. In this problem, the inverse of a covariance matrix of a multivariate normal distribution is estimated, assuming that it is sparse. An l1 regularized log-determinant optimization problem is typically solved to approximate such matrices. Because of memory limitations, most existing algorithms are unable to handle large scale instances of this problem. In this paper we present a new block-coordinate descent approach for solving the problem for large-scale data sets. Our method treats the sought matrix block-by-block using quadratic approximations, and we show that this approach has advantages over existing methods in several aspects. Numerical experiments on both synthetic and real gene expression data demonstrate that our approach outperforms the existing state of the art methods, especially for large-scale problems.",
"We extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback. This non-linear probabilistic model enables us to go beyond the limited modeling capacity of linear factor models which still largely dominate collaborative filtering research. We introduce a generative model with multinomial likelihood and use Bayesian inference to learn this powerful generative model. Despite widespread use in language modeling and economics, the multinomial likelihood receives less attention in the recommender systems literature. We introduce a different regularization parameter for the learning objective, which proves to be crucial for achieving competitive performance. Remarkably, there is an efficient way to tune the parameter using annealing. The resulting model and learning algorithm has information-theoretic connections to maximum entropy discrimination and the information bottleneck principle. Empirically, we show that the proposed approach significantly outperforms several state-of-the-art baselines, including two recently-proposed neural network approaches, on several real-world datasets. We also provide extended experiments comparing the multinomial likelihood with other commonly used likelihood functions in the latent factor collaborative filtering literature and show favorable results. Finally, we identify the pros and cons of employing a principled Bayesian inference approach and characterize settings where it provides the most significant improvements."
]
} |
1905.03375 | 2912745432 | Combining simple elements from the literature, we define a linear model that is geared toward sparse data, in particular implicit feedback data for recommender systems. We show that its training objective has a closed-form solution, and discuss the resulting conceptual insights. Surprisingly, this simple model achieves better ranking accuracy than various state-of-the-art collaborative-filtering approaches, including deep non-linear models, on most of the publicly available data-sets used in our experiments. | Compared to @cite_15, we dropped the constraint of non-negative weights, which we found to greatly improve ranking accuracy in our experiments (see Table and Figure ). Moreover, we did not use L1-norm regularization, for computational efficiency. We also did not find sparsity to noticeably improve ranking accuracy (see Section ). The learned weight matrix @math is dense. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1987431925"
],
"abstract": [
"This paper focuses on developing effective and efficient algorithms for top-N recommender systems. A novel Sparse Linear Method (SLIM) is proposed, which generates top-N recommendations by aggregating from user purchase rating profiles. A sparse aggregation coefficient matrix W is learned from SLIM by solving an 1-norm and 2-norm regularized optimization problem. W is demonstrated to produce high quality recommendations and its sparsity allows SLIM to generate recommendations very fast. A comprehensive set of experiments is conducted by comparing the SLIM method and other state-of-the-art top-N recommendation methods. The experiments show that SLIM achieves significant improvements both in run time performance and recommendation quality over the best existing methods."
]
} |
1905.03375 | 2912745432 | Combining simple elements from the literature, we define a linear model that is geared toward sparse data, in particular implicit feedback data for recommender systems. We show that its training objective has a closed-form solution, and discuss the resulting conceptual insights. Surprisingly, this simple model achieves better ranking accuracy than various state-of-the-art collaborative-filtering approaches, including deep non-linear models, on most of the publicly available data-sets used in our experiments. | Neighborhood-based Approaches Numerous neighborhood-based approaches have been proposed in the literature (e.g., see @cite_6 @cite_18 and references therein). While model-based approaches were found to achieve better ranking accuracy on some data sets, neighborhood-based approaches dominated on others, e.g., the Million Song Data Competition on Kaggle @cite_11 @cite_20. Essentially, the co-occurrence matrix (or some modified variant) is typically used as the item-item or user-user similarity matrix in neighborhood-based methods. These approaches are usually heuristics, as the similarity matrix is not learned by optimizing an objective function (loss function or likelihood). More importantly, the derived closed-form solution reveals that the inverse of the data Gram matrix is the conceptually correct similarity matrix; see Section for more details. (In fact, inverse matrices are used in many areas, for instance the inverse covariance matrix in the Gaussian density function or in the Mahalanobis distance.) This is in contrast to the typical neighborhood-based approaches, which use the data Gram matrix without inversion. The use of the conceptually correct, inverse matrix may explain the improvement observed in Table compared to the heuristics used by state-of-the-art neighborhood approaches. | {
"cite_N": [
"@cite_18",
"@cite_20",
"@cite_6",
"@cite_11"
],
"mid": [
"",
"2055945388",
"2027829212",
"2113479237"
],
"abstract": [
"",
"We present a simple and scalable algorithm for top-N recommendation able to deal with very large datasets and (binary rated) implicit feedback. We focus on memory-based collaborative filtering algorithms similar to the well known neighboor based technique for explicit feedback. The major difference, that makes the algorithm particularly scalable, is that it uses positive feedback only and no explicit computation of the complete (user-by-user or item-by-item) similarity matrix needs to be performed. The study of the proposed algorithm has been conducted on data from the Million Songs Dataset (MSD) challenge whose task was to suggest a set of songs (out of more than 380k available songs) to more than 100k users given half of the user listening history and complete listening history of other 1 million people. In particular, we investigate on the entire recommendation pipeline, starting from the definition of suitable similarity and scoring functions and suggestions on how to aggregate multiple ranking strategies to define the overall recommendation. The technique we are proposing extends and improves the one that already won the MSD challenge last year.",
"We study collaborative filtering for applications in which there exists for every user a set of items about which the user has given binary, positive-only feedback (one-class collaborative filtering). Take for example an on-line store that knows all past purchases of every customer. An important class of algorithms for one-class collaborative filtering are the nearest neighbors algorithms, typically divided into user-based and item-based algorithms. We introduce a reformulation that unifies user- and item-based nearest neighbors algorithms and use this reformulation to propose a novel algorithm that incorporates the best of both worlds and outperforms state-of-the-art algorithms. Additionally, we propose a method for naturally explaining the recommendations made by our algorithm and show that this method is also applicable to existing user-based nearest neighbors methods.",
"We introduce the Million Song Dataset Challenge: a large-scale, personalized music recommendation challenge, where the goal is to predict the songs that a user will listen to, given both the user's listening history and full information (including meta-data and content analysis) for all songs. We explain the taste profile data, our goals and design choices in creating the challenge, and present baseline results using simple, off-the-shelf recommendation algorithms."
]
} |
1905.03454 | 2944122252 | "Feint Attack", as a new type of APT attack, has become the focus of attention. It adopts a multi-stage attacks mode which can be concluded as a combination of virtual attacks and real attacks. Under the cover of virtual attacks, real attacks can achieve the real purpose of the attacker, as a result, it often caused huge losses inadvertently. However, to our knowledge, all previous works use common methods such as Causal-Correlation or Cased-based to detect outdated multi-stage attacks. Few attentions have been paid to detect the "Feint Attack", because the difficulty of detection lies in the diversification of the concept of "Feint Attack" and the lack of professional datasets, many detection methods ignore the semantic relationship in the attack. Aiming at the existing challenge, this paper explores a new method to solve the problem. In the attack scenario, the fuzzy clustering method based on attribute similarity is used to mine multi-stage attack chains. Then we use a few-shot deep learning algorithm (SMOTE&CNN-SVM) and bidirectional Recurrent Neural Network model (Bi-RNN) to obtain the "Feint Attack" chains. "Feint Attack" is simulated by the real attack inserted in the normal causal attack chain, and the addition of the real attack destroys the causal relationship of the original attack chain. So we used Bi-RNN coding to obtain the hidden feature of "Feint Attack" chain. In the end, our method achieved the goal to detect the "Feint Attack" accurately by using the LLDoS1.0 and LLDoS2.0 of DARPA2000 and CICIDS2017 of Canadian Institute for Cybersecurity. | The clustering alert correlation method associates alert information with some identical or similar features, that is, clustering by the similarity between alert attribute values, such as the same destination address, the same attack source, attack means, etc. 
@cite_8 proposed a three-layer processing framework that uses causal knowledge to correlate alerts: it automatically extracts causal relationships between alerts, builds attack scenarios using Bayesian networks, and further predicts the most likely next attack behavior. The approach proposed in @cite_5 reconstructs attack scenarios by reasoning over the evidence in the alert stream; its main idea is to identify the causal relation between alerts using their similarity. The approach of @cite_16 applies process mining techniques to alerts to extract information about the attackers' behavior and the multi-stage attack strategies they adopted. The strategies are presented to the network administrator as friendly high-level visual models; large and visually complex models that are difficult to understand are clustered into smaller, simpler, and more intuitive models using hierarchical clustering techniques. | {
"cite_N": [
"@cite_5",
"@cite_16",
"@cite_8"
],
"mid": [
"2799725809",
"2772704452",
"2584841233"
],
"abstract": [
"Abstract Security information and event management (SIEM) systems receive a large number of alerts from different intrusion detection systems. They are expected, from these alerts, to make reliable and timely decisions regarding the types of ongoing attack scenarios and their priorities. However, the lack of an agreed-upon vocabulary for the representation of the domain knowledge makes it difficult for state-of-the-art SIEM systems to effectively manage these complex decisions. To overcome this problem, an ontology-based expert system approach can provide domain knowledge modeling as a foundation for disambiguation of meaning and automatic reasoning regarding ongoing attack scenarios. The proposed approach reconstructs attack scenarios by reasoning based on the evidences in the alert stream. The main idea of the proposed approach is to identify the causal relation between alerts using their similarity. This approach assumes that the similarity between two successive steps in an attack scenario is greater than that of two non-successive steps. Moreover, the similarity between the steps of the same attack scenario is greater than that between the steps of two different attack scenarios. The benefit of the proposed approach includes the fast and incremental reconstruction of known and unknown attack scenarios without expert intervention, which is an enormous step forward in developing expert and intelligent systems for cyber security. We evaluated the proposed technique by performing experiments on two known datasets: DARPA 2000 and MACCDC 2012. The results prove the advantages of the proposed approach with regard to completeness and soundness criteria.",
"Abstract Intrusion Detection Systems (IDS) are extensively used as one of the lines of defense of a network to prevent and mitigate the risks caused by security breaches. IDS provide information about the intrusive activities on a network through alerts, which security analysts manually evaluate to execute an intrusion response plan. However, one of the downsides of IDS is the large amount of alerts they raise, which makes the manual investigation of alerts a burdensome and error-prone task. In this work, we propose an approach to facilitate the investigation of huge amounts of intrusion alerts. The approach applies process mining techniques on alerts to extract information regarding the attackers behavior and the multi-stage attack strategies they adopted. The strategies are presented to the network administrator in friendly high-level visual models. Large and visually complex models that are difficult to understand are clustered into smaller, simpler and intuitive models using hierarchical clustering techniques. To evaluate the proposed approach, a real dataset of alerts from a large public University in the United States was used. We find that security visualization models created with process mining and hierarchical clustering are able to condense a huge number of alerts and provide insightful information for network IDS administrators. For instance, by analyzing the models generated during the case study, network administrators could find out important details about the attack strategies such as attack frequencies and targeted network services.",
"In order to understand the security level of an organization network, detection methods are important to tackle the probable risks of the attackers' malicious activities. Intrusion detection systems, as detection solutions of the defense in depth concept, are one of the main devices to record and analyze suspicious behaviors. Besides the benefits of these systems for security enhancement, they will bring some challenges and issues for security administrators. A large number of raw alerts generated by the intrusion detection systems clearly reflect the need for a novel proactive alert correlation framework to reduce redundant alerts, correlate security incidents, discover and model multi-step attack scenarios, and track them. Several alert correlation frameworks have been proposed in the literature, but the majority of them address the alert correlation in the offline settings. In this paper, we propose a three-phase alert correlation framework, which processes the generated alerts in real time, correlates the alerts with the aid of causal knowledge discovery to automatically extract causal relationships between alerts, constructs the attack scenarios using the Bayesian network concept, and predicts the next goal of the attacks using the creating attack prediction rules. Experimental results show that the scalable proposed framework is efficient enough in learning and detecting known and unknown multi-step attack scenarios without using any predefined knowledge. The results also show that the proposed framework perfectly estimates complex attacks before they can damage the assets of the network. Copyright © 2016 John Wiley & Sons, Ltd."
]
} |
1905.03501 | 2944338412 | Pretraining reinforcement learning methods with demonstrations has been an important concept in the study of reinforcement learning since a large amount of computing power is spent on online simulations with existing reinforcement learning algorithms. Pretraining reinforcement learning remains a significant challenge in exploiting expert demonstrations whilst keeping exploration potentials, especially for value based methods. In this paper, we propose a pretraining method for soft Q-learning. Our work is inspired by pretraining methods for actor-critic algorithms since soft Q-learning is a value based algorithm that is equivalent to policy gradient. The proposed method is based on @math -discounted biased policy evaluation with entropy regularization, which is also the updating target of soft Q-learning. Our method is evaluated on various tasks from Atari 2600. Experiments show that our method effectively learns from imperfect demonstrations, and outperforms other state-of-the-art methods that learn from expert demonstrations. | Some recent work focuses on making use of expert trajectories. For policy based methods, @cite_13 use GANs to imitate experts. Their work is based on IRL and aims at learning from imperfect demonstrations in environments where reward signals are sparse and rare. The first published version of AlphaGo @cite_9, as well as @cite_19 and @cite_14, applies BC methods to learn from expert demonstrations, training policy functions as classification or regression tasks. These methods focus on mimicking demonstrations and rely on the perfection of demonstrations to achieve good performance. DDPGfD @cite_12 adds expert demonstrations to the replay buffers of online trajectories and learns with modified DDPG losses, but the demonstrations used in that work are trajectories with reward signals, which is a different setting from ours. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_19",
"@cite_13",
"@cite_12"
],
"mid": [
"2757631751",
"2257979135",
"2963099939",
"2803616302",
"2741122588"
],
"abstract": [
"Dexterous multi-fingered hands are extremely versatile and provide a generic way to perform multiple tasks in human-centric environments. However, effectively controlling them remains challenging due to their high dimensionality and large number of potential contacts. Deep reinforcement learning (DRL) provides a model-agnostic approach to control complex dynamical systems, but has not been shown to scale to high-dimensional dexterous manipulation. Furthermore, deployment of DRL on physical systems remains challenging due to sample inefficiency. Thus, the success of DRL in robotics has thus far been limited to simpler manipulators and tasks. In this work, we show that model-free DRL with natural policy gradients can effectively scale up to complex manipulation tasks with a high-dimensional 24-DoF hand, and solve them from scratch in simulated experiments. Furthermore, with the use of a small number of human demonstrations, the sample complexity can be significantly reduced, and enable learning within the equivalent of a few hours of robot experience. We demonstrate successful policies for multiple complex tasks: object relocation, in-hand manipulation, tool use, and door opening.",
"The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of stateof-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8 winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.",
"Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL). Many tasks are natural to specify with a sparse reward, and manually shaping a reward function can result in suboptimal performance. However, finding a non-zero reward is exponentially more difficult with increasing task horizon or action dimensionality. This puts many real-world tasks out of practical reach of RL methods. In this work, we use demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control such as stacking blocks with a robot arm. Our method, which builds on top of Deep Deterministic Policy Gradients and Hindsight Experience Replay, provides an order of magnitude of speedup over RL on simulated robotics tasks. It is simple to implement and makes only the additional assumption that we can collect a small set of demonstrations. Furthermore, our method is able to solve tasks not solvable by either RL or behavior cloning alone, and often ends up outperforming the demonstrator policy.",
"",
"We propose a general and model-free approach for Reinforcement Learning (RL) on real robotics with sparse rewards. We build upon the Deep Deterministic Policy Gradient (DDPG) algorithm to use demonstrations. Both demonstrations and actual interactions are used to fill a replay buffer and the sampling ratio between demonstrations and transitions is automatically tuned via a prioritized replay mechanism. Typically, carefully engineered shaping rewards are required to enable the agents to efficiently explore on high dimensional control problems such as robotics. They are also required for model-based acceleration methods relying on local solvers such as iLQG (e.g. Guided Policy Search and Normalized Advantage Function). The demonstrations replace the need for carefully engineered rewards, and reduce the exploration problem encountered by classical RL approaches in these domains. Demonstrations are collected by a robot kinesthetically force-controlled by a human demonstrator. Results on four simulated insertion tasks show that DDPG from demonstrations out-performs DDPG, and does not require engineered rewards. Finally, we demonstrate the method on a real robotics task consisting of inserting a clip (flexible object) into a rigid object."
]
} |
1905.03501 | 2944338412 | Pretraining reinforcement learning methods with demonstrations has been an important concept in the study of reinforcement learning since a large amount of computing power is spent on online simulations with existing reinforcement learning algorithms. Pretraining reinforcement learning remains a significant challenge in exploiting expert demonstrations whilst keeping exploration potentials, especially for value based methods. In this paper, we propose a pretraining method for soft Q-learning. Our work is inspired by pretraining methods for actor-critic algorithms since soft Q-learning is a value based algorithm that is equivalent to policy gradient. The proposed method is based on @math -discounted biased policy evaluation with entropy regularization, which is also the updating target of soft Q-learning. Our method is evaluated on various tasks from Atari 2600. Experiments show that our method effectively learns from imperfect demonstrations, and outperforms other state-of-the-art methods that learn from expert demonstrations. | Zhang and Ma @cite_18 pretrain actor-critic networks using policy-based gradients. Their method succeeds in warming up actor-critic RL algorithms with imperfect demonstrations, but it is incompatible with value based methods. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2786800733"
],
"abstract": [
"Pretraining with expert demonstrations have been found useful in speeding up the training process of deep reinforcement learning algorithms since less online simulation data is required. Some people use supervised learning to speed up the process of feature learning, others pretrain the policies by imitating expert demonstrations. However, these methods are unstable and not suitable for actor-critic reinforcement learning algorithms. Also, some existing methods rely on the global optimum assumption, which is not true in most scenarios. In this paper, we employ expert demonstrations in a actor-critic reinforcement learning framework, and meanwhile ensure that the performance is not affected by the fact that expert demonstrations are not global optimal. We theoretically derive a method for computing policy gradients and value estimators with only expert demonstrations. Our method is theoretically plausible for actor-critic reinforcement learning algorithms that pretrains both policy and value functions. We apply our method to two of the typical actor-critic reinforcement learning algorithms, DDPG and ACER, and demonstrate with experiments that our method not only outperforms the RL algorithms without pretraining process, but also is more simulation efficient."
]
} |
1905.03501 | 2944338412 | Pretraining reinforcement learning methods with demonstrations has been an important concept in the study of reinforcement learning since a large amount of computing power is spent on online simulations with existing reinforcement learning algorithms. Pretraining reinforcement learning remains a significant challenge in exploiting expert demonstrations whilst keeping exploration potentials, especially for value based methods. In this paper, we propose a pretraining method for soft Q-learning. Our work is inspired by pretraining methods for actor-critic algorithms since soft Q-learning is a value based algorithm that is equivalent to policy gradient. The proposed method is based on @math -discounted biased policy evaluation with entropy regularization, which is also the updating target of soft Q-learning. Our method is evaluated on various tasks from Atari 2600. Experiments show that our method effectively learns from imperfect demonstrations, and outperforms other state-of-the-art methods that learn from expert demonstrations. | For value based methods, DQfD @cite_5 learns from expert demonstrations via IRL, with the assumption that the experts are globally optimal. DQfD is derived from a large-margin IRL constraint @cite_6 and learns from demonstrations by assuming the experts are optimal. @cite_8 train DQN with expert demonstrations using BC, applying a cross-entropy loss to the Q networks to update the implicit policies of Q-learning; BC is therefore one of the heuristic methods for introducing expert demonstrations. | {
"cite_N": [
"@cite_5",
"@cite_6",
"@cite_8"
],
"mid": [
"2788862220",
"106792269",
""
],
"abstract": [
"Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.",
"This paper addresses the problem of batch Reinforcement Learning with Expert Demonstrations (RLED). In RLED, the goal is to find an optimal policy of a Markov Decision Process (MDP), using a data set of fixed sampled transitions of the MDP as well as a data set of fixed expert demonstrations. This is slightly different from the batch Reinforcement Learning (RL) framework where only fixed sampled transitions of the MDP are available. Thus, the aim of this article is to propose algorithms that leverage those expert data. The idea proposed here differs from the Approximate Dynamic Programming methods in the sense that we minimize the Optimal Bellman Residual (OBR), where the minimization is guided by constraints defined by the expert demonstrations. This choice is motivated by the the fact that controlling the OBR implies controlling the distance between the estimated and optimal quality functions. However, this method presents some difficulties as the criterion to minimize is non-convex, non-differentiable and biased. Those difficulties are overcome via the embedding of distributions in a Reproducing Kernel Hilbert Space (RKHS) and a boosting technique which allows obtaining non-parametric algorithms. Finally, our algorithms are compared to the only state of the art algorithm, Approximate Policy Iteration with Demonstrations (APID) algorithm, in different experimental settings.",
""
]
} |
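The large-margin constraint that DQfD inherits from the batch-RLED line of work above can be sketched in a few lines. This is an illustrative sketch only: the margin value, the toy Q-values, and the function name are hypothetical, not taken from either cited paper.

```python
import numpy as np

def large_margin_loss(q_values, expert_action, margin=0.8):
    """Large-margin supervised loss used in DQfD-style pretraining (sketch).

    Computes max_a [Q(s,a) + l(a_E,a)] - Q(s,a_E), where l(a_E,a) equals
    `margin` for a != a_E and 0 otherwise. The loss is zero only when the
    expert's action beats every other action by at least the margin.
    """
    penalties = np.full_like(q_values, margin, dtype=float)
    penalties[expert_action] = 0.0
    return float(np.max(q_values + penalties) - q_values[expert_action])

# Expert action already dominates by more than the margin -> zero loss.
print(large_margin_loss(np.array([2.0, 0.5, 0.1]), expert_action=0))  # 0.0
# Expert action not separated by the margin -> positive loss (~0.7 here).
print(large_margin_loss(np.array([1.0, 0.9, 0.1]), expert_action=0))
```

Minimizing this term alongside the usual TD loss is what pushes the pretrained Q-function to imitate the demonstrator without assuming the demonstrations are perfect.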
1905.03501 | 2944338412 | Pretraining reinforcement learning methods with demonstrations has been an important concept in the study of reinforcement learning since a large amount of computing power is spent on online simulations with existing reinforcement learning algorithms. Pretraining reinforcement learning remains a significant challenge in exploiting expert demonstrations whilst keeping exploration potentials, especially for value based methods. In this paper, we propose a pretraining method for soft Q-learning. Our work is inspired by pretraining methods for actor-critic algorithms since soft Q-learning is a value based algorithm that is equivalent to policy gradient. The proposed method is based on @math -discounted biased policy evaluation with entropy regularization, which is also the updating target of soft Q-learning. Our method is evaluated on various tasks from Atari 2600. Experiments show that our method effectively learns from imperfect demonstrations, and outperforms other state-of-the-art methods that learn from expert demonstrations. | @cite_0 propose a method to learn from demonstrations using reward shaping. They propose a potential function that encourages policies to learn from demonstrations and not to disturb the optimal policy of the system. However, the potential function of the method is defined to search the whole demonstration dataset each time it is called. Consequently, the method cannot scale to tasks with high-dimensional state spaces and large demonstration datasets. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2397581010"
],
"abstract": [
"Reinforcement learning describes how a learning agent can achieve optimal behaviour based on interactions with its environment and reward feedback. A limiting factor in reinforcement learning as employed in artificial intelligence is the need for an often prohibitively large number of environment samples before the agent reaches a desirable level of performance. Learning from demonstration is an approach that provides the agent with demonstrations by a supposed expert, from which it should derive suitable behaviour. Yet, one of the challenges of learning from demonstration is that no guarantees can be provided for the quality of the demonstrations, and thus the learned behavior. In this paper, we investigate the intersection of these two approaches, leveraging the theoretical guarantees provided by reinforcement learning, and using expert demonstrations to speed up this learning by biasing exploration through a process called reward shaping. This approach allows us to leverage human input without making an erroneous assumption regarding demonstration optimality. We show experimentally that this approach requires significantly fewer demonstrations, is more robust against suboptimality of demonstrations, and achieves much faster learning than the recently developed HAT algorithm."
]
} |
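Potential-based shaping of the kind @cite_0 builds on simply adds F(s, s') = gamma * Phi(s') - Phi(s) to the environment reward; by the classical invariance result for potential-based shaping, this cannot change the optimal policy. A minimal sketch, where the potential values are hypothetical placeholders for a demonstration-derived Phi:

```python
def shaped_reward(reward, phi_s, phi_s_next, gamma=0.99):
    """Potential-based reward shaping (sketch): r' = r + gamma*Phi(s') - Phi(s).

    If Phi scores states by similarity to expert demonstrations, moving
    toward demonstrated states earns extra reward, biasing exploration
    without altering which policy is optimal.
    """
    return reward + gamma * phi_s_next - phi_s

# Moving toward a higher-potential (demonstration-like) state is rewarded.
print(shaped_reward(0.0, phi_s=0.0, phi_s_next=1.0))  # 0.99
```

The scalability drawback noted above lives entirely inside Phi: if evaluating Phi requires scanning the whole demonstration dataset, that cost is paid on every shaping call.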
1905.03501 | 2944338412 | Pretraining reinforcement learning methods with demonstrations has been an important concept in the study of reinforcement learning since a large amount of computing power is spent on online simulations with existing reinforcement learning algorithms. Pretraining reinforcement learning remains a significant challenge in exploiting expert demonstrations whilst keeping exploration potentials, especially for value based methods. In this paper, we propose a pretraining method for soft Q-learning. Our work is inspired by pretraining methods for actor-critic algorithms since soft Q-learning is a value based algorithm that is equivalent to policy gradient. The proposed method is based on @math -discounted biased policy evaluation with entropy regularization, which is also the updating target of soft Q-learning. Our method is evaluated on various tasks from Atari 2600. Experiments show that our method effectively learns from imperfect demonstrations, and outperforms other state-of-the-art methods that learn from expert demonstrations. | @cite_3 also introduce expert trajectories into the soft Q-learning process using Behavior Cloning losses, which yields better results than the original soft Q-learning. Since soft Q-learning is equivalent to policy gradient methods, an explicit policy function is provided by the Q function, and Behavior Cloning methods can train this explicit policy function with expert demonstrations. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2593044849"
],
"abstract": [
"We establish a new connection between value and policy based reinforcement learning (RL) based on a relationship between softmax temporal value consistency and policy optimality under entropy regularization. Specifically, we show that softmax consistent action values correspond to optimal entropy regularized policy probabilities along any action sequence, regardless of provenance. From this observation, we develop a new RL algorithm, Path Consistency Learning (PCL), that minimizes a notion of soft consistency error along multi-step action sequences extracted from both on- and off-policy traces. We examine the behavior of PCL in different scenarios and show that PCL can be interpreted as generalizing both actor-critic and Q-learning algorithms. We subsequently deepen the relationship by showing how a single model can be used to represent both a policy and the corresponding softmax state values, eliminating the need for a separate critic. The experimental evaluation demonstrates that PCL significantly outperforms strong actor-critic and Q-learning baselines across several benchmarks."
]
} |
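The equivalence used above rests on soft Q-learning's explicit softmax policy, pi(a|s) proportional to exp(Q(s,a)/alpha), which is what lets a Behavior Cloning cross-entropy loss on expert actions be applied directly to the Q network. A sketch with illustrative Q-values and temperature:

```python
import numpy as np

def soft_policy(q_values, alpha=1.0):
    """Implicit policy of soft Q-learning (sketch): pi(a|s) ~ exp(Q(s,a)/alpha).

    Because this policy is an explicit function of Q, a cross-entropy
    (Behavior Cloning) loss on expert actions trains the Q network directly.
    """
    z = q_values / alpha
    z = z - z.max()            # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

pi = soft_policy(np.array([1.0, 2.0, 3.0]))
print(int(pi.argmax()))  # 2 -- the highest-Q action gets the most probability
```

Lowering the temperature alpha sharpens the policy toward the greedy action, recovering standard Q-learning behavior in the limit.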
1905.03297 | 2944481928 | The dearth of prescribing guidelines for physicians is one key driver of the current opioid epidemic in the United States. In this work, we analyze medical and pharmaceutical claims data to draw insights on characteristics of patients who are more prone to adverse outcomes after an initial synthetic opioid prescription. Toward this end, we propose a generative model that allows discovery from observational data of subgroups that demonstrate an enhanced or diminished causal effect due to treatment. Our approach models these sub-populations as a mixture distribution, using sparsity to enhance interpretability, while jointly learning nonlinear predictors of the potential outcomes to better adjust for confounding. The approach leads to human-interpretable insights on discovered subgroups, improving the practical utility for decision support | propose causal rule sets for discovering subgroups with enhanced treatment effect. This is the closest to and an inspiration for our work. That work seeks to learn discrete human-interpretable rules predictive of enhanced treatment effect and involves optimization by Monte Carlo methods. We consider instead a mixture of experts approach with soft assignment to groups that retains most of the interpretability but allows greater expressiveness and can be optimized via gradient methods. Our outcome model also differs from that of @cite_16 . Most importantly, we allow nonlinearity in the form of neural networks whereas @cite_16 considers only linear models. Our model also has a single term representing the main effect of treatment whereas @cite_16 has three such terms: a population average, a subgroup term that is always active, and a subgroup term that is only active under treatment. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2765657436"
],
"abstract": [
"We introduce a novel generative model for interpretable subgroup analysis for causal inference applications, Causal Rule Sets (CRS). A CRS model uses a small set of short rules to capture a subgroup where the average treatment effect is elevated compared to the entire population. We present a Bayesian framework for learning a causal rule set. The Bayesian framework consists of a prior that favors simpler models and a Bayesian logistic regression that characterizes the relation between outcomes, attributes and subgroup membership. We find maximum a posteriori models using discrete Monte Carlo steps in the joint solution space of rules sets and parameters. We provide theoretically grounded heuristics and bounding strategies to improve search efficiency. Experiments show that the search algorithm can efficiently recover a true underlying subgroup and CRS shows consistently competitive performance compared to other state-of-the-art baseline methods."
]
} |
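The soft-assignment alternative described above can be sketched as a tiny mixture of experts. Everything here (gate weights, group effects, covariates) is a hypothetical stand-in for parameters a real model would learn jointly with the outcome predictors:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mixture_treatment_effect(x, gate_weights, group_effects):
    """Treatment effect under soft subgroup assignment (illustrative sketch).

    Each covariate vector is softly assigned to K subgroups by a linear
    gate; its effect is the membership-weighted mix of per-group effects,
    so every estimate lies between the smallest and largest group effect.
    """
    gates = softmax(x @ gate_weights)   # (n, K) soft group memberships
    return gates @ group_effects        # (n,) per-unit effect estimates

x = np.array([[3.0, 0.0], [0.0, 3.0]])               # two units, two covariates
gate_weights = np.array([[1.0, -1.0], [-1.0, 1.0]])  # 2 features -> 2 groups
group_effects = np.array([2.0, 0.0])                 # enhanced vs. null effect
effects = mixture_treatment_effect(x, gate_weights, group_effects)
print(effects)  # first unit near 2.0 (enhanced group), second near 0.0
```

Unlike hard rule-based assignment, the soft gates are differentiable, which is what allows the whole model to be optimized by gradient methods.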
1905.03297 | 2944481928 | The dearth of prescribing guidelines for physicians is one key driver of the current opioid epidemic in the United States. In this work, we analyze medical and pharmaceutical claims data to draw insights on characteristics of patients who are more prone to adverse outcomes after an initial synthetic opioid prescription. Toward this end, we propose a generative model that allows discovery from observational data of subgroups that demonstrate an enhanced or diminished causal effect due to treatment. Our approach models these sub-populations as a mixture distribution, using sparsity to enhance interpretability, while jointly learning nonlinear predictors of the potential outcomes to better adjust for confounding. The approach leads to human-interpretable insights on discovered subgroups, improving the practical utility for decision support | Recent papers have proposed estimating heterogeneous individual treatment effects using neural networks or a Bayesian nonparametric method involving Gaussian processes @cite_30 . These methods rely on constructing distributional representations of the factual and counterfactual outcomes that are similar in a statistical sense. While these methods perform well on estimating heterogeneous effects, they do not identify subgroups of individuals with similar treatment effects and characteristics and are thus less interpretable. This makes the application of such methods to inform policy decisions more difficult. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2964115178"
],
"abstract": [
"Predicated on the increasing abundance of electronic health records, we investigate the problem of inferring individualized treatment effects using observational data. Stemming from the potential outcomes model, we propose a novel multi-task learning framework in which factual and counterfactual outcomes are modeled as the outputs of a function in a vector-valued reproducing kernel Hilbert space (vvRKHS). We develop a nonparametric Bayesian method for learning the treatment effects using a multi-task Gaussian process (GP) with a linear coregionalization kernel as a prior over the vvRKHS. The Bayesian approach allows us to compute individualized measures of confidence in our estimates via pointwise credible intervals, which are crucial for realizing the full potential of precision medicine. The impact of selection bias is alleviated via a risk-based empirical Bayes method for adapting the multi-task GP prior, which jointly minimizes the empirical error in factual outcomes and the uncertainty in (unobserved) counterfactual outcomes. We conduct experiments on observational datasets for an interventional social program applied to premature infants, and a left ventricular assist device applied to cardiac patients wait-listed for a heart transplant. In both experiments, we show that our method significantly outperforms the state-of-the-art."
]
} |
1905.03593 | 2944341834 | With over 28 million developers, the success of the GitHub collaborative platform is highlighted through the abundance of communication channels among contemporary software projects. Knowledge is broken into two forms and its transfer (through communication channels) can be described as externalization or combination by the SECI model of knowledge transfer of an organization. Over the years, such platforms have revolutionized the way developers work, introducing new channels to share knowledge in the form of pull requests, issues and wikis. It is unclear how these channels capture and share knowledge. In this research, our goal is to analyze how communication channels share knowledge over projects. First, using the SECI model, we are able to map how knowledge is transferred through the communication channels. Then in a large-scale topology analysis of seven library package platforms, we extract insights into how knowledge is shared by different library ecosystems within GitHub. Using topology data analysis, we show that (i) channels tend to evolve over time and (ii) library ecosystems consistently have channels to capture new knowledge (i.e., externalization). Results from the study aid in understanding which channels are important sources of knowledge, with insights into how projects can attract and sustain developer contributions. | In the field of Software Engineering, research into channels is based on social practices. Social practice characterizes the existence of activities which are related to each other @cite_42 . These collaborative works are conducted through (i) distributed teleo-affective structures for software design and development, (ii) shared common or specific knowledge of the software development requirements, and (iii) clear procedures and regulations governing people to accomplish specific activities (https://en.wikipedia.org/wiki/Practice).
Social practices are not confined to industry settings; more broadly, they can be implemented in open source software projects. In addition to requiring a shared understanding of the requirements to become a project member, the development of open source products is also performed by complying with common rules and by using shared teleo-affective structures. Therefore, the activities of each individual can be connected from the start of development to the end of the project. An example of such requirements in open source projects is described by Scacchi @cite_38 . This study analyzes channels from a knowledge perspective instead of social collaborations. | {
"cite_N": [
"@cite_38",
"@cite_42"
],
"mid": [
"2028561093",
"882857864"
],
"abstract": [
"Presents an initial set of findings from an empirical study of social processes, technical system configurations, organisational contexts and interrelationships that give rise to open software. The focus is directed at understanding the requirements for open software development efforts, and how the development of these requirements differs from those traditional to software engineering and requirements engineering. Four open software development communities are described, examined and compared to help discover what these differences may be. Eight kinds of software informalisms are found to play a critical role in the elicitation, analysis, specification, validation and management of requirements for developing open software systems. Subsequently, understanding the roles these software informalisms take in a new formulation of the requirements development process for open source software is the focus of the study. This focus enables the consideration of a reformulation of the requirements engineering process and its associated artefacts, or (in)formalisms, to better account for the requirements for developing open source software systems.",
"ContextMethods and processes, along with the tools to support them, are at the heart of software engineering as a discipline. However, as we all know, that often the use of the same method neither impacts software projects in a comparable manner nor the software they result in. What is lacking is an understanding of how methods affect software development. ObjectiveThe article develops a set of concepts based on the practice-concept in philosophy of sociology as a base to describe software development as social practice, and develop an understanding of methods and their application that explains the heterogeneity in the outcome. Practice here is not understood as opposed to theory, but as a commonly agreed upon way of acting that is acknowledged by the team. MethodThe article applies concepts from philosophy of sociology and social theory to describe software development and develops the concepts of method and method usage. The results and steps in the philosophical argumentation are exemplified using published empirical research. ResultsThe article develops a conceptual base for understanding software development as social and epistemic practices, and defines methods as practice patterns that need to be related to, and integrated in, an existing development practice. The application of a method is conceptualized as a development of practice. This practice is in certain aspects aligned with the description of the method, but a method always under-defines practice. The implication for research, industrial software development and teaching are indicated. ConclusionThe theoretical philosophical concepts allow the explaining of heterogeneity in application of software engineering methods in line with empirical research results."
]
} |
1905.03422 | 2944780724 | Contemporary person re-identification (Re-ID) methods usually require access to data from the deployment camera network during training in order to perform well. This is because contemporary Re-ID models trained on one dataset do not generalise to other camera networks due to the domain-shift between datasets. This requirement is often the bottleneck for deploying Re-ID systems in practical security or commercial applications as it may be impossible to collect this data in advance or prohibitively costly to annotate it. This paper alleviates this issue by proposing a simple baseline for domain generalizable (DG) person re-identification. That is, to learn a Re-ID model from a set of source domains that is suitable for application to unseen datasets out-of-the-box, without any model updating. Specifically, we observe that the domain discrepancy in Re-ID is due to style and content variance across datasets and demonstrate that appropriate Instance and Feature Normalization alleviates much of the resulting domain-shift in Deep Re-ID models. Instance Normalization (IN) in early layers filters out style statistic variations and Feature Normalization (FN) in deep layers is able to further eliminate disparity in content statistics. Compared to contemporary alternatives, this approach is extremely simple to implement, while being faster to train and test, thus making it an extremely valuable baseline for implementing Re-ID in practice. With a few lines of code, it increases the rank 1 Re-ID accuracy by 11.7%, 28.9%, 10.1% and 6.3% on the VIPeR, PRID, GRID, and i-LIDS benchmarks respectively. Source code will be made available. | Domain adaptation and generalization. Unsupervised Domain Adaptation (UDA) alleviates domain-shift without recourse to labels in the target domain.
For example, by reducing the Maximum Mean Discrepancy (MMD) @cite_29 between domains @cite_9 , or training an adversarial domain-classifier @cite_6 to make different domains indistinguishable. In the Re-ID community, UDA methods typically resort to image synthesis @cite_45 @cite_31 or focus on source-target domain alignment @cite_26 @cite_7 @cite_2 . While these methods are annotation efficient, they do require prior collection of target-domain data for training, whereas our method has no such requirement, making it more valuable in practice where the deployment network is not known at the time of model creation. | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_9",
"@cite_29",
"@cite_6",
"@cite_45",
"@cite_2",
"@cite_31"
],
"mid": [
"2962859295",
"2907197374",
"2611292810",
"2125865219",
"2949987290",
"2799107345",
"2794651663",
""
],
"abstract": [
"",
"Person re-identification (ReID) has achieved significant improvement under the single-domain setting. However, directly exploiting a model to new domains is always faced with huge performance drop, and adapting the model to new domains without target-domain identity labels is still challenging. In this paper, we address cross-domain ReID and make contributions for both model generalization and adaptation. First, we propose Part Aligned Pooling (PAP) that brings significant improvement for cross-domain testing. Second, we design a Part Segmentation (PS) constraint over ReID feature to enhance alignment and improve model generalization. Finally, we show that applying our PS constraint to unlabeled target domain images serves as effective domain adaptation. We conduct extensive experiments between three large datasets, Market1501, CUHK03 and DukeMTMC-reID. Our model achieves state-of-the-art performance under both source-domain and cross-domain settings. For completeness, we also demonstrate the complementarity of our model to existing domain adaptation methods. The code is available at this https URL.",
"In domain adaptation, maximum mean discrepancy (MMD) has been widely adopted as a discrepancy metric between the distributions of source and target domains. However, existing MMD-based domain adaptation methods generally ignore the changes of class prior distributions, i.e., class weight bias across domains. This remains an open problem but ubiquitous for domain adaptation, which can be caused by changes in sample selection criteria and application scenarios. We show that MMD cannot account for class weight bias and results in degraded domain adaptation performance. To address this issue, a weighted MMD model is proposed in this paper. Specifically, we introduce class-specific auxiliary weights into the original MMD for exploiting the class prior probability on source and target domains, whose challenge lies in the fact that the class label in target domain is unavailable. To account for it, our proposed weighted MMD model is defined by introducing an auxiliary weight for each class in the source domain, and a classification EM algorithm is suggested by alternating between assigning the pseudo-labels, estimating auxiliary weights and updating model parameters. Extensive experiments demonstrate the superiority of our weighted MMD over conventional MMD for domain adaptation.",
"We propose two statistical tests to determine if two samples are from different distributions. Our test statistic is in both cases the distance between the means of the two samples mapped into a reproducing kernel Hilbert space (RKHS). The first test is based on a large deviation bound for the test statistic, while the second is based on the asymptotic distribution of this statistic. The test statistic can be computed in O(m2) time. We apply our approach to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where our test performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist.",
"Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They also can improve recognition despite the presence of domain shift or dataset bias: several adversarial approaches to unsupervised domain adaptation have recently been introduced, which reduce the difference between the training and test domain distributions and thus improve generalization performance. Prior generative approaches show compelling visualizations, but are not optimal on discriminative tasks and can be limited to smaller shifts. Prior discriminative approaches could handle larger domain shifts, but imposed tied weights on the model and did not exploit a GAN-based loss. We first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and we use this generalized view to better relate the prior approaches. We propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard cross-domain digit classification tasks and a new more difficult cross-modality object classification task.",
"Drastic variations in illumination across surveillance cameras make the person re-identification problem extremely challenging. Current large scale re-identification datasets have a significant number of training subjects, but lack diversity in lighting conditions. As a result, a trained model requires fine-tuning to become effective under an unseen illumination condition. To alleviate this problem, we introduce a new synthetic dataset that contains hundreds of illumination conditions. Specifically, we use 100 virtual humans illuminated with multiple HDR environment maps which accurately model realistic indoor and outdoor lighting. To achieve better accuracy in unseen illumination conditions we propose a novel domain adaptation technique that takes advantage of our synthetic data and performs fine-tuning in a completely unsupervised way. Our approach yields significantly higher accuracy than semi-supervised and unsupervised state-of-the-art methods, and is very competitive with supervised techniques.",
"Most existing person re-identification (re-id) methods require supervised model learning from a separate large set of pairwise labelled training data for every single camera pair. This significantly limits their scalability and usability in real-world large scale deployments with the need for performing re-id across many camera views. To address this scalability problem, we develop a novel deep learning method for transferring the labelled information of an existing dataset to a new unseen (unlabelled) target domain for person re-id without any supervised learning in the target domain. Specifically, we introduce an Transferable Joint Attribute-Identity Deep Learning (TJ-AIDL) for simultaneously learning an attribute-semantic and identitydiscriminative feature representation space transferrable to any new (unseen) target domain for re-id tasks without the need for collecting new labelled training data from the target domain (i.e. unsupervised learning in the target domain). Extensive comparative evaluations validate the superiority of this new TJ-AIDL model for unsupervised person re-id over a wide range of state-of-the-art methods on four challenging benchmarks including VIPeR, PRID, Market-1501, and DukeMTMC-ReID.",
""
]
} |
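The MMD criterion from @cite_29 that several of these UDA methods minimize is the distance between kernel mean embeddings of the two domains. A biased squared-MMD estimator can be sketched as follows; the Gaussian-kernel bandwidth and the toy samples are arbitrary choices for illustration:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Gaussian kernel matrix k(a_i, b_j) = exp(-gamma * ||a_i - b_j||^2)
    sq_dists = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd2(x, y, gamma=1.0):
    """Biased squared-MMD estimate between samples x and y (sketch).

    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]: the squared distance
    between the kernel mean embeddings of the two samples.
    """
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(100, 2)), rng.normal(size=(100, 2)))
shifted = mmd2(rng.normal(size=(100, 2)), rng.normal(3.0, 1.0, size=(100, 2)))
print(same < shifted)  # True -- shifted "domains" show a larger discrepancy
```

Minimizing such a discrepancy between source and target features is what aligns the two domains in MMD-based UDA.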
1905.03422 | 2944780724 | Contemporary person re-identification (Re-ID) methods usually require access to data from the deployment camera network during training in order to perform well. This is because contemporary Re-ID models trained on one dataset do not generalise to other camera networks due to the domain-shift between datasets. This requirement is often the bottleneck for deploying Re-ID systems in practical security or commercial applications as it may be impossible to collect this data in advance or prohibitively costly to annotate it. This paper alleviates this issue by proposing a simple baseline for domain generalizable (DG) person re-identification. That is, to learn a Re-ID model from a set of source domains that is suitable for application to unseen datasets out-of-the-box, without any model updating. Specifically, we observe that the domain discrepancy in Re-ID is due to style and content variance across datasets and demonstrate appropriate Instance and Feature Normalization alleviates much of the resulting domain-shift in Deep Re-ID models. Instance Normalization (IN) in early layers filters out style statistic variations and Feature Normalization (FN) in deep layers is able to further eliminate disparity in content statistics. Compared to contemporary alternatives, this approach is extremely simple to implement, while being faster to train and test, thus making it an extremely valuable baseline for implementing Re-ID in practice. With a few lines of code, it increases the rank 1 Re-ID accuracy by 11.7 , 28.9 , 10.1 and 6.3 on the VIPeR, PRID, GRID, and i-LIDS benchmarks respectively. Source code will be made available. | Compared to UDA @cite_31 @cite_7 , Domain Generalisation (DG) methods aim to create models that are robust-by-design to domain-shift between training and testing. 
These methods tend to leverage architectures specially designed for domain-shift robustness @cite_24 @cite_16 , or propose meta-learning procedures for standard architectures @cite_25 @cite_13 . Our method is in the former category, but only requires a minor modification of standard architectures. In a Re-ID context, we are only aware of DIMN @cite_8 as a contemporary attempt at the DG problem setting, which uses a meta-learning approach. Of course, classic feature-engineering approaches @cite_40 are not tied to specific datasets, but these are not competitive with contemporary deep-learning based approaches. While DIMN is effective, it requires a complicated and slow meta-learning procedure for training, which limits its appeal to practitioners. Furthermore, DIMN uses dynamic model synthesis at runtime so it is not amenable to modifications for runtime scalability such as binarization, approximate nearest-neighbour search, and hashing. In contrast, our carefully designed feature extractor is faster out-of-the-box, and can potentially be extended in all of these ways. | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_24",
"@cite_40",
"@cite_31",
"@cite_16",
"@cite_13",
"@cite_25"
],
"mid": [
"2907197374",
"",
"1852255964",
"1979260620",
"",
"1920962657",
"2963043696",
"2889965839"
],
"abstract": [
"Person re-identification (ReID) has achieved significant improvement under the single-domain setting. However, directly applying a model to new domains always incurs a huge performance drop, and adapting the model to new domains without target-domain identity labels is still challenging. In this paper, we address cross-domain ReID and make contributions for both model generalization and adaptation. First, we propose Part Aligned Pooling (PAP) that brings significant improvement for cross-domain testing. Second, we design a Part Segmentation (PS) constraint over ReID feature to enhance alignment and improve model generalization. Finally, we show that applying our PS constraint to unlabeled target domain images serves as effective domain adaptation. We conduct extensive experiments between three large datasets, Market1501, CUHK03 and DukeMTMC-reID. Our model achieves state-of-the-art performance under both source-domain and cross-domain settings. For completeness, we also demonstrate the complementarity of our model to existing domain adaptation methods. The code is available at this https URL.",
"",
"The presence of bias in existing object recognition datasets is now well-known in the computer vision community. While it remains in question whether creating an unbiased dataset is possible given limited resources, in this work we propose a discriminative framework that directly exploits dataset bias during training. In particular, our model learns two sets of weights: (1) bias vectors associated with each individual dataset, and (2) visual world weights that are common to all datasets, which are learned by undoing the associated bias from each dataset. The visual world weights are expected to be our best possible approximation to the object model trained on an unbiased dataset, and thus tend to have good generalization ability. We demonstrate the effectiveness of our model by applying the learned weights to a novel, unseen dataset, and report superior results for both classification and detection tasks compared to a classical SVM that does not account for the presence of bias. Overall, we find that it is beneficial to explicitly account for bias when combining multiple datasets.",
"In this paper, we present an appearance-based method for person re-identification. It consists in the extraction of features that model three complementary aspects of the human appearance: the overall chromatic content, the spatial arrangement of colors into stable regions, and the presence of recurrent local motifs with high entropy. All this information is derived from different body parts, and weighted opportunely by exploiting symmetry and asymmetry perceptual principles. In this way, robustness against very low resolution, occlusions and pose, viewpoint and illumination changes is achieved. The approach applies to situations where the number of candidates varies continuously, considering single images or bunch of frames for each individual. It has been tested on several public benchmark datasets (ViPER, iLIDS, ETHZ), gaining new state-of-the-art performances.",
"",
"The problem of domain generalization is to take knowledge acquired from a number of related domains, where training data is available, and to then successfully apply it to previously unseen domains. We propose a new feature learning algorithm, Multi-Task Autoencoder (MTAE), that provides good generalization performance for cross-domain object recognition. The algorithm extends the standard denoising autoencoder framework by substituting artificially induced corruption with naturally occurring inter-domain variability in the appearance of objects. Instead of reconstructing images from noisy versions, MTAE learns to transform the original image into analogs in multiple related domains. It thereby learns features that are robust to variations across domains. The learnt features are then used as inputs to a classifier. We evaluated the performance of the algorithm on benchmark image recognition datasets, where the task is to learn features from multiple datasets and to then predict the image label from unseen datasets. We found that (denoising) MTAE outperforms alternative autoencoder-based models as well as the current state-of-the-art algorithms for domain generalization.",
"Domain shift refers to the well known problem that a model trained in one source domain performs poorly when applied to a target domain with different statistics. Domain Generalization (DG) techniques attempt to alleviate this issue by producing models which by design generalize well to novel testing domains. We propose a novel meta-learning method for domain generalization. Rather than designing a specific model that is robust to domain shift as in most previous DG work, we propose a model agnostic training procedure for DG. Our algorithm simulates train/test domain shift during training by synthesizing virtual testing domains within each mini-batch. The meta-optimization objective requires that steps to improve training domain performance should also improve testing domain performance. This meta-learning procedure trains models with good generalization ability to novel domains. We evaluate our method and achieve state of the art results on a recent cross-domain image classification benchmark, as well as demonstrating its potential on two classic reinforcement learning tasks.",
"Training models that generalize to unseen domains at test time is a problem of fundamental importance in machine learning. In this work, we propose using regularization to capture this notion of domain generalization. We pose the problem of finding such a regularization function in a Learning to Learn (or) Meta Learning framework. The notion of domain generalization is explicitly captured by learning a regularizer that makes the model trained on one domain to perform well on another domain. Experimental validations on computer vision and natural language datasets indicate that our method can learn regularizers that achieve good cross-domain generalization."
]
} |
1905.03422 | 2944780724 | Contemporary person re-identification (Re-ID) methods usually require access to data from the deployment camera network during training in order to perform well. This is because contemporary Re-ID models trained on one dataset do not generalise to other camera networks due to the domain-shift between datasets. This requirement is often the bottleneck for deploying Re-ID systems in practical security or commercial applications as it may be impossible to collect this data in advance or prohibitively costly to annotate it. This paper alleviates this issue by proposing a simple baseline for domain generalizable (DG) person re-identification. That is, to learn a Re-ID model from a set of source domains that is suitable for application to unseen datasets out-of-the-box, without any model updating. Specifically, we observe that the domain discrepancy in Re-ID is due to style and content variance across datasets and demonstrate that appropriate Instance and Feature Normalization alleviates much of the resulting domain-shift in Deep Re-ID models. Instance Normalization (IN) in early layers filters out style statistic variations and Feature Normalization (FN) in deep layers is able to further eliminate disparity in content statistics. Compared to contemporary alternatives, this approach is extremely simple to implement, while being faster to train and test, thus making it an extremely valuable baseline for implementing Re-ID in practice. With a few lines of code, it increases the rank-1 Re-ID accuracy by 11.7%, 28.9%, 10.1% and 6.3% on the VIPeR, PRID, GRID, and i-LIDS benchmarks respectively. Source code will be made available. | Normalization: Batch Normalization (BN) @cite_49 has become a key technique in CNN training, by standardizing input data or activations using statistics computed over examples in a mini-batch. Instance Normalization (IN) @cite_22 performs BN-like computation over a single sample.
Moreover, the IBN building block recently proposed in @cite_38 enhances models' generalization ability by integrating IN and BN. A different way to combine BN and IN was put forward in @cite_43 . Recently, some effort has been made in feature normalization @cite_52 @cite_19 , mainly applying the @math -norm to the feature embeddings, constraining them to the unit circle. | {
"cite_N": [
"@cite_38",
"@cite_22",
"@cite_52",
"@cite_43",
"@cite_19",
"@cite_49"
],
"mid": [
"2884366600",
"2502312327",
"2901021011",
"2962839335",
"2790592560",
"1836465849"
],
"abstract": [
"Convolutional neural networks (CNNs) have achieved great successes in many computer vision problems. Unlike existing works that designed CNN architectures to improve performance on a single task of a single domain and not generalizable, we present IBN-Net, a novel convolutional architecture, which remarkably enhances a CNN’s modeling ability on one domain (e.g. Cityscapes) as well as its generalization capacity on another domain (e.g. GTA5) without finetuning. IBN-Net carefully integrates Instance Normalization (IN) and Batch Normalization (BN) as building blocks, and can be wrapped into many advanced deep networks to improve their performances. This work has three key contributions. (1) By delving into IN and BN, we disclose that IN learns features that are invariant to appearance changes, such as colors, styles, and virtuality/reality, while BN is essential for preserving content related information. (2) IBN-Net can be applied to many advanced deep architectures, such as DenseNet, ResNet, ResNeXt, and SENet, and consistently improve their performance without increasing computational cost. (3) When applying the trained networks to new domains, e.g. from GTA5 to Cityscapes, IBN-Net achieves comparable improvements as domain adaptation methods, even without using data from the target domain. With IBN-Net, we won the 1st place on the WAD 2018 Challenge Drivable Area track, with an mIoU of 86.18%.",
"In this paper we revisit the fast stylization method introduced in Ulyanov et al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code is made available on github at this https URL. Full paper can be found at arXiv:1701.02096.",
"Unsupervised domain adaptation aims to mitigate the domain shift when transferring knowledge from a supervised source domain to an unsupervised target domain. Adversarial Feature Alignment has been successfully explored to minimize the domain discrepancy. However, existing methods are usually struggling to optimize mixed learning objectives and vulnerable to negative transfer when two domains do not share the identical label space. In this paper, we empirically reveal that the erratic discrimination of target domain mainly reflects in its much lower feature norm value with respect to that of the source domain. We present a non-parametric Adaptive Feature Norm (AFN) approach, which is independent of the association between label spaces of the two domains. We demonstrate that adapting feature norms of source and target domains to achieve equilibrium over a large range of values can result in significant domain transfer gains. Without bells and whistles but a few lines of code, our method largely lifts the discrimination of target domain (23.7% from the Source Only in VisDA2017) and achieves the new state of the art under the vanilla setting. Furthermore, as our approach does not require to deliberately align the feature distributions, it is robust to negative transfer and can outperform the existing approaches under the partial setting by an extremely large margin (9.8% on Office-Home and 14.1% on VisDA2017). Code is available at this https URL. We are responsible for the reproducibility of our method.",
"Real-world image recognition is often challenged by the variability of visual styles including object textures, lighting conditions, filter effects, etc. Although these variations have been deemed to be implicitly handled by more training data and deeper networks, recent advances in image style transfer suggest that it is also possible to explicitly manipulate the style information. Extending this idea to general visual recognition problems, we present Batch-Instance Normalization (BIN) to explicitly normalize unnecessary styles from images. Considering certain style features play an essential role in discriminative tasks, BIN learns to selectively normalize only disturbing styles while preserving useful styles. The proposed normalization module is easily incorporated into existing network architectures such as Residual Networks, and surprisingly improves the recognition performance in various scenarios. Furthermore, experiments verify that BIN effectively adapts to completely different tasks like object classification and style transfer, by controlling the trade-off between preserving and removing style variations.",
"We motivate and present Ring loss, a simple and elegant feature normalization approach for deep networks designed to augment standard loss functions such as Softmax. We argue that deep feature normalization is an important aspect of supervised classification problems where we require the model to represent each class in a multi-class problem equally well. The direct approach to feature normalization through the hard normalization operation results in a non-convex formulation. Instead, Ring loss applies soft normalization, where it gradually learns to constrain the norm to the scaled unit circle while preserving convexity leading to more robust features. We apply Ring loss to large-scale face recognition problems and present results on LFW, the challenging protocols of IJB-A Janus, Janus CS3 (a superset of IJB-A Janus), Celebrity Frontal-Profile (CFP) and MegaFace with 1 million distractors. Ring loss outperforms strong baselines, matches state-of-the-art performance on IJB-A Janus and outperforms all other results on the challenging Janus CS3 thereby achieving state-of-the-art. We also outperform strong baselines in handling extremely low resolution face matching.",
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters."
]
} |
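As an aside to the normalization row above: the IN/BN distinction it catalogues comes down to which axes the mean and variance are computed over, and the IBN idea is to split channels between the two. A minimal NumPy sketch of that axis choice (an illustration under our own naming, not the IBN-Net implementation):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Batch Normalization: statistics over batch and spatial axes (N, H, W),
    # one mean/variance pair per channel, shared across the whole batch.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # Instance Normalization: statistics over spatial axes (H, W) only,
    # computed independently per sample and per channel -- this is what
    # removes per-image "style" statistics.
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def ibn(x, eps=1e-5):
    # IBN-style split (illustrative): IN on the first half of the channels,
    # BN on the second half, concatenated back together.
    half = x.shape[1] // 2
    return np.concatenate([instance_norm(x[:, :half], eps),
                           batch_norm(x[:, half:], eps)], axis=1)
```

Learnable affine parameters (scale and shift) are omitted here; real layers apply them after the standardization step.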
1905.03247 | 2944459050 | Localization and tracking are two very active areas of research for robotics, automation, and the Internet-of-Things. Accurate tracking for a large number of devices usually requires deployment of substantial infrastructure (infrared tracking systems, cameras, wireless antennas, etc.), which is not ideal for inaccessible or protected environments. This paper stems from the challenge posed by such environments: cover a large number of units spread over a large number of small rooms, with minimal required localization infrastructure. The idea is to accurately track the position of handheld devices or mobile robots, without interfering with its architecture. Using Ultra-Wide Band (UWB) devices, we leveraged our expertise in distributed and collaborative robotic systems to develop a novel solution requiring a minimal number of fixed anchors. We discuss a strategy to share the UWB network together with an Extended Kalman filter derivation to collaboratively locate and track UWB-equipped devices, and show results from our experimental campaign tracking visitors in the Chambord castle in France. | In order to share our UWB network without the need for master supervision, we used two strategies: synchronization and Time-Division Multiple Access (TDMA). The former was used in several works focused on sensor network applications for multiple concurrent measurements. A popular approach, the flooding-time synchronization protocol (FTSP) @cite_22 , achieves a low average time offset between arbitrary nodes of the system. However, when considering large scattered configurations, the nodes that require tight synchronization are usually the closest ones. This was addressed using gradient-based synchronization @cite_7 @cite_1 , which gives more importance to the closest nodes to minimize the offset between clocks. To the best of our knowledge, these techniques were never applied to the distributed usage of a UWB network. | {
"cite_N": [
"@cite_1",
"@cite_22",
"@cite_7"
],
"mid": [
"2171436899",
"",
"2143450555"
],
"abstract": [
"Accurately synchronized clocks are crucial for many applications in sensor networks. Existing time synchronization algorithms provide on average good synchronization between arbitrary nodes, however, as we show in this paper, close-by nodes in a network may be synchronized poorly. We propose the Gradient Time Synchronization Protocol (GTSP) which is designed to provide accurately synchronized clocks between neighbors. GTSP works in a completely decentralized fashion: Every node periodically broadcasts its time information. Synchronization messages received from direct neighbors are used to calibrate the logical clock. The algorithm requires neither a tree topology nor a reference node, which makes it robust against link and node failures. The protocol is implemented on the Mica2 platform using TinyOS. We present an evaluation of GTSP on a 20-node testbed setup and simulations on larger network topologies.",
"",
"We introduce the distributed gradient clock synchronization problem. As in traditional distributed clock synchronization, we consider a network of nodes equipped with hardware clocks with bounded drift. Nodes compute logical clock values based on their hardware clocks and message exchanges, and the goal is to synchronize the nodes' logical clocks as closely as possible, while satisfying certain validity conditions. The new feature of gradient clock synchronization (GCS for short) is to require that the skew between any two nodes' logical clocks be bounded by a nondecreasing function of the uncertainty in message delay (call this the distance) between the two nodes. That is, we require nearby nodes to be closely synchronized, and allow faraway nodes to be more loosely synchronized. We contrast GCS with traditional clock synchronization, and discuss several practical motivations for GCS, mostly arising in sensor and ad hoc networks. Our main result is that the worst case clock skew between two nodes at distance d from each other is Ω(d + log D / log log D), where D is the diameter of the network. This means that clock synchronization is not a local property, in the sense that the clock skew between two nodes depends not only on the distance between the nodes, but also on the size of the network. Our lower bound implies, for example, that the TDMA protocol with a fixed slot granularity will fail as the network grows, even if the maximum degree of each node stays constant."
]
} |
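The TDMA strategy in the row above amounts to each node deriving its transmit slot from a clock that synchronization keeps shared. A toy sketch of such a slot schedule (hypothetical frame layout and slot length, not the deployed protocol):

```python
def tdma_slot(now_us, node_id, n_nodes, slot_us=2000):
    """Return (current slot index, whether node_id may transmit).

    Nodes share a synchronized clock in microseconds; a frame of n_nodes
    slots repeats indefinitely and node i owns slot i. The 2 ms slot
    length is an illustrative value only.
    """
    slot = (now_us // slot_us) % n_nodes
    return slot, slot == node_id
```

In practice a guard interval is added around each slot so that residual clock skew between neighbors (the quantity gradient-based synchronization minimizes) cannot cause overlapping transmissions.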
1905.03247 | 2944459050 | Localization and tracking are two very active areas of research for robotics, automation, and the Internet-of-Things. Accurate tracking for a large number of devices usually requires deployment of substantial infrastructure (infrared tracking systems, cameras, wireless antennas, etc.), which is not ideal for inaccessible or protected environments. This paper stems from the challenge posed by such environments: cover a large number of units spread over a large number of small rooms, with minimal required localization infrastructure. The idea is to accurately track the position of handheld devices or mobile robots, without interfering with its architecture. Using Ultra-Wide Band (UWB) devices, we leveraged our expertise in distributed and collaborative robotic systems to develop a novel solution requiring a minimal number of fixed anchors. We discuss a strategy to share the UWB network together with an Extended Kalman filter derivation to collaboratively locate and track UWB-equipped devices, and show results from our experimental campaign tracking visitors in the Chambord castle in France. | Works in mobile robotics have shown reliable performance of the UWB/IMU combination for indoor positioning of rovers @cite_10 and quadcopters @cite_8 . However, both focused on single-robot tracking and used a centralized UWB setup, synchronized over Ethernet. | {
"cite_N": [
"@cite_10",
"@cite_8"
],
"mid": [
"2012393511",
"1521464635"
],
"abstract": [
"Indoor localization of mobile agents using wireless technologies is becoming very important in military and civil applications. This paper introduces an approach for the indoor localization of a mobile agent based on Ultra-WideBand technology using a Biased Extended Kalman Filter (EKF) as a possible technique to improve the localization. The proposed approach allows to use a low-cost IMU (inertial measurement unit) whose performance is improved by a calibration procedure. The obtained results show that the filter allows to obtain better results in terms of localization due to the estimation of bias and scale factor.",
"A state estimator for a quadrocopter is presented, using measurements from an accelerometer, angular rate gyroscope, and a set of ultra-wideband ranging radios. The estimator uses an extended aerodynamic model for the quadrocopter, where the full 3D airspeed is observable through accelerometer measurements. The remaining quadrocopter states, including the yaw orientation, are rendered observable by fusing ultra-wideband range measurements, under the assumption of no wind. The estimator is implemented on a standard microcontroller using readily-available, low-cost sensors. Performance is experimentally investigated in a variety of scenarios, where the quadrocopter is flown under feedback control using the estimator output."
]
} |
1905.03247 | 2944459050 | Localization and tracking are two very active areas of research for robotics, automation, and the Internet-of-Things. Accurate tracking for a large number of devices usually requires deployment of substantial infrastructure (infrared tracking systems, cameras, wireless antennas, etc.), which is not ideal for inaccessible or protected environments. This paper stems from the challenge posed by such environments: cover a large number of units spread over a large number of small rooms, with minimal required localization infrastructure. The idea is to accurately track the position of handheld devices or mobile robots, without interfering with its architecture. Using Ultra-Wide Band (UWB) devices, we leveraged our expertise in distributed and collaborative robotic systems to develop a novel solution requiring a minimal number of fixed anchors. We discuss a strategy to share the UWB network together with an Extended Kalman filter derivation to collaboratively locate and track UWB-equipped devices, and show results from our experimental campaign tracking visitors in the Chambord castle in France. | @cite_10 considered the IMU as the process input and derived a non-linear process noise model including bias. Merging the Ubisense (UWB) measurements with a low-cost IMU, they showed that the localization accuracy can be improved. @cite_8 used the IMU to estimate the drag force of quadcopters and input this measure into their EKF, together with measurements from a custom UWB radio. These examples demonstrate the high accuracy ( @math cm) of their strategy. For a collaborative strategy on multi-agent localization, the work of Prorok and Martinoli @cite_18 reached comparable accuracy ( @math cm) by modeling the UWB measurement error and compensating it with relative positioning between the robots (a separate module based on infrared). Instead of an EKF, they used a particle filter, also rather commonly used with UWB.
A recent study on the collaborative use of UWB showed that two-way ranging can give better results than simple time-difference of arrival @cite_12 . Their results confirmed our design choices, as their setup was conducted with a fixed transmission scheme (not dynamic) and the tags' positions were computed on a central computer. Finally, the design of our EKF was inspired by previous works that focused on tracking a single UWB tag @cite_19 @cite_3 . | {
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_3",
"@cite_19",
"@cite_10",
"@cite_12"
],
"mid": [
"2056842608",
"1521464635",
"2131756873",
"2115969888",
"2012393511",
"2422052711"
],
"abstract": [
"Ultra-wideband (UWB) localization is a recent technology that performs competitively with many indoor localization methods currently available. Despite its desirable traits, such as potential high accuracy and high material penetrability, the resolution of non-line-of-sight signals remains a very hard problem and has a significant impact on the localization performance. In this work, we address the peculiarities of UWB error behavior by building models that capture the spatiality as well as the multimodal statistics of the error behavior. Our framework utilizes tessellated maps that associate probabilistic error models to localities in space. In addition to our UWB localization strategy (which provides absolute position estimates), we investigate the effects of collaboration in the form of relative positioning. To this end, we develop a relative range and bearing model and, together with the UWB model, present a unified localization technique based on a particle filter framework. We test our approach experimentally on a group of 10 mobile robots equipped with UWB emitters and extension modules providing inter-robot relative range and bearing measurements. Our experimental insights highlight the benefits of collaboration, which are consistent over numerous experimental scenarios. Also, we show the relevance, in terms of positioning accuracy, of our multimodal UWB measurement model by performing systematic comparisons with two alternative measurement models. Our final results show median localization errors below 10 cm in cluttered environments, using a modest set of 50 particles in our filter.",
"A state estimator for a quadrocopter is presented, using measurements from an accelerometer, angular rate gyroscope, and a set of ultra-wideband ranging radios. The estimator uses an extended aerodynamic model for the quadrocopter, where the full 3D airspeed is observable through accelerometer measurements. The remaining quadrocopter states, including the yaw orientation, are rendered observable by fusing ultra-wideband range measurements, under the assumption of no wind. The estimator is implemented on a standard microcontroller using readily-available, low-cost sensors. Performance is experimentally investigated in a variety of scenarios, where the quadrocopter is flown under feedback control using the estimator output.",
"In this paper we propose a hybrid localization system combining an Ultra-Wideband localization system with inertial sensors. Algorithms for dead reckoning as well as the fusion of data provided by the UWB system and the inertial sensors are presented. Finally experimental results are shown.",
"In this paper we propose a 6DOF tracking system combining Ultra-Wideband measurements with low-cost MEMS inertial measurements. A tightly coupled system is developed which estimates position as well as orientation of the sensor-unit while being reliable in case of multipath effects and NLOS conditions. The experimental results show robust and continuous tracking in a realistic indoor positioning scenario.",
"Indoor localization of mobile agents using wireless technologies is becoming very important in military and civil applications. This paper introduces an approach for the indoor localization of a mobile agent based on Ultra-WideBand technology using a Biased Extended Kalman Filter (EKF) as a possible technique to improve the localization. The proposed approach allows to use a low-cost IMU (inertial measurement unit) whose performance is improved by a calibration procedure. The obtained results show that the filter allows to obtain better results in terms of localization due to the estimation of bias and scale factor.",
"Ultra-wideband positioning systems intended for indoor applications often work in non-line of sight conditions, which result in insufficient precision and accuracy of derived localizations. One of the possible solutions is the implementation of cooperative positioning techniques. The following paper describes a cooperative ultra-wideband positioning system which calculates tag position from TDOA and distance-between-tags measurements. In the paper, the positioning system architecture is described and an exemplary transmission scheme for cooperative systems is presented. The considered localization system utilizes an Extended Kalman Filter based algorithm. The algorithm was investigated with simulations and experiments. The conducted experiment consisted in fusing results gathered from a typical TDOA positioning system infrastructure and ranging results obtained with ultra-wideband radio modules. The research has shown that the use of the presented cooperative algorithm increases positioning precision."
]
} |
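The EKF-based UWB tracking summarized in the row above hinges on one nonlinear measurement model: the range between the tag and an anchor at a known position. A minimal sketch of that correction step for a 2D constant-position state (hypothetical numbers; not the paper's filter, which also fuses IMU data):

```python
import numpy as np

def ekf_range_update(x, P, anchor, z, r_var):
    """One EKF correction with a single UWB range measurement.

    x: state estimate [px, py]; P: 2x2 covariance; anchor: known anchor
    position; z: measured range; r_var: range-noise variance.
    Assumes the tag is not exactly at the anchor (pred > 0).
    """
    d = x - anchor
    pred = np.linalg.norm(d)          # h(x): predicted range
    H = (d / pred).reshape(1, 2)      # Jacobian of h at the current estimate
    S = H @ P @ H.T + r_var           # innovation covariance (1x1)
    K = P @ H.T / S                   # Kalman gain (2x1)
    x_new = x + (K * (z - pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new
```

With a large prior covariance, one range measurement pulls the estimate along the line toward the anchor and sharply reduces uncertainty in that direction, while the perpendicular direction stays uncertain until a second anchor (or a peer's cooperative measurement) is fused.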
1905.03051 | 2946293979 | In recent years, Signal Temporal Logic (STL) has gained traction as a practical and expressive means of encoding control objectives for robotic and cyber-physical systems. The state-of-the-art in STL trajectory synthesis is to formulate the problem as a Mixed Integer Linear Program (MILP). The MILP approach is sound and complete for bounded specifications, but such strong correctness guarantees come at the price of exponential complexity in the number of predicates and the time bound of the specification. In this work, we propose an alternative synthesis paradigm that relies on Bayesian optimization rather than mixed integer programming. This relaxes the completeness guarantee to probabilistic completeness, but is significantly more efficient: our approach scales polynomially in the STL time-bound and linearly in the number of predicates. We prove that our approach is sound and probabilistically complete, and demonstrate its scalability with a nontrivial example. | Another promising approach is to use Satisfiability Modulo Theories (SMT) to find a feasible solution, though not necessarily the optimal one @cite_12 @cite_13 . This approach is intuitively attractive in the context of robotics, where satisfying the specification may be more desirable than finding a perfectly optimal trajectory. Early results for Linear Temporal Logic (LTL) specifications indicate good potential on a variety of interesting problems. SMT is a generalization of the NP-complete Boolean satisfiability checking problem, however, and avoiding the associated worst-case exponential complexity may be nontrivial. | {
"cite_N": [
"@cite_13",
"@cite_12"
],
"mid": [
"2569270512",
"2784322399"
],
"abstract": [
"We present a scalable robot motion planning algorithm for reach-avoid problems. We assume a discrete-time, linear model of the robot dynamics and a workspace described by a set of obstacles and a target region, where both the obstacles and the region are polyhedra. Our goal is to construct a trajectory, and the associated control strategy, that steers the robot from its initial point to the target while avoiding obstacles. Differently from previous approaches, based on the discretization of the continuous state space or uniform discretization of the workspace, our approach, inspired by the lazy satisfiability modulo theory paradigm, decomposes the planning problem into smaller subproblems, which can be efficiently solved using specialized solvers. At each iteration, we use a coarse, obstacle-based discretization of the workspace to obtain candidate high-level, discrete plans that solve a set of Boolean constraints, while completely abstracting the low-level continuous dynamics. The feasibility of the proposed plans is then checked via a convex program, under constraints on both the system dynamics and the control inputs, and new candidate plans are generated until a feasible one is found. To achieve scalability, we show how to generate succinct explanations for the infeasibility of a discrete plan by exploiting a relaxation of the convex program that allows detecting the earliest possible occurrence of an infeasible transition between workspace regions. Simulation results show that our algorithm favorably compares with state-of-the-art techniques and scales well for complex systems, including robot dynamics with up to 50 continuous states.",
"We present an efficient algorithm for multi-robot motion planning from linear temporal logic (LTL) specifications. We assume that the dynamics of each robot can be described by a discrete-time, linear system together with constraints on the control inputs and state variables. Given an LTL formula specifying the multi-robot mission, our goal is to construct a set of collision-free trajectories for all robots, and the associated control strategies, to satisfy We show that the motion planning problem can be formulated as the feasibility problem for a formula p over Boolean and convex constraints, respectively capturing the LTL specification and the robot dynamics. We then adopt a satisfiability modulo convex (SMC) programming approach that exploits a monotonicity property of p to decompose the problem into smaller subproblems. Simulation results show that our algorithm is more than one order of magnitude faster than state-of-the-art sampling-based techniques for high-dimensional state spaces while supporting complex missions."
]
} |
1905.03156 | 2944088136 | A design-centric modeling approach was proposed to model the behavior of the physical process controlled by an Industrial Control System (ICS) and study the cascading effects of data-oriented attacks. A threat model was used as input to guide the construction of the model where control components which are within the adversary's intent and capabilities are extracted. The relevant control components are subsequently modeled together with their control dependencies and operational design specifications. The approach was demonstrated and validated on a water treatment testbed. Attacks were simulated on the testbed model where its resilience to attacks was evaluated using proposed metrics such as Impact Ratio and Time-to-Critical-State. From the analysis of the attacks, design strengths and weaknesses were identified and design improvements were recommended to increase the testbed's resilience to attacks. | In @cite_17 , a methodology was proposed for assessing the impact of attacks by measuring the cross-covariances of control variables before and after the system is perturbed. While this method provides insight into how the impact of an attack propagates through the system via the relationships between control variables, it does not translate to the consequences of attacks on system performance. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2082281040"
],
"abstract": [
"The massive proliferation of information and communications technologies (hardware and software) into the heart of modern critical infrastructures has given birth to a unique technological ecosystem. Despite the many advantages brought about by modern information and communications technologies, the shift from isolated environments to \"systems-of-systems\" integrated with massive information and communications infrastructures (e.g., the Internet) exposes critical infrastructures to significant cyber threats. Therefore, it is imperative to develop approaches for identifying and ranking assets in complex, large-scale and heterogeneous critical infrastructures. To address these challenges, this paper proposes a novel methodology for assessing the impacts of cyber attacks on critical infrastructures. The methodology is inspired by research in system dynamics and sensitivity analysis. The proposed behavioral analysis methodology computes the covariances of the observed variables before and after the execution of a specific intervention involving the control variables. Metrics are proposed for quantifying the significance of control variables and measuring the impact propagation of cyber attacks.Experiments conducted on the IEEE 14-bus and IEEE 300-bus electric grid models, and on the well-known Tennessee Eastman chemical process demonstrate the efficiency, scalability and cross-sector applicability of the proposed methodology in several attack scenarios. The advantages of the methodology over graph-theoretic and electrical centrality metric approaches are demonstrated using several test cases. Finally, a novel, stealthy cyber-physical attack is demonstrated against a simulated power grid; this attack can be used to analyze the precision of anomaly detection systems."
]
} |
1905.03156 | 2944088136 | A design-centric modeling approach was proposed to model the behavior of the physical process controlled by an Industrial Control System (ICS) and study the cascading effects of data-oriented attacks. A threat model was used as input to guide the construction of the model where control components which are within the adversary's intent and capabilities are extracted. The relevant control components are subsequently modeled together with their control dependencies and operational design specifications. The approach was demonstrated and validated on a water treatment testbed. Attacks were simulated on the testbed model where its resilience to attacks was evaluated using proposed metrics such as Impact Ratio and Time-to-Critical-State. From the analysis of the attacks, design strengths and weaknesses were identified and design improvements were recommended to increase the testbed's resilience to attacks. | In @cite_4 , a framework was proposed to measure the impact of attacks on stochastic linear control systems using the infinity norm of critical states over a time window. The impact is measured by how much the critical states of the system deviate from the steady state over a number of time steps while remaining undetected by an anomaly detector. The impact metric indicates the extent to which the system is perturbed during an attack but does not resolve the impact on the physical process. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2903296787"
],
"abstract": [
"Risk assessment is an inevitable step in the implementation of cost-effective security strategies for control systems. One of the difficulties of risk assessment is to estimate the impact cyber-attacks may have. This paper proposes a framework to estimate the impact of several cyber-attack strategies against a dynamical control system equipped with an anomaly detector. In particular, we consider denial of service, sign alternation, rerouting, replay, false data injection, and bias injection attack strategies. The anomaly detectors we consider are stateless, cumulative sum, and multivariate exponentially weighted moving average detectors. As a measure of the attack impact, we adopt the infinity norm of critical states after a fixed number of time steps. For this measure and the aforementioned anomaly detectors, we prove that the attack impact for all of the attack strategies can be reduced to the problem of solving a set of convex minimization problems. Therefore, the exact value of the attack impact can be obtained easily. We demonstrate how our modeling framework can be used for risk assessment on a numerical example."
]
} |
1905.03156 | 2944088136 | A design-centric modeling approach was proposed to model the behavior of the physical process controlled by an Industrial Control System (ICS) and study the cascading effects of data-oriented attacks. A threat model was used as input to guide the construction of the model where control components which are within the adversary's intent and capabilities are extracted. The relevant control components are subsequently modeled together with their control dependencies and operational design specifications. The approach was demonstrated and validated on a water treatment testbed. Attacks were simulated on the testbed model where its resilience to attacks was evaluated using proposed metrics such as Impact Ratio and Time-to-Critical-State. From the analysis of the attacks, design strengths and weaknesses were identified and design improvements were recommended to increase the testbed's resilience to attacks. | In @cite_15 , Orojloo and Azgomi proposed a modeling approach that considers the system dynamics and the control dependencies between the various components of a cyber-physical system. The model was used to perform sensitivity analysis to understand the system behaviour under various attack scenarios, providing insight into vulnerable control loops. The impact of attacks on specific components on the system's physical parameters was subsequently used to evaluate each component's criticality for successful attacks. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2509307846"
],
"abstract": [
"Abstract Estimating the possible impacts of security attacks on physical processes can help to rank the critical assets based on their sensitivity to performed attacks and predict their attractiveness from the attacker’s point of view. To address this challenge, this paper proposes a new method for assessing the direct and indirect impacts of attacks on cyber–physical systems (CPSs). The proposed method studies the dynamic behavior of systems in normal situation and under security attacks and evaluates the consequence propagation of attacks. The inputs to the model are control parameters including sensor readings and controller signals. The output of the model is evaluating the consequence propagation of attacks, ranking the important assets of systems based on their sensitivity to conducted attacks, and prioritizing the attacks based on their impacts on the behavior of system. The validation phase of the proposed method is performed by modeling and evaluating the consequence propagation of attacks against a boiling water power plant (BWPP)."
]
} |
1905.03156 | 2944088136 | A design-centric modeling approach was proposed to model the behavior of the physical process controlled by an Industrial Control System (ICS) and study the cascading effects of data-oriented attacks. A threat model was used as input to guide the construction of the model where control components which are within the adversary's intent and capabilities are extracted. The relevant control components are subsequently modeled together with their control dependencies and operational design specifications. The approach was demonstrated and validated on a water treatment testbed. Attacks were simulated on the testbed model where its resilience to attacks was evaluated using proposed metrics such as Impact Ratio and Time-to-Critical-State. From the analysis of the attacks, design strengths and weaknesses were identified and design improvements were recommended to increase the testbed's resilience to attacks. | Adepu and Mathur in @cite_5 studied the response of a water treatment plant to single-point attacks. The attack propagation, measured in terms of the number of components in the system that were affected, was analyzed. System behavior, such as changes in physical process metrics during attacks, was investigated. The results of the study were used to propose attack detection mechanisms based on the physical properties of the system. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2293378006"
],
"abstract": [
"An experimental investigation was undertaken to understand the impact of single-point cyber attacks on a Secure Water Treatment (SWaT) system. Cyber attacks were launched on SWaT through its SCADA server that connects to the Programmable Logic Controllers (PLCs) that in turn are connected to sensors and actuators. Attacks were designed to meet attacker objectives selected from a novel attacker model. Outcome of the experiments led to a better understanding of (a) the propagation of an attack across the system measured in terms of the number of components affected and (b) the behavior of the water treatment process in SWaT in response to the attacks. The observed response to various attacks was then used to propose attack detection mechanisms based on various physical properties measured during the treatment process."
]
} |
1905.03066 | 2944551804 | In this paper, we describe a strategy for training neural networks for object detection in range images obtained from one type of LiDAR sensor using labeled data from a different type of LiDAR sensor. Additionally, an efficient model for object detection in range images for use in self-driving cars is presented. Currently, the highest performing algorithms for object detection from LiDAR measurements are based on neural networks. Training these networks using supervised learning requires large annotated datasets. Therefore, most research using neural networks for object detection from LiDAR point clouds is conducted on a very small number of publicly available datasets. Consequently, only a small number of sensor types are used. We use an existing annotated dataset to train a neural network that can be used with a LiDAR sensor that has a lower resolution than the one used for recording the annotated dataset. This is done by simulating data from the lower resolution LiDAR sensor based on the higher resolution dataset. Furthermore, improvements to models that use LiDAR range images for object detection are presented. The results are validated using both simulated sensor data and data from an actual lower resolution sensor mounted to a research vehicle. It is shown that the model can detect objects from 360 range images in real time. | Scanning laser sensors have been in use in research vehicles for driver-assistance systems and self-driving cars for over two decades @cite_7 . Early object detection was based on clustering and additional post-processing @cite_17 @cite_25 . In recent years, there has been significant progress in object detection using LiDAR sensors due to the availability of higher resolution LiDAR sensors, publicly available datasets (especially KITTI), and the progress in deep learning. | {
"cite_N": [
"@cite_25",
"@cite_7",
"@cite_17"
],
"mid": [
"113755918",
"1576903416",
"2040472285"
],
"abstract": [
"",
"In this paper, the authors describe a road boundaries detection system that is based on rage data. A laser scanner mounted on a vehicle is used to measure distances and detect road boundaries. The range data is used to estimate the parameters of a model of the road boundaries. Kalman filtering is used to process successive scans. Test results indicate that the road boundaries can be reliably detected as long as guardrails or posts are available.",
"Abstract This submission is concerned with obstacle detection and tracking for an autonomous, unsupervised vehicle. A multisensor concept is proposed yielding a high level of reliability and security. It includes a variety of different sensor technologies with widely overlapping fields of view between the individual sensors. The major sensors for obstacle detection comprise a self-assessing vision sensor directed forwards and a laser scanner system surveying 360° around the vehicle. Preliminary results indicate the high reliability of the sensor system."
]
} |
1905.03066 | 2944551804 | In this paper, we describe a strategy for training neural networks for object detection in range images obtained from one type of LiDAR sensor using labeled data from a different type of LiDAR sensor. Additionally, an efficient model for object detection in range images for use in self-driving cars is presented. Currently, the highest performing algorithms for object detection from LiDAR measurements are based on neural networks. Training these networks using supervised learning requires large annotated datasets. Therefore, most research using neural networks for object detection from LiDAR point clouds is conducted on a very small number of publicly available datasets. Consequently, only a small number of sensor types are used. We use an existing annotated dataset to train a neural network that can be used with a LiDAR sensor that has a lower resolution than the one used for recording the annotated dataset. This is done by simulating data from the lower resolution LiDAR sensor based on the higher resolution dataset. Furthermore, improvements to models that use LiDAR range images for object detection are presented. The results are validated using both simulated sensor data and data from an actual lower resolution sensor mounted to a research vehicle. It is shown that the model can detect objects from 360 range images in real time. | Many of the neural networks used for detecting objects in LiDAR point clouds are based on ideas from convolutional neural networks (CNNs) that detect objects in 2D images. This includes both methods that create a dense grid of predictions, such as @cite_6 , and methods that output predictions for a previously generated set of region proposals, e.g., @cite_1 @cite_8 . An overview of CNN-based 2D object detection methods can be found in @cite_15 . | {
"cite_N": [
"@cite_15",
"@cite_1",
"@cite_6",
"@cite_8"
],
"mid": [
"2557728737",
"",
"2963037989",
"2613718673"
],
"abstract": [
"The goal of this paper is to serve as a guide for selecting a detection architecture that achieves the right speed memory accuracy balance for a given application and platform. To this end, we investigate various ways to trade accuracy for speed and memory usage in modern convolutional object detection systems. A number of successful systems have been proposed in recent years, but apples-toapples comparisons are difficult due to different base feature extractors (e.g., VGG, Residual Networks), different default image resolutions, as well as different hardware and software platforms. We present a unified implementation of the Faster R-CNN [30], R-FCN [6] and SSD [25] systems, which we view as meta-architectures and trace out the speed accuracy trade-off curve created by using alternative feature extractors and varying other critical parameters such as image size within each of these meta-architectures. On one extreme end of this spectrum where speed and memory are critical, we present a detector that achieves real time speeds and can be deployed on a mobile device. On the opposite end in which accuracy is critical, we present a detector that achieves state-of-the-art performance measured on the COCO detection task.",
"",
"We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn."
]
} |
1905.03066 | 2944551804 | In this paper, we describe a strategy for training neural networks for object detection in range images obtained from one type of LiDAR sensor using labeled data from a different type of LiDAR sensor. Additionally, an efficient model for object detection in range images for use in self-driving cars is presented. Currently, the highest performing algorithms for object detection from LiDAR measurements are based on neural networks. Training these networks using supervised learning requires large annotated datasets. Therefore, most research using neural networks for object detection from LiDAR point clouds is conducted on a very small number of publicly available datasets. Consequently, only a small number of sensor types are used. We use an existing annotated dataset to train a neural network that can be used with a LiDAR sensor that has a lower resolution than the one used for recording the annotated dataset. This is done by simulating data from the lower resolution LiDAR sensor based on the higher resolution dataset. Furthermore, improvements to models that use LiDAR range images for object detection are presented. The results are validated using both simulated sensor data and data from an actual lower resolution sensor mounted to a research vehicle. It is shown that the model can detect objects from 360 range images in real time. | To process LiDAR data in neural networks, various representations of LiDAR point clouds have been used. One approach is to create a bird's eye view (BEV) image by projecting the LiDAR points onto a ground plane and encoding information such as reflectivity, height and density in channels of the input image of the network (see, e.g., @cite_3 @cite_30 @cite_27 ). LiDAR data represented as range images has been used in CNNs to detect objects @cite_5 @cite_16 . As shown in @cite_4 , this representation can also be used together with other representations (including BEV) in a single network.
Their evaluation suggests that using BEV can achieve a better detection performance than methods using the range image representation. One advantage of both the BEV and the range image representation is that standard 2D convolutions and network architectures very similar to those used in 2D object detection can be used. This makes them relatively easy to implement and adapt. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_3",
"@cite_27",
"@cite_5",
"@cite_16"
],
"mid": [
"2963438049",
"2555618208",
"2766690012",
"2798965597",
"2415454270",
"2963120444"
],
"abstract": [
"Understanding driving situations regardless the conditions of the traffic scene is a cornerstone on the path towards autonomous vehicles; however, despite common sensor setups already include complementary devices such as LiDAR or radar, most of the research on perception systems has traditionally focused on computer vision. We present a LiDAR-based 3D object detection pipeline entailing three stages. First, laser information is projected into a novel cell encoding for bird's eye view projection. Later, both object location on the plane and its heading are estimated through a convolutional neural network originally designed for image processing. Finally, 3D oriented detections are computed in a post-processing phase. Experiments on KITTI dataset show that the proposed framework achieves state-of-the-art results among comparable methods. Further tests with different LiDAR sensors in real scenarios assess the multi-device capabilities of the approach.",
"This paper aims at high-accuracy 3D object detection in autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the birds eye view representation of 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25 and 30 AP on the tasks of 3D localization and 3D detection. In addition, for 2D detection, our approach obtains 14.9 higher AP than the state-of-the-art on the hard data among the LIDAR-based methods.",
"For autonomous vehicles, the ability to detect and localize surrounding vehicles is critical. It is fundamental for further processing steps like collision avoidance or path planning. This paper introduces a convolutional neural network- based vehicle detection and localization method using point cloud data acquired by a LIDAR sensor. Acquired point clouds are transformed into bird's eye view elevation images, where each pixel represents a grid cell of the horizontal x-y plane. We intentionally encode each pixel using three channels, namely the maximal, median and minimal height value of all points within the respective grid. A major advantage of this three channel representation is that it allows us to utilize common RGB image-based detection networks without modification. The bird's eye view elevation images are processed by a two stage detector. Due to the nature of the bird's eye view, each pixel of the image represent ground coordinates, meaning that the bounding box of detected vehicles correspond directly to the horizontal position of the vehicles. Therefore, in contrast to RGB-based detectors, we not just detect the vehicles, but simultaneously localize them in ground coordinates. To evaluate the accuracy of our method and the usefulness for further high-level applications like path planning, we evaluate the detection results based on the localization error in ground coordinates. Our proposed method achieves an average precision of 87.9 for an intersection over union (IoU) value of 0.5. In addition, 75 of the detected cars are localized with an absolute positioning error of below 0.2m.",
"We address the problem of real-time 3D object detection from point clouds in the context of autonomous driving. Speed is critical as detection is a necessary component for safety. Existing approaches are, however, expensive in computation due to high dimensionality of point clouds. We utilize the 3D data more efficiently by representing the scene from the Bird's Eye View (BEV), and propose PIXOR, a proposal-free, single-stage detector that outputs oriented 3D object estimates decoded from pixel-wise neural network predictions. The input representation, network architecture, and model optimization are specially designed to balance high accuracy and real-time efficiency. We validate PIXOR on two datasets: the KITTI BEV object detection benchmark, and a large-scale 3D vehicle detection benchmark. In both datasets we show that the proposed detector surpasses other state-of-the-art methods notably in terms of Average Precision (AP), while still runs at 10 FPS.",
"Convolutional network techniques have recently achieved great success in vision based detection tasks. This paper introduces the recent development of our research on transplanting the fully convolutional network technique to the detection tasks on 3D range scan data. Specifically, the scenario is set as the vehicle detection task from the range data of Velodyne 64E lidar. We proposes to present the data in a 2D point map and use a single 2D end-to-end fully convolutional network to predict the objectness confidence and the bounding boxes simultaneously. By carefully design the bounding box encoding, it is able to predict full 3D bounding boxes even using a 2D convolutional network. Experiments on the KITTI dataset shows the state-of-the-art performance of the proposed method.",
""
]
} |
1905.03066 | 2944551804 | In this paper, we describe a strategy for training neural networks for object detection in range images obtained from one type of LiDAR sensor using labeled data from a different type of LiDAR sensor. Additionally, an efficient model for object detection in range images for use in self-driving cars is presented. Currently, the highest performing algorithms for object detection from LiDAR measurements are based on neural networks. Training these networks using supervised learning requires large annotated datasets. Therefore, most research using neural networks for object detection from LiDAR point clouds is conducted on a very small number of publicly available datasets. Consequently, only a small number of sensor types are used. We use an existing annotated dataset to train a neural network that can be used with a LiDAR sensor that has a lower resolution than the one used for recording the annotated dataset. This is done by simulating data from the lower resolution LiDAR sensor based on the higher resolution dataset. Furthermore, improvements to models that use LiDAR range images for object detection are presented. The results are validated using both simulated sensor data and data from an actual lower resolution sensor mounted to a research vehicle. It is shown that the model can detect objects from 360 range images in real time. | An alternative to approaches that use a two-dimensional representation of the point cloud is network structures that operate directly on three-dimensional data. @cite_11 uses a three-dimensional grid and 3D convolutions to predict objects. An architecture that can exploit the sparsity of data in three-dimensional grids is presented in @cite_19 . In @cite_28 , a neural network that can operate directly on unstructured point clouds and is inherently invariant to permutations of the points in the point cloud has been proposed.
Multiple ways to adopt this idea for automotive object detection have been presented @cite_29 @cite_14 @cite_10 . A comprehensive overview of various object detection methods for autonomous driving can be found in @cite_20 and @cite_9 . | {
"cite_N": [
"@cite_14",
"@cite_28",
"@cite_29",
"@cite_9",
"@cite_19",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"2964062501",
"2560609797",
"2963727135",
"2917580909",
"2963721253",
"2968296999",
"2911486422",
"2558294288"
],
"abstract": [
"In this work, we study 3D object detection from RGBD data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. Benefited from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability.",
"Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.",
"Accurate detection of objects in 3D point clouds is a central problem in many applications, such as autonomous navigation, housekeeping robots, and augmented virtual reality. To interface a highly sparse LiDAR point cloud with a region proposal network (RPN), most existing efforts have focused on hand-crafted feature representations, for example, a bird's eye view projection. In this work, we remove the need of manual feature engineering for 3D point clouds and propose VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single stage, end-to-end trainable deep network. Specifically, VoxelNet divides a point cloud into equally spaced 3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly introduced voxel feature encoding (VFE) layer. In this way, the point cloud is encoded as a descriptive volumetric representation, which is then connected to a RPN to generate detections. Experiments on the KITTI car detection benchmark show that VoxelNet outperforms the state-of-the-art LiDAR based 3D detection methods by a large margin. Furthermore, our network learns an effective discriminative representation of objects with various geometries, leading to encouraging results in 3D detection of pedestrians and cyclists, based on only LiDAR.",
"Recent advancements in the perception for autonomous driving are driven by deep learning. In order to achieve the robust and accurate scene understanding, autonomous vehicles are usually equipped with different sensors (e.g. cameras, LiDARs, Radars), and multiple sensing modalities can be fused to exploit their complementary properties. In this context, many methods have been proposed for deep multi-modal perception problems. However, there is no general guideline for network architecture design, and questions of \"what to fuse\", \"when to fuse\", and \"how to fuse\" remain open. This review paper attempts to systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving. To this end, we first provide an overview of on-board sensors on test vehicles, open datasets and the background information of object detection and semantic segmentation for the autonomous driving research. We then summarize the fusion methodologies and discuss challenges and open questions. In the appendix, we provide tables that summarize topics and methods. We also provide an interactive online platform to navigate each reference: this https URL.",
"This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L 1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L 1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that VoteSDeep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40 while remaining highly competitive in terms of processing time.",
"",
"An autonomous vehicle (AV) requires an accurate perception of its surrounding environment to operate reliably. The perception system of an AV, which normally employs machine learning (e.g., deep learning), transforms sensory data into semantic information that enables autonomous driving. Object detection is a fundamental function of this perception system, which has been tackled by several works, most of them using 2D detection methods. However, the 2D methods do not provide depth information, which is required for driving tasks, such as path planning, collision avoidance, and so on. Alternatively, the 3D object detection methods introduce a third dimension that reveals more detailed object's size and location information. Nonetheless, the detection accuracy of such methods needs to be improved. To the best of our knowledge, this is the first survey on 3D object detection methods used for autonomous driving applications. This paper presents an overview of 3D object detection methods and prevalently used sensors and datasets in AVs. It then discusses and categorizes the recent works based on sensors modalities into monocular, point cloud-based, and fusion methods. We then summarize the results of the surveyed works and identify the research gaps and future research directions.",
"2D fully convolutional network has been recently successfully applied to the object detection problem on images. In this paper, we extend the fully convolutional network based detection techniques to 3D and apply it to point cloud data. The proposed approach is verified on the task of vehicle detection from lidar point cloud for autonomous driving. Experiments on the KITTI dataset shows significant performance improvement over the previous point cloud based detection approaches."
]
} |
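The permutation invariance of PointNet-style architectures mentioned in the row above can be illustrated with a minimal sketch: a shared per-point transform followed by a symmetric (max) pooling step. This is only an illustrative toy with random weights, not the original network.

```python
import numpy as np

def shared_mlp(points, weights, bias):
    """Apply the same linear + ReLU transform to every point (a shared MLP)."""
    return np.maximum(points @ weights + bias, 0.0)

def pointnet_global_feature(points, weights, bias):
    """Aggregate per-point features with max pooling -- a symmetric function,
    so the result is invariant to the ordering of the input points."""
    features = shared_mlp(points, weights, bias)
    return features.max(axis=0)

rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 3))            # 128 points in 3D
W, b = rng.normal(size=(3, 16)), np.zeros(16)

feat = pointnet_global_feature(cloud, W, b)
shuffled = cloud[rng.permutation(len(cloud))]
# Shuffling the points does not change the global feature.
assert np.allclose(feat, pointnet_global_feature(shuffled, W, b))
```

Because max pooling is order-independent, any permutation of the input rows yields the same global descriptor, which is the key property exploited for unstructured point clouds.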
1905.03066 | 2944551804 | In this paper, we describe a strategy for training neural networks for object detection in range images obtained from one type of LiDAR sensor using labeled data from a different type of LiDAR sensor. Additionally, an efficient model for object detection in range images for use in self-driving cars is presented. Currently, the highest performing algorithms for object detection from LiDAR measurements are based on neural networks. Training these networks using supervised learning requires large annotated datasets. Therefore, most research using neural networks for object detection from LiDAR point clouds is conducted on a very small number of publicly available datasets. Consequently, only a small number of sensor types are used. We use an existing annotated dataset to train a neural network that can be used with a LiDAR sensor that has a lower resolution than the one used for recording the annotated dataset. This is done by simulating data from the lower resolution LiDAR sensor based on the higher resolution dataset. Furthermore, improvements to models that use LiDAR range images for object detection are presented. The results are validated using both simulated sensor data and data from an actual lower resolution sensor mounted to a research vehicle. It is shown that the model can detect objects from 360 range images in real time. | The vast majority of these object detection algorithms is evaluated using data from the same type of sensor that was also used for training. There are however some exceptions to this. @cite_30 shows that their BEV-based network, which was trained on KITTI, can also predict objects using data from lower resolution lidar sensors. In @cite_24 , a CNN for range images from a VLP-16 sensor with only 16 channels is trained using a part of the KITTI LiDAR data. This CNN is used to perform a point-wise classification of the range image. Then, a clustering-based approach is used to extract objects. | {
"cite_N": [
"@cite_30",
"@cite_24"
],
"mid": [
"2963438049",
"2767597440"
],
"abstract": [
"Understanding driving situations regardless the conditions of the traffic scene is a cornerstone on the path towards autonomous vehicles; however, despite common sensor setups already include complementary devices such as LiDAR or radar, most of the research on perception systems has traditionally focused on computer vision. We present a LiDAR-based 3D object detection pipeline entailing three stages. First, laser information is projected into a novel cell encoding for bird's eye view projection. Later, both object location on the plane and its heading are estimated through a convolutional neural network originally designed for image processing. Finally, 3D oriented detections are computed in a post-processing phase. Experiments on KITTI dataset show that the proposed framework achieves state-of-the-art results among comparable methods. Further tests with different LiDAR sensors in real scenarios assess the multi-device capabilities of the approach.",
"Vehicle detection and tracking in real scenarios are key components to develop assisted and autonomous driving systems. Lidar sensors are specially suitable for this task, as they bring robustness to harsh weather conditions while providing accurate spatial information. However, the resolution provided by point cloud data is very scarce in comparison to camera images. In this work we explore the possibilities of Deep Learning (DL) methodologies applied to low resolution 3D lidar sensors such as the Velodyne VLP-16 (PUCK), in the context of vehicle detection and tracking. For this purpose we developed a lidar-based system that uses a Convolutional Neural Network (CNN), to perform point-wise vehicle detection using PUCK data, and Multi-Hypothesis Extended Kalman Filters (MH-EKF), to estimate the actual position and velocities of the detected vehicles. Comparative studies between the proposed lower resolution (VLP-16) tracking system and a high-end system, using Velodyne HDL-64, were carried out on the Kitti Tracking Benchmark dataset. Moreover, to analyze the influence of the CNN-based vehicle detection approach, comparisons were also performed with respect to the geometric-only detector. The results demonstrate that the proposed low resolution Deep Learning architecture is able to successfully accomplish the vehicle detection task, outperforming the geometric baseline approach. Moreover, it has been observed that our system achieves a similar tracking performance to the high-end HDL-64 sensor at close range. On the other hand, at long range, detection is limited to half the distance of the higher-end sensor."
]
} |
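The cross-sensor training strategy described above simulates a lower-resolution LiDAR from a higher-resolution dataset. A crude version of that idea is to keep only every k-th vertical channel of a (channels x azimuth) range image; the channel geometry of real sensor pairs (e.g. 64-channel to 16-channel) is more involved, so this is only a sketch.

```python
import numpy as np

def subsample_channels(range_image, keep_every=4):
    """Simulate a lower-resolution LiDAR by keeping every k-th vertical
    channel of a (channels, azimuth_bins) range image."""
    return range_image[::keep_every, :]

high_res = np.random.rand(64, 2048)           # e.g. a 64-channel sensor sweep
low_res = subsample_channels(high_res, keep_every=4)
print(low_res.shape)                          # (16, 2048): roughly a 16-channel sensor
```

Training on such simulated data lets a network labeled with high-resolution scans be deployed on a cheaper, lower-resolution sensor.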
1905.02973 | 2956118678 | The detection of allusive text reuse is particularly challenging due to the sparse evidence on which allusive references rely---commonly based on none or very few shared words. Arguably, lexical semantics can be resorted to since uncovering semantic relations between words has the potential to increase the support underlying the allusion and alleviate the lexical sparsity. A further obstacle is the lack of evaluation benchmark corpora, largely due to the highly interpretative character of the annotation process. In the present paper, we aim to elucidate the feasibility of automated allusion detection. We approach the matter from an Information Retrieval perspective in which referencing texts act as queries and referenced texts as relevant documents to be retrieved, and estimate the difficulty of benchmark corpus compilation by a novel inter-annotator agreement study on query segmentation. Furthermore, we investigate to what extent the integration of lexical semantic information derived from distributional models and ontologies can aid retrieving cases of allusive reuse. The results show that (i) despite low agreement scores, using manual queries considerably improves retrieval performance with respect to a windowing approach, and that (ii) retrieval performance can be moderately boosted with distributional semantics. | Previous research on text reuse detection in literary texts has extensively explored methods such as n-gram matching @cite_10 and sequence alignment algorithms @cite_22 @cite_5 . In such approaches, fuzzier forms of intertextual links are accounted for through the use of edit distance comparisons or the inclusion of abstract linguistic information such as word lemmata or part-of-speech tags, and lexical semantic relationships extracted from WordNet. More recently, researchers have started to explore techniques from the field of distributional semantics in order to capture allusive text reuse. 
, for instance, have applied latent-semantic indexing (LSI) to find semantic connections and evaluated such a method on a set of 35 allusive references to Vergil's in the first book of Lucan's . | {
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_22"
],
"mid": [
"",
"2039827492",
"2142943833"
],
"abstract": [
"",
"We describe here a method for automatically identifying word sense variation in a dated collection of historical books in a large digital library. By leveraging a small set of known translation book pairs to induce a bilingual sense inventory and labeled training data for a WSD classifier, we are able to automatically classify the Latin word senses in a 389 million word corpus and track the rise and fall of those senses over a span of two thousand years. We evaluate the performance of seven different classifiers both in a tenfold test on 83,892 words from the aligned parallel corpus and on a smaller, manually annotated sample of 525 words, measuring both the overall accuracy of each system and how well that accuracy correlates (via mean square error) to the observed historical variation.",
"We propose a computational model of text reuse tailored for ancient literary texts, available to us often only in small and noisy samples. The model takes into account source alternation patterns, so as to be able to align even sentences with low surface similarity. We demonstrate its ability to characterize text reuse in the Greek New Testament."
]
} |
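The n-gram matching approach to text reuse detection mentioned above can be sketched as a minimal retrieval loop that ranks candidate passages by word-bigram overlap (Jaccard similarity); real systems additionally use lemmatization, edit distance, and WordNet relations to catch fuzzier links.

```python
def ngrams(tokens, n=2):
    """Set of word n-grams of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(query, documents, n=2):
    """Rank documents by n-gram overlap with the query segment."""
    q = ngrams(query.lower().split(), n)
    scored = [(jaccard(q, ngrams(d.lower().split(), n)), d) for d in documents]
    return sorted(scored, reverse=True)

docs = ["arma virumque cano troiae qui primus ab oris",
        "bella per emathios plus quam civilia campos"]
print(retrieve("virumque cano troiae", docs)[0][1])
# → "arma virumque cano troiae qui primus ab oris"
```

Exact n-gram overlap is a strong precision signal but misses allusions with little lexical overlap, which motivates the semantic extensions discussed in the row above.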
1905.02973 | 2956118678 | The detection of allusive text reuse is particularly challenging due to the sparse evidence on which allusive references rely---commonly based on none or very few shared words. Arguably, lexical semantics can be resorted to since uncovering semantic relations between words has the potential to increase the support underlying the allusion and alleviate the lexical sparsity. A further obstacle is the lack of evaluation benchmark corpora, largely due to the highly interpretative character of the annotation process. In the present paper, we aim to elucidate the feasibility of automated allusion detection. We approach the matter from an Information Retrieval perspective in which referencing texts act as queries and referenced texts as relevant documents to be retrieved, and estimate the difficulty of benchmark corpus compilation by a novel inter-annotator agreement study on query segmentation. Furthermore, we investigate to what extent the integration of lexical semantic information derived from distributional models and ontologies can aid retrieving cases of allusive reuse. The results show that (i) despite low agreement scores, using manual queries considerably improves retrieval performance with respect to a windowing approach, and that (ii) retrieval performance can be moderately boosted with distributional semantics. | Previous research in the field of text reuse has also focused on the more specific problem of finding allusive references. One of the first studies @cite_4 looked at allusion detection in literary text using an IR approach exploiting textual features at a diversity of levels (including morphology and syntax) but collected only qualitative evidence on the efficiency of such approach. More ambitiously, approached the task of finding allusive references across texts in different languages using string alignment algorithms from machine translation. 
Besides the aforementioned work by , the work by is highly related to the present study, since the authors also worked on allusive reuse from the Bible in the works of Bernard. In their work, the authors focused on modeling text reuse patterns based on a set of transformation rules defined over string case, lemmata, POS tags and synset relationships: (syno- hypo- co-hypo-)nymy. More recently, conducted a quantitative comparison of such transformation rules with paraphrase detection methods on the task of predicting a paraphrase relation between text pairs, but did not evaluate the method in an IR setup. | {
"cite_N": [
"@cite_4"
],
"mid": [
"1545953061"
],
"abstract": [
"We describe here a method for discovering imitative textual allusions in a large collection of Classical Latin poetry. In translating the logic of literary allusion into computational terms, we include not only traditional IR variables such as token similarity and ngrams, but also incorporate a comparison of syntactic structure as well. This provides a more robust search method for Classical languages since it accomodates their relatively free word order and rich inflection, and has the potential to improve fuzzy string searching in other languages as well."
]
} |
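The transformation rules over string case, lemmata, and synset relationships described above can be illustrated with a tiny token matcher. The lemma and synset tables here are hypothetical miniature stand-ins for a real lemmatizer and WordNet, not the resources actually used by the cited authors.

```python
# Hypothetical toy lexical resources (stand-ins for a real lemmatizer / WordNet).
LEMMAS = {"dicit": "dico", "dixit": "dico"}
SYNSETS = {"dico": {"loquor"}, "loquor": {"dico"}}

def lemma(token):
    return LEMMAS.get(token.lower(), token.lower())

def tokens_match(a, b):
    """Match two tokens if they agree after case folding, share a lemma,
    or their lemmata are related in the (toy) synset table."""
    la, lb = lemma(a), lemma(b)
    return la == lb or lb in SYNSETS.get(la, set())

assert tokens_match("Dixit", "dicit")        # same lemma
assert tokens_match("dixit", "loquor")       # synonym relation
assert not tokens_match("dixit", "campos")   # unrelated tokens
```

Cascading such rules from cheap (case folding) to expensive (synset lookup) is what lets these systems trade precision against recall on allusive reuse.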
1905.03028 | 2952551358 | The emergence of real-time auction in online advertising has drawn huge attention of modeling the market competition, i.e., bid landscape forecasting. The problem is formulated as to forecast the probability distribution of market price for each ad auction. With the consideration of the censorship issue which is caused by the second-price auction mechanism, many researchers have devoted their efforts on bid landscape forecasting by incorporating survival analysis from medical research field. However, most existing solutions mainly focus on either counting-based statistics of the segmented sample clusters, or learning a parameterized model based on some heuristic assumptions of distribution forms. Moreover, they neither consider the sequential patterns of the feature over the price space. In order to capture more sophisticated yet flexible patterns at fine-grained level of the data, we propose a Deep Landscape Forecasting (DLF) model which combines deep learning for probability distribution forecasting and survival analysis for censorship handling. Specifically, we utilize a recurrent neural network to flexibly model the conditional winning probability w.r.t. each bid price. Then we conduct the bid landscape forecasting through probability chain rule with strict mathematical derivations. And, in an end-to-end manner, we optimize the model by minimizing two negative likelihood losses with comprehensive motivations. Without any specific assumption for the distribution form of bid landscape, our model shows great advantages over previous works on fitting various sophisticated market price distributions. In the experiments over two large-scale real-world datasets, our model significantly outperforms the state-of-the-art solutions under various metrics. 
Bid Landscape Forecasting As is discussed in the above section, bid landscape forecasting has become an important component in RTB advertising and has drawn much attention in recent works @cite_1 @cite_32 @cite_14 . | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_32"
],
"mid": [
"2532780566",
"2534189265",
"2770888277"
],
"abstract": [
"We address the bidding strategy design problem faced by a Demand-Side Platform (DSP) in Real-Time Bidding (RTB) advertising. A RTB campaign consists of various parameters and usually a predefined budget. Under the budget constraint of a campaign, designing an optimal strategy for bidding on each impression to acquire as many clicks as possible is a main job of a DSP. State-of-the-art bidding algorithms rely on a single predictor, namely the clickthrough rate (CTR) predictor, to calculate the bidding value for each impression. This provides reasonable performance if the predictor has appropriate accuracy in predicting the probability of user clicking. However when the predictor gives only moderate accuracy, classical algorithms fail to capture optimal results. We improve the situation by accomplishing an additional winning price predictor in the bidding process. In this paper, a method combining powers of two prediction models is proposed, and experiments with real world RTB datasets from benchmarking the new algorithm with a classic CTR-only method are presented. The proposed algorithm performs better with regard to both number of clicks achieved and effective cost per click in many different settings of budget constraints.",
"Learning and predicting user responses, such as clicks and conversions, are crucial for many Internet-based businesses including web search, e-commerce, and online advertising. Typically, a user response model is established by optimizing the prediction accuracy, e.g., minimizing the error between the prediction and the ground truth user response. However, in many practical cases, predicting user responses is only part of a rather larger predictive or optimization task, where on one hand, the accuracy of a user response prediction determines the final (expected) utility to be optimized, but on the other hand, its learning may also be influenced from the follow-up stochastic process. It is, thus, of great interest to optimize the entire process as a whole rather than treat them independently or sequentially. In this paper, we take real-time display advertising as an example, where the predicted user's ad click-through rate (CTR) is employed to calculate a bid for an ad impression in the second price auction. We reformulate a common logistic regression CTR model by putting it back into its subsequent bidding context: rather than minimizing the prediction error, the model parameters are learned directly by optimizing campaign profit. The gradient update resulted from our formulations naturally fine-tunes the cases where the market competition is high, leading to a more cost-effective bidding. Our experiments demonstrate that, while maintaining comparable CTR prediction accuracy, our proposed user response learning leads to campaign profit gains as much as 78.2 for offline test and 25.5 for online A B test over strong baselines.",
"Real-time bidding (RTB) based display advertising has become one of the key technological advances in computational advertising. RTB enables advertisers to buy individual ad impressions via an auction in real-time and facilitates the evaluation and the bidding of individual impressions across multiple advertisers. In RTB, the advertisers face three main challenges when optimizing their bidding strategies, namely (i) estimating the utility (e.g., conversions, clicks) of the ad impression, (ii) forecasting the market value (thus the cost) of the given ad impression, and (iii) deciding the optimal bid for the given auction based on the first two. Previous solutions assume the first two are solved before addressing the bid optimization problem. However, these challenges are strongly correlated and dealing with any individual problem independently may not be globally optimal. In this paper, we propose Bidding Machine , a comprehensive learning to bid framework, which consists of three optimizers dealing with each challenge above, and as a whole, jointly optimizes these three parts. We show that such a joint optimization would largely increase the campaign effectiveness and the profit. From the learning perspective, we show that the bidding machine can be updated smoothly with both offline periodical batch or online sequential training schemes. Our extensive offline empirical study and online A B testing verify the high effectiveness of the proposed bidding machine."
]
} |
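The probability chain rule that the DLF abstract above describes — a conditional winning (hazard) probability at each discrete bid price, combined into a full market-price distribution — can be sketched with plain arrays. In the actual model the hazards come from an RNN conditioned on auction features; here they are just given numbers.

```python
import numpy as np

def landscape_from_hazards(h):
    """Given discrete hazards h[i] = P(market price = i | price >= i),
    return the market-price pmf and the winning probability at each price."""
    h = np.asarray(h, dtype=float)
    survival = np.cumprod(1.0 - h)                    # S(i) = P(price > i)
    prev_survival = np.concatenate(([1.0], survival[:-1]))
    pmf = prev_survival * h                           # p(i) = S(i-1) * h(i), the chain rule
    win_prob = 1.0 - survival                         # P(win when bidding just above i)
    return pmf, win_prob

pmf, win = landscape_from_hazards([0.1, 0.2, 0.3, 0.5, 1.0])
print(pmf)  # → [0.1, 0.18, 0.216, 0.252, 0.252]
```

If the final hazard is 1 (the price is certain to fall within the modeled range), the pmf sums to exactly 1, and no parametric form is imposed on the distribution — the flexibility the DLF paper argues for.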
1905.03028 | 2952551358 | The emergence of real-time auction in online advertising has drawn huge attention of modeling the market competition, i.e., bid landscape forecasting. The problem is formulated as to forecast the probability distribution of market price for each ad auction. With the consideration of the censorship issue which is caused by the second-price auction mechanism, many researchers have devoted their efforts on bid landscape forecasting by incorporating survival analysis from medical research field. However, most existing solutions mainly focus on either counting-based statistics of the segmented sample clusters, or learning a parameterized model based on some heuristic assumptions of distribution forms. Moreover, they neither consider the sequential patterns of the feature over the price space. In order to capture more sophisticated yet flexible patterns at fine-grained level of the data, we propose a Deep Landscape Forecasting (DLF) model which combines deep learning for probability distribution forecasting and survival analysis for censorship handling. Specifically, we utilize a recurrent neural network to flexibly model the conditional winning probability w.r.t. each bid price. Then we conduct the bid landscape forecasting through probability chain rule with strict mathematical derivations. And, in an end-to-end manner, we optimize the model by minimizing two negative likelihood losses with comprehensive motivations. Without any specific assumption for the distribution form of bid landscape, our model shows great advantages over previous works on fitting various sophisticated market price distributions. In the experiments over two large-scale real-world datasets, our model significantly outperforms the state-of-the-art solutions under various metrics. | In the view of distribution modeling methods, there are two phases. In the early phase, researchers proposed several heuristic forms of functions to model the market price distribution. 
In @cite_7 @cite_32 @cite_1 , the authors provided analytic forms of the winning probability w.r.t. the bid price at the campaign level, based on observations of the winning logs. Later, in more recent research, some well-studied distributions were applied to market price modeling. presented a log-normal distribution to model the market price ground truth. proposed a regression model based on a Gaussian distribution to fit the market price. Recently, the Gamma distribution has also been studied for market price modeling in @cite_18 . The main drawback of these distributional methods is that their restrictive empirical assumptions limit their effectiveness on diverse, dynamic data, and they ignore the sophisticated divergence of real data as we show in Figure . | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_32",
"@cite_7"
],
"mid": [
"2783989358",
"2534189265",
"2770888277",
"2149822245"
],
"abstract": [
"In Real-Time Bidding (RTB) advertising, estimating the winning price is an important task in evaluating the bid cost of bid requests in Demand-Side Platforms (DSPs). The prior works utilize censored linear regression for winning price estimation by considering both winning and losing bid records. In the traditional regression models, the winning price of each bid request is based on Gaussian distribution. However, the property of Gaussian distribution is not suitable for the winning price of each bid request, and it is hard to link the physical meaning of Gaussian distribution and the winning price. Therefore, in this paper, based on our observation and analysis, the winning price of each bid request is modeled by a unique gamma distribution with respect to its features. Then we propose a gamma-based censored linear regression with regularization for winning price estimation. To derive the parameters of our proposed complicated model based on bid records, our approach is to divide this hard problem into two sub-problems, which are easier to solve. In practice, we also provide four heuristic initial parameter settings that are able to greatly reduce the computation cost when deriving the parameters. The experimental results demonstrate that our approach is highly effective for estimating the winning price compared with the state-of-the-art approaches in three real datasets.",
"Learning and predicting user responses, such as clicks and conversions, are crucial for many Internet-based businesses including web search, e-commerce, and online advertising. Typically, a user response model is established by optimizing the prediction accuracy, e.g., minimizing the error between the prediction and the ground truth user response. However, in many practical cases, predicting user responses is only part of a rather larger predictive or optimization task, where on one hand, the accuracy of a user response prediction determines the final (expected) utility to be optimized, but on the other hand, its learning may also be influenced from the follow-up stochastic process. It is, thus, of great interest to optimize the entire process as a whole rather than treat them independently or sequentially. In this paper, we take real-time display advertising as an example, where the predicted user's ad click-through rate (CTR) is employed to calculate a bid for an ad impression in the second price auction. We reformulate a common logistic regression CTR model by putting it back into its subsequent bidding context: rather than minimizing the prediction error, the model parameters are learned directly by optimizing campaign profit. The gradient update resulted from our formulations naturally fine-tunes the cases where the market competition is high, leading to a more cost-effective bidding. Our experiments demonstrate that, while maintaining comparable CTR prediction accuracy, our proposed user response learning leads to campaign profit gains as much as 78.2 for offline test and 25.5 for online A B test over strong baselines.",
"Real-time bidding (RTB) based display advertising has become one of the key technological advances in computational advertising. RTB enables advertisers to buy individual ad impressions via an auction in real-time and facilitates the evaluation and the bidding of individual impressions across multiple advertisers. In RTB, the advertisers face three main challenges when optimizing their bidding strategies, namely (i) estimating the utility (e.g., conversions, clicks) of the ad impression, (ii) forecasting the market value (thus the cost) of the given ad impression, and (iii) deciding the optimal bid for the given auction based on the first two. Previous solutions assume the first two are solved before addressing the bid optimization problem. However, these challenges are strongly correlated and dealing with any individual problem independently may not be globally optimal. In this paper, we propose Bidding Machine , a comprehensive learning to bid framework, which consists of three optimizers dealing with each challenge above, and as a whole, jointly optimizes these three parts. We show that such a joint optimization would largely increase the campaign effectiveness and the profit. From the learning perspective, we show that the bidding machine can be updated smoothly with both offline periodical batch or online sequential training schemes. Our extensive offline empirical study and online A B testing verify the high effectiveness of the proposed bidding machine.",
"In this paper we study bid optimisation for real-time bidding (RTB) based display advertising. RTB allows advertisers to bid on a display ad impression in real time when it is being generated. It goes beyond contextual advertising by motivating the bidding focused on user data and it is different from the sponsored search auction where the bid price is associated with keywords. For the demand side, a fundamental technical challenge is to automate the bidding process based on the budget, the campaign objective and various information gathered in runtime and in history. In this paper, the programmatic bidding is cast as a functional optimisation problem. Under certain dependency assumptions, we derive simple bidding functions that can be calculated in real time; our finding shows that the optimal bid has a non-linear relationship with the impression level evaluation such as the click-through rate and the conversion rate, which are estimated in real time from the impression level features. This is different from previous work that is mainly focused on a linear bidding function. Our mathematical derivation suggests that optimal bidding strategies should try to bid more impressions rather than focus on a small set of high valued impressions because according to the current RTB market data, compared to the higher evaluated impressions, the lower evaluated ones are more cost effective and the chances of winning them are relatively higher. Aside from the theoretical insights, offline experiments on a real dataset and online experiments on a production RTB system verify the effectiveness of our proposed optimal bidding strategies and the functional optimisation framework."
]
} |
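The abstracts in the row above contrast linear bidding with a concave, non-linear bid. A toy sketch of that contrast: the square-root form echoes the non-linear bids derived in the ORTB line of work, but all constants here (`base_bid`, `c`, `lam`) are invented for illustration, not taken from any of the cited papers.

```python
import math

# Toy contrast between a linear bid (proportional to predicted CTR) and an
# ORTB-style non-linear, concave bid. Constants are made up for illustration;
# real values come from campaign tuning.
def linear_bid(ctr, base_bid=5000.0):
    return base_bid * ctr

def nonlinear_bid(ctr, c=20.0, lam=1e-5):
    """Concave in CTR: rises fast for cheap, low-valued impressions and
    flattens for expensive ones, so the budget buys more impressions."""
    return math.sqrt(c / lam * ctr + c * c) - c

low, high = nonlinear_bid(0.001), nonlinear_bid(0.002)
# Doubling the CTR less than doubles the non-linear bid, unlike the linear one.
```

The concavity is the point the ORTB abstract makes: lower-valued impressions are more cost-effective, so the optimal bid grows sublinearly in the impression's estimated value.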
1905.03028 | 2952551358 | The emergence of real-time auctions in online advertising has drawn huge attention to modeling the market competition, i.e., bid landscape forecasting. The problem is formulated as forecasting the probability distribution of the market price for each ad auction. Considering the censorship issue caused by the second-price auction mechanism, many researchers have devoted their efforts to bid landscape forecasting by incorporating survival analysis from the medical research field. However, most existing solutions mainly focus on either counting-based statistics of the segmented sample clusters, or learning a parameterized model based on some heuristic assumptions of distribution forms. Moreover, they do not consider the sequential patterns of the features over the price space. In order to capture more sophisticated yet flexible patterns at a fine-grained level of the data, we propose a Deep Landscape Forecasting (DLF) model which combines deep learning for probability distribution forecasting and survival analysis for censorship handling. Specifically, we utilize a recurrent neural network to flexibly model the conditional winning probability w.r.t. each bid price. Then we conduct the bid landscape forecasting through the probability chain rule with strict mathematical derivations. Finally, in an end-to-end manner, we optimize the model by minimizing two negative likelihood losses with comprehensive motivations. Without any specific assumption on the distribution form of the bid landscape, our model shows great advantages over previous works on fitting various sophisticated market price distributions. In experiments over two large-scale real-world datasets, our model significantly outperforms the state-of-the-art solutions under various metrics. | Learning over Censored Data. Data censorship is another challenge for bid landscape forecasting.
In the online advertising field, many models based on survival analysis have been studied so far. proposed a censored regression model using the lost auction data to alleviate the data bias problem. Nevertheless, the Gaussian or other distributional assumptions @cite_18 turn out to be too restrictive, lacking the flexibility to model sophisticated yet practical distributions. Another problem is that these regression models @cite_34 @cite_15 @cite_18 can only provide a point estimate, i.e., the expectation of the market price without a standard deviation, which fails to provide a winning probability estimate for an arbitrary bid price to support the subsequent bidding decision @cite_32 . implemented the Kaplan-Meier estimator @cite_26 for handling the data censorship in sponsored search. The Kaplan-Meier estimator is a classic method in survival analysis that deals with right-censored data in medical research @cite_35 @cite_23 . The authors of @cite_22 @cite_0 also utilized this non-parametric estimator to predict the winning probability. However, the Kaplan-Meier estimator merely computes statistics over the segmented data samples, and thus fails to provide a fine-grained estimate, i.e., a prediction at the level of a single ad auction. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_32",
"@cite_0",
"@cite_23",
"@cite_15",
"@cite_34"
],
"mid": [
"",
"2783989358",
"1979300931",
"2515050826",
"2770888277",
"2513944453",
"2496251701",
"2808766921",
"2073685064"
],
"abstract": [
"",
"In Real-Time Bidding (RTB) advertising, estimating the winning price is an important task in evaluating the bid cost of bid requests in Demand-Side Platforms (DSPs). The prior works utilize censored linear regression for winning price estimation by considering both winning and losing bid records. In the traditional regression models, the winning price of each bid request is based on Gaussian distribution. However, the property of Gaussian distribution is not suitable for the winning price of each bid request, and it is hard to link the physical meaning of Gaussian distribution and the winning price. Therefore, in this paper, based on our observation and analysis, the winning price of each bid request is modeled by a unique gamma distribution with respect to its features. Then we propose a gamma-based censored linear regression with regularization for winning price estimation. To derive the parameters of our proposed complicated model based on bid records, our approach is to divide this hard problem into two sub-problems, which are easier to solve. In practice, we also provide four heuristic initial parameter settings that are able to greatly reduce the computation cost when deriving the parameters. The experimental results demonstrate that our approach is highly effective for estimating the winning price compared with the state-of-the-art approaches in three real datasets.",
"Abstract In lifetesting, medical follow-up, and other fields the observation of the time of occurrence of the event of interest (called a death) may be prevented for some of the items of the sample by the previous occurrence of some other event (called a loss). Losses may be either accidental or controlled, the latter resulting from a decision to terminate certain observations. In either case it is usually assumed in this paper that the lifetime (age at death) is independent of the potential loss time; in practice this assumption deserves careful scrutiny. Despite the resulting incompleteness of the data, it is desired to estimate the proportion P(t) of items in the population whose lifetimes would exceed t (in the absence of such losses), without making any assumption about the form of the function P(t). The observation for each item of a suitable initial event, marking the beginning of its lifetime, is presupposed. For random samples of size N the product-limit (PL) estimate can be defined as follows: L...",
"Real-time auction has become an important online advertising trading mechanism. A crucial issue for advertisers is to model the market competition, i.e., bid landscape forecasting. It is formulated as predicting the market price distribution for each ad auction provided by its side information. Existing solutions mainly focus on parameterized heuristic forms of the market price distribution and learn the parameters to fit the data. In this paper, we present a functional bid landscape forecasting method to automatically learn the function mapping from each ad auction features to the market price distribution without any assumption about the functional form. Specifically, to deal with the categorical feature input, we propose a novel decision tree model with a node splitting scheme by attribute value clustering. Furthermore, to deal with the problem of right-censored market price observations, we propose to incorporate a survival model into tree learning and prediction, which largely reduces the model bias. The experiments on real-world data demonstrate that our models achieve substantial performance gains over previous work in various metrics. The software related to this paper is available at https://github.com/zeromike/bid-lands.",
"Real-time bidding (RTB) based display advertising has become one of the key technological advances in computational advertising. RTB enables advertisers to buy individual ad impressions via an auction in real-time and facilitates the evaluation and the bidding of individual impressions across multiple advertisers. In RTB, the advertisers face three main challenges when optimizing their bidding strategies, namely (i) estimating the utility (e.g., conversions, clicks) of the ad impression, (ii) forecasting the market value (thus the cost) of the given ad impression, and (iii) deciding the optimal bid for the given auction based on the first two. Previous solutions assume the first two are solved before addressing the bid optimization problem. However, these challenges are strongly correlated and dealing with any individual problem independently may not be globally optimal. In this paper, we propose Bidding Machine , a comprehensive learning to bid framework, which consists of three optimizers dealing with each challenge above, and as a whole, jointly optimizes these three parts. We show that such a joint optimization would largely increase the campaign effectiveness and the profit. From the learning perspective, we show that the bidding machine can be updated smoothly with both offline periodical batch or online sequential training schemes. Our extensive offline empirical study and online A/B testing verify the high effectiveness of the proposed bidding machine.",
"In real-time display advertising, ad slots are sold per impression via an auction mechanism. For an advertiser, the campaign information is incomplete --- the user responses (e.g., clicks or conversions) and the market price of each ad impression are observed only if the advertiser's bid had won the corresponding ad auction. The predictions, such as bid landscape forecasting, click-through rate (CTR) estimation, and bid optimisation, are all operated in the pre-bid stage with full-volume bid request data. However, the training data is gathered in the post-bid stage with a strong bias towards the winning impressions. A common solution for learning over such censored data is to reweight data instances to correct the discrepancy between training and prediction. However, little study has been done on how to obtain the weights independent of previous bidding strategies and consequently integrate them into the final CTR prediction and bid generation steps. In this paper, we formulate CTR estimation and bid optimisation under such censored auction data. Derived from a survival model, we show that historic bid information is naturally incorporated to produce Bid-aware Gradient Descents (BGD) which controls both the importance and the direction of the gradient to achieve unbiased learning. The empirical study based on two large-scale real-world datasets demonstrates remarkable performance gains from our solution. The learning framework has been deployed on Yahoo!'s real-time bidding platform and provided 2.97% AUC lift for CTR estimation and 9.30% eCPC drop for bid optimisation in an online A/B test.",
"The electronic health record (EHR) provides an unprecedented opportunity to build actionable tools to support physicians at the point of care. In this paper, we investigate survival analysis in the context of EHR data. We introduce deep survival analysis, a hierarchical generative approach to survival analysis. It departs from previous approaches in two primary ways: (1) all observations, including covariates, are modeled jointly conditioned on a rich latent structure; and (2) the observations are aligned by their failure time, rather than by an arbitrary time zero as in traditional survival analysis. Further, it (3) scalably handles heterogeneous (continuous and discrete) data types that occur in the EHR. We validate deep survival analysis model by stratifying patients according to risk of developing coronary heart disease (CHD). Specifically, we study a dataset of 313,000 patients corresponding to 5.5 million months of observations. When compared to the clinically validated Framingham CHD risk score, deep survival analysis is significantly superior in stratifying patients according to their risk.",
"We generalize the winning price model to incorporate the deep learning models with different distributions and propose an algorithm to learn from the historical bidding information, where the winning price are either observed or partially observed. We study if the successful deep learning models of the click-through rate can enhance the prediction of the winning price or not. We also study how different distributions of winning price can affect the learning results. Experiment results show that the deep learning models indeed boost the prediction quality when they are learned on the historical observed data. In addition, the deep learning models on the unobserved data are improved after learning from the censored data. The main advantage of the proposed generalized deep learning model is to provide more flexibility to model the winning price and improve the performance in consideration of the possibly various winning price distributions and various model structures in practice.",
"In the aspect of a Demand-Side Platform (DSP), which is the agent of advertisers, we study how to predict the winning price such that the DSP can win the bid by placing a proper bidding value in the real-time bidding (RTB) auction. We propose to leverage the machine learning and statistical methods to train the winning price model from the bidding history. A major challenge is that a DSP usually suffers from the censoring of the winning price, especially for those lost bids in the past. To solve it, we utilize the censored regression model, which is widely used in the survival analysis and econometrics, to fit the censored bidding data. Note, however, the assumption of censored regression does not hold on the real RTB data. As a result, we further propose a mixture model, which combines linear regression on bids with observable winning prices and censored regression on bids with the censored winning prices, weighted by the winning rate of the DSP. Experiment results show that the proposed mixture model in general prominently outperforms linear regression in terms of the prediction accuracy."
]
} |
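The Kaplan-Meier product-limit estimator that the row above criticizes as coarse-grained can be sketched in a few lines. The bid log below is hypothetical: for a lost auction the market price is right-censored, i.e., only known to exceed our losing bid.

```python
from collections import Counter

# Sketch of the Kaplan-Meier product-limit estimator for censored bid logs.
# Each observation is (price, won); won=False rows are right-censored.
def kaplan_meier(observations):
    """Estimate the survival function S(t) = P(market price > t)."""
    events = Counter(price for price, won in observations if won)
    prices = sorted(price for price, _ in observations)
    survival, s = {}, 1.0
    for t in sorted(events):
        at_risk = sum(1 for p in prices if p >= t)   # neither failed nor censored before t
        s *= 1.0 - events[t] / at_risk               # product-limit step
        survival[t] = s
    return survival

# Hypothetical bid log: three won auctions (exact market price observed)
# and two lost ones (censored at our losing bid).
obs = [(10, True), (12, True), (12, False), (15, True), (20, False)]
S = kaplan_meier(obs)   # S[10] ≈ 0.8, S[12] ≈ 0.6, S[15] ≈ 0.3
```

Note how the estimate only changes at observed winning prices and is shared by every auction in a segment — exactly the coarse, non-feature-conditioned behavior the DLF abstract argues against.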
1905.03028 | 2952551358 | The emergence of real-time auctions in online advertising has drawn huge attention to modeling the market competition, i.e., bid landscape forecasting. The problem is formulated as forecasting the probability distribution of the market price for each ad auction. Considering the censorship issue caused by the second-price auction mechanism, many researchers have devoted their efforts to bid landscape forecasting by incorporating survival analysis from the medical research field. However, most existing solutions mainly focus on either counting-based statistics of the segmented sample clusters, or learning a parameterized model based on some heuristic assumptions of distribution forms. Moreover, they do not consider the sequential patterns of the features over the price space. In order to capture more sophisticated yet flexible patterns at a fine-grained level of the data, we propose a Deep Landscape Forecasting (DLF) model which combines deep learning for probability distribution forecasting and survival analysis for censorship handling. Specifically, we utilize a recurrent neural network to flexibly model the conditional winning probability w.r.t. each bid price. Then we conduct the bid landscape forecasting through the probability chain rule with strict mathematical derivations. Finally, in an end-to-end manner, we optimize the model by minimizing two negative likelihood losses with comprehensive motivations. Without any specific assumption on the distribution form of the bid landscape, our model shows great advantages over previous works on fitting various sophisticated market price distributions. In experiments over two large-scale real-world datasets, our model significantly outperforms the state-of-the-art solutions under various metrics. | Another school of survival analysis methods is the Cox proportional hazards model @cite_4.
This method assumes that the instantaneous hazard rate of event occurrence (i.e., auction winning in our case) is given by a base distribution multiplied by an exponential tuning factor. Recent works including @cite_2 @cite_42 @cite_20 all used the Cox model with a predefined base function, such as the Weibull, log-normal, or log-logistic distribution @cite_13 , to model the hazard rate of each sample. However, such a strong assumption on the data distribution may result in poor generalization on real-world data. | {
"cite_N": [
"@cite_4",
"@cite_42",
"@cite_2",
"@cite_13",
"@cite_20"
],
"mid": [
"1580788756",
"2618421739",
"",
"2171515720",
"2101095383"
],
"abstract": [
"The analysis of censored failure times is considered. It is assumed that on each individual are available values of one or more explanatory variables. The hazard function (age-specific failure rate) is taken to be a function of the explanatory variables and unknown regression coefficients multiplied by an arbitrary and unknown function of time. A conditional likelihood is obtained, leading to inferences about the unknown regression coefficients. Some generalizations are outlined.",
"An accurate model of patient-specific kidney graft survival distributions can help to improve shared-decision making in the treatment and care of patients. In this paper, we propose a deep learning method that directly models the survival function instead of estimating the hazard function to predict survival times for graft patients based on the principle of multi-task learning. By learning to jointly predict the time of the event, and its rank in the Cox partial log-likelihood framework, our deep learning approach outperforms, in terms of survival time prediction quality and concordance index, other common methods for survival analysis, including the Cox Proportional Hazards model and a network trained on the Cox partial log-likelihood.",
"",
"Praise for the Third Edition: '...an easy-to-read introduction to survival analysis which covers the major concepts and techniques of the subject.' (Statistics in Medical Research) Updated and expanded to reflect the latest developments, Statistical Methods for Survival Data Analysis, Fourth Edition continues to deliver a comprehensive introduction to the most commonly-used methods for analyzing survival data. Authored by a uniquely well-qualified author team, the Fourth Edition is a critically acclaimed guide to statistical methods with applications in clinical trials, epidemiology, areas of business, and the social sciences. The book features many real-world examples to illustrate applications within these various fields, although special consideration is given to the study of survival data in biomedical sciences. Emphasizing the latest research and providing the most up-to-date information regarding software applications in the field, Statistical Methods for Survival Data Analysis, Fourth Edition also includes: marginal and random effect models for analyzing correlated censored or uncensored data; multiple types of two-sample and K-sample comparison analysis; updated treatment of parametric methods for regression model fitting with a new focus on accelerated failure time models; expanded coverage of the Cox proportional hazards model; and exercises at the end of each chapter to deepen knowledge of the presented material. Statistical Methods for Survival Data Analysis is an ideal text for upper-undergraduate and graduate-level courses on survival data analysis. The book is also an excellent resource for biomedical investigators, statisticians, and epidemiologists, as well as researchers in every field in which the analysis of survival data plays a role.",
"Summary. We introduce a path following algorithm for L1-regularized generalized linear models. The L1-regularization procedure is useful especially because it, in effect, selects variables according to the amount of penalization on the L1-norm of the coefficients, in a manner that is less greedy than forward selection–backward deletion. The generalized linear model path algorithm efficiently computes solutions along the entire regularization path by using the predictor–corrector method of convex optimization. Selecting the step length of the regularization parameter is critical in controlling the overall accuracy of the paths; we suggest intuitive and flexible strategies for choosing appropriate values. We demonstrate the implementation with several simulated and real data sets."
]
} |
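The Cox proportional hazards form discussed in the row above — a predefined baseline hazard scaled per sample by an exponential factor of the features — can be sketched directly. The Weibull parameters and feature weights below are illustrative, not fitted to any data.

```python
import math

# Sketch of the Cox proportional hazards form: a predefined baseline hazard
# h0(t), here Weibull, scaled per sample by exp(w . x). Parameters k, lam
# and the weights w are illustrative, not fitted.
def weibull_hazard(t, k=1.5, lam=2.0):
    """Weibull baseline hazard h0(t) = (k/lam) * (t/lam)**(k-1)."""
    return (k / lam) * (t / lam) ** (k - 1)

def cox_hazard(t, x, w, base_hazard=weibull_hazard):
    """h(t | x) = h0(t) * exp(w . x): the shape over t is fixed by h0;
    the feature vector x only scales the whole curve up or down."""
    return base_hazard(t) * math.exp(sum(wi * xi for wi, xi in zip(w, x)))

h = cox_hazard(1.0, x=[0.3, -0.2], w=[1.0, 0.5])
```

Because the hazard ratio between any two time points is independent of `x`, every sample inherits the same curve shape from the base distribution — the restrictive assumption the passage above criticizes.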
1905.03008 | 2966519690 | We show that the 2-dimensional Weisfeiler-Leman algorithm stabilizes n-vertex graphs after at most @math iterations. This implies that if such graphs are distinguishable in 3-variable first order logic with counting, then they can also be distinguished in this logic by a formula of quantifier depth at most @math . For this we exploit a new refinement based on counting walks and argue that its iteration number differs from the classic Weisfeiler-Leman refinement by at most a logarithmic factor. We then prove matching linear upper and lower bounds on the number of iterations of the walk refinement. This is achieved with an algebraic approach by exploiting properties of semisimple matrix algebras. We also define a walk logic and a bijective walk pebble game that precisely correspond to the new walk refinement. | Babai employs the @math -dimensional WL algorithm, with @math logarithmic in the input, as a subroutine in his quasi-polynomial time algorithm for graph isomorphism testing @cite_5 . | {
"cite_N": [
"@cite_5"
],
"mid": [
"2409645877"
],
"abstract": [
"We show that the Graph Isomorphism (GI) problem and the more general problems of String Isomorphism (SI) and Coset Intersection (CI) can be solved in quasipolynomial (exp((log n)^{O(1)})) time. The best previous bound for GI was exp(O(√(n log n))), where n is the number of vertices (Luks, 1983); for the other two problems, the bound was similar, exp(Õ(√n)), where n is the size of the permutation domain (Babai, 1983). Following the approach of Luks’s seminal 1980/82 paper, the problem we actually address is SI. This problem takes two strings of length n and a permutation group G of degree n (the “ambient group”) as input (G is given by a list of generators) and asks whether or not one of the strings can be transformed into the other by some element of G. Luks’s divide-and-conquer algorithm for SI proceeds by recursion on the ambient group. We build on Luks’s framework and attack the obstructions to efficient Luks recurrence via an interplay between local and global symmetry. We construct group theoretic “local certificates” to certify the presence or absence of local symmetry, aggregate the negative certificates to canonical k-ary relations where k = O(log n), and employ combinatorial canonical partitioning techniques to split the k-ary relational structure for efficient divide-and-conquer. We show that in a well-defined sense, Johnson graphs are the only obstructions to effective canonical partitioning. The central element of the algorithm is the “local certificates” routine which is based on a new group theoretic result, the “Unaffected stabilizers lemma,” that allows us to construct global automorphisms out of local information."
]
} |
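As a concrete reference point for the WL family discussed in this row, here is the simplest member, 1-dimensional color refinement, with an explicit iteration count. The k-dimensional variant Babai uses refines colorings of k-tuples of vertices instead, but stabilization is detected the same way.

```python
# 1-dimensional Weisfeiler-Leman (color refinement): repeatedly split vertex
# classes by the multiset of neighbor colors until the partition is stable,
# and report how many refinement iterations that took.
def color_refinement(adj):
    """adj: {vertex: list of neighbors}. Returns (stable coloring, iterations)."""
    colors = {v: 0 for v in adj}
    for iteration in range(len(adj)):
        # Signature = own color plus sorted multiset of neighbor colors.
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        refined = {v: palette[sigs[v]] for v in adj}
        if refined == colors:          # partition stabilized
            return colors, iteration
        colors = refined
    return colors, len(adj)

# Path on 4 vertices: endpoints end up in one class, inner vertices in another.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
stable, iters = color_refinement(path)
```

The iteration counter returned here is the quantity whose upper and lower bounds (for the 2-dimensional analogue) are the subject of the abstract above.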
1905.03008 | 2966519690 | We show that the 2-dimensional Weisfeiler-Leman algorithm stabilizes n-vertex graphs after at most @math iterations. This implies that if such graphs are distinguishable in 3-variable first order logic with counting, then they can also be distinguished in this logic by a formula of quantifier depth at most @math . For this we exploit a new refinement based on counting walks and argue that its iteration number differs from the classic Weisfeiler-Leman refinement by at most a logarithmic factor. We then prove matching linear upper and lower bounds on the number of iterations of the walk refinement. This is achieved with an algebraic approach by exploiting properties of semisimple matrix algebras. We also define a walk logic and a bijective walk pebble game that precisely correspond to the new walk refinement. | Regarding bounds, Berkholz and Nordström @cite_10 proved a lower bound on the number of iterations of the @math -dimensional WL algorithm for finite structures. Specifically, they show for sufficiently large @math the existence of @math -element relational structures distinguished by the @math -dimensional WL algorithm but for which @math iterations do not suffice. For a different logic, namely the @math -variable existential negation-free fragment of first-order logic, Berkholz also developed techniques to prove tight bounds @cite_14 . In contrast to these bounds, Fürer's lower bound @cite_4 of @math mentioned above is applicable to graphs and in fact also applies to all fixed dimensions @math . | {
"cite_N": [
"@cite_14",
"@cite_10",
"@cite_4"
],
"mid": [
"",
"2963494352",
"2152035036"
],
"abstract": [
"",
"We prove near-optimal trade-offs for quantifier depth versus number of variables in first-order logic by exhibiting pairs of n-element structures that can be distinguished by a k-variable first-order sentence but where every such sentence requires quantifier depth at least n^{Ω(k/log k)}. Our trade-offs also apply to first-order counting logic, and by the known connection to the k-dimensional Weisfeiler–Leman algorithm imply near-optimal lower bounds on the number of refinement iterations. A key component in our proof is the hardness condensation technique recently introduced by [Razborov ’16] in the context of proof complexity. We apply this method to reduce the domain size of relational structures while maintaining the quantifier depth required to distinguish them.",
"We consider the problem of finding a characterization for polynomial time computable queries on finite structures in terms of logical definability. It is well known that fixpoint logic provides such a characterization in the presence of a built-in linear order, but without linear order even very simple polynomial time queries involving counting are not expressible in fixpoint logic. Our approach to the problem is based on generalized quantifiers. A generalized quantifier is n-ary if it binds any number of formulas, but at most n variables in each formula. We prove that, for each natural number n, there is a query on finite structures which is expressible in fixpoint logic, but not in the extension of first-order logic by any set of n-ary quantifiers. It follows that the expressive power of fixpoint logic cannot be captured by adding finitely many quantifiers to first-order logic. Furthermore, we prove that, for each natural number n, there is a polynomial time computable query which is not definable in any extension of fixpoint logic by n-ary quantifiers. In particular, this rules out the possibility of characterizing PTIME in terms of definability in fixpoint logic extended by a finite set of generalized quantifiers."
]
} |
1905.02878 | 2952315335 | Syntax has been demonstrated highly effective in neural machine translation (NMT). Previous NMT models integrate syntax by representing 1-best tree outputs from a well-trained parsing system, e.g., the representative Tree-RNN and Tree-Linearization methods, which may suffer from error propagation. In this work, we propose a novel method to integrate source-side syntax implicitly for NMT. The basic idea is to use the intermediate hidden representations of a well-trained end-to-end dependency parser, which are referred to as syntax-aware word representations (SAWRs). Then, we simply concatenate such SAWRs with ordinary word embeddings to enhance basic NMT models. The method can be straightforwardly integrated into the widely-used sequence-to-sequence (Seq2Seq) NMT models. We start with a representative RNN-based Seq2Seq baseline system, and test the effectiveness of our proposed method on two benchmark datasets of the Chinese-English and English-Vietnamese translation tasks, respectively. Experimental results show that the proposed approach is able to bring significant BLEU score improvements on the two datasets compared with the baseline, 1.74 points for Chinese-English translation and 0.80 point for English-Vietnamese translation, respectively. In addition, the approach also outperforms the explicit Tree-RNN and Tree-Linearization methods. | By explicitly expressing the structural connections between words and phrases, syntax trees have been demonstrated helpful in SMT @cite_42 @cite_19 @cite_11 @cite_37 @cite_38 @cite_18 . Although the representative Seq2Seq NMT models are able to capture latent long-distance relations by using neural network structures such as GRU and LSTM @cite_16 @cite_0 , recent studies show that explicitly integrating syntax trees into NMT models can bring further gains @cite_35 @cite_3 @cite_20 @cite_23 @cite_1 .
Under the NMT setting, the exploration of syntax trees could be more flexible, thanks to the strong capability of neural networks in representing arbitrary structures. | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_18",
"@cite_35",
"@cite_42",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"2250974141",
"2095690342",
"",
"2410082850",
"",
"2609011624",
"2563574619",
"2525778437",
"2144002870",
"2739894144",
"2949888546",
"2611029360",
"2150378737"
],
"abstract": [
"Incorporating semantic structure into a linguistics-free translation model is challenging, since semantic structures are closely tied to syntax. In this paper, we propose a two-level approach to exploiting predicate-argument structure reordering in a hierarchical phrase-based translation model. First, we introduce linguistically motivated constraints into a hierarchical model, guiding translation phrase choices in favor of those that respect syntactic boundaries. Second, based on such translation phrases, we propose a predicate-argument structure reordering model that predicts reordering not only between an argument and its predicate, but also between two arguments. Experiments on Chinese-to-English translation demonstrate that both advances significantly improve translation accuracy.",
"Dependency structure, as a first step towards semantics, is believed to be helpful to improve translation quality. However, previous works on dependency structure based models typically resort to insertion operations to complete translations, which make it difficult to specify ordering information in translation rules. In our model of this paper, we handle this problem by directly specifying the ordering information in head-dependents rules which represent the source side as head-dependents relations and the target side as strings. The head-dependents rules require only substitution operation, thus our model requires no heuristics or separate ordering models of the previous works to control the word order of translations. Large-scale experiments show that our model performs well on long distance reordering, and outperforms the state-of-the-art constituency-to-string model (+1.47 BLEU on average) and hierarchical phrase-based model (+0.46 BLEU on average) on two Chinese-English NIST test sets without resort to phrases or parse forest. For the first time, a source dependency structure based model catches up with and surpasses the state-of-the-art translation models.",
"",
"Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder--decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-of-speech tags, and syntactic dependency labels as input features to English German, and English->Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An open-source implementation of our neural MT system is available, as are sample files and configurations.",
"",
"We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. An experiment on the WMT16 German-English news translation task resulted in an improved BLEU score when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.",
"",
"Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference. Also, most NMT systems have difficulty with rare words. These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Google's Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers using attention and residual connections. To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To accelerate the final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units (\"wordpieces\") for both input and output. This method provides a good balance between the flexibility of \"character\"-delimited models and the efficiency of \"word\"-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and uses a coverage penalty, which encourages generation of an output sentence that is most likely to cover all the words in the source sentence. On the WMT'14 English-to-French and English-to-German benchmarks, GNMT achieves competitive results to state-of-the-art. Using a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60 compared to Google's phrase-based production system.",
"This paper proposes a statistical, tree-to-tree model for producing translations. Two main contributions are as follows: (1) a method for the extraction of syntactic structures with alignment information from a parallel corpus of translations, and (2) use of a discriminative, feature-based model for prediction of these target-language syntactic structures---which we call aligned extended projections, or AEPs. An evaluation of the method on translation from German to English shows similar performance to the phrase-based model of (2003).",
"",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.",
"In typical neural machine translation (NMT), the decoder generates a sentence word by word, packing all linguistic granularities in the same time-scale of RNN. In this paper, we propose a new type of decoder for NMT, which splits the decode state into two parts and updates them in two different time-scales. Specifically, we first predict a chunk time-scale state for phrasal modeling, on top of which multiple word time-scale states are generated. In this way, the target sentence is translated hierarchically from chunks to words, with information in different granularities being leveraged. Experiments show that our proposed model significantly improves the translation performance over the state-of-the-art NMT model.",
"In adding syntax to statistical MT, there is a tradeoff between taking advantage of linguistic analysis, versus allowing the model to exploit linguistically unmotivated mappings learned from parallel training data. A number of previous efforts have tackled this tradeoff by starting with a commitment to linguistically motivated analyses and then nding appropriate ways to soften that commitment. We present an approach that explores the tradeoff from the other direction, starting with a context-free translation model learned directly from aligned parallel text, and then adding soft constituent-level constraints based on parses of the source language. We obtain substantial improvements in performance for translation from Chinese and Arabic to English."
]
} |
1905.02878 | 2952315335 | Syntax has been demonstrated highly effective in neural machine translation (NMT). Previous NMT models integrate syntax by representing 1-best tree outputs from a well-trained parsing system, e.g., the representative Tree-RNN and Tree-Linearization methods, which may suffer from error propagation. In this work, we propose a novel method to integrate source-side syntax implicitly for NMT. The basic idea is to use the intermediate hidden representations of a well-trained end-to-end dependency parser, which are referred to as syntax-aware word representations (SAWRs). Then, we simply concatenate such SAWRs with ordinary word embeddings to enhance basic NMT models. The method can be straightforwardly integrated into the widely-used sequence-to-sequence (Seq2Seq) NMT models. We start with a representative RNN-based Seq2Seq baseline system, and test the effectiveness of our proposed method on two benchmark datasets of the Chinese-English and English-Vietnamese translation tasks, respectively. Experimental results show that the proposed approach is able to bring significant BLEU score improvements on the two datasets compared with the baseline, 1.74 points for Chinese-English translation and 0.80 point for English-Vietnamese translation, respectively. In addition, the approach also outperforms the explicit Tree-RNN and Tree-Linearization methods. | Recursive neural networks based on LSTM or GRU have been one natural method to model syntax trees @cite_14 @cite_43 @cite_22 @cite_24 @cite_21 @cite_34 @cite_41 , which are capable of representing the entire trees globally. present the first work to apply a bottom-up Tree-LSTM for NMT. The major drawback is that its bottom-up composing strategy is insufficient for bottom nodes. Thus bi-directional extensions have been suggested @cite_6 @cite_28 . 
Since Tree-RNN suffers from a serious inefficiency problem, suggest a Tree-Linearization alternative, which converts constituent trees into a sequence of symbols mixed with words and syntactic tags. The method is as effective as the Tree-RNN approaches yet more efficient. Noticeably, all these studies focus on constituent trees. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_41",
"@cite_28",
"@cite_21",
"@cite_6",
"@cite_24",
"@cite_43",
"@cite_34"
],
"mid": [
"1879966306",
"2953391617",
"2579166072",
"2737709597",
"2549259847",
"2963888305",
"2259512711",
"2104246439",
"2229639163"
],
"abstract": [
"The chain-structured long short-term memory (LSTM) has showed to be effective in a wide range of problems such as speech recognition and machine translation. In this paper, we propose to extend it to tree structures, in which a memory cell can reflect the history memories of multiple child cells or multiple descendant cells in a recursive process. We call the model S-LSTM, which provides a principled way of considering long-distance interaction over hierarchies, e.g., language or image parse structures. We leverage the models for semantic composition to understand the meaning of text, a fundamental problem in natural language understanding, and show that it outperforms a state-of-the-art recursive model by replacing its composition layers with the S-LSTM memory blocks. We also show that utilizing the given structures is helpful in achieving a performance better than that without considering the structures.",
"Recursive neural models, which use syntactic parse trees to recursively generate representations bottom-up, are a popular architecture. But there have not been rigorous evaluations showing for exactly which tasks this syntax-based method is appropriate. In this paper we benchmark recursive neural models against sequential recurrent neural models (simple recurrent and LSTM models), enforcing apples-to-apples comparison as much as possible. We investigate 4 tasks: (1) sentiment classification at the sentence level and phrase level; (2) matching questions to answer-phrases; (3) discourse parsing; (4) semantic relation extraction (e.g., component-whole between nouns). Our goal is to understand better when, and why, recursive models can outperform simpler models. We find that recursive models help mainly on tasks (like semantic relation extraction) that require associating headwords across a long distance, particularly on very long sequences. We then introduce a method for allowing recurrent models to achieve similar performance: breaking long sentences into clause-like units at punctuation and processing them separately before combining. Our results thus help understand the limitations of both classes of models, and suggest directions for improving recurrent models.",
"We introduce a tree-structured attention neural network for sentences and small phrases and apply it to the problem of sentiment classification. Our model expands the current recursive models by incorporating structural information around a node of a syntactic tree using both bottom-up and top-down information propagation. Also, the model utilizes structural attention to identify the most salient representations during the construction of the syntactic tree. To our knowledge, the proposed models achieve state of the art performance on the Stanford Sentiment Treebank dataset.",
"This paper proposes a hierarchical attentional neural translation model which focuses on enhancing source-side hierarchical representations by covering both local and global semantic information using a bidirectional tree-based encoder. To maximize the predictive likelihood of target words, a weighted variant of an attention mechanism is used to balance the attentive information between lexical and phrase vectors. Using a tree-based rare word encoding, the proposed model is extended to sub-word level to alleviate the out-of-vocabulary (OOV) problem. Empirical results reveal that the proposed model significantly outperforms sequence-to-sequence attention-based and tree-based neural translation models in English-Chinese translation tasks.",
"Sequential LSTM has been extended to model tree structures, giving competitive results for a number of tasks. Existing methods model constituent trees by bottom-up combinations of constituent nodes, making direct use of input word information only for leaf nodes. This is different from sequential LSTMs, which contain reference to input words for each node. In this paper, we propose a method for automatic head-lexicalization for tree-structure LSTMs, propagating head words from leaf nodes to every constituent node. In addition, enabled by head lexicalization, we build a tree LSTM in the top-down direction, which corresponds to bidirectional sequential LSTM structurally. Experiments show that both extensions give better representations of tree structures. Our final model gives the best results on the Standford Sentiment Treebank and highly competitive results on the TREC question type classification task.",
"",
"Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have been successfully applied to a variety of sequence modeling tasks. In this paper we develop Tree Long Short-Term Memory (TreeLSTM), a neural network model based on LSTM, which is designed to predict a tree rather than a linear sequence. TreeLSTM defines the probability of a sentence by estimating the generation probability of its dependency tree. At each time step, a node is generated based on the representation of the generated sub-tree. We further enhance the modeling power of TreeLSTM by explicitly representing the correlations between left and right dependents. Application of our model to the MSR sentence completion challenge achieves results beyond the current state of the art. We also report results on dependency parsing reranking achieving competitive performance.",
"Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).",
"We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1 and 5.7 relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components."
]
} |
1905.02648 | 2944403400 | In this paper, we present a framework for performing collaborative localization for groups of micro aerial vehicles (MAV) that use vision based sensing. The vehicles are each assumed to be equipped with a forward-facing monocular camera, and to be capable of communicating with each other. This collaborative localization approach is developed as a decentralized algorithm and built in a distributed fashion where individual and relative pose estimation techniques are combined for the group to localize against surrounding environments. The MAVs initially detect and match salient features between each other to create a sparse reconstruction of the observed environment, which acts as a global map. Once a map is available, each MAV performs feature detection and tracking with a robust outlier rejection process to estimate its own pose in 6 degrees of freedom. Occasionally, one or more MAVs can be tasked to compute poses for another MAV through relative measurements, which is achieved through multiple view geometry concepts. These relative measurements are then fused with individual measurements in a consistent fashion. We present the results of the algorithm on image data from MAV flights both in simulation and real life, and discuss the advantages of collaborative localization in improving pose estimation accuracy. | Vision based localization has been studied extensively in the literature. Initially, it was achieved through external camera placement such as in professional motion capture systems @cite_35 @cite_8 . When vision sensors were used as onboard exteroceptive sensors, RGBD sensors were one of the initially investigated setups. Microsoft Kinect sensors were used for altitude estimation @cite_29 , in tandem with a 2D laser rangefinder for mapping and localization @cite_9 and visual odometry @cite_6 . Even more recently, full six degree-of-freedom localization was demonstrated using RGBD sensors @cite_14 . | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_8",
"@cite_29",
"@cite_9",
"@cite_6"
],
"mid": [
"",
"1547236731",
"2065894019",
"2160040544",
"1996985406",
"2123432765"
],
"abstract": [
"",
"Real-time and reliable localization is a prerequisite for autonomously performing high-level tasks with micro aerial vehicles(MAVs). Nowadays, most existing methods use vision system for 6DoF pose estimation, which can not work in degraded visual environments. This paper presents an onboard 6DoF pose estimation method for an indoor MAV in challenging GPS-denied degraded visual environments by using a RGB-D camera. In our system, depth images are mainly used for odometry estimation and localization. First, a fast and robust relative pose estimation (6DoF Odometry) method is proposed, which uses the range rate constraint equation and photometric error metric to get the frame-to-frame transform. Then, an absolute pose estimation (6DoF Localization) method is proposed to locate the MAV in a given 3D global map by using a particle filter. The whole localization system can run in real-time on an embedded computer with low CPU usage. We demonstrate the effectiveness of our system in extensive real environments on a customized MAV platform. The experimental results show that our localization system can robustly and accurately locate the robot in various practical challenging environments.",
"In the last five years, advances in materials, electronics, sensors, and batteries have fueled a growth in the development of microunmanned aerial vehicles (MAVs) that are between 0.1 and 0.5 m in length and 0.1-0.5 kg in mass [1]. A few groups have built and analyzed MAVs in the 10-cm range [2], [3]. One of the smallest MAV is the Picoftyer with a 60-mmpropellor diameter and a mass of 3.3 g [4]. Platforms in the 50-cm range are more prevalent with several groups having built and flown systems of this size [5]-[7]. In fact, there are severalcommercially available radiocontrolled (PvC) helicopters and research-grade helicopters in this size range [8].",
"Reliable depth estimation is a cornerstone of many autonomous robotic control systems. The Microsoft Kinect is a new, low cost, commodity game controller peripheral that calculates a depth map of the environment with good accuracy and high rate. In this paper we calibrate the Kinect depth and image sensors and then use the depth map to control the altitude of a quadrotor helicopter. This paper presents the first results of using this sensor in a real-time robotics control application.",
"In this paper, we propose a stochastic differential equation-based exploration algorithm to enable exploration in three-dimensional indoor environments with a payload constrained micro-aerial vehicle (MAV). We are able to address computation, memory, and sensor limitations by considering only the known occupied space in the current map. We determine regions for further exploration based on the evolution of a stochastic differential equation that simulates the expansion of a system of particles with Newtonian dynamics. The regions of most significant particle expansion correlate to unexplored space. After identifying and processing these regions, the autonomous MAV navigates to these locations to enable fully autonomous exploration. The performance of the approach is demonstrated through numerical simulations and experimental results in single and multi-floor indoor experiments.",
"RGB-D cameras provide both color images and per-pixel depth estimates. The richness of this data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on an unreliable wireless link to a ground station. However, even with accurate 3D sensing and position estimation, some parts of the environment have more perceptual structure than others, leading to state estimates that vary in accuracy across the environment. If the vehicle plans a path without regard to how well it can localize itself along that path, it runs the risk of becoming lost or worse. We show how the belief roadmap algorithm prentice2009belief, a belief space extension of the probabilistic roadmap algorithm, can be used to plan vehicle trajectories that incorporate the sensing model of the RGB-D camera. We evaluate the effectiveness of our system for controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations."
]
} |
1905.02648 | 2944403400 | In this paper, we present a framework for performing collaborative localization for groups of micro aerial vehicles (MAV) that use vision based sensing. The vehicles are each assumed to be equipped with a forward-facing monocular camera, and to be capable of communicating with each other. This collaborative localization approach is developed as a decentralized algorithm and built in a distributed fashion where individual and relative pose estimation techniques are combined for the group to localize against surrounding environments. The MAVs initially detect and match salient features between each other to create a sparse reconstruction of the observed environment, which acts as a global map. Once a map is available, each MAV performs feature detection and tracking with a robust outlier rejection process to estimate its own pose in 6 degrees of freedom. Occasionally, one or more MAVs can be tasked to compute poses for another MAV through relative measurements, which is achieved through multiple view geometry concepts. These relative measurements are then fused with individual measurements in a consistent fashion. We present the results of the algorithm on image data from MAV flights both in simulation and real life, and discuss the advantages of collaborative localization in improving pose estimation accuracy. | The ubiquity and compactness of monocular cameras have had a significant influence on their popularity and applicability for estimation in both computer vision and robotics communities. Many monocular camera based localization and mapping methods have been developed over the last decade such as PTAM @cite_32 , SVO @cite_19 , ORB-SLAM2 @cite_20 and LSD-SLAM @cite_33 , among which some were successfully implemented on UAV platforms. When applying these techniques onboard MAVs, a specific focus lies on removing the scale ambiguity. 
Various algorithms have been proposed in the last few years that try to remove scale ambiguity, either by fusing vision data with an IMU @cite_3 , by using ultrasonic rangefinders in conjunction with optical flow as in the commercial autopilot PIXHAWK @cite_22 , or, most recently, by estimating depth in a probabilistic yet computationally intensive fashion @cite_7 . Many other promising monocular visual-inertial systems have been proposed as well, such as MSCKF @cite_34 , visual-inertial ORB-SLAM2 @cite_39 , and VINS-Mono @cite_10 . | {
"cite_N": [
"@cite_33",
"@cite_22",
"@cite_7",
"@cite_32",
"@cite_3",
"@cite_39",
"@cite_19",
"@cite_34",
"@cite_10",
"@cite_20"
],
"mid": [
"612478963",
"",
"2093659073",
"2151290401",
"2055904838",
"",
"1970504153",
"",
"2745859992",
"2535547924"
],
"abstract": [
"We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.",
"",
"In this paper, we solve the problem of estimating dense and accurate depth maps from a single moving camera. A probabilistic depth measurement is carried out in real time on a per-pixel basis and the computed uncertainty is used to reject erroneous estimations and provide live feedback on the reconstruction progress. Our contribution is a novel approach to depth map computation that combines Bayesian estimation and recent development on convex optimization for image processing. We demonstrate that our method outperforms state-of-the-art techniques in terms of accuracy, while exhibiting high efficiency in memory usage and computing power. We call our approach REMODE (REgularized MOnocular Depth Estimation) and the CUDA-based implementation runs at 30Hz on a laptop computer.",
"This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems.",
"We describe a model to estimate motion from monocular visual and inertial measurements. We analyze the model and characterize the conditions under which its state is observable, and its parameters are identifiable. These include the unknown gravity vector, and the unknown transformation between the camera coordinate frame and the inertial unit. We show that it is possible to estimate both state and parameters as part of an on-line procedure, but only provided that the motion sequence is â??rich enoughâ??, a condition that we characterize explicitly. We then describe an efficient implementation of a filter to estimate the state and parameters of this model, including gravity and camera-to-inertial calibration. It runs in real-time on an embedded platform. We report experiments of continuous operation, without failures, re-initialization, or re-calibration, on paths of length up to 30 km. We also describe an integrated approach to â??loop-closureâ??, that is the recognition of previously seen locations and the topological re-adjustment of the traveled path. It represents visual features relative to the global orientation reference provided by the gravity vector estimated by the filter, and relative to the scale provided by their known position within the map; these features are organized into â??locationsâ?? defined by visibility constraints, represented in a topological graph, where loop-closure can be performed without the need to re-compute past trajectories or perform bundle adjustment. The software infrastructure as well as the embedded platform is described in detail in a previous technical report.",
"",
"We propose a semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods. The semi-direct approach eliminates the need of costly feature extraction and robust matching techniques for motion estimation. Our algorithm operates directly on pixel intensities, which results in subpixel precision at high frame-rates. A probabilistic mapping method that explicitly models outlier measurements is used to estimate 3D points, which results in fewer outliers and more reliable points. Precise and high frame-rate motion estimation brings increased robustness in scenes of little, repetitive, and high-frequency texture. The algorithm is applied to micro-aerial-vehicle state-estimation in GPS-denied environments and runs at 55 frames per second on the onboard embedded computer and at more than 300 frames per second on a consumer laptop. We call our approach SVO (Semi-direct Visual Odometry) and release our implementation as open-source software.",
"",
"One camera and one low-cost inertial measurement unit (IMU) form a monocular visual-inertial system (VINS), which is the minimum sensor suite (in size, weight, and power) for the metric six degrees-of-freedom (DOF) state estimation. In this paper, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization. A tightly coupled, nonlinear optimization-based method is used to obtain highly accurate visual-inertial odometry by fusing preintegrated IMU measurements and feature observations. A loop detection module, in combination with our tightly coupled formulation, enables relocalization with minimum computation. We additionally perform 4-DOF pose graph optimization to enforce the global consistency. Furthermore, the proposed system can reuse a map by saving and loading it in an efficient way. The current and previous maps can be merged together by the global pose graph optimization. We validate the performance of our system on public datasets and real-world experiments and compare against other state-of-the-art algorithms. We also perform an onboard closed-loop autonomous flight on the microaerial-vehicle platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system that is applicable for different applications that require high accuracy in localization. We open source our implementations for both PCs ( https: github.com HKUST-Aerial-Robotics VINS-Mono ) and iOS mobile devices ( https: github.com HKUST-Aerial-Robotics VINS-Mobile ).",
"We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields."
]
} |
1905.02648 | 2944403400 | In this paper, we present a framework for performing collaborative localization for groups of micro aerial vehicles (MAV) that use vision-based sensing. The vehicles are each assumed to be equipped with a forward-facing monocular camera, and to be capable of communicating with each other. This collaborative localization approach is developed as a decentralized algorithm and built in a distributed fashion where individual and relative pose estimation techniques are combined for the group to localize against surrounding environments. The MAVs initially detect and match salient features between each other to create a sparse reconstruction of the observed environment, which acts as a global map. Once a map is available, each MAV performs feature detection and tracking with a robust outlier rejection process to estimate its own pose in 6 degrees of freedom. Occasionally, one or more MAVs can be tasked to compute poses for another MAV through relative measurements, which is achieved through multiple view geometry concepts. These relative measurements are then fused with individual measurements in a consistent fashion. We present the results of the algorithm on image data from MAV flights both in simulation and real life, and discuss the advantages of collaborative localization in improving pose estimation accuracy. | Collaborative localization has attracted interest over the past decade, and its theoretical foundations have been studied extensively. The general idea is to fuse measurements from multiple robots into a single, more accurate state estimate. @cite_37 present a localization approach that uses an extended Kalman filter to fuse proprioceptive and exteroceptive measurements, applied to multi-robot localization.
@cite_27 present a distributed cooperative localization algorithm based on maximum a posteriori estimation, under the condition that continuous synchronous communication exists within the robot group. In @cite_43 , the authors present a decentralized cooperative localization approach where the robots need to communicate only when relative measurements are present, an algorithm we use in our paper to facilitate inter-MAV data fusion. @cite_42 propose a multi-robot localization algorithm that handles unknown initial poses and solves the data association problem through expectation maximization. Knuth and Barooah @cite_2 propose a distributed algorithm for GPS-denied scenarios, where the robots fuse each other's information and average the relative pose data in order to achieve cooperative estimation. | {
"cite_N": [
"@cite_37",
"@cite_42",
"@cite_43",
"@cite_27",
"@cite_2"
],
"mid": [
"2138400911",
"2046918204",
"1972785301",
"2146702612",
"2550064158"
],
"abstract": [
"In this paper we consider the problem of simultaneously localizing all members of a team of robots. Each robot is equipped with proprioceptive sensors and exteroceptive sensors. The latter provide relative observations between the robots. Proprioceptive and exteroceptive data are fused with an Extended Kalman Filter. We derive the equations for this estimator for the most general relative observation between two robots. Then we consider three special cases of relative observations and we present the structure of the filter for each case. Finally, we study the performance of the approach through many accurate simulations.",
"This paper presents a novel approach for multirobot pose graph localization and data association without requiring prior knowledge about the initial relative poses of the robots. Without a common reference frame, the robots can only share observations of interesting parts of the environment, and trying to match between observations from different robots will result in many outlier correspondences. Our approach is based on the following key observation: while each multi-robot correspondence can be used in conjunction with the local robot estimated trajectories, to calculate the transformation between the robot reference frames, only the inlier correspondences will be similar to each other. Using this concept, we develop an expectation-maximization (EM) approach to efficiently infer the robot initial relative poses and solve the multi-robot data association problem. Once this transformation between the robot reference frames is estimated with sufficient measure of confidence, we show that a similar EM formulation can be used to solve also the full multi-robot pose graph problem with unknown multi-robot data association. We evaluate the performance of the developed approach both in a statistical synthetic-environment study and in a real-data experiment, demonstrating its robustness to high percentage of outliers.",
"In this paper, we present a Covariance Intersection (CI)-based algorithm for reducing the processing and communication complexity of multi-robot Cooperative Localization (CL). Specifically, for a team of N robots, our proposed approximate CI-based CL approach has processing and communication complexity only linear, O(N), in the number of robots. Moreover, and in contrast to alternative approximate methods, our approach is provably consistent, can handle asynchronous communication, and does not place any restriction on the robots' motion. We test the performance of our proposed approach in both simulations and experimentally, and show that it outperforms the existing linear-complexity split CI-based CL method.",
"This paper presents a distributed Maximum A Posteriori (MAP) estimator for multi-robot Cooperative Localization (CL). As opposed to centralized MAP-based CL, the proposed algorithm reduces the memory and processing requirements by distributing data and computations amongst the robots. Specifically, a distributed data-allocation scheme is presented that enables robots to simultaneously process and update their local data. Additionally, a distributed Conjugate Gradient algorithm is employed that reduces the cost of computing the MAP estimates, while utilizing all available resources in the team and increasing robustness to single-point failures. Finally, a computationally efficient distributed marginalization of past robot poses is introduced for limiting the size of the optimization problem. The communication and computational complexity of the proposed algorithm is described in detail, while extensive simulation studies are presented for validating the performance of the distributed MAP estimator and comparing its accuracy to that of existing approaches.",
"We propose a distributed algorithm for estimating the poses (positions and orientations) of multiple autonomous vehicles in GPS denied scenarios when pairs of vehicles can measure each other's relative pose in their local coordinates. Currently, navigation of an autonomous vehicle in GPS denied scenarios is achieved by integrating relative pose measurements between successive time instants that are obtained from onboard sensors, such as cameras and IMUs. However, this suffers from a high rate of error growth over time. We seek methods to ameliorate this error growth by using cooperation among a group of vehicles. Measurements of relative pose between certain pairs of vehicles provide extra information on their poses, which can be used for improving localization accuracy. We designed a distributed algorithm to fuse all the relative pose measurements to compute a more accurate estimate of all the vehicles' poses than what is possible by the vehicles individually. The algorithm is fully distributed since only neighboring vehicles need to exchange information periodically. Monte Carlo simulations show that the error in the location estimates obtained by using this algorithm is significantly lower than what is achieved when vehicles estimate their poses without cooperation."
]
} |
1905.02800 | 2943947403 | Motivated by the use of high speed circuit switches in large scale data centers, we consider the problem of circuit switch scheduling. In this problem we are given demands between pairs of servers and the goal is to schedule at every time step a matching between the servers while maximizing the total satisfied demand over time. The crux of this scheduling problem is that once one shifts from one matching to a different one, a fixed delay @math is incurred during which no data can be transmitted. For the offline version of the problem we present a @math approximation ratio (for any constant @math ). Since the natural linear programming relaxation for the problem has an unbounded integrality gap, we adopt a hybrid approach that combines the combinatorial greedy with randomized rounding of a different suitable linear program. For the online version of the problem we present a (bi-criteria) @math -competitive ratio (for any constant @math ) that exceeds time by an additive factor of @math . We note that no uni-criteria online algorithm is possible. Surprisingly, we obtain the result by reducing the online version to the offline one. | Venkatakrishnan et al. @cite_26 were the first to formally introduce the offline variant of the circuit switch scheduling problem. They focused on the special case in which all entries of the data matrix are very small, and analyzed the greedy algorithm. Though the greedy algorithm is known to provide no worst-case approximation guarantee for the general problem of maximizing a monotone submodular function subject to a knapsack constraint, @cite_26 proved that in the special case of small demand values it obtains an (almost) tight approximation guarantee. To the best of our knowledge, our algorithm gives the best provable bound for the offline variant of the circuit switch scheduling problem.
A related variant of the problem allows data to take multiple hops, i.e., data can be routed through several intermediate servers before reaching its destination @cite_14 @cite_19 @cite_26 . | {
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_26"
],
"mid": [
"2780364157",
"2619707440",
""
],
"abstract": [
"Hybrid circuit and packet switching for data center networking (DCN) has received considerable research attention recently. A hybrid-switched DCN employs a much faster circuit switch that is reconfigurable with a nontrivial cost, and a much slower packet switch that is reconfigurable with no cost, to interconnect its racks of servers. The research problem is, given a traffic demand matrix (between the racks), how to compute a good circuit switch configuration schedule so that the vast majority of the traffic demand is removed by the circuit switch, leaving a remaining demand matrix that contains only small elements for the packet switch to handle. In this paper, we propose two new hybrid switch scheduling algorithms under two different scheduling constraints. Our first algorithm, called 2-hop Eclipse, strikes a much better tradeoff between the resulting performance (of the hybrid switch) and the computational complexity (of the algorithm) than the state of the art solution Eclipse Eclipse++. Our second algorithm, called BFF (best first fit), is the first hybrid switching solution that exploits the potential partial reconfiguration capability of the circuit switch for performance gains.",
"Increasingly, proposals for new datacenter networking fabrics employ some form of traffic scheduling---often to avoid congestion, mitigate queuing delays, or avoid timeouts. Fundamentally, practical implementations require estimating upcoming traffic demand. Unfortunately, as our results show, it is difficult to accurately predict demand in typical datacenter applications more than a few milliseconds ahead of time. We explore the impact of errors in demand estimation on traffic scheduling in circuit-switched networks. We show that even relatively small estimation errors such as shifting the arrival time of at most 30 of traffic by a few milliseconds can lead to suboptimal schedules that dramatically reduce network efficiency. Existing systems cope by provisioning extra capacity---either on each circuit, or through the addition of a separate packet-switched fabric. We show through simulation that indirect traffic routing is a powerful technique for recovering from the inefficiencies of suboptimal scheduling under common datacenter workloads, performing as well as networks with 16 extra circuit bandwidth or a packet switch with 6 of the circuit bandwidth.",
""
]
} |
1905.02800 | 2943947403 | Motivated by the use of high speed circuit switches in large scale data centers, we consider the problem of circuit switch scheduling. In this problem we are given demands between pairs of servers and the goal is to schedule at every time step a matching between the servers while maximizing the total satisfied demand over time. The crux of this scheduling problem is that once one shifts from one matching to a different one, a fixed delay @math is incurred during which no data can be transmitted. For the offline version of the problem we present a @math approximation ratio (for any constant @math ). Since the natural linear programming relaxation for the problem has an unbounded integrality gap, we adopt a hybrid approach that combines the combinatorial greedy with randomized rounding of a different suitable linear program. For the online version of the problem we present a (bi-criteria) @math -competitive ratio (for any constant @math ) that exceeds time by an additive factor of @math . We note that no uni-criteria online algorithm is possible. Surprisingly, we obtain the result by reducing the online version to the offline one. | A dual approach is given by Liu et al. @cite_13 , who aim to minimize the total time needed to transmit the entire demand matrix. Since our algorithm aims to maximize the transmitted data in a time window of @math , one can use our algorithm as a black box while optimizing over @math . It was proven in @cite_22 that minimizing the time needed to send all of the data is NP-complete. Hence, the circuit switch scheduling problem is also NP-complete. | {
"cite_N": [
"@cite_13",
"@cite_22"
],
"mid": [
"2530249205",
"2158258775"
],
"abstract": [
"A range of new datacenter switch designs combine wireless or optical circuit technologies with electrical packet switching to deliver higher performance at lower cost than traditional packet-switched networks. These \"hybrid\" networks schedule large traffic demands via a high-rate circuits and remaining traffic with a lower-rate, traditional packet-switches. Achieving high utilization requires an efficient scheduling algorithm that can compute proper circuit configurations and balance traffic across the switches. Recent proposals, however, provide no such algorithm and rely on an omniscient oracle to compute optimal switch configurations. Finding the right balance of circuit and packet switch use is difficult: circuits must be reconfigured to serve different demands, incurring non-trivial switching delay, while the packet switch is bandwidth constrained. Adapting existing crossbar scheduling algorithms proves challenging with these constraints. In this paper, we formalize the hybrid switching problem, explore the design space of scheduling algorithms, and provide insight on using such algorithms in practice. We propose a heuristic-based algorithm, Solstice that provides a 2.9× increase in circuit utilization over traditional scheduling algorithms, while being within 14 of optimal, at scale.",
"Using optical technology for the design of packet switches routers offers several advantages such as scalability, high bandwidth, power consumption, and cost. However, reconfiguring the optical fabric of these switches requires significant time under current technology (microelectromechanical system mirrors, tunable elements, bubble switches, etc.). As a result, conventional slot-by-slot scheduling may severely cripple the performance of these optical switches due to the frequent fabric reconfiguration that may entail. A more appropriate way is to use a time slot assignment (TSA) scheduling approach to slow down the scheduling rate. The switch gathers the incoming packets periodically and schedules them in batches, holding each fabric configuration for a period of time. The goal is to minimize the total transmission time, which includes the actual traffic-sending process and the reconfiguration overhead. This optical switch scheduling problem is defined in this paper and proved to be NP-complete. In particular, earlier TSA algorithms normally assume the reconfiguration delay to be either zero or infinity for simplicity. To this end, we propose a practical algorithm, ADJUST, that breaks this limitation and self-adjusts with different reconfiguration delay values. The algorithm runs at O( spl lambda N sup 2 logN) time complexity and guarantees 100 throughput and bounded worst-case delay. In addition, it outperforms existing TSA algorithms across a large spectrum of reconfiguration values."
]
} |
1905.02800 | 2943947403 | Motivated by the use of high speed circuit switches in large scale data centers, we consider the problem of circuit switch scheduling. In this problem we are given demands between pairs of servers and the goal is to schedule at every time step a matching between the servers while maximizing the total satisfied demand over time. The crux of this scheduling problem is that once one shifts from one matching to a different one, a fixed delay @math is incurred during which no data can be transmitted. For the offline version of the problem we present a @math approximation ratio (for any constant @math ). Since the natural linear programming relaxation for the problem has an unbounded integrality gap, we adopt a hybrid approach that combines the combinatorial greedy with randomized rounding of a different suitable linear program. For the online version of the problem we present a (bi-criteria) @math -competitive ratio (for any constant @math ) that exceeds time by an additive factor of @math . We note that no uni-criteria online algorithm is possible. Surprisingly, we obtain the result by reducing the online version to the offline one. | Regarding the theoretical problem of maximizing a monotone submodular function given a knapsack constraint, Sviridenko @cite_2 (building upon the work of Khuller et al. @cite_6 ) presented a tight @math -approximation algorithm. This algorithm enumerates over all subsets of at most three elements, greedily extends each subset of size three, and returns the best solution found. Deviating from the above combinatorial approach of @cite_6 @cite_2 , Badanidiyuru and Vondrák @cite_0 and Ene and Nguyen @cite_9 present algorithms based on an approach that interpolates between continuous and discrete techniques. Unfortunately, as previously mentioned, none of the above algorithms can be directly applied to the circuit switch problem due to the size of the ground set. | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_6",
"@cite_2"
],
"mid": [
"2252172643",
"2759321404",
"",
"2033885045"
],
"abstract": [
"There has been much progress recently on improved approximations for problems involving submodular objective functions, and many interesting techniques have been developed. However, the resulting algorithms are often slow and impractical. In this paper we develop algorithms that match the best known approximation guarantees, but with significantly improved running times, for maximizing a monotone submodular function f: 2[n] → R+ subject to various constraints. As in previous work, we measure the number of oracle calls to the objective function which is the dominating term in the running time. Our first result is a simple algorithm that gives a (1--1 e -- e)-approximation for a cardinality constraint using O(n e log n e) queries, and a 1 (p + 2e + 1 + e)-approximation for the intersection of a p-system and e knapsack (linear) constraints using O (n e2 log2 n e) queries. This is the first approximation for a p-system combined with linear constraints. (We also show that the factor of p cannot be improved for maximizing over a p-system.) The main idea behind these algorithms serves as a building block in our more sophisticated algorithms. Our main result is a new variant of the continuous greedy algorithm, which interpolates between the classical greedy algorithm and a truly continuous algorithm. We show how this algorithm can be implemented for matroid and knapsack constraints using O(n2) oracle calls to the objective function. (Previous variants and alternative techniques were known to use at least O(n4) oracle calls.) This leads to an O(n2 e4 log2 n e)-time (1--1 e -- e)-approximation for a matroid constraint. For a knapsack constraint, we develop a more involved (1--1 e -- e)-approximation algorithm that runs in time O(n2(1 e log n)poly(1 e)).",
"We consider the problem of maximizing a monotone submodular function subject to a knapsack constraint. Our main contribution is an algorithm that achieves a nearly-optimal, @math approximation, using @math function evaluations and arithmetic operations. Our algorithm is impractical but theoretically interesting, since it overcomes a fundamental running time bottleneck of the multilinear extension relaxation framework. This is the main approach for obtaining nearly-optimal approximation guarantees for important classes of constraints but it leads to @math running times, since evaluating the multilinear extension is expensive. Our algorithm maintains a fractional solution with only a constant number of entries that are strictly fractional, which allows us to overcome this obstacle.",
"",
"In this paper, we obtain an (1-e^-^1)-approximation algorithm for maximizing a nondecreasing submodular set function subject to a knapsack constraint. This algorithm requires O(n^5) function value computations."
]
} |
1905.02479 | 2947728370 | Cosine-based softmax losses significantly improve the performance of deep face recognition networks. However, these losses always include sensitive hyper-parameters which can make the training process unstable, and it is very tricky to set suitable hyper-parameters for a specific dataset. This paper addresses this challenge by directly designing the gradients for adaptively training deep neural networks. We first investigate and unify previous cosine softmax losses by analyzing their gradients. This unified view inspires us to propose a novel gradient called P2SGrad (Probability-to-Similarity Gradient), which leverages a cosine similarity instead of classification probability to directly update the testing metrics for updating neural network parameters. P2SGrad is adaptive and hyper-parameter free, which makes the training process more efficient and faster. We evaluate our P2SGrad on three face recognition benchmarks, LFW, MegaFace, and IJB-C. The results show that P2SGrad is stable in training, robust to noise, and achieves state-of-the-art performance on all the three benchmarks. | The accuracy improvements of face recognition @cite_0 @cite_21 @cite_32 @cite_7 stem from large-scale training data and improved neural network architectures. Modern face datasets contain a huge number of identities, such as LFW @cite_24 , PubFig @cite_14 , CASIA-WebFace @cite_26 , MS1M @cite_9 and MegaFace @cite_12 @cite_13 , which enable the effective training of very deep neural networks. A number of recent studies demonstrated that well-designed network architectures lead to better performance, such as DeepFace @cite_22 , DeepID2,3 @cite_29 @cite_20 and FaceNet @cite_25 . | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_22",
"@cite_7",
"@cite_9",
"@cite_21",
"@cite_29",
"@cite_32",
"@cite_0",
"@cite_24",
"@cite_20",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"",
"1509966554",
"2145287260",
"",
"",
"",
"",
"2325939864",
"",
"1782590233",
"2140609507",
"2185089786",
"2096733369",
""
],
"abstract": [
"",
"Pushing by big data and deep convolutional neural network (CNN), the performance of face recognition is becoming comparable to human. Using private large scale training datasets, several groups achieve very high performance on LFW, i.e., 97 to 99 . While there are many open source implementations of CNN, none of large scale face dataset is publicly available. The current situation in the field of face recognition is that data is more important than algorithm. To solve this problem, this paper proposes a semi-automatical way to collect face images from Internet and builds a large scale dataset containing about 10,000 subjects and 500,000 images, called CASIAWebFace. Based on the database, we use a 11-layer CNN to learn discriminative representation and obtain state-of-theart accuracy on LFW and YTF. The publication of CASIAWebFace will attract more research groups entering this field and accelerate the development of face recognition in the wild.",
"In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance.",
"",
"",
"",
"",
"The goal of this paper is face recognition – from either a single photograph or from a set of faces tracked in a video. Recent progress in this area has been due to two factors: (i) end to end learning for the task using a convolutional neural network (CNN), and (ii) the availability of very large scale training datasets. We make two contributions: first, we show how a very large scale dataset (2.6M images, over 2.6K people) can be assembled by a combination of automation and human in the loop, and discuss the trade off between data purity and time; second, we traverse through the complexities of deep network training and face recognition to present methods and procedures to achieve comparable state of the art results on the standard LFW and YTF face benchmarks.",
"",
"Most face databases have been created under controlled conditions to facilitate the study of specific parameters on the face recognition problem. These parameters include such variables as position, pose, lighting, background, camera quality, and gender. While there are many applications for face recognition technology in which one can control the parameters of image acquisition, there are also many applications in which the practitioner has little or no control over such parameters. This database, Labeled Faces in the Wild, is provided as an aid in studying the latter, unconstrained, recognition problem. The database contains labeled face photographs spanning the range of conditions typically encountered in everyday life. The database exhibits “natural” variability in factors such as pose, lighting, race, accessories, occlusions, and background. In addition to describing the details of the database, we provide specific experimental paradigms for which the database is suitable. This is done in an effort to make research performed with the database as consistent and comparable as possible. We provide baseline results, including results of a state of the art face recognition system combined with a face alignment system. To facilitate experimentation on the database, we provide several parallel databases, including an aligned version.",
"The state-of-the-art of face recognition has been significantly advanced by the emergence of deep learning. Very deep neural networks recently achieved great success on general object recognition because of their superb learning capacity. This motivates us to investigate their effectiveness on face recognition. This paper proposes two very deep neural network architectures, referred to as DeepID3, for face recognition. These two architectures are rebuilt from stacked convolution and inception layers proposed in VGG net and GoogLeNet to make them suitable to face recognition. Joint face identification-verification supervisory signals are added to both intermediate and final feature extraction layers during training. An ensemble of the proposed two architectures achieves 99.53 LFW face verification accuracy and 96.0 LFW rank-1 face identification accuracy, respectively. A further discussion of LFW face verification result is given in the end.",
"Recent face recognition experiments on a major benchmark LFW show stunning performance--a number of algorithms achieve near to perfect score, surpassing human recognition rates. In this paper, we advocate evaluations at the million scale (LFW includes only 13K photos of 5K people). To this end, we have assembled the MegaFace dataset and created the first MegaFace challenge. Our dataset includes One Million photos that capture more than 690K different individuals. The challenge evaluates performance of algorithms with increasing numbers of distractors (going from 10 to 1M) in the gallery set. We present both identification and verification performance, evaluate performance with respect to pose and a person's age, and compare as a function of training data size (number of photos and people). We report results of state of the art and baseline algorithms. Our key observations are that testing at the million scale reveals big performance differences (of algorithms that perform similarly well on smaller scale) and that age invariant recognition as well as pose are still challenging for most. The MegaFace dataset, baseline code, and evaluation scripts, are all publicly released for further experimentations at: megaface.cs.washington.edu.",
"Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.",
""
]
} |
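The FaceNet abstract above describes mapping faces into a Euclidean embedding space where distance directly encodes similarity, so that verification reduces to a distance threshold. A minimal sketch of that idea, using random toy vectors in place of a real embedding network (the 128-D size, the 1.0 threshold, and all variable names are illustrative assumptions, not from the paper):

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    """Project feature vectors onto the unit hypersphere."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def verify(emb_a, emb_b, threshold=1.0):
    """Accept a pair as the same identity when the Euclidean distance
    between the normalized embeddings falls below a tuned threshold."""
    dist = float(np.linalg.norm(l2_normalize(emb_a) - l2_normalize(emb_b)))
    return dist, dist < threshold

# Toy 128-D "embeddings": two noisy views of one identity vs. a stranger.
rng = np.random.default_rng(0)
anchor = rng.normal(size=128)
same_person = anchor + 0.05 * rng.normal(size=128)   # small perturbation
stranger = rng.normal(size=128)                      # unrelated identity

d_same, ok_same = verify(anchor, same_person)
d_other, ok_other = verify(anchor, stranger)
```

With embeddings in hand, recognition and clustering become standard nearest-neighbor problems in this metric space, which is exactly the appeal the abstract highlights.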
1905.02479 | 2947728370 | Cosine-based softmax losses significantly improve the performance of deep face recognition networks. However, these losses always include sensitive hyper-parameters which can make the training process unstable, and it is very tricky to set suitable hyper-parameters for a specific dataset. This paper addresses this challenge by directly designing the gradients for adaptively training deep neural networks. We first investigate and unify previous cosine softmax losses by analyzing their gradients. This unified view inspires us to propose a novel gradient called P2SGrad (Probability-to-Similarity Gradient), which leverages cosine similarity instead of classification probability to directly update the testing metrics for updating neural network parameters. P2SGrad is adaptive and hyper-parameter free, which makes the training process more efficient and faster. We evaluate our P2SGrad on three face recognition benchmarks, LFW, MegaFace, and IJB-C. The results show that P2SGrad is stable in training, robust to noise, and achieves state-of-the-art performance on all three benchmarks. | In face recognition, feature representation normalization, which restricts features to lie on a fixed-radius hyper-sphere, is a common operation to enhance models' final performance. COCO loss @cite_1 @cite_17 and NormFace @cite_15 studied the effect of normalization through mathematical analysis and proposed two strategies, one reformulating the softmax loss and the other reformulating metric learning. Coincidentally, L2-softmax @cite_31 also proposed a similar method. These methods arrive at the same cosine softmax loss formulation from different viewpoints. | {
"cite_N": [
"@cite_31",
"@cite_15",
"@cite_1",
"@cite_17"
],
"mid": [
"2600537992",
"2609575245",
"2594088761",
"2750672897"
],
"abstract": [
"In recent years, the performance of face verification systems has significantly improved using deep convolutional neural networks (DCNNs). A typical pipeline for face verification includes training a deep network for subject classification with softmax loss, using the penultimate layer output as the feature descriptor, and generating a cosine similarity score given a pair of face images. The softmax loss function does not optimize the features to have higher similarity score for positive pairs and lower similarity score for negative pairs, which leads to a performance gap. In this paper, we add an L2-constraint to the feature descriptors which restricts them to lie on a hypersphere of a fixed radius. This module can be easily implemented using existing deep learning frameworks. We show that integrating this simple step in the training pipeline significantly boosts the performance of face verification. Specifically, we achieve state-of-the-art results on the challenging IJB-A dataset, achieving True Accept Rate of 0.909 at False Accept Rate 0.0001 on the face verification protocol. Additionally, we achieve state-of-the-art performance on LFW dataset with an accuracy of 99.78%, and competing performance on YTF dataset with accuracy of 96.08%.",
"Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% and 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%.",
"Person recognition aims at recognizing the same identity across time and space with complicated scenes and similar appearance. In this paper, we propose a novel method to address this task by training a network to obtain robust and representative features. The intuition is that we directly compare and optimize the cosine distance between two features - enlarging inter-class distinction as well as alleviating inner-class variance. We propose a congenerous cosine loss by minimizing the cosine distance between samples and their cluster centroid in a cooperative way. Such a design reduces the complexity and could be implemented via softmax with normalized inputs. Our method also differs from previous work in person recognition that we do not conduct a second training on the test subset. The identity of a person is determined by measuring the similarity from several body regions in the reference set. Experimental results show that the proposed approach achieves better classification accuracy against previous state-of-the-arts.",
"Feature matters. How to train a deep network to acquire discriminative features across categories and polymerized features within classes has always been at the core of many computer vision tasks, specially for large-scale recognition systems where test identities are unseen during training and the number of classes could be at million scale. In this paper, we address this problem based on the simple intuition that the cosine distance of features in high-dimensional space should be close enough within one class and far away across categories. To this end, we proposed the congenerous cosine (COCO) algorithm to simultaneously optimize the cosine similarity among data. It inherits the softmax property to make inter-class features discriminative as well as shares the idea of class centroid in metric learning. Unlike previous work where the center is a temporal, statistical variable within one mini-batch during training, the formulated centroid is responsible for clustering inner-class features to enforce them polymerized around the network truncus. COCO is bundled with discriminative training and learned end-to-end with stable convergence. Experiments on five benchmarks have been extensively conducted to verify the effectiveness of our approach on both small-scale classification task and large-scale human recognition problem."
]
} |
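The related-work paragraph above notes that NormFace, L2-softmax, and COCO loss all arrive at the same cosine softmax formulation: L2-normalize both features and class weights so that each logit becomes a scaled cosine similarity, then apply cross-entropy. A hedged numpy sketch of that shared formulation (the scale value 30 and the toy data are assumptions for illustration, not taken from the papers):

```python
import numpy as np

def cosine_softmax_loss(features, weights, labels, scale=30.0):
    """Cosine softmax sketch: L2-normalize features and class weights so the
    logit for class j becomes scale * cos(theta_j), then apply cross-entropy."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    logits = scale * f @ w.T                      # (N, C) scaled cosines
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels])))

# Toy check: features identical to their own class weight should score a
# much lower loss than features pointing at the wrong class weight.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))                       # 3 classes, 8-D features
labels = np.array([0, 1, 2])
loss_aligned = cosine_softmax_loss(W[labels], W, labels)
loss_shuffled = cosine_softmax_loss(W[[1, 2, 0]], W, labels)
```

Because the radial component is normalized away, only the angle between a feature and its class weight matters, which is exactly what the "fixed-radius hyper-sphere" restriction in the paragraph buys.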
1905.02479 | 2947728370 | Cosine-based softmax losses significantly improve the performance of deep face recognition networks. However, these losses always include sensitive hyper-parameters which can make the training process unstable, and it is very tricky to set suitable hyper-parameters for a specific dataset. This paper addresses this challenge by directly designing the gradients for adaptively training deep neural networks. We first investigate and unify previous cosine softmax losses by analyzing their gradients. This unified view inspires us to propose a novel gradient called P2SGrad (Probability-to-Similarity Gradient), which leverages cosine similarity instead of classification probability to directly update the testing metrics for updating neural network parameters. P2SGrad is adaptive and hyper-parameter free, which makes the training process more efficient and faster. We evaluate our P2SGrad on three face recognition benchmarks, LFW, MegaFace, and IJB-C. The results show that P2SGrad is stable in training, robust to noise, and achieves state-of-the-art performance on all three benchmarks. | Optimizing an auxiliary metric loss function is also a popular choice for boosting performance. In the early years, most face recognition approaches utilized metric loss functions, such as triplet loss @cite_30 and contrastive loss @cite_2 , which use a Euclidean margin to measure the distance between features. Taking advantage of these works, center loss @cite_16 and range loss @cite_10 were proposed to reduce intra-class variations by minimizing distances within target classes @cite_5 . | {
"cite_N": [
"@cite_30",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_10"
],
"mid": [
"2106053110",
"",
"2121647436",
"2520774990",
"2781292787"
],
"abstract": [
"The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.",
"",
"We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space-if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed \"Fisherface\" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.",
"Convolutional neural networks (CNNs) have been widely used in computer vision community, significantly improving the state-of-the-art. In most of the available CNNs, the softmax loss function is used as the supervision signal to train the deep model. In order to enhance the discriminative power of the deeply learned features, this paper proposes a new supervision signal, called center loss, for face recognition task. Specifically, the center loss simultaneously learns a center for deep features of each class and penalizes the distances between the deep features and their corresponding class centers. More importantly, we prove that the proposed center loss function is trainable and easy to optimize in the CNNs. With the joint supervision of softmax loss and center loss, we can train a robust CNNs to obtain the deep features with the two key learning objectives, inter-class dispension and intra-class compactness as much as possible, which are very essential to face recognition. It is encouraging to see that our CNNs (with such joint supervision) achieve the state-of-the-art accuracy on several important face recognition benchmarks, Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge. Especially, our new approach achieves the best results on MegaFace (the largest public domain face benchmark) under the protocol of small training set (contains under 500000 images and under 20000 persons), significantly improving the previous results and setting new state-of-the-art for both face recognition and face verification tasks.",
"Deep convolutional neural networks have achieved significant improvements on face recognition task due to their ability to learn highly discriminative features from tremendous amounts of face images. Many large scale face datasets exhibit long-tail distribution where a small number of entities (persons) have large number of face images while a large number of persons only have very few face samples (long tail). Most of the existing works alleviate this problem by simply cutting the tailed data and only keep identities with enough number of examples. Unlike these work, this paper investigated how long-tailed data impact the training of face CNNs and develop a novel loss function, called range loss, to effectively utilize the tailed data in training process. More specifically, range loss is designed to reduce overall intrapersonal variations while enlarge interpersonal differences simultaneously. Extensive experiments on two face recognition benchmarks, Labeled Faces in the Wild (LFW) [11] and YouTube Faces (YTF) [33], demonstrate the effectiveness of the proposed range loss in overcoming the long tail effect, and show the good generalization ability of the proposed methods."
]
} |
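The paragraph above describes center loss as reducing intra-class variation by pulling each deep feature toward a learned center of its class. A hedged sketch of that idea (the `alpha` step size, the toy 2-D data, and the function names are illustrative assumptions; the original uses running centers updated jointly with SGD):

```python
import numpy as np

def center_loss(features, labels, centers):
    """Center-loss sketch: half the mean squared distance between each deep
    feature and the center of its own class."""
    diffs = features - centers[labels]
    return float(0.5 * np.mean(np.sum(diffs ** 2, axis=1)))

def update_centers(features, labels, centers, alpha=0.5):
    """Move each class center a step toward the mean of that class's batch
    features, mimicking the running-center update described for center loss."""
    new_centers = centers.copy()
    for c in np.unique(labels):
        batch_mean = features[labels == c].mean(axis=0)
        new_centers[c] += alpha * (batch_mean - new_centers[c])
    return new_centers

# Two well-separated toy classes; updating the centers must shrink the loss.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(loc=2.0, size=(4, 2)),
                   rng.normal(loc=-2.0, size=(4, 2))])
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
centers = np.zeros((2, 2))
loss_before = center_loss(feats, labels, centers)
centers = update_centers(feats, labels, centers)
loss_after = center_loss(feats, labels, centers)
```

In practice this term is added to the softmax loss with a small weight, so the network gets both inter-class separation (softmax) and intra-class compactness (centers).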
1905.02479 | 2947728370 | Cosine-based softmax losses significantly improve the performance of deep face recognition networks. However, these losses always include sensitive hyper-parameters which can make the training process unstable, and it is very tricky to set suitable hyper-parameters for a specific dataset. This paper addresses this challenge by directly designing the gradients for adaptively training deep neural networks. We first investigate and unify previous cosine softmax losses by analyzing their gradients. This unified view inspires us to propose a novel gradient called P2SGrad (Probability-to-Similarity Gradient), which leverages cosine similarity instead of classification probability to directly update the testing metrics for updating neural network parameters. P2SGrad is adaptive and hyper-parameter free, which makes the training process more efficient and faster. We evaluate our P2SGrad on three face recognition benchmarks, LFW, MegaFace, and IJB-C. The results show that P2SGrad is stable in training, robust to noise, and achieves state-of-the-art performance on all three benchmarks. | Simply using a Euclidean distance or margin is insufficient to maximize classification performance. To circumvent this difficulty, angular-margin-based softmax loss functions were proposed and became popular in face recognition. Angular constraints were added to the traditional softmax loss function to improve feature discriminativeness in L-softmax @cite_27 and A-softmax @cite_18 , where A-softmax applied weight normalization but L-softmax @cite_27 did not. CosFace @cite_28 , AM-softmax @cite_4 and ArcFace @cite_19 also embraced the idea of angular margins and employed simpler and more intuitive loss functions than the aforementioned methods. Normalization is applied to both features and weights in these methods. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_28",
"@cite_19",
"@cite_27"
],
"mid": [
"2963466847",
"2784163702",
"2786817236",
"2784874046",
""
],
"abstract": [
"This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.",
"In this letter, we propose a conceptually simple and intuitive learning objective function, i.e., additive margin softmax, for face verification. In general, face verification tasks can be viewed as metric learning problems, even though lots of face verification models are trained in classification schemes. It is possible when a large-margin strategy is introduced into the classification model to encourage intraclass variance minimization. As one alternative, angular softmax has been proposed to incorporate the margin. In this letter, we introduce another kind of margin to the softmax loss function, which is more intuitive and interpretable. Experiments on LFW and MegaFace show that our algorithm performs better when the evaluation criteria are designed for very low false alarm rate.",
"Face recognition has achieved revolutionary advancement owing to the advancement of the deep convolutional neural network (CNN). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, traditional softmax loss of deep CNN usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improvement algorithms share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we design a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as cosine loss by L2 normalizing both features and weight vectors to remove radial variation, based on which a cosine margin term is introduced to further maximize decision margin in angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. To test our approach, extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmark experiments, which confirms the effectiveness of our approach.",
"One of the main challenges in feature learning using Deep Convolutional Neural Networks (DCNNs) for large-scale face recognition is the design of appropriate loss functions that enhance discriminative power. Centre loss penalises the distance between the deep features and their corresponding class centres in the Euclidean space to achieve intra-class compactness. SphereFace assumes that the linear transformation matrix in the last fully connected layer can be used as a representation of the class centres in an angular space and penalises the angles between the deep features and their corresponding weights in a multiplicative way. Recently, a popular line of research is to incorporate margins in well-established loss functions in order to maximise face class separability. In this paper, we propose an Additive Angular Margin Loss (ArcFace) to obtain highly discriminative features for face recognition. The proposed ArcFace has a clear geometric interpretation due to the exact correspondence to the geodesic distance on the hypersphere. We present arguably the most extensive experimental evaluation of all the recent state-of-the-art face recognition methods on over 10 face recognition benchmarks including a new large-scale image database with trillion level of pairs and a large-scale video dataset. We show that ArcFace consistently outperforms the state-of-the-art and can be easily implemented with negligible computational overhead. We release all refined training data, training codes, pre-trained models and training logs, which will help reproduce the results in this paper.",
""
]
} |
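The CosFace and ArcFace abstracts above both describe additive margins applied to the target-class cosine logit: CosFace subtracts the margin from the cosine, s·(cos θ − m), while ArcFace adds it inside the angle, s·cos(θ + m). A hedged sketch of just the logit construction (the scale 30, the margin values, and the toy cosine matrix are illustrative assumptions):

```python
import numpy as np

def margin_logits(cosines, labels, s=30.0, m_cos=0.0, m_arc=0.0):
    """Additive-margin sketch: CosFace-style subtracts m from the target
    cosine, s*(cos(theta) - m); ArcFace-style adds m inside the angle,
    s*cos(theta + m). Non-target logits stay as plain scaled cosines."""
    out = cosines.astype(float).copy()
    idx = np.arange(len(labels))
    theta = np.arccos(np.clip(out[idx, labels], -1.0, 1.0))
    out[idx, labels] = np.cos(theta + m_arc) - m_cos
    return s * out

# One sample whose target-class cosine is 0.8: both margins must shrink the
# target logit (a harder training signal) while leaving other classes alone.
cos = np.array([[0.8, 0.1, -0.2]])
y = np.array([0])
plain = margin_logits(cos, y)
cosface = margin_logits(cos, y, m_cos=0.35)
arcface = margin_logits(cos, y, m_arc=0.5)
```

Shrinking only the ground-truth logit forces the network to push the target cosine higher than it otherwise would, which is how these losses enlarge the angular decision margin.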
1905.02419 | 2944730351 | Recently, average heart rate (HR) can be measured relatively accurately from human face videos based on non-contact remote photoplethysmography (rPPG). However, in many healthcare applications, knowing only the average HR is not enough: the measured blood volume pulse signal and its heart rate variability (HRV) features are also important. We propose the first end-to-end rPPG signal recovering system (PhysNet) using deep spatio-temporal convolutional networks to measure both HR and HRV features. PhysNet extracts the spatial and temporal hidden features simultaneously from raw face sequences while outputting the corresponding rPPG signal directly. The temporal context information helps the network learn more robust features with less fluctuation. Our approach was tested on two datasets, and achieved superior performance on HR and HRV features compared to the state-of-the-art methods. | In the past few years, several studies explored measuring HR remotely from face videos by analysing facial color changes. The authors of @cite_9 introduced independent component analysis to decompose the original RGB channel signals into independent non-Gaussian signals to measure HR. | {
"cite_N": [
"@cite_9"
],
"mid": [
"1984554603"
],
"abstract": [
"Remote measurements of the cardiac pulse can provide comfortable physiological assessment without electrodes. However, attempts so far are non-automated, susceptible to motion artifacts and typically expensive. In this paper, we introduce a new methodology that overcomes these problems. This novel approach can be applied to color video recordings of the human face and is based on automatic face tracking along with blind source separation of the color channels into independent components. Using Bland-Altman and correlation analysis, we compared the cardiac pulse rate extracted from videos recorded by a basic webcam to an FDA-approved finger blood volume pulse (BVP) sensor and achieved high accuracy and correlation even in the presence of movement artifacts. Furthermore, we applied this technique to perform heart rate measurements from three participants simultaneously. This is the first demonstration of a low-cost accurate video-based method for contact-free heart rate measurements that is automated, motion-tolerant and capable of performing concomitant measurements on more than one person at a time."
]
} |
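The cited approach decomposes the facial RGB traces into independent components and reads the heart rate from the most pulsatile one. The ICA step itself is omitted here; this hedged sketch shows only the subsequent frequency analysis, selecting whichever trace (an ICA component or a raw color channel) has the strongest spectral peak in a plausible HR band (the 0.7–4 Hz band, 30 fps rate, and synthetic signals are illustrative assumptions):

```python
import numpy as np

def estimate_hr_bpm(traces, fs, lo=0.7, hi=4.0):
    """Pick the trace with the strongest spectral peak inside the plausible
    heart-rate band and convert that peak frequency to beats per minute."""
    traces = np.atleast_2d(np.asarray(traces, dtype=float))
    traces = traces - traces.mean(axis=-1, keepdims=True)
    freqs = np.fft.rfftfreq(traces.shape[-1], d=1.0 / fs)
    power = np.abs(np.fft.rfft(traces, axis=-1)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    best = int(np.argmax(power[:, band].max(axis=-1)))  # most pulsatile trace
    peak_hz = freqs[band][np.argmax(power[best, band])]
    return 60.0 * peak_hz

# Synthetic example: a 72-bpm (1.2 Hz) pulse buried in noise at 30 fps.
fs = 30.0
t = np.arange(0, 20, 1.0 / fs)
rng = np.random.default_rng(1)
green = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.2 * rng.normal(size=t.size)
red = 0.2 * rng.normal(size=t.size)
bpm = estimate_hr_bpm(np.stack([red, green]), fs)
```

Restricting the peak search to a physiological band is what makes the component selection robust to low-frequency illumination drift and high-frequency sensor noise.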
1905.02419 | 2944730351 | Recently, average heart rate (HR) can be measured relatively accurately from human face videos based on non-contact remote photoplethysmography (rPPG). However, in many healthcare applications, knowing only the average HR is not enough: the measured blood volume pulse signal and its heart rate variability (HRV) features are also important. We propose the first end-to-end rPPG signal recovering system (PhysNet) using deep spatio-temporal convolutional networks to measure both HR and HRV features. PhysNet extracts the spatial and temporal hidden features simultaneously from raw face sequences while outputting the corresponding rPPG signal directly. The temporal context information helps the network learn more robust features with less fluctuation. Our approach was tested on two datasets, and achieved superior performance on HR and HRV features compared to the state-of-the-art methods. | After that, several methods based on region of interest (ROI) selection from the face have been studied. In @cite_13 , the authors first defined and tracked a particular facial ROI in every frame, then used a least-mean-square filter and non-rigid motion elimination to obtain a more robust vital signal. In @cite_17 , Lam and Kuno used multiple randomly sampled blocks from the already-defined ROI to form multiple smaller ROIs, and then used majority voting to make the final prediction. @cite_14 separated the warped face ROI into multiple blocks and used a matrix completion approach to yield a final filtered signal. | {
"cite_N": [
"@cite_14",
"@cite_13",
"@cite_17"
],
"mid": [
"2472200183",
"1986273245",
"2218803975"
],
"abstract": [
"Recent studies in computer vision have shown that, while practically invisible to a human observer, skin color changes due to blood flow can be captured on face videos and, surprisingly, be used to estimate the heart rate (HR). While considerable progress has been made in the last few years, still many issues remain open. In particular, state of-the-art approaches are not robust enough to operate in natural conditions (e.g. in case of spontaneous movements, facial expressions, or illumination changes). Opposite to previous approaches that estimate the HR by processing all the skin pixels inside a fixed region of interest, we introduce a strategy to dynamically select face regions useful for robust HR estimation. Our approach, inspired by recent advances on matrix completion theory, allows us to predict the HR while simultaneously discover the best regions of the face to be used for estimation. Thorough experimental evaluation conducted on public benchmarks suggests that the proposed approach significantly outperforms state-of the-art HR estimation methods in naturalistic conditions.",
"Heart rate is an important indicator of people's physiological state. Recently, several papers reported methods to measure heart rate remotely from face videos. Those methods work well on stationary subjects under well controlled conditions, but their performance significantly degrades if the videos are recorded under more challenging conditions, specifically when subjects' motions and illumination variations are involved. We propose a framework which utilizes face tracking and Normalized Least Mean Square adaptive filtering methods to counter their influences. We test our framework on a large difficult and public database MAHNOB-HCI and demonstrate that our method substantially outperforms all previous methods. We also use our method for long term heart rate monitoring in a game evaluation scenario and achieve promising results.",
"The ability to remotely measure heart rate from videos without requiring any special setup is beneficial to many applications. In recent years, a number of papers on heart rate (HR) measurement from videos have been proposed. However, these methods typically require the human subject to be stationary and for the illumination to be controlled. For methods that do take into account motion and illumination changes, strong assumptions are still made about the environment (e.g. background can be used for illumination rectification). In this paper, we propose an HR measurement method that is robust to motion, illumination changes, and does not require use of an environment's background. We present conditions under which cardiac activity extraction from local regions of the face can be treated as a linear Blind Source Separation problem and propose a simple but robust algorithm for selecting good local regions. The independent HR estimates from multiple local regions are then combined in a majority voting scheme that robustly recovers the HR. We validate our algorithm on a large database of challenging videos."
]
} |
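The multi-ROI scheme described above produces one HR estimate per facial block and combines them by majority voting, so that a few motion-corrupted blocks cannot drag the final estimate away. A hedged sketch of one plausible voting rule (the 2-bpm bin width, the averaging of the winning bin, and the toy numbers are illustrative assumptions, not the cited paper's exact procedure):

```python
import numpy as np

def majority_vote_hr(block_estimates, bin_width=2.0):
    """Combine noisy per-block HR estimates by histogram majority voting:
    the most populated HR bin wins and its members are averaged."""
    est = np.asarray(block_estimates, dtype=float)
    bins = np.floor(est / bin_width).astype(int)
    vals, counts = np.unique(bins, return_counts=True)
    winner = vals[np.argmax(counts)]
    return float(est[bins == winner].mean())

# Eight facial blocks: six agree near 71 bpm, two are corrupted outliers.
hr = majority_vote_hr([70.4, 71.1, 70.9, 71.6, 70.2, 71.3, 95.0, 48.0])
```

Unlike a plain mean, which the two outliers would bias by several bpm, the vote discards them entirely.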
1905.02419 | 2944730351 | Recently, average heart rate (HR) can be measured relatively accurately from human face videos based on non-contact remote photoplethysmography (rPPG). However, in many healthcare applications, knowing only the average HR is not enough: the measured blood volume pulse signal and its heart rate variability (HRV) features are also important. We propose the first end-to-end rPPG signal recovering system (PhysNet) using deep spatio-temporal convolutional networks to measure both HR and HRV features. PhysNet extracts the spatial and temporal hidden features simultaneously from raw face sequences while outputting the corresponding rPPG signal directly. The temporal context information helps the network learn more robust features with less fluctuation. Our approach was tested on two datasets, and achieved superior performance on HR and HRV features compared to the state-of-the-art methods. | Most of the mentioned studies only focused on average HR measurement. The HR counts the total number of heartbeats in a given time period, which is too coarse to describe cardiac activity. On the other hand, HRV features describe heart activity on a much finer scale; they are computed by analysing the IBI of pulse signals. The most common HRV features include low frequency (LF), high frequency (HF), and their ratio LF/HF, which are widely used in medical applications. Besides, the respiratory frequency (RF) can also be estimated by analyzing the frequency power of IBIs, as in @cite_3 and @cite_4 . Clearly, compared with the task of estimating the average HR (only one number), measuring HRV features is more challenging, since it requires accurately measuring the time location of each individual pulse peak. For the needs of most healthcare applications, the average HR alone is far from enough; we need to develop methods that can measure heart activity at the HRV level. | {
"cite_N": [
"@cite_4",
"@cite_3"
],
"mid": [
"2805424946",
"2008821584"
],
"abstract": [
"Physiological signals, including heart rate (HR), heart rate variability (HRV), and respiratory frequency (RF) are important indicators of our health, which are usually measured in clinical examinations. Traditional physiological signal measurement often involves contact sensors, which may be inconvenient or cause discomfort in long-term monitoring sessions. Recently, there were studies exploring remote HR measurement from facial videos, and several methods have been proposed. However, previous methods cannot be fairly compared, since they mostly used private, self-collected small datasets as there has been no public benchmark database for the evaluation. Besides, we haven't found any study that validates such methods for clinical applications yet, e.g., diagnosing cardiac arrhythmias disease, which could be one major goal of this technology. In this paper, we introduce the Oulu Bio-Face (OBF) database as a benchmark set to fill in the blank. The OBF database includes large number of facial videos with simultaneously recorded reference physiological signals. The data were recorded both from healthy subjects and from patients with atrial fibrillation (AF), which is the most common sustained and widespread cardiac arrhythmia encountered in clinical practice. Accuracy of HR, HRV and RF measured from OBF videos are provided as the baseline results for future evaluation. We also demonstrated that the video-extracted HRV features can achieve promising performance for AF detection, which has never been studied before. From a wider outlook, the remote technology may lead to convenient self-examination in mobile condition for earlier diagnosis of the arrhythmia.",
"We present a simple, low-cost method for measuring multiple physiological parameters using a basic webcam. By applying independent component analysis on the color channels in video recordings, we extracted the blood volume pulse from the facial regions. Heart rate (HR), respiratory rate, and HR variability (HRV, an index for cardiac autonomic activity) were subsequently quantified and compared to corresponding measurements using Food and Drug Administration-approved sensors. High degrees of agreement were achieved between the measurements across all physiological parameters. This technology has significant potential for advancing personal health care and telemedicine."
]
} |
1905.02319 | 2944459812 | This paper proposes a novel 4D Facial Expression Recognition (FER) method using Collaborative Cross-domain Dynamic Image Network (CCDN). Given a 4D data of face scans, we first compute its geometrical images, and then combine their correlated information in the proposed cross-domain image representations. The acquired set is then used to generate cross-domain dynamic images (CDI) via rank pooling that encapsulates facial deformations over time in terms of a single image. For the training phase, these CDIs are fed into an end-to-end deep learning model, and the resultant predictions collaborate over multi-views for performance gain in expression classification. Furthermore, we propose a 4D augmentation scheme that not only expands the training data scale but also introduces significant facial muscle movement patterns to improve the FER performance. Results from extensive experiments on the commonly used BU-4DFE dataset under widely adopted settings show that our proposed method outperforms the state-of-the-art 4D FER methods by achieving an accuracy of 96.5%, indicating its effectiveness. | As opposed to 3D FER, which does not contain the temporal information over geometrical domains @cite_18 @cite_7 @cite_21 @cite_1 , 4D facial data allows capturing in-depth knowledge about the facial deformation patterns encoding a specific facial expression. Sun et al. @cite_30 worked out a way to generate correspondence between the 3D face scans over time. Driven by these correspondences, they proposed the idea of using spatio-temporal Hidden Markov Models (HMM) that capture the facial muscle movements by analyzing both inter-frame and intra-frame variations. Similarly, Yin et al. @cite_13 utilized a 2D HMM to learn the facial deformations in the temporal domain for expression classification. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_7",
"@cite_21",
"@cite_1",
"@cite_13"
],
"mid": [
"2123937777",
"2018776244",
"2102539767",
"",
"1040410175",
""
],
"abstract": [
"Research in the areas of 3-D face recognition and 3-D facial expression analysis has intensified in recent years. However, most research has been focused on 3-D static data analysis. In this paper, we investigate the facial analysis problem using dynamic 3-D face model sequences. One of the major obstacles for analyzing such data is the lack of correspondences of features due to the variable number of vertices across individual models or 3-D model sequences. In this paper, we present an effective approach for establishing vertex correspondences using a tracking-model-based approach for vertex registration, coarse-to-fine model adaptation, and vertex motion trajectory (called vertex flow) estimation. We propose to establish correspondences across frame models based on a 2-D intermediary, which is generated using conformal mapping and a generic model adaptation algorithm. Based on our newly created 3-D dynamic face database, we also propose to use a spatiotemporal hidden Markov model (ST-HMM) that incorporates 3-D surface feature characterization to learn the spatial and temporal information of faces. The advantage of using 3-D dynamic data for face recognition has been evaluated by comparing our approach to three conventional approaches: 2-D-video-based temporal HMM model, conventional 2-D-texture-based approach (e.g., Gabor-wavelet-based approach), and static 3-D-model-based approaches. To further evaluate the usefulness of vertex flow and the adapted model, we have also applied a spatial-temporal face model descriptor for facial expression classification based on dynamic 3-D model sequences.",
"Automatic facial expression recognition on 3D face data is still a challenging problem. In this paper we propose a novel approach to perform expression recognition automatically and flexibly by combining a Bayesian Belief Net (BBN) and Statistical facial feature models (SFAM). A novel BBN is designed for the specific problem with our proposed parameter computing method. By learning global variations in face landmark configuration (morphology) and local ones in terms of texture and shape around landmarks, morphable Statistic Facial feature Model (SFAM) allows not only to perform an automatic landmarking but also to compute the belief to feed the BBN. Tested on the public 3D face expression database BU-3DFE, our automatic approach allows to recognize expressions successfully, reaching an average recognition rate over 82%.",
"In this paper, we propose a fully automatic approach for person-independent 3D facial expression recognition. In order to extract discriminative expression features, each aligned 3D facial surface is compactly represented as multiple global histograms of local normal patterns from multiple normal components and multiple binary encoding scales, namely Multi-Scale Local Normal Patterns (MS-LNPs). 3D facial expression recognition is finally carried out by modeling multiple kernel learning (MKL) to efficiently embed and combine these histogram based features. By using the SimpleMKL algorithm with the chi-square kernel, we achieved an average recognition rate of 80.14% based on a fair experimental setup. To the best of our knowledge, our method outperforms most of the state-of-the-art ones.",
"",
"We propose a feature-based 2D+3D multimodal facial expression recognition method. It is fully automatic, benefiting from a large set of automatically detected landmarks. The complementarities between 2D and 3D features are comprehensively demonstrated. Our method achieves the best accuracy on the BU-3DFE database so far. A good generalization ability is shown on the Bosphorus database. We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images along with their 3D face scans are localized using a novel algorithm namely incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor in conjunction with the widely used first-order gradient based SIFT descriptor are used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using the first-order and the second-order surface differential geometry quantities, i.e., Histogram of mesh Gradients (meshHOG) and Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature-level and score-level to further improve the accuracy. Comprehensive experimental results demonstrate that there exist impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to the state-of-the-art ones. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32%. Moreover, a good generalization ability is shown on the Bosphorus database.",
""
]
} |
1905.02319 | 2944459812 | This paper proposes a novel 4D Facial Expression Recognition (FER) method using Collaborative Cross-domain Dynamic Image Network (CCDN). Given a 4D data of face scans, we first compute its geometrical images, and then combine their correlated information in the proposed cross-domain image representations. The acquired set is then used to generate cross-domain dynamic images (CDI) via rank pooling that encapsulates facial deformations over time in terms of a single image. For the training phase, these CDIs are fed into an end-to-end deep learning model, and the resultant predictions collaborate over multi-views for performance gain in expression classification. Furthermore, we propose a 4D augmentation scheme that not only expands the training data scale but also introduces significant facial muscle movement patterns to improve the FER performance. Results from extensive experiments on the commonly used BU-4DFE dataset under widely adopted settings show that our proposed method outperforms the state-of-the-art 4D FER methods by achieving an accuracy of 96.5%, indicating its effectiveness. | To classify FEs using Support Vector Machine (SVM) with a Radial Basis Function kernel, Fang et al. @cite_16 extracted two types of feature vectors represented as geometrical coordinates and its normal. In another of their works @cite_2 , they first exploited MeshHOG to calibrate the given face meshes. Afterwards, the dynamic Local Binary Patterns (LBP) were applied to capture deformations over time, followed by SVM for FER. Likewise, the authors of @cite_25 proposed a spatio-temporal feature that uses LBP to extract information encapsulated in the histogram of different facial regions as polar angles and curvatures. | {
"cite_N": [
"@cite_16",
"@cite_25",
"@cite_2"
],
"mid": [
"2011556862",
"2076617402",
""
],
"abstract": [
"Facial expression analysis has interested many researchers in the past decade due to its potential applications in various fields such as human-computer interaction, psychological studies, and facial animation. Three-dimensional facial data has been proven to be insensitive to illumination condition and head pose, and has hence gathered attention in recent years. In this paper, we focus on discrete expression classification using 3D data from the human face. The paper is divided in two parts. In the first part, we present improvement to the fitting of the Annotated Face Model (AFM) so that a dense point correspondence can be found in terms of both position and semantics among static 3D face scans or frames in 3D face sequences. Then, an expression recognition framework on static 3D images is presented. It is based on a Point Distribution Model (PDM) which can be built on different features. In the second part of this article, a systematic pipeline that operates on dynamic 3D sequences (4D datasets or 3D videos) is proposed and alternative modules are investigated as a comparative study. We evaluated both 3D and 4D Facial Expression Recognition pipelines on two publicly available facial expression databases and obtained promising results.",
"In this paper, we propose a new, compact, 4D spatio-temporal “Nebula” feature to improve expression and facial movement analysis performance. Given a spatio-temporal volume, the data is voxelized and fit to a cubic polynomial. A label is assigned based on the principal curvature values, and the polar angles of the direction of least curvature are computed. The labels and angles for each feature are used to build a histogram for each region of the face. The concatenated histograms from each region give us our final feature vector. This feature description is tested on the posed expression database BU-4DFE and on a new 4D spontaneous expression database. Various region configurations, histogram sizes, and feature parameters are tested, including a non-dynamic version of the approach. The LBP-TOP approach on the texture image as well as on the depth image is also tested for comparison. The onsets of the six canonical expressions are classified for 100 subjects in BU-4DFE, while the onset, offset, and non-existence of 12 Action Units (AUs) are classified for 16 subjects from our new spontaneous database. For posed expression recognition, the Nebula feature approach shows improvement over LBP-TOP on the depth images and significant improvement over the non-dynamic 3D-only approach. Moreover, the Nebula feature performs better for AU classification than the compared approaches for 11 of the AUs tested in terms of accuracy as well as Area Under Receiver Operating Characteristic Curve (AUC).",
""
]
} |
1905.02319 | 2944459812 | This paper proposes a novel 4D Facial Expression Recognition (FER) method using Collaborative Cross-domain Dynamic Image Network (CCDN). Given a 4D data of face scans, we first compute its geometrical images, and then combine their correlated information in the proposed cross-domain image representations. The acquired set is then used to generate cross-domain dynamic images (CDI) via rank pooling that encapsulates facial deformations over time in terms of a single image. For the training phase, these CDIs are fed into an end-to-end deep learning model, and the resultant predictions collaborate over multi-views for performance gain in expression classification. Furthermore, we propose a 4D augmentation scheme that not only expands the training data scale but also introduces significant facial muscle movement patterns to improve the FER performance. Results from extensive experiments on the commonly used BU-4DFE dataset under widely adopted settings show that our proposed method outperforms the state-of-the-art 4D FER methods by achieving an accuracy of 96.5%, indicating its effectiveness. | Yao et al. @cite_3 applied the scattering operator @cite_4 over 4D data, producing geometrical and textual scattering representations. Multiple Kernel Learning (MKL) was then applied to learn from this information for FER. Authors in @cite_31 presented a statistical shape model with global and local constraints in an attempt to recognize FEs. They claimed that the combination of global face shape and local shape index-based information can be handy for FER. Li et al. @cite_17 introduced a Dynamic Geometrical Image Network (DGIN) for automatically recognizing expressions. Given 4D data, the differential geometry quantities are estimated first, followed by generating geometrical images. These images are then fed into the DGIN for an end-to-end training. The prediction results are based on fusing the predicted scores of different types of geometrical images. | {
"cite_N": [
"@cite_31",
"@cite_4",
"@cite_3",
"@cite_17"
],
"mid": [
"2891410536",
"2072072671",
"2794377003",
"1603155784"
],
"abstract": [
"In this paper, we propose a novel method for 3D facial expression recognition based on a statistical shape model with global and local constraints. We show that the combination of the global shape of the face, along with local shape index-based information can be used to recognize a range of expressions. These expressions include happiness, sadness, surprise, embarrassment, fear, nervousness, anger, disgust, and pain. We give insights into which features are important for facial expression recognition through statistical analysis. We also show that our proposed method outperforms the current state-of-the-art methods on spontaneous and non-spontaneous facial data.",
"A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.",
"Facial Expression Recognition (FER) is one of the most important topics in the domain of computer vision and pattern recognition, and it has attracted increasing attention for its scientific challenges and application potentials. In this article, we propose a novel and effective approach to FER using multi-model two-dimensional (2D) and 3D videos, which encodes both static and dynamic clues by scattering convolution network. First, a shape-based detection method is introduced to locate the start and the end of an expression in videos; segment its onset, apex, and offset states; and sample the important frames for emotion analysis. Second, the frames in Apex of 2D videos are represented by scattering, conveying static texture details. Those of 3D videos are processed in a similar way, but to highlight static shape details, several geometric maps in terms of multiple order differential quantities, i.e., Normal Maps and Shape Index Maps, are generated as the input of scattering, instead of original smooth facial surfaces. Third, the average of neighboring samples centred at each key texture frame or shape map in Onset is computed, and the scattering features extracted from all the average samples of 2D and 3D videos are then concatenated to capture dynamic texture and shape cues, respectively. Finally, Multiple Kernel Learning is adopted to combine the features in the 2D and 3D modalities and compute similarities to predict the expression label. Thanks to the scattering descriptor, the proposed approach not only encodes distinct local texture and shape variations of different expressions as by several milestone operators, such as SIFT, HOG, and so on, but also captures subtle information hidden in high frequencies in both channels, which is quite crucial to better distinguish expressions that are easily confused. The validation is conducted on the BU-4DFE and BP-4D databases, and the accuracies reached are very competitive, indicating its competency for this issue.",
"Facial Expression Recognition (FER) is one of the most active topics in the domain of computer vision and pattern recognition, and it has received increasing attention for its wide application potentials as well as attractive scientific challenges. In this paper, we present a novel method to automatic 3D FER based on geometric scattering representation. A set of maps of shape features in terms of multiple order differential quantities, i.e. the Normal Maps (NOM) and the Shape Index Maps (SIM), are first jointly adopted to comprehensively describe geometry attributes of the facial surface. The scattering operator is then introduced to further highlight expression related cues on these maps, thereby constructing geometric scattering representations of 3D faces for classification. The scattering descriptor not only encodes distinct local shape changes of various expressions as by several milestone descriptors, such as SIFT, HOG, etc., but also captures subtle information hidden in high frequencies, which is quite crucial to better distinguish expressions that are easily confused. We evaluate the proposed approach on the BU-3DFE database, and the performance is up to 84.8% and 82.7% with two commonly used protocols respectively which is superior to the state of the art ones."
]
} |
1905.02540 | 2947883281 | We focus on the word-level visual lipreading, which requires recognizing the word being spoken, given only the video but not the audio. State-of-the-art methods explore the use of end-to-end neural networks, including a shallow (up to three layers) 3D convolutional neural network (CNN) + a deep 2D CNN (e.g., ResNet) as the front-end to extract visual features, and a recurrent neural network (e.g., bidirectional LSTM) as the back-end for classification. In this work, we propose to replace the shallow 3D CNNs + deep 2D CNNs front-end with recent successful deep 3D CNNs --- two-stream (i.e., grayscale video and optical flow streams) I3D. We evaluate different combinations of front-end and back-end modules with the grayscale video and optical flow inputs on the LRW dataset. The experiments show that, compared to the shallow 3D CNNs + deep 2D CNNs front-end, the deep 3D CNNs front-end with pre-training on the large-scale image and video datasets (e.g., ImageNet and Kinetics) can improve the classification accuracy. On the other hand, we demonstrate that using the optical flow input alone can achieve comparable performance as using the grayscale video as input. Moreover, the two-stream network using both the grayscale video and optical flow inputs can further improve the performance. Overall, our two-stream I3D front-end with a Bi-LSTM back-end results in an absolute improvement of 5.3% over the previous art. | Existing methods to approach word-level lipreading by using visual information alone can be mostly split into two categories. The first category method, shown in the Figure (left), is mainly composed of two separate stages involving the feature extraction from the mouth region and classification using the sequence model, in which the processes in the two stages are independent. Variants in this category differ in having a different data pre-processing procedure or using different feature extractors and classifiers.
@cite_5 proposes a DCT coefficient selection procedure, which prefers the higher-order vertical components of the DCT coefficients, to achieve better performance when the variety of the head pose is large in the dataset. @cite_43 proposes a new cascaded feature extraction process including not only the DCT features but also a linear discriminant data projection and a maximum likelihood-based data rotation. @cite_55 proposes a PCA-based method to reduce the dimension of the DCT-based features. | {
"cite_N": [
"@cite_5",
"@cite_55",
"@cite_43"
],
"mid": [
"1982883756",
"2104263160",
"1604074770"
],
"abstract": [
"For the first time in this paper we present results showing the effect of speaker head pose angle on automatic lip-reading performance over a wide range of closely spaced angles. We analyse the effect head pose has upon the features themselves and show that by selecting coefficients with minimum variance w.r.t. pose angle, recognition performance can be improved when train-test pose angles differ. Experiments are conducted using the initial phase of a unique multi view Audio-Visual database designed specifically for research and development of pose-invariant lip-reading systems. We firstly show that it is the higher order horizontal spatial frequency components that become most detrimental as the pose deviates. Secondly we assess the performance of different feature selection masks across a range of pose angles including a new mask based on Minimum Cross-Pose Variance coefficients. We report a relative improvement of 50% in Word Error Rate when using our selection mask over a common energy based selection during profile view lip-reading.",
"The integration of audio and visual information improves speech recognition performance, specially in the presence of noise. In these circumstances it is necessary to introduce audio and visual weights to control the contribution of each modality to the recognition task. We present a method to set the value of the weights associated to each stream according to their reliability for speech recognition, allowing them to change with time and adapt to different noise and working conditions. Our dynamic weights are derived from several measures of the stream reliability, some specific to speech processing and others inherent to any classification task, and take into account the special role of silence detection in the definition of audio and visual weights. In this paper, we propose a new confidence measure, compare it to existing ones, and point out the importance of the correct detection of silence utterances in the definition of the weighting system. Experimental results support our main contribution: the inclusion of a voice activity detector in the weighting scheme improves speech recognition over different system architectures and confidence measures, leading to an increase in performance more relevant than any difference between the proposed confidence measures.",
"We propose a three-stage pixel-based visual front end for automatic speechreading (lipreading) that results in significantly improved recognition performance of spoken words or phonemes. The proposed algorithm is a cascade of three transforms applied on a three-dimensional video region-of-interest that contains the speaker's mouth area. The first stage is a typical image compression transform that achieves a high-energy, reduced-dimensionality representation of the video data. The second stage is a linear discriminant analysis-based data projection, which is applied on a concatenation of a small amount of consecutive image transformed video data. The third stage is a data rotation by means of a maximum likelihood linear transform that optimizes the likelihood of the observed data under the assumption of their class-conditional multivariate normal distribution with diagonal covariance. We applied the algorithm to visual-only 52-class phonetic and 27-class visemic classification on a 162-subject, 8-hour long, large vocabulary, continuous speech audio-visual database. We demonstrated significant classification accuracy gains by each added stage of the proposed algorithm which, when combined, can achieve up to 27% improvement. Overall, we achieved a 60% (49%) visual-only frame-level visemic classification accuracy with (without) use of test set viseme boundaries. In addition, we report improved audio-visual phonetic classification over the use of a single-stage image transform visual front end. Finally, we discuss preliminary speech recognition results."
]
} |
1905.02417 | 2945927023 | Generative Adversarial Networks (GANs) are a powerful class of generative models. Despite their successes, the most appropriate choice of a GAN network architecture is still not well understood. GAN models for image synthesis have adopted a deep convolutional network architecture, which eliminates or minimizes the use of fully connected and pooling layers in favor of convolution layers in the generator and discriminator of GANs. In this paper, we demonstrate that a convolution network architecture utilizing deep fully connected layers and pooling layers can be more effective than the traditional convolution-only architecture, and we propose FCC-GAN, a fully connected and convolutional GAN architecture. Models based on our FCC-GAN architecture learn both faster than the conventional architecture and also generate higher quality of samples. We demonstrate the effectiveness and stability of our approach across four popular image datasets. | GANs were first formulated in @cite_5 , which demonstrated their potential as a generative model. GANs became popular for image synthesis based on successful use of deep convolution layers @cite_20 @cite_3 . The generator in the standard model maps a single noise vector to an output distribution @cite_5 @cite_20 . Additional information in the form of latent codes can be combined with the noise vector to control output attributes and improve the sample quality @cite_15 @cite_27 @cite_22 @cite_16 . For example, class labels combined with noise vectors as input to the generator can generate supervised data @cite_15 . Side-information such as image captions and bounding box localization can also be combined with class information to improve the quality of images @cite_6 @cite_13 . Maximizing mutual information between the input latent variables and the GAN outputs can produce a disentangled and interpretable representation @cite_22 . 
In semi-supervised GAN models, the discriminator is trained to predict the correct label for each real sample, in addition to discriminating real and fake data @cite_27 @cite_19 @cite_1 . Such models produce better image quality than their unsupervised counterparts @cite_19 @cite_27 . | {
"cite_N": [
"@cite_22",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_20"
],
"mid": [
"2434741482",
"2412510955",
"2951523806",
"2423557781",
"2432004435",
"2950776302",
"2099471712",
"2125389028",
"2178768799",
"2530372461",
"2173520492"
],
"abstract": [
"This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.",
"We extend Generative Adversarial Networks (GANs) to the semi-supervised context by forcing the discriminator network to output class labels. We train a generative model G and a discriminator D on a dataset with inputs belonging to one of N classes. At training time, D is made to predict which of N+1 classes the input belongs to, where an extra class is added to correspond to the outputs of G. We show that this method can be used to create a more data-efficient classifier and that it allows for generating higher quality samples than a regular GAN.",
"In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (). Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.",
"This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.",
"We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.",
"Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model. The resulting algorithm can either be interpreted as a natural generalization of the generative adversarial networks (GAN) framework or as an extension of the regularized information maximization (RIM) framework to robust classification against an optimal adversary. We empirically evaluate our method - which we dub categorical generative adversarial networks (or CatGAN) - on synthetic data as well as on challenging image classification tasks, demonstrating the robustness of the learned classifiers. We further qualitatively assess the fidelity of samples generated by the adversarial generator that is learned alongside the discriminative classifier, and identify links between the CatGAN objective and discriminative clustering algorithms (such as RIM).",
"Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers. While existing models can synthesize images based on global constraints such as a class label or caption, they do not provide control over pose or object location. We propose a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location. We show high-quality 128 × 128 image synthesis on the Caltech-UCSD Birds dataset, conditioned on both informal text descriptions and also object location. Our system exposes control over both the bounding box around the bird and its constituent parts. By modeling the conditional distributions over part locations, our system also enables conditioning on arbitrary subsets of parts (e.g. only the beak and tail), yielding an efficient interface for picking part locations.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations."
]
} |
1905.02417 | 2945927023 | Generative Adversarial Networks (GANs) are a powerful class of generative models. Despite their successes, the most appropriate choice of a GAN network architecture is still not well understood. GAN models for image synthesis have adopted a deep convolutional network architecture, which eliminates or minimizes the use of fully connected and pooling layers in favor of convolution layers in the generator and discriminator of GANs. In this paper, we demonstrate that a convolution network architecture utilizing deep fully connected layers and pooling layers can be more effective than the traditional convolution-only architecture, and we propose FCC-GAN, a fully connected and convolutional GAN architecture. Models based on our FCC-GAN architecture learn both faster than the conventional architecture and also generate higher quality of samples. We demonstrate the effectiveness and stability of our approach across four popular image datasets. | Optimizing the standard GAN objective function was shown to be equivalent to minimizing the Jensen-Shannon (JS) divergence @cite_5 between the real distribution and the generator's distribution. Moreover, the traditional optimization process of GANs is a special case of more general variational divergence estimation @cite_12 . Minimizing the JS divergence of two distributions with non-overlapping support can result in vanishing gradients, hurting the training of GANs @cite_9 . A loss function based on the Wasserstein distance (WGAN) was introduced to improve stability @cite_9 . WGAN training was further improved by penalizing the gradients in the loss function @cite_28 and adding a consistency term @cite_24 . GANs can also be trained using auto-encoder-based discriminators @cite_14 , various types of f-divergences @cite_0 , and other objective functions such as Loss-Sensitive GAN @cite_18 and Least Squares GAN @cite_23 .
Weight normalization in the discriminator and large batch sizes have been shown to be very useful in high-resolution image synthesis. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_28",
"@cite_9",
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_5",
"@cite_12"
],
"mid": [
"2580360036",
"2521028896",
"2962879692",
"",
"2790871512",
"2963800509",
"2593414223",
"2099471712",
"2166944917"
],
"abstract": [
"In this paper, we present the Lipschitz regularization theory and algorithms for a novel Loss-Sensitive Generative Adversarial Network (LS-GAN). Specifically, it trains a loss function to distinguish between real and fake samples by designated margins, while learning a generator alternately to produce realistic samples by minimizing their losses. The LS-GAN further regularizes its loss function with a Lipschitz regularity condition on the density of real data, yielding a regularized model that can better generalize to produce new data from a reasonable number of training examples than the classic GAN. We will further present a Generalized LS-GAN (GLS-GAN) and show it contains a large family of regularized GAN models, including both LS-GAN and Wasserstein GAN, as its special cases. Compared with the other GAN models, we will conduct experiments to show both LS-GAN and GLS-GAN exhibit competitive ability in generating new images in terms of the Minimum Reconstruction Error (MRE) assessed on a separate test set. We further extend the LS-GAN to a conditional form for supervised and semi-supervised learning problems, and demonstrate its outstanding performance on image classification tasks.",
"We introduce the \"Energy-based Generative Adversarial Network\" model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions. Similar to the probabilistic GANs, a generator is seen as being trained to produce contrastive samples with minimal energies, while the discriminator is trained to assign high energies to these generated samples. Viewing the discriminator as an energy function allows to use a wide variety of architectures and loss functionals in addition to the usual binary classifier with logistic output. Among them, we show one instantiation of EBGAN framework as using an auto-encoder architecture, with the energy being the reconstruction error, in place of the discriminator. We show that this form of EBGAN exhibits more stable behavior than regular GANs during training. We also show that a single-scale architecture can be trained to generate high-resolution images.",
"Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.",
"",
"Despite being impactful on a variety of problems and applications, the generative adversarial nets (GANs) are remarkably difficult to train. This issue is formally analyzed by arjovsky2017towards , who also propose an alternative direction to avoid the caveats in the minmax two-player training of GANs. The corresponding algorithm, called Wasserstein GAN (WGAN), hinges on the 1-Lipschitz continuity of the discriminator. In this paper, we propose a novel approach to enforcing the Lipschitz continuity in the training procedure of WGANs. Our approach seamlessly connects WGAN with one of the recent semi-supervised learning methods. As a result, it gives rise to not only better photo-realistic samples than the previous methods but also state-of-the-art semi-supervised learning results. In particular, our approach gives rise to the inception score of more than 5.0 with only 1,000 CIFAR-10 images and is the first that exceeds the accuracy of 90% on the CIFAR-10 dataset using only 4,000 labeled images, to the best of our knowledge.",
"Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generative-adversarial training method allows to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing more general variational divergence estimation approach. We show that any f-divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models.",
"Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson X2 divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We evaluate LSGANs on LSUN and CIFAR-10 datasets and the experimental results show that the images generated by LSGANs are of better quality than the ones generated by regular GANs. We also conduct two comparison experiments between LSGANs and regular GANs to illustrate the stability of LSGANs.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"We develop and analyze M-estimation methods for divergence functionals and the likelihood ratios of two probability distributions. Our method is based on a nonasymptotic variational characterization of f -divergences, which allows the problem of estimating divergences to be tackled via convex empirical risk optimization. The resulting estimators are simple to implement, requiring only the solution of standard convex programs. We present an analysis of consistency and convergence for these estimators. Given conditions only on the ratios of densities, we show that our estimators can achieve optimal minimax rates for the likelihood ratio and the divergence functionals in certain regimes. We derive an efficient optimization algorithm for computing our estimates, and illustrate their convergence behavior and practical viability by simulations."
]
} |
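The vanishing-gradient argument in the related-work text above can be illustrated numerically: for two distributions with disjoint support, the JS divergence saturates at log 2 no matter how far apart they are, so it carries no gradient signal toward the data, while the Wasserstein-1 distance still varies smoothly with the separation. Below is a minimal sketch on discrete 1-D distributions; it is illustrative only and not taken from any of the cited papers.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def wasserstein1(p, q):
    """W1 distance on a 1-D grid equals the L1 norm of the CDF difference."""
    return np.sum(np.abs(np.cumsum(p) - np.cumsum(q)))

grid = 10
real = np.zeros(grid); real[0] = 1.0           # all real mass at position 0
for shift in (1, 5, 9):                        # generator mass at `shift`
    fake = np.zeros(grid); fake[shift] = 1.0
    print(shift, js_divergence(real, fake), wasserstein1(real, fake))
# JS stays at log(2) ~ 0.693 for every shift (no signal about direction),
# while W1 grows linearly with the distance between the supports.
```

This is exactly the failure mode that motivates replacing the JS-based objective with a Wasserstein-based one.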
1905.02373 | 2949268757 | Bundle adjustment (BA) is a fundamental optimization technique used in many crucial applications, including 3D scene reconstruction, robotic localization, camera calibration, autonomous driving, space exploration, street view map generation etc. Essentially, BA is a joint non-linear optimization problem, and one which can consume a significant amount of time and power, especially for large optimization problems. Previous approaches of optimizing BA performance heavily rely on parallel processing or distributed computing, which trade higher power consumption for higher performance. In this paper we propose -BA, the first hardware-software co-designed BA engine on an embedded FPGA-SoC that exploits custom hardware for higher performance and power efficiency. Specifically, based on our key observation that not all points appear on all images in a BA problem, we designed and implemented a Co-Observation Optimization technique to accelerate BA operations with optimized usage of memory and computation resources. Experimental results confirm that -BA outperforms the existing software implementations in terms of performance and power consumption. | Parallel processing on multicore CPUs or GPUs can be applied to optimize BA performance. Wu presented multicore solutions to the problem of bundle adjustment that run on currently available CPUs and GPUs @cite_1 . The authors concluded that using multicore systems delivers a 10x to 30x speed boost over existing systems while reducing the amount of memory used. This was achieved by carefully restructuring the matrix-vector product used in the PCG iterations into easily parallelizable operations. This restructuring also opens the door to a matrix-free implementation, which leads to substantial reductions in memory consumption as well as execution time.
The authors also showed that single-precision arithmetic, when combined with appropriate normalization, gives numerical performance comparable to double-precision solvers while further reducing memory and time costs. The resulting system enabled running the largest bundle adjustment problems to date on a single GPU. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2001790138"
],
"abstract": [
"We present the design and implementation of new inexact Newton type Bundle Adjustment algorithms that exploit hardware parallelism for efficiently solving large scale 3D scene reconstruction problems. We explore the use of multicore CPU as well as multicore GPUs for this purpose. We show that overcoming the severe memory and bandwidth limitations of current generation GPUs not only leads to more space efficient algorithms, but also to surprising savings in runtime. Our CPU based system is up to ten times and our GPU based system is up to thirty times faster than the current state of the art methods [1], while maintaining comparable convergence behavior. The code and additional results are available at http://grail.cs.washington.edu/projects/mcba."
]
} |
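The "matrix-free" restructuring described in the record above boils down to this: inside preconditioned conjugate gradients (PCG), the system matrix is never stored; only its product with a vector is evaluated on the fly. The sketch below shows a generic matrix-free PCG in which a Gauss-Newton system A = JᵀJ is accessed only through two Jacobian products. This is a toy illustration under my own naming, not the cited implementation.

```python
import numpy as np

def pcg(matvec, b, precond, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients where the system matrix is
    available only through `matvec` (a matrix-free product), mirroring
    the memory-saving restructuring used in multicore BA solvers."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system: A = J^T J is built implicitly from a tall Jacobian J,
# so A itself is never formed or stored.
rng = np.random.default_rng(0)
J = rng.standard_normal((50, 8))
b = rng.standard_normal(8)
matvec = lambda v: J.T @ (J @ v)            # two products, no A in memory
diag = np.einsum('ij,ij->j', J, J)          # Jacobi (diagonal) preconditioner
x = pcg(matvec, b, lambda r: r / diag)
```

For an n x n dense system this trades O(n^2) storage for two sparse products per iteration, which is the source of the memory savings reported above.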
1905.02373 | 2949268757 | Bundle adjustment (BA) is a fundamental optimization technique used in many crucial applications, including 3D scene reconstruction, robotic localization, camera calibration, autonomous driving, space exploration, street view map generation etc. Essentially, BA is a joint non-linear optimization problem, and one which can consume a significant amount of time and power, especially for large optimization problems. Previous approaches of optimizing BA performance heavily rely on parallel processing or distributed computing, which trade higher power consumption for higher performance. In this paper we propose -BA, the first hardware-software co-designed BA engine on an embedded FPGA-SoC that exploits custom hardware for higher performance and power efficiency. Specifically, based on our key observation that not all points appear on all images in a BA problem, we designed and implemented a Co-Observation Optimization technique to accelerate BA operations with optimized usage of memory and computation resources. Experimental results confirm that -BA outperforms the existing software implementations in terms of performance and power consumption. | Distributed computing is another effective way to optimize BA performance. Eriksson proposed a consensus framework to handle large-scale bundle adjustment in distributed systems @cite_10 . Instead of merging small problems through the optimization of their overlapping regions, the consensus framework uses a proximal splitting method to formulate the bundle adjustment problem, in which the small subproblems are merged by averaging points, in effect decreasing the cost of merging. Merging over the same parameters guarantees the consensus of points across different nodes. This design may suffer from several problems.
Firstly, in each iteration, every node in the distributed system has to broadcast all overlapping points to the master node to complete the merging process, which is a significant overhead for large-scale datasets. Secondly, the parameters of each camera are treated as independent of those of other cameras, although in practice some cameras may share the same intrinsic parameters. Thirdly, merging by averaging points converges somewhat slowly on very large-scale datasets and may converge early to a local minimum. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2461079165"
],
"abstract": [
"In this paper we study large-scale optimization problems in multi-view geometry, in particular the Bundle Adjustment problem. In its conventional formulation, the complexity of existing solvers scale poorly with problem size, hence this component of the Structure-from-Motion pipeline can quickly become a bottle-neck. Here we present a novel formulation for solving bundle adjustment in a truly distributed manner using consensus based optimization methods. Our algorithm is presented with a concise derivation based on proximal splitting, along with a theoretical proof of convergence and brief discussions on complexity and implementation. Experiments on a number of real image datasets convincingly demonstrates the potential of the proposed method by outperforming the conventional bundle adjustment formulation by orders of magnitude."
]
} |
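The "merge by averaging points" consensus step described in the record above is an instance of ADMM-style proximal splitting: each node takes a local proximal step on its own cost, and the shared variable is updated by averaging the local copies. The toy sketch below applies this to a deliberately simple shared 3-D point with quadratic per-node costs; the cost, names, and parameters are illustrative assumptions, not the cited formulation.

```python
import numpy as np

def consensus_admm(local_targets, rho=1.0, iters=100):
    """Toy consensus ADMM: each 'node' i holds a local quadratic cost
    0.5*||x - a_i||^2 over a shared point x. Nodes take local proximal
    steps, and the shared point is merged by averaging, mirroring the
    point-averaging consensus described above."""
    a = np.asarray(local_targets, float)       # one target row per node
    n, d = a.shape
    x = np.zeros((n, d))                       # local copies of the point
    u = np.zeros((n, d))                       # scaled dual variables
    z = np.zeros(d)                            # consensus (merged) point
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)  # local proximal step
        z = (x + u).mean(axis=0)               # merge by averaging
        u += x - z                             # dual (disagreement) update
    return z

nodes = [[1.0, 2.0, 3.0], [3.0, 0.0, 3.0], [2.0, 4.0, 0.0]]
merged = consensus_admm(nodes)
# for quadratic local costs the consensus point converges to the
# minimiser of the summed costs, i.e. the mean of the local targets
```

The averaging step is what keeps the per-iteration merge cheap, at the price of the broadcast overhead noted in the critique above.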
1905.02373 | 2949268757 | Bundle adjustment (BA) is a fundamental optimization technique used in many crucial applications, including 3D scene reconstruction, robotic localization, camera calibration, autonomous driving, space exploration, street view map generation etc. Essentially, BA is a joint non-linear optimization problem, and one which can consume a significant amount of time and power, especially for large optimization problems. Previous approaches of optimizing BA performance heavily rely on parallel processing or distributed computing, which trade higher power consumption for higher performance. In this paper we propose -BA, the first hardware-software co-designed BA engine on an embedded FPGA-SoC that exploits custom hardware for higher performance and power efficiency. Specifically, based on our key observation that not all points appear on all images in a BA problem, we designed and implemented a Co-Observation Optimization technique to accelerate BA operations with optimized usage of memory and computation resources. Experimental results confirm that -BA outperforms the existing software implementations in terms of performance and power consumption. | Similarly, Zhang proposed a distributed approach for very large scale global bundle adjustment computation @cite_4 . The proposed distributed formulation was derived from the classical optimization algorithm alternating direction method of multipliers, based on the global camera consensus. The authors analyzed the conditions under which the convergence of this distributed optimization would be guaranteed and they adopted over-relaxation and self-adaption schemes to improve the convergence rate. Also, the authors proposed to split the large scale camera-point visibility graph in order to reduce the communication overheads of the distributed computing. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2803297720"
],
"abstract": [
"The increasing scale of Structure-from-Motion is fundamentally limited by the conventional optimization framework for the all-in-one global bundle adjustment. In this paper, we propose a distributed approach to coping with this global bundle adjustment for very large scale Structure-from-Motion computation. First, we derive the distributed formulation from the classical optimization algorithm ADMM, Alternating Direction Method of Multipliers, based on the global camera consensus. Then, we analyze the conditions under which the convergence of this distributed optimization would be guaranteed. In particular, we adopt over-relaxation and self-adaption schemes to improve the convergence rate. After that, we propose to split the large scale camera-point visibility graph in order to reduce the communication overheads of the distributed computing. The experiments on both public large scale SfM data-sets and our very large scale aerial photo sets demonstrate that the proposed distributed method clearly outperforms the state-of-the-art method in efficiency and accuracy."
]
} |
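Zhang et al.'s motivation for splitting the camera-point visibility graph can be made concrete with a small counting model: a 3-D point observed by cameras assigned to k distinct blocks must be shared among all k blocks, so the communication volume is driven by how the partition cuts co-observations. The model and names below are my own simplification, not the paper's exact cost.

```python
from collections import defaultdict

def communication_cost(visibility, partition):
    """Count point replicas exchanged between blocks: a point observed
    from cameras in k distinct blocks needs k-1 extra copies kept in
    sync. `visibility` is a list of (camera, point) pairs; `partition`
    maps camera -> block id. Illustrative model only."""
    blocks_per_point = defaultdict(set)
    for cam, pt in visibility:
        blocks_per_point[pt].add(partition[cam])
    return sum(len(bs) - 1 for bs in blocks_per_point.values())

# cameras 0,1 co-observe points a,b; cameras 2,3 co-observe c,d;
# point e is seen by all four cameras
vis = [(0, 'a'), (1, 'a'), (0, 'b'), (1, 'b'), (2, 'c'), (3, 'c'),
       (2, 'd'), (3, 'd'), (0, 'e'), (1, 'e'), (2, 'e'), (3, 'e')]
good = {0: 0, 1: 0, 2: 1, 3: 1}   # split along the natural cluster boundary
bad  = {0: 0, 1: 1, 2: 0, 3: 1}   # split that cuts every co-observation
print(communication_cost(vis, good), communication_cost(vis, bad))
# the clustered split shares only point e; the bad split shares every point
```

A partitioner that respects co-observation clusters therefore directly reduces the per-iteration consensus traffic.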
1905.02341 | 2943993476 | Neural architecture search (NAS) is proposed to automate the architecture design process and attracts overwhelming interest from both academia and industry. However, it is confronted with overfitting issue due to the high-dimensional search space composed by @math selection and @math connection of each layer. This paper analyzes the overfitting issue from a novel perspective, which separates the primitives of search space into architecture-overfitting related and parameter-overfitting related elements. The @math of each layer, which mainly contributes to parameter-overfitting and is important for model acceleration, is selected as our optimization target based on state-of-the-art architecture, meanwhile @math which related to architecture-overfitting, is ignored. With the largely reduced search space, our proposed method is both quick to converge and practical to use in various tasks. Extensive experiments have demonstrated that the proposed method can achieve fascinated results, including classification, face recognition etc. | Based on a pre-defined search space consisting of candidate operations and skip connections, NAS methods aim to find an optimal combination. The initial definition of operations used fine-grained elements such as ReLU activations, convolution layers, and batch normalization, which leads to a huge search space and makes NAS impractical @cite_5 . To reduce the search space, @cite_49 and @cite_8 combine fine-grained elements into higher-level operations based on hand-designed building blocks like Conv-BN-ReLU and depthwise convolution, and obtain impressive performance. Meanwhile, @cite_9 @cite_20 manually define skip-connection rules (e.g., recursive patterns) to constrain the connection degrees of freedom. Researchers' attention has since turned from the search space to the topology @cite_30 @cite_29 .
Although academic benchmark performance keeps improving, the searched topologies may still suffer from overfitting, and NAS does not yet work in many practical tasks such as face recognition. | {
"cite_N": [
"@cite_30",
"@cite_8",
"@cite_9",
"@cite_29",
"@cite_49",
"@cite_5",
"@cite_20"
],
"mid": [
"2767002384",
"2964081807",
"2910554758",
"2905672847",
"2796265726",
"2619307294",
""
],
"abstract": [
"We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches. We also present results using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the search time from 36 hours down to 1 hour.",
"Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4% error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. 
On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO dataset.",
"Recently, Neural Architecture Search (NAS) has successfully identified neural network architectures that exceed human designed ones on large-scale image classification. In this paper, we study NAS for semantic image segmentation. Existing works often focus on searching the repeatable cell structure, while hand-designing the outer network structure that controls the spatial resolution changes. This choice simplifies the search space, but becomes increasingly problematic for dense image prediction which exhibits a lot more network level architectural variations. Therefore, we propose to search the network level structure in addition to the cell level structure, which forms a hierarchical architecture search space. We present a network level search space that includes many popular designs, and develop a formulation that allows efficient gradient-based architecture search (3 P100 GPU days on Cityscapes images). We demonstrate the effectiveness of the proposed method on the challenging Cityscapes, PASCAL VOC 2012, and ADE20K datasets. Auto-DeepLab, our architecture searched specifically for semantic image segmentation, attains state-of-the-art performance without any ImageNet pretraining.",
"We propose Stochastic Neural Architecture Search (SNAS), an economical end-to-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in same round of back-propagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on parameters of a joint distribution for the search space in a cell. To leverage the gradient information in generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with locally decomposable reward to enforce a resource-efficient constraint. In experiments on CIFAR-10, SNAS takes less epochs to find a cell architecture with state-of-the-art accuracy than non-differentiable evolution-based and reinforcement-learning-based NAS, which is also transferable to ImageNet. It is also shown that child networks of SNAS can maintain the validation accuracy in searching, with which attention-based NAS requires parameter retraining to compete, exhibiting potentials to stride towards efficient NAS on big datasets.",
"Convolutional neural networks have gained a remarkable success in computer vision. However, most usable network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we provide a block-wise network generation pipeline called BlockQNN which automatically builds high-performance networks using the Q-Learning paradigm with epsilon-greedy exploration strategy. The optimal network block is constructed by the learning agent which is trained sequentially to choose component layers. We stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early stop strategy. The block-wise generation brings unique advantages: (1) it performs competitive results in comparison to the hand-crafted state-of-the-art networks on image classification, additionally, the best network generated by BlockQNN achieves 3.54% top-1 error rate on CIFAR-10 which beats all existing auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of the search space in designing networks which only spends 3 days with 32 GPUs, and (3) moreover, it has strong generalizability that the network built on CIFAR also performs well on a larger-scale ImageNet dataset.",
"We present an approach to automate the process of discovering optimization methods, with a focus on deep learning architectures. We train a Recurrent Neural Network controller to generate a string in a domain specific language that describes a mathematical update equation based on a list of primitive functions, such as the gradient, running average of the gradient, etc. The controller is trained with Reinforcement Learning to maximize the performance of a model after a few epochs. On CIFAR-10, our method discovers several update rules that are better than many commonly used optimizers, such as Adam, RMSProp, or SGD with and without Momentum on a ConvNet model. We introduce two new optimizers, named PowerSign and AddSign, which we show transfer well and improve training on a variety of different tasks and architectures, including ImageNet classification and Google's neural machine translation system.",
""
]
} |
1905.02538 | 2944221070 | Demosaicing, denoising and super-resolution (SR) are of practical importance in digital image processing and have been studied independently in the past decades. Despite the recent improvement of learning-based image processing methods in image quality, there lacks enough analysis into their interactions and characteristics under a realistic setting of the mixture problem of demosaicing, denoising and SR. In existing solutions, these tasks are simply combined to obtain a high-resolution image from a low-resolution raw mosaic image, resulting in a performance drop of the final image quality. In this paper, we first rethink the mixture problem from a holistic perspective and then propose the Trinity Enhancement Network (TENet), a specially designed learning-based method for the mixture problem, which adopts a novel image processing pipeline order and a joint learning strategy. In order to obtain the correct color sampling for training, we also contribute a new dataset namely PixelShift200, which consists of high-quality full color sampled real-world images using the advanced pixel shift technique. Experiments demonstrate that our TENet is superior to existing solutions in both quantitative and qualitative perspectives. Our experiments also show the necessity of the proposed PixelShift200 dataset. | Image demosaicing is an ill-posed problem of interpolating full-resolution color images from color mosaic images (e.g. Bayer mosaic images), and is usually performed at the beginning of the image processing pipeline. Existing approaches can be mainly classified into two categories: model-based and learning-based methods. Model-based approaches @cite_41 @cite_13 @cite_47 @cite_18 @cite_46 @cite_19 focus on the construction of mathematical models and image priors in the spatial-spectral domain facilitating the recovery of missing data. Learning-based approaches @cite_19 @cite_32 learn the mapping from abundant training data.
Recently, deep learning has also been used successfully for image demosaicing and has achieved competitive performance @cite_4 @cite_39 @cite_23 @cite_53 . Michaël @cite_23 train a deep convolutional neural network (CNN) on millions of carefully selected image patches and achieve state-of-the-art demosaicking performance. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_41",
"@cite_46",
"@cite_53",
"@cite_32",
"@cite_39",
"@cite_19",
"@cite_23",
"@cite_47",
"@cite_13"
],
"mid": [
"2140341478",
"",
"",
"2164734076",
"2787700404",
"",
"2035555662",
"1996284443",
"",
"",
"2152178471"
],
"abstract": [
"Demosaicing is a process of obtaining a full-color image by interpolating the missing colors of an image captured from a single sensor color filter array. This paper provides an effective and low-complexity iterative demosaicing algorithm applying a weighted-edge interpolation to handle green pixels followed by a series of color-difference interpolation to update red, blue, and green pixels. Based on our experiments of images, we enable the algorithm a well-designed stopping condition and pre-determine the proper weights of interpolation. Experimental results show that the proposed method performs much better than three state-of-the-art demosaicing techniques in terms of both computational cost and image quality. In comparison to the algorithm of successive approximation, the algorithm proposed here reduces mean squared error up to 14.5% while requiring computational cost only 22% on average. That is, it takes less time but performs better.",
"",
"",
"A cost-effective digital camera uses a single-image sensor, applying alternating patterns of red, green, and blue color filters to each pixel location. A way to reconstruct a full three-color representation of color images by estimating the missing pixel components in each color plane is called a demosaicing algorithm. This paper presents three inherent problems often associated with demosaicing algorithms that incorporate two-dimensional (2-D) directional interpolation: misguidance color artifacts, interpolation color artifacts, and aliasing. The level of misguidance color artifacts present in two images can be compared using metric neighborhood modeling. The proposed demosaicing algorithm estimates missing pixels by interpolating in the direction with fewer color artifacts. The aliasing problem is addressed by applying filterbank techniques to 2-D directional interpolation. The interpolation artifacts are reduced using a nonlinear iterative procedure. Experimental results using digital images confirm the effectiveness of this approach.",
"This paper presents a comprehensive study of applying the convolutional neural network (CNN) to solving the demosaicing problem. The paper presents two CNN models that learn end-to-end mappings between the mosaic samples and the original image patches with full information. In the case the Bayer color filter array (CFA) is used, an evaluation with ten competitive methods on popular benchmarks confirms that the data-driven, automatically learned features by the CNN models are very effective. Experiments show that the proposed CNN models can perform equally well in both the sRGB space and the linear space. It is also demonstrated that the CNN model can perform joint denoising and demosaicing. The CNN model is very flexible and can be easily adopted for demosaicing with any CFA design. We train CNN models for demosaicing with three different CFAs and obtain better results than existing methods. With the great flexibility to be coupled with any CFA, we present the first data-driven joint optimization of the CFA design and the demosaicing method using CNN. Experiments show that the combination of the automatically discovered CFA pattern and the automatically devised demosaicing method significantly outperforms the current best demosaicing results. Visual comparisons confirm that the proposed methods reduce more visual artifacts than existing methods. Finally, we show that the CNN model is also effective for the more general demosaicing problem with spatially varying exposure and color and can be used for taking images of higher dynamic ranges with a single shot. The proposed models and the thorough experiments together demonstrate that CNN is an effective and versatile tool for solving the demosaicing problem.",
"",
"In this paper we present a color interpolation technique based on artificial neural networks for a single-chip CCD (charge-coupled device) camera with a Bayer color filter array (CFA). Single-chip digital cameras use a color filter array and an interpolation method in order to produce high quality color images from sparsely sampled images. We have applied 3-layer feedforward neural networks in order to interpolate a missing pixel from surrounding pixels. And we compare the proposed method with conventional interpolation methods such as the bilinear interpolation method and cubic spline interpolation method. Experiments show that the proposed interpolation algorithm based on neural networks provides a better performance than the conventional interpolation algorithms.",
"Most digital cameras capture one primary color at each pixel by a single sensor overlaid with a color filter array. To recover a full color image from incomplete color samples, one needs to restore the two missing color values for each pixel. This restoration process is known as color demosaicking. In this paper, we present a novel self-learning approach to this problem via support vector regression. Unlike prior learning-based demosaicking methods, our approach aims at extracting image-dependent information in constructing the learning model, and we do not require any additional training data. Experimental results show that our proposed method outperforms many state-of-the-art techniques in both subjective and objective image quality measures.",
"",
"",
"Digital cameras sample scenes using a color filter array of mosaic pattern (e.g., the Bayer pattern). The demosaicking of the color samples is critical to the image quality. This paper presents a new color demosaicking technique of optimal directional filtering of the green-red and green-blue difference signals. Under the assumption that the primary difference signals (PDS) between the green and red blue channels are low pass, the missing green samples are adaptively estimated in both horizontal and vertical directions by the linear minimum mean square-error estimation (LMMSE) technique. These directional estimates are then optimally fused to further improve the green estimates. Finally, guided by the demosaicked full-resolution green channel, the other two color channels are reconstructed from the LMMSE filtered and fused PDS. The experimental results show that the presented color demosaicking technique outperforms the existing methods both in PSNR measure and visual perception."
]
} |
1905.02538 | 2944221070 | Demosaicing, denoising and super-resolution (SR) are of practical importance in digital image processing and have been studied independently in the past decades. Despite the recent improvement of learning-based image processing methods in image quality, there lacks enough analysis into their interactions and characteristics under a realistic setting of the mixture problem of demosaicing, denoising and SR. In existing solutions, these tasks are simply combined to obtain a high-resolution image from a low-resolution raw mosaic image, resulting in a performance drop of the final image quality. In this paper, we first rethink the mixture problem from a holistic perspective and then propose the Trinity Enhancement Network (TENet), a specially designed learning-based method for the mixture problem, which adopts a novel image processing pipeline order and a joint learning strategy. In order to obtain the correct color sampling for training, we also contribute a new dataset namely PixelShift200, which consists of high-quality full color sampled real-world images using the advanced pixel shift technique. Experiments demonstrate that our TENet is superior to existing solutions in both quantitative and qualitative perspectives. Our experiments also show the necessity of the proposed PixelShift200 dataset. | Image noise is inevitable during imaging and it may heavily degrade the visual quality. In past decades, plenty of methods have been proposed for denoising not only color images but also mosaic images. Early methods such as anisotropic diffusion @cite_48 , total variation denoising @cite_42 and wavelet coring @cite_34 use hand-crafted features and algorithms to recover a clean signal from noisy input. However, these parametric methods have limited capacity and expressiveness. Advanced methods usually exploit effective image priors such as self-similarity @cite_25 @cite_43 @cite_14 and sparse representation @cite_7 .
With the increasing interest in learning-based methods, most successful denoising algorithms in recent years are entirely data-driven, consisting of CNNs trained to map noisy images to noise-free images @cite_15 @cite_36 @cite_35 @cite_29 @cite_45 @cite_23 . | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_7",
"@cite_36",
"@cite_48",
"@cite_29",
"@cite_42",
"@cite_43",
"@cite_45",
"@cite_23",
"@cite_15",
"@cite_34",
"@cite_25"
],
"mid": [
"2764207251",
"2097073572",
"2160547390",
"2508457857",
"2150134853",
"",
"",
"",
"2952982376",
"",
"2037642501",
"2149925139",
""
],
"abstract": [
"Due to the fast inference and good performance, discriminative learning methods have been widely studied in image denoising. However, these methods mostly learn a specific model for each noise level, and require multiple models for denoising images with different noise levels. They also lack flexibility to deal with spatially variant noise, limiting their applications in practical denoising. To address these issues, we present a fast and flexible denoising convolutional neural network, namely FFDNet, with a tunable noise level map as the input. The proposed FFDNet works on downsampled sub-images, achieving a good trade-off between inference speed and denoising performance. In contrast to the existing discriminative denoisers, FFDNet enjoys several desirable properties, including: 1) the ability to handle a wide range of noise levels (i.e., [0, 75]) effectively with a single network; 2) the ability to remove spatially variant noise by specifying a non-uniform noise level map; and 3) faster speed than benchmark BM3D even on CPU without sacrificing denoising performance. Extensive experiments on synthetic and real noisy images are conducted to evaluate FFDNet in comparison with state-of-the-art denoisers. The results show that FFDNet is effective and efficient, making it highly attractive for practical denoising applications.",
"We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.",
"In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method-the K-SVD algorithm-generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data",
"The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.",
"A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image.",
"",
"",
"",
"Most of existing image denoising methods assume the corrupted noise to be additive white Gaussian noise (AWGN). However, the realistic noise in real-world noisy images is much more complex than AWGN, and is hard to be modelled by simple analytical distributions. As a result, many state-of-the-art denoising methods in literature become much less effective when applied to real-world noisy images captured by CCD or CMOS cameras. In this paper, we develop a trilateral weighted sparse coding (TWSC) scheme for robust real-world image denoising. Specifically, we introduce three weight matrices into the data and regularisation terms of the sparse coding framework to characterise the statistics of realistic noise and image priors. TWSC can be reformulated as a linear equality-constrained problem and can be solved by the alternating direction method of multipliers. The existence and uniqueness of the solution and convergence of the proposed algorithm are analysed. Extensive experiments demonstrate that the proposed TWSC scheme outperforms state-of-the-art denoising methods on removing realistic noise.",
"",
"Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with a plain multi layer perceptron (MLP) applied to image patches. While this has been done before, we will show that by training on large image databases we are able to compete with the current state-of-the-art image denoising methods. Furthermore, our approach is easily adapted to less extensively studied types of noise (by merely exchanging the training data), for which we achieve excellent results as well.",
"The classical solution to the noise removal problem is the Wiener filter, which utilizes the second-order statistics of the Fourier decomposition. Subband decompositions of natural images have significantly non-Gaussian higher-order point statistics; these statistics capture image properties that elude Fourier-based techniques. We develop a Bayesian estimator that is a natural extension of the Wiener solution, and that exploits these higher-order statistics. The resulting nonlinear estimator performs a \"coring\" operation. We provide a simple model for the subband statistics, and use it to develop a semi-blind noise removal algorithm based on a steerable wavelet pyramid.",
""
]
} |
1905.02538 | 2944221070 | Demosaicing, denoising and super-resolution (SR) are of practical importance in digital image processing and have been studied independently in the past decades. Despite the recent improvement of learning-based image processing methods in image quality, there lacks enough analysis into their interactions and characteristics under a realistic setting of the mixture problem of demosaicing, denoising and SR. In existing solutions, these tasks are simply combined to obtain a high-resolution image from a low-resolution raw mosaic image, resulting in a performance drop of the final image quality. In this paper, we first rethink the mixture problem from a holistic perspective and then propose the Trinity Enhancement Network (TENet), a specially designed learning-based method for the mixture problem, which adopts a novel image processing pipeline order and a joint learning strategy. In order to obtain the correct color sampling for training, we also contribute a new dataset namely PixelShift200, which consists of high-quality full color sampled real-world images using the advanced pixel shift technique. Experiments demonstrate that our TENet is superior to existing solutions in both quantitative and qualitative perspectives. Our experiments also show the necessity of the proposed PixelShift200 dataset. | SR aims to recover the high-resolution (HR) image from its low-resolution (LR) version. Since the seminal work of employing CNNs for SR @cite_11 , various deep-learning-based methods with different network architectures @cite_9 @cite_31 @cite_17 @cite_24 @cite_30 @cite_6 @cite_51 and training strategies @cite_22 @cite_33 have been proposed to continuously improve SR performance. However, problems occur when applying such algorithms in real-world applications. When SR algorithms enhance image details and texture, unexpected noise, blur and artifacts are also magnified.
If the input image is noisy or blurry, problems that were previously not serious will be magnified, especially artifacts and noise caused by previous processing. This may lead to unsatisfactory results when applying SR separately after demosaicking or denoising. An example is shown in . | {
"cite_N": [
"@cite_30",
"@cite_11",
"@cite_22",
"@cite_33",
"@cite_9",
"@cite_6",
"@cite_24",
"@cite_31",
"@cite_51",
"@cite_17"
],
"mid": [
"2214802144",
"",
"",
"2952773607",
"2950016100",
"",
"",
"2951997238",
"",
"2788343277"
],
"abstract": [
"We propose an image super-resolution method (SR) using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Albeit advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding/vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin.",
"",
"",
"The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN - network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge. The code is available at this https URL .",
"As a successful deep model applied in image super-resolution (SR), the Super-Resolution Convolutional Neural Network (SRCNN) has demonstrated superior performance to the previous hand-crafted models either in speed and restoration quality. However, the high computational cost still hinders it from practical usage that demands real-time performance (24 fps). In this paper, we aim at accelerating the current SRCNN, and propose a compact hourglass-shape CNN structure for faster and better SR. We re-design the SRCNN structure mainly in three aspects. First, we introduce a deconvolution layer at the end of the network, then the mapping is learned directly from the original low-resolution image (without interpolation) to the high-resolution one. Second, we reformulate the mapping layer by shrinking the input feature dimension before mapping and expanding back afterwards. Third, we adopt smaller filter sizes but more mapping layers. The proposed model achieves a speed up of more than 40 times with even superior restoration quality. Further, we present the parameter settings that can achieve real-time performance on a generic CPU while still maintaining good performance. A corresponding transfer strategy is also proposed for fast training and testing across different upscaling factors.",
"",
"",
"We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [simonyan2015very]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates ( @math times higher than SRCNN [dong2015image] ) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable.",
"",
"A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively-low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose residual dense block (RDB) to extract abundant local features via dense connected convolutional layers. RDB further allows direct connections from the state of preceding RDB to all the layers of current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilizes the training of wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Extensive experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods."
]
} |
1905.02538 | 2944221070 | Demosaicing, denoising and super-resolution (SR) are of practical importance in digital image processing and have been studied independently in the past decades. Despite the recent improvement of learning-based image processing methods in image quality, there lacks enough analysis into their interactions and characteristics under a realistic setting of the mixture problem of demosaicing, denoising and SR. In existing solutions, these tasks are simply combined to obtain a high-resolution image from a low-resolution raw mosaic image, resulting in a performance drop of the final image quality. In this paper, we first rethink the mixture problem from a holistic perspective and then propose the Trinity Enhancement Network (TENet), a specially designed learning-based method for the mixture problem, which adopts a novel image processing pipeline order and a joint learning strategy. In order to obtain the correct color sampling for training, we also contribute a new dataset namely PixelShift200, which consists of high-quality full color sampled real-world images using the advanced pixel shift technique. Experiments demonstrate that our TENet is superior to existing solutions in both quantitative and qualitative perspectives. Our experiments also show the necessity of the proposed PixelShift200 dataset. | In practical applications, in addition to the above well-defined problems, the mixture of multiple image defects is more common, for example, the mixture problem of SR and denoising @cite_49 , of demosaicing and denoising @cite_26 @cite_1 @cite_28 , and of SR and demosaicing @cite_8 @cite_10 @cite_37 . Such mixture problems of multiple tasks are much harder to solve. Yu @cite_3 study the execution order of tasks in the mixture problem and use reinforcement learning to learn it.
More relevant to this work, Michaël @cite_23 train a CNN to jointly perform these tasks and achieve state-of-the-art performance. Zhang @cite_49 propose an SR network to jointly perform SR and denoising, as a denoising pre-processing step tends to lose detail information and would deteriorate the subsequent SR performance. Zhou @cite_37 introduce a deep residual network for joint demosaicking and super-resolution. However, to the best of our knowledge, the mixture problem of demosaicking, denoising and SR has not yet been addressed with a joint learning strategy. | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_8",
"@cite_28",
"@cite_1",
"@cite_3",
"@cite_49",
"@cite_23",
"@cite_10"
],
"mid": [
"",
"",
"2079359870",
"2437754452",
"",
"2797519004",
"2964277374",
"",
"1988739356"
],
"abstract": [
"",
"",
"In the last two decades, two related categories of problems have been studied independently in the image restoration literature: super-resolution and demosaicing. A closer look at these problems reveals the relation between them, and as conventional color digital cameras suffer from both low spatial resolution and color filtering, it is reasonable to address them in a unified context. In this paper, we propose a fast and robust hybrid method of super-resolution and demosaicing, based on a maximum a posteriori (MAP) estimation technique by minimizing a multi-term cost function. The L1 norm is used for measuring the difference between the projected estimate of the high-resolution image and each low-resolution image, removing outliers in the data and errors due to possibly inaccurate motion estimation. Bilateral regularization is used for regularizing the luminance component, resulting in sharp edges and forcing interpolation along the edges and not across them. Simultaneously, Tikhonov regularization is used to smooth the chrominance component. Finally, an additional regularization term is used to force similar edge orientation in different color channels. We show that the minimization of the total cost function is relatively easy and fast. Experimental results on synthetic and real data sets confirm the effectiveness of our method. Keywords: Super-Resolution, Demosaicing, Robust Estimation, Robust Regularization, Color Enhancement",
"Demosaicing is an important first step for color image acquisition. For practical reasons, demosaicing algorithms have to be both efficient and yield high quality results in the presence of noise. The demosaicing problem poses several challenges, e.g. zippering and false color artifacts as well as edge blur. In this work, we introduce a novel learning based method that can overcome these challenges. We formulate demosaicing as an image restoration problem and propose to learn efficient regularization inspired by a variational energy minimization framework that can be trained for different sensor layouts. Our algorithm performs joint demosaicing and denoising in close relation to the real physical mosaicing process on a camera sensor. This is achieved by learning a sequence of energy minimization problems composed of a set of RGB filters and corresponding activation functions. We evaluate our algorithm on the Microsoft Demosaicing data set in terms of peak signal to noise ratio (PSNR) and structured similarity index (SSIM). Our algorithm is highly efficient both in image quality and run time. We achieve an improvement of up to 2.6 dB over recent state-of-the-art algorithms.",
"",
"We investigate a novel approach for image restoration by reinforcement learning. Unlike existing studies that mostly train a single large network for a specialized task, we prepare a toolbox consisting of small-scale convolutional networks of different complexities and specialized in different tasks. Our method, RL-Restore, then learns a policy to select appropriate tools from the toolbox to progressively restore the quality of a corrupted image. We formulate a step-wise reward function proportional to how well the image is restored at each step to learn the action policy. We also devise a joint learning scheme to train the agent and tools for better performance in handling uncertainty. In comparison to conventional human-designed networks, RL-Restore is capable of restoring images corrupted with complex and unknown distortions in a more parameter-efficient manner using the dynamically formed toolchain.",
"Recent years have witnessed the unprecedented success of deep convolutional neural networks (CNNs) in single image super-resolution (SISR). However, existing CNN-based SISR methods mostly assume that a low-resolution (LR) image is bicubicly downsampled from a high-resolution (HR) image, thus inevitably giving rise to poor performance when the true degradation does not follow this assumption. Moreover, they lack scalability in learning a single model to nonblindly deal with multiple degradations. To address these issues, we propose a general framework with dimensionality stretching strategy that enables a single convolutional super-resolution network to take two key factors of the SISR degradation process, i.e., blur kernel and noise level, as input. Consequently, the super-resolver can handle multiple and even spatially variant degradations, which significantly improves the practicability. Extensive experimental results on synthetic and real LR images show that the proposed convolutional super-resolution network not only can produce favorable results on multiple degradations but also is computationally efficient, providing a highly effective and scalable solution to practical SISR applications.",
"",
"We present a new algorithm that performs demosaicing and super-resolution jointly from a set of raw images sampled with a color filter array. Such a combined approach allows us to compute the alignment parameters between the images on the raw camera data before interpolation artifacts are introduced. After image registration, a high resolution color image is reconstructed at once using the full set of images. For this, we use normalized convolution, an image interpolation method from a nonuniform set of samples. Our algorithm is tested and compared to other approaches in simulations and practical experiments."
]
} |
1905.02448 | 2944726758 | Users of cloud computing are increasingly overwhelmed with the wide range of providers and services offered by each provider. As such, many users select cloud services based on description alone. An emerging alternative is to use a decision support system (DSS), which typically relies on gaining insights from observational data in order to assist a customer in making decisions regarding optimal deployment or redeployment of cloud applications. The primary activity of such systems is the generation of a prediction model (e.g. using machine learning), which requires a significantly large amount of training data. However, considering the varying architectures of applications, cloud providers, and cloud offerings, this activity is not sustainable as it incurs additional time and cost to collect training data and subsequently train the models. We overcome this through developing a Transfer Learning (TL) approach where the knowledge (in the form of the prediction model and associated data set) gained from running an application on a particular cloud infrastructure is transferred in order to substantially reduce the overhead of building new models for the performance of new applications and/or cloud infrastructures. In this paper, we present our approach and evaluate it through extensive experimentation involving three real-world applications over two major public cloud providers, namely Amazon and Google. Our evaluation shows that our novel two-mode TL scheme increases overall efficiency with a factor of 60 reduction in the time and cost of generating a new prediction model. We test this under a number of cross-application and cross-cloud scenarios. | A big challenge in cloud computing stems from the wide variety of technologies, APIs, and terminologies used @cite_16 .
Furthermore, the uncertainty associated with how these services are managed (scheduling algorithms, load balancing policies, co-location strategies; @cite_39 @cite_10 @cite_14 ) adds a black-box effect to this complexity. | {
"cite_N": [
"@cite_14",
"@cite_16",
"@cite_10",
"@cite_39"
],
"mid": [
"1651603302",
"2340753868",
"2270218991",
""
],
"abstract": [
"Benchmarking the performance of public cloud providers is a common research topic. Previous work has already extensively evaluated the performance of different cloud platforms for different use cases, and under different constraints and experiment setups. In this article, we present a principled, large-scale literature review to collect and codify existing research regarding the predictability of performance in public Infrastructure-as-a-Service (IaaS) clouds. We formulate 15 hypotheses relating to the nature of performance variations in IaaS systems, to the factors of influence of performance variations, and how to compare different instance types. In a second step, we conduct extensive real-life experimentation on four cloud providers to empirically validate those hypotheses. We show that there are substantial differences between providers. Hardware heterogeneity is today less prevalent than reported in earlier research, while multitenancy has a dramatic impact on performance and predictability, but only for some cloud providers. We were unable to discover a clear impact of the time of the day or the day of the week on cloud performance.",
"Recent years have seen significant growth in the cloud computing market, both in terms of provider competition (including private offerings) and customer adoption. However, the cloud computing world still lacks adopted standard programming interfaces, which has a knock-on effect on the costs associated with interoperability and severely limits the flexibility and portability of applications and virtual infrastructures. This has brought about an increasing number of cross-cloud architectures, i.e. systems that span across cloud provisioning boundaries. This paper condenses discussions from the CrossCloud event series to outline the types of cross-cloud systems and their associated design decisions, and laments challenges and opportunities they create.",
"Decision making in cloud environments is quite challenging due to the diversity in service offerings and pricing models, especially considering that the cloud market is an incredibly fast moving one. In addition, there are no hard and fast rules; each customer has a specific set of constraints (e.g. budget) and application requirements (e.g. minimum computational resources). Machine learning can help address some of the complicated decisions by carrying out customer-specific analytics to determine the most suitable instance type(s) and the most opportune time for starting or migrating instances. We employ machine learning techniques to develop an adaptive deployment policy, providing an optimal match between the customer demands and the available cloud service offerings. We provide an experimental study based on extensive set of job executions over a major public cloud infrastructure.",
""
]
} |
1905.02448 | 2944726758 | Users of cloud computing are increasingly overwhelmed with the wide range of providers and services offered by each provider. As such, many users select cloud services based on description alone. An emerging alternative is to use a decision support system (DSS), which typically relies on gaining insights from observational data in order to assist a customer in making decisions regarding optimal deployment or redeployment of cloud applications. The primary activity of such systems is the generation of a prediction model (e.g. using machine learning), which requires a significantly large amount of training data. However, considering the varying architectures of applications, cloud providers, and cloud offerings, this activity is not sustainable as it incurs additional time and cost to collect training data and subsequently train the models. We overcome this through developing a Transfer Learning (TL) approach where the knowledge (in the form of the prediction model and associated data set) gained from running an application on a particular cloud infrastructure is transferred in order to substantially reduce the overhead of building new models for the performance of new applications and/or cloud infrastructures. In this paper, we present our approach and evaluate it through extensive experimentation involving three real-world applications over two major public cloud providers, namely Amazon and Google. Our evaluation shows that our novel two-mode TL scheme increases overall efficiency with a factor of 60 reduction in the time and cost of generating a new prediction model. We test this under a number of cross-application and cross-cloud scenarios. | rely on representing cloud resources and their capabilities in a certain way, such as using standardized KPIs ( @cite_2 @cite_9 @cite_46 @cite_27 ) or through benchmarking ( @cite_37 @cite_43 ).
The former method results, despite all efforts, in an outdated and reductive representation due to the sheer breadth and proliferation of the cloud computing market. The latter method avoids this through persistent benchmarking in an attempt to capture irregularities and attain a detailed and up-to-date performance profile for each different cloud resource type. This of course comes at a high operational cost. Moreover, a disadvantage of both methods is that they are based on application-agnostic ranking and not on knowing how the application will perform on a given infrastructure. | {
"cite_N": [
"@cite_37",
"@cite_9",
"@cite_43",
"@cite_27",
"@cite_2",
"@cite_46"
],
"mid": [
"2121884932",
"2284310338",
"1964078557",
"2005865975",
"2090072113",
"2013326640"
],
"abstract": [
"While many public cloud providers offer pay-as-you-go computing, their varying approaches to infrastructure, virtualization, and software services lead to a problem of plenty. To help customers pick a cloud that fits their needs, we develop CloudCmp, a systematic comparator of the performance and cost of cloud providers. CloudCmp measures the elastic computing, persistent storage, and networking services offered by a cloud along metrics that directly reflect their impact on the performance of customer applications. CloudCmp strives to ensure fairness, representativeness, and compliance of these measurements while limiting measurement cost. Applying CloudCmp to four cloud providers that together account for most of the cloud customers today, we find that their offered services vary widely in performance and costs, underscoring the need for thoughtful provider selection. From case studies on three representative cloud applications, we show that CloudCmp can guide customers in selecting the best-performing provider for their applications.",
"Cloud computing is an upcoming and promising solution for utility computing that provides resources on demand. As it has grown into a business model, a large number of cloud service providers exist today in the cloud market, which further is expanding exponentially. Many cloud service providers, with almost similar functionality, pose a selection problem to the cloud users. To assist the users in the best service selection, as per their requirements, a framework has been developed in which users list their quality of service (QoS) expectations, while service providers express their offerings. Experience of the existing cloud users is also taken into account in order to select the best cloud service provider. This work identifies some new QoS metrics, besides a few existing ones, and defines them in a way that eases both the user and the provider to express their expectations and offers, respectively, in a quantified manner. Further, a dynamic and flexible model, using a variant of the ranked voting method, is proposed that considers users' requirements and suggests the best cloud service provider. Case studies affirm the correctness and the effectiveness of the proposed model.",
"How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups: memory, processor, computation and storage are to the application that needs to be executed on the cloud. The weights along with cloud benchmarking data are used to generate a ranking of VMs that can maximise performance of the application. The rankings are validated through an empirical analysis using two case study applications, the first is a financial risk application and the second is a molecular dynamics simulation, which are both representative of workloads that can benefit from execution on the cloud. Both case studies validate the feasibility of the methodology and highlight that maximum performance can be achieved on the cloud by selecting the top ranked VMs produced by the methodology.",
"We present fundamental challenges for scalable and dependable service platforms and architectures that enable flexible and dynamic provisioning of cloud services. Our findings are incorporated in a toolkit targeting the cloud service and infrastructure providers. The innovations behind the toolkit are aimed at optimizing the whole service life cycle, including service construction, deployment, and operation, on a basis of aspects such as trust, risk, eco-efficiency and cost. Notably, adaptive self-preservation is crucial to meet predicted and unforeseen changes in resource requirements. By addressing the whole service life cycle, taking into account several cloud architectures, and by taking a holistic approach to sustainable service provisioning, the toolkit aims to provide a foundation for a reliable, sustainable, and trustful cloud computing industry.",
"Cloud computing is revolutionizing the IT industry by enabling them to offer access to their infrastructure and application services on a subscription basis. As a result, several enterprises including IBM, Microsoft, Google, and Amazon have started to offer different Cloud services to their customers. Due to the vast diversity in the available Cloud services, from the customer's point of view, it has become difficult to decide whose services they should use and what is the basis for their selection. Currently, there is no framework that can allow customers to evaluate Cloud offerings and rank them based on their ability to meet the user's Quality of Service (QoS) requirements. In this work, we propose a framework and a mechanism that measure the quality and prioritize Cloud services. Such a framework can make a significant impact and will create healthy competition among Cloud providers to satisfy their Service Level Agreement (SLA) and improve their QoS. We have shown the applicability of the ranking framework using a case study.",
"This paper introduces a cloud broker service (STRATOS) which facilitates the deployment and runtime management of cloud application topologies using cloud elements services sourced on the fly from multiple providers, based on requirements specified in higher level objectives. Its implementation and use is evaluated in a set of experiments."
]
} |
1905.02448 | 2944726758 | Users of cloud computing are increasingly overwhelmed with the wide range of providers and services offered by each provider. As such, many users select cloud services based on description alone. An emerging alternative is to use a decision support system (DSS), which typically relies on gaining insights from observational data in order to assist a customer in making decisions regarding optimal deployment or redeployment of cloud applications. The primary activity of such systems is the generation of a prediction model (e.g. using machine learning), which requires a significantly large amount of training data. However, considering the varying architectures of applications, cloud providers, and cloud offerings, this activity is not sustainable as it incurs additional time and cost to collect training data and subsequently train the models. We overcome this through developing a Transfer Learning (TL) approach where the knowledge (in the form of the prediction model and associated data set) gained from running an application on a particular cloud infrastructure is transferred in order to substantially reduce the overhead of building new models for the performance of new applications and/or cloud infrastructures. In this paper, we present our approach and evaluate it through extensive experimentation involving three real-world applications over two major public cloud providers, namely Amazon and Google. Our evaluation shows that our novel two-mode TL scheme increases overall efficiency with a factor of 60 reduction in the time and cost of generating a new prediction model. We test this under a number of cross-application and cross-cloud scenarios. | focus more on the other part of the matchmaking decision process: application requirements. Examples include vendor-independent ontologies ( @cite_41 @cite_48 ) and model-driven engineering ( @cite_47 @cite_18 @cite_24 ).
These solutions are heavily dependent on fine-grained information from domain experts, analysts and decision makers to get complete knowledge of business models and company strategies @cite_30 . As such, a designer must be aware of the impact of decisions, alternative decisions, actor interactions, dependencies, and processes while designing workflows and architectural models. Such processes require significant developer experience and time to follow domain-specific design principles. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_41",
"@cite_48",
"@cite_24",
"@cite_47"
],
"mid": [
"2774325183",
"2072540379",
"1932378147",
"112960182",
"2954087123",
""
],
"abstract": [
"Multi-tenancy is sharing a single application's resources to serve more than a single group of users (i.e. tenant). Cloud application providers are encouraged to adopt multi-tenancy as it facilitates increased resource utilization and ease of maintenance, translating into lower operational and energy costs. However, introducing multi-tenancy to a single-tenant application requires significant changes in its structure to ensure tenant isolation, configurability and extensibility. In this paper, we analyse and address the different challenges associated with evolving an application's architecture to a multi-tenant cloud deployment. We focus specifically on multi-tenant data architectures, commonly the prime candidate for consolidation and multi-tenancy. We present a Domain-Specific Modeling language (DSML) to model a multi-tenant data architecture, and automatically generate source code that handles the evolution of the application's data layer. We apply the DSML on a representative case study of a single-tenant application evolving to become a multi-tenant cloud application under two resource sharing scenarios. We evaluate the costs associated with using this DSML against the state of the art and against manual evolution, reporting specifically on the gained benefits in terms of development effort and reliability.",
"Cloud computing has leveraged new software development and provisioning approaches by changing the way computing, storage and networking resources are purchased and consumed. The variety of cloud offerings on both technical and business level has considerably advanced the development process and established new business models and value chains for applications and services. However, the modernization and cloudification of legacy software so as to be offered as a service still encounters many challenges. In this work, we present a complete methodology and a methodology instantiation framework for the effective migration of legacy software to modern cloud environments.",
"The greatest challenge beyond trust and security for the long-term adoption of cloud computing is the interoperability between clouds. In the context of world-wide tremendous activities against the vendor lock-in and lack of integration of cloud computing services, keeping track of the new concepts and approaches is also a challenge. We considered useful to provide in this paper a snapshot of these concepts and approaches followed by a proposal of their classification. A new approach in providing cloud portability is also revealed.",
"Cloud Platform as a Service (PaaS) is a novel, rapidly growing segment in the Cloud computing market. However, the diversity and heterogeneity of today’s existing PaaS offerings raises several interoperability challenges. This introduces adoption barriers due to the lock-in issues that prevent the portability of data and applications from one PaaS to another, “locking” software developers to the first provider they use. This paper introduces the Cloud4SOA solution, a scalable approach to semantically interconnect heterogeneous PaaS offerings across different Cloud providers that share the same technology. The design of the Cloud4SOA solution, extensively presented in this work, comprises of a set of interlinked collaborating software components and models to provide developers and platform providers with a number of core capabilities: matchmaking, management, monitoring and migration of applications. The paper concludes with the presentation of a proof-of-concept implementation of the Cloud4SOA system based on real-life business scenarios.",
"Multi-tenancy is used for efficient resource utilization when cloud resources are shared across multiple customers. In cloud applications, the data layer is often the prime candidate for multi-tenancy, and usually comprises a combination of different cloud storage solutions such as relational and non-relational databases, and blob storage. Each of these storage types is different, requiring its own partitioning schemes to ensure tenant isolation and scalability. Current multi-tenant data architectures are implemented mainly through manual coding techniques that tend to be time consuming and error prone. As an alternative, we propose a domain-specific modeling language, CadaML, that provides concepts and notations to model a multi-tenant data architecture in an abstract way. CadaML also provides tools to validate the data architecture and automatically produce application code to implement said architecture.",
""
]
} |
1905.02448 | 2944726758 | Users of cloud computing are increasingly overwhelmed with the wide range of providers and services offered by each provider. As such, many users select cloud services based on description alone. An emerging alternative is to use a decision support system (DSS), which typically relies on gaining insights from observational data in order to assist a customer in making decisions regarding optimal deployment or redeployment of cloud applications. The primary activity of such systems is the generation of a prediction model (e.g. using machine learning), which requires a significantly large amount of training data. However, considering the varying architectures of applications, cloud providers, and cloud offerings, this activity is not sustainable as it incurs additional time and cost to collect training data and subsequently train the models. We overcome this through developing a Transfer Learning (TL) approach where the knowledge (in the form of the prediction model and associated data set) gained from running an application on a particular cloud infrastructure is transferred in order to substantially reduce the overhead of building new models for the performance of new applications and/or cloud infrastructures. In this paper, we present our approach and evaluate it through extensive experimentation involving three real-world applications over two major public cloud providers, namely Amazon and Google. Our evaluation shows that our novel two-mode TL scheme increases overall efficiency with a factor of 60 reduction in the time and cost of generating a new prediction model. We test this under a number of cross-application and cross-cloud scenarios. | Several recent works used ML to look into the variation of auctioned cloud resources (as opposed to on-demand ones), namely AWS Spot @cite_33 @cite_22 @cite_13 . However, this has been proven to be a relatively trivial optimization problem @cite_40 and is not of interest to our work. | {
"cite_N": [
"@cite_40",
"@cite_22",
"@cite_13",
"@cite_33"
],
"mid": [
"2409274745",
"2082819362",
"2605895369",
"2253280018"
],
"abstract": [
"Cloud providers have begun to allow users to bid for surplus servers on a spot market. These servers are allocated if a user's bid price is higher than their market price and revoked otherwise. Thus, analyzing price data to derive optimal bidding strategies has become a popular research topic. In this paper, we argue that sophisticated bidding strategies, in practice, do not provide any advantages over simple strategies for multiple reasons. First, due to price characteristics, there are a wide range of bid prices that yield the optimal cost and availability. Second, given the large number of spot markets, there is always a market with available surplus resources. Thus, if resources become unavailable due to a price spike, users need not wait until the spike subsides, but can instead provision a new spot resource elsewhere and migrate to it. Third, current spot market rules enable users to place maximum bids for resources without any penalty. Given bidding's irrelevance, users can adopt trivial bidding strategies and focus instead on modifying applications to efficiently seek out and migrate to the lowest cost resources.",
"Amazon's Elastic Compute Cloud (EC2) uses auction-based spot pricing to sell spare capacity, allowing users to bid for cloud resources at a highly reduced rate. Amazon sets the spot price dynamically and accepts user bids above this price. Jobs with lower bids (including those already running) are interrupted and must wait for a lower spot price before resuming. Spot pricing thus raises two basic questions: how might the provider set the price, and what prices should users bid? Computing users' bidding strategies is particularly challenging: higher bid prices reduce the probability of, and thus extra time to recover from, interruptions, but may increase users' cost. We address these questions in three steps: (1) modeling the cloud provider's setting of the spot price and matching the model to historically offered prices, (2) deriving optimal bidding strategies for different job requirements and interruption overheads, and (3) adapting these strategies to MapReduce jobs with master and slave nodes having different interruption overheads. We run our strategies on EC2 for a variety of job sizes and instance types, showing that spot pricing reduces user cost by 90 with a modest increase in completion time compared to on-demand pricing.",
"Many cost-conscious public cloud workloads (\"tenants\") are turning to Amazon EC2's spot instances because, on average, these instances offer significantly lower prices (up to 10 times lower) than on-demand and reserved instances of comparable advertized resource capacities. To use spot instances effectively, a tenant must carefully weigh the lower costs of these instances against their poorer availability. Towards this, we empirically study four features of EC2 spot instance operation that a cost-conscious tenant may find useful to model. Using extensive evaluation based on both historical and current spot instance data, we show shortcomings in the state-of-the-art modeling of these features that we overcome. Our analysis reveals many novel properties of spot instance operation some of which offer predictive value while others do not. Using these insights, we design predictors for our features that offer a balance between computational efficiency (allowing for online resource procurement) and cost-efficacy. We explore \"case studies\" wherein we implement prototypes of dynamic spot instance procurement advised by our predictors for two types of workloads. Compared to the state-of-the-art, our approach achieves (i) comparable cost but much better performance (fewer bid failures) for a latency-sensitive in-memory Memcached cache, and (ii) an additional 18 cost-savings with comparable (if not better than) performance for a delay-tolerant batch workload.",
"We study a stylized revenue maximization problem for a provider of cloud computing services, where the service provider (SP) operates an infinite capacity system in a market with heterogeneous customers with respect to their valuation and congestion sensitivity. The SP offers two service options: one with guaranteed service availability, and one where users bid for resource availability and only the “winning” bids at any point in time get access to the service. We show that even though capacity is unlimited, in several settings, depending on the relation between valuation and congestion sensitivity, the revenue maximizing service provider will choose to make the spot service option stochastically unavailable. This form of intentional service degradation is optimal in settings where user valuation per unit time increases sub-linearly with respect to their congestion sensitivity (i.e., their disutility per unit time when the service is unavailable) – this is a form of “damaged goods.” We provide some data evidence based on the analysis of price traces from the biggest cloud service provider, Amazon Web Services."
]
} |
1905.02424 | 2943918691 | In the Equal-Subset-Sum problem, we are given a set @math of @math integers and the problem is to decide if there exist two disjoint nonempty subsets @math , whose elements sum up to the same value. The problem is NP-complete. The state-of-the-art algorithm runs in @math time and is based on the meet-in-the-middle technique. In this paper, we improve upon this algorithm and give @math worst case Monte Carlo algorithm. This answers the open problem from Woeginger's inspirational survey. Additionally, we analyse the polynomial space algorithm for Equal-Subset-Sum. A naive polynomial space algorithm for Equal-Subset-Sum runs in @math time. With read-only access to the exponentially many random bits, we show a randomized algorithm running in @math time and polynomial space. | considered parametrized by the maximum bin size @math and obtained an algorithm running in time @math . Subsequently, showed that one can get a faster algorithm for than meet-in-the-middle if @math or @math . In this paper, we use the hash function that is based on their ideas. Moreover, the ideas in @cite_20 @cite_8 were used in the recent breakthrough polynomial space algorithm @cite_25 running in @math time. | {
"cite_N": [
"@cite_25",
"@cite_20",
"@cite_8"
],
"mid": [
"2626471773",
"1847405718",
"2963053963"
],
"abstract": [
"We present randomized algorithms that solve Subset Sum and Knapsack instances with n items in O*(2^{0.86n}) time, where the O*(·) notation suppresses factors polynomial in the input size, and polynomial space, assuming random read-only access to exponentially many random bits. These results can be extended to solve Binary Linear Programming on n variables with few constraints in a similar running time. We also show that for any constant k ≥ 2, random instances of k-Sum can be solved using O(n^{k-0.5} polylog(n)) time and O(log n) space, without the assumption of random access to random bits. Underlying these results is an algorithm that determines whether two given lists of length n with integers bounded by a polynomial in n share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using O(log n) space significantly faster than the trivial O(n^2) time algorithm if no value occurs too often in the same list.",
"We study the exact time complexity of the Subset Sum problem. Our focus is on instances that lack additive structure in the sense that the sums one can form from the subsets of the given integers a ...",
"The SUBSET SUM problem asks whether a given set of n positive integers contains a subset of elements that sum up to a given target t. It is an outstanding open question whether the O^*(2^{n/2})-time algorithm for SUBSET SUM by Horowitz and Sahni [J. ACM 1974] can be beaten in the worst-case setting by a \"truly faster\", O^*(2^{(0.5-delta)n})-time algorithm, with some constant delta > 0. Continuing an earlier work [STACS 2015], we study SUBSET SUM parameterized by the maximum bin size beta, defined as the largest number of subsets of the n input integers that yield the same sum. For every epsilon > 0 we give a truly faster algorithm for instances with beta <= 2^{0.661n}. Consequently, we also obtain a characterization in terms of the popular density parameter n/log_2(t): if all instances of density at least 1.003 admit a truly faster algorithm, then so does every instance. This goes against the current intuition that instances of density 1 are the hardest, and therefore is a step toward answering the open question in the affirmative. Our results stem from a novel combinatorial analysis of mixings of earlier algorithms for SUBSET SUM and a study of an extremal question in additive combinatorics connected to the problem of Uniquely Decodable Code Pairs in information theory."
]
} |
1905.02424 | 2943918691 | In the Equal-Subset-Sum problem, we are given a set @math of @math integers and the problem is to decide if there exist two disjoint nonempty subsets @math , whose elements sum up to the same value. The problem is NP-complete. The state-of-the-art algorithm runs in @math time and is based on the meet-in-the-middle technique. In this paper, we improve upon this algorithm and give @math worst case Monte Carlo algorithm. This answers the open problem from Woeginger's inspirational survey. Additionally, we analyse the polynomial space algorithm for Equal-Subset-Sum. A naive polynomial space algorithm for Equal-Subset-Sum runs in @math time. With read-only access to the exponentially many random bits, we show a randomized algorithm running in @math time and polynomial space. | From the pseudopolynomial algorithms perspective Knapsack and admit an @math algorithm, where @math is the value of the target. Recently, for the pseudopolynomial algorithm was improved to run in deterministic @math time by and randomized @math time by (and simplified, see @cite_2 @cite_16 ). However, these algorithms have the drawback of running in pseudopolynomial space @math . Surprisingly, presented an algorithm running in time @math and space @math which was later improved to @math time and @math space assuming the Extended Riemann Hypothesis @cite_21 . | {
"cite_N": [
"@cite_16",
"@cite_21",
"@cite_2"
],
"mid": [
"2883642199",
"2537252502",
"2885641822"
],
"abstract": [
"Subset Sum is a classical optimization problem taught to undergraduates as an example of an NP-hard problem, which is amenable to dynamic programming, yielding polynomial running time if the input numbers are relatively small. Formally, given a set @math of @math positive integers and a target integer @math , the Subset Sum problem is to decide if there is a subset of @math that sums up to @math . Dynamic programming yields an algorithm with running time @math . Recently, the authors [SODA '17] improved the running time to @math , and it was further improved to @math by a somewhat involved randomized algorithm by Bringmann [SODA '17], where @math hides polylogarithmic factors. Here, we present a new and significantly simpler algorithm with running time @math . While not the fastest, we believe the new algorithm and analysis are simple enough to be presented in an algorithms class, as a striking example of a divide-and-conquer algorithm that uses FFT to a problem that seems (at first) unrelated. In particular, the algorithm and its analysis can be described in full detail in two pages (see pages 3-5).",
"Given a set Z of n positive integers and a target value t, the Subset Sum problem asks whether any subset of Z sums to t. A textbook pseudopolynomial time algorithm by Bellman from 1957 solves Subset Sum in time O(nt). This has been improved to O(n max Z) by Pisinger [J. Algorithms'99] and recently to [EQUATION] by Koiliaris and Xu [SODA'17]. Here we present a simple and elegant randomized algorithm running in time O(n+t). This improves upon a classic algorithm and is likely to be near-optimal, since it matches conditional lower bounds from Set Cover and k-Clique. We then use our new algorithm and additional tricks to improve the best known polynomial space solution from time O(n^3 t) and space O(n^2) to time O(nt) and space O(n log t), assuming the Extended Riemann Hypothesis. Unconditionally, we obtain time O(n t^{1+e}) and space O(n t^e) for any constant e > 0.",
"Given a multiset S of n positive integers and a target integer t, the Subset Sum problem asks to determine whether there exists a subset of S that sums up to t. The current best deterministic algorithm, by Koiliaris and Xu [SODA'17], runs in O (sqrt n t) time, where O hides poly-logarithm factors. Bringmann [SODA'17] later gave a randomized O (n + t) time algorithm using two-stage color-coding. The O (n+t) running time is believed to be near-optimal. In this paper, we present a simple and elegant randomized algorithm for Subset Sum in O (n + t) time. Our new algorithm actually solves its counting version modulo prime p>t, by manipulating generating functions using FFT."
]
} |
1905.02424 | 2943918691 | In the Equal-Subset-Sum problem, we are given a set @math of @math integers and the problem is to decide if there exist two disjoint nonempty subsets @math , whose elements sum up to the same value. The problem is NP-complete. The state-of-the-art algorithm runs in @math time and is based on the meet-in-the-middle technique. In this paper, we improve upon this algorithm and give @math worst case Monte Carlo algorithm. This answers the open problem from Woeginger's inspirational survey. Additionally, we analyse the polynomial space algorithm for Equal-Subset-Sum. A naive polynomial space algorithm for Equal-Subset-Sum runs in @math time. With read-only access to the exponentially many random bits, we show a randomized algorithm running in @math time and polynomial space. | In 1978 Knapsack problems were introduced into cryptography by . They introduced a Knapsack based public key cryptosystem. Subsequently, their scheme was broken by using lattice reduction @cite_12 . After that, many knapsack cryptosystems were broken with low-density attacks @cite_23 @cite_4 . | {
"cite_N": [
"@cite_4",
"@cite_23",
"@cite_12"
],
"mid": [
"1968182591",
"2150780437",
"2012632515"
],
"abstract": [
"The general subset sum problem is NP-complete. However, there are two algorithms, one due to Brickell and the other to Lagarias and Odlyzko, which in polynomial time solve almost all subset sum problems of sufficiently low density. Both methods rely on basis reduction algorithms to find short non-zero vectors in special lattices. The Lagarias-Odlyzko algorithm would solve almost all subset sum problems of density<0.6463 ... in polynomial time if it could invoke a polynomial-time algorithm for finding the shortest non-zero vector in a lattice. This paper presents two modifications of that algorithm, either one of which would solve almost all problems of density<0.9408 ... if it could find shortest non-zero vectors in lattices. These modifications also yield dramatic improvements in practice when they are combined with known lattice basis reduction algorithms.",
"The subset sum problem is to decide whether or not the 0-1 integer programming problem ∑_{i=1}^n a_i x_i = M, ∀i, x_i = 0 or 1, has a solution, where the a_i and M are given positive integers. This problem is NP-complete, and the difficulty of solving it is the basis of public-key cryptosystems of knapsack type. An algorithm is proposed that searches for a solution when given an instance of the subset sum problem. This algorithm always halts in polynomial time but does not always find a solution when one exists. It converts the problem to one of finding a particular short vector v in a lattice, and then uses a lattice basis reduction algorithm due to A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovasz to attempt to find v. The performance of the proposed algorithm is analyzed. Let the density d of a subset sum problem be defined by d = n / log_2(max_i a_i). Then for “almost all” problems of density d < 1/n, it is proved that the lattice basis reduction algorithm locates v. Extensive computational tests of the algorithm suggest that it works for densities d < d_c(n), where d_c(n) is a cutoff value that is substantially larger than 1/n. This method gives a polynomial time attack on knapsack public-key cryptosystems that can be expected to break them if they transmit information at rates below d_c(n), as n → ∞.",
"The Merkle-Hellman cryptosystem is one of the two major public-key cryptosystems proposed so far. It is shown that the basic variant of this cryptosystem, in which the elements of the public key are modular multiples of a superincreasing sequence, is breakable in polynomial time."
]
} |
1905.02424 | 2943918691 | In the Equal-Subset-Sum problem, we are given a set @math of @math integers and the problem is to decide if there exist two disjoint nonempty subsets @math , whose elements sum up to the same value. The problem is NP-complete. The state-of-the-art algorithm runs in @math time and is based on the meet-in-the-middle technique. In this paper, we improve upon this algorithm and give @math worst case Monte Carlo algorithm. This answers the open problem from Woeginger's inspirational survey. Additionally, we analyse the polynomial space algorithm for Equal-Subset-Sum. A naive polynomial space algorithm for Equal-Subset-Sum runs in @math time. With read-only access to the exponentially many random bits, we show a randomized algorithm running in @math time and polynomial space. | and showed @math @cite_22 . The first nontrivial upper bound on @math was @math (for sufficiently large @math ) @cite_1 . Subsequently, proved that @math and showed @math . offered 500 dollars for a proof or disproof of the conjecture that @math for some constant @math . | {
"cite_N": [
"@cite_1",
"@cite_22"
],
"mid": [
"1970957",
"1663749032"
],
"abstract": [
"A motion picture film which includes a row of sprocket holes along one marginal edge portion and a longitudinally extending area carrying a photo-sensitive layer for photographing a selected scene, or series of scenes in a succession of stationary frames, the film being characterized in that one of its marginal edge portions carries a latent-image numerical identification of each of the successive frames in binary code form adjacent each said frame. A post-production film-making system is also described with means for transferring the binary numbers from the film to a magnetic medium and means for reading and visually displaying in arabic form the numbers on the film and the magnetic medium.",
"Publisher Summary This chapter presents a survey of problems in combinatorial number theory. The chapter discusses problems connected with Van der Waerden's and Szemeredi's theorem. Thus the chapter gives details only if there is some important new development. According to Alfred Brauer, Schur conjectured more than 50 years ago that divided the integers into two classes at least one of them contains arbitrarily long arithmetic progressions."
]
} |
1905.02424 | 2943918691 | In the Equal-Subset-Sum problem, we are given a set @math of @math integers and the problem is to decide if there exist two disjoint nonempty subsets @math , whose elements sum up to the same value. The problem is NP-complete. The state-of-the-art algorithm runs in @math time and is based on the meet-in-the-middle technique. In this paper, we improve upon this algorithm and give @math worst case Monte Carlo algorithm. This answers the open problem from Woeginger's inspirational survey. Additionally, we analyse the polynomial space algorithm for Equal-Subset-Sum. A naive polynomial space algorithm for Equal-Subset-Sum runs in @math time. With read-only access to the exponentially many random bits, we show a randomized algorithm running in @math time and polynomial space. | has some connections to the study of the structure of DNA molecules @cite_15 @cite_19 @cite_26 . considered @math - , in which we need to find @math disjoint subsets of a given set with the same sum. They obtained several algorithms that depend on certain restrictions of the sets (e.g., small cardinality of a solution). In the following work, considered other variants of and proved their NP-hardness. | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_26"
],
"mid": [
"1598605111",
"1865467296",
"2102000201"
],
"abstract": [
"The EQUAL SUM SUBSETS problem, where we are given a set of positive integers and we ask for two nonempty disjoint subsets such that their elements add up to the same total, is known to be NP-hard. In this paper we give (pseudo-)polynomial algorithms and or (strong) NP-hardness proofs for several natural variations of EQUAL SUM SUBSETS. Among others we present (i) a framework for obtaining NP-hardness proofs and pseudopolynomial time algorithms for EQUAL SUM SUBSETS variations, which we apply to variants of the problem with additional selection restrictions, (ii) a proof of NP-hardness and a pseudo-polynomial time algorithm for the case where we ask for two subsets such that the ratio of their sums is some fixed rational r > 0, (iii) a pseudo-polynomial time algorithm for finding k subsets of equal sum, with k = O(1), and a proof of strong NP-hardness for the same problem with k = Ω(n), (iv) algorithms and hardness results for finding k equal sum subsets under the additional requirement that the subsets should be of equal cardinality. Our results are a step towards determining the dividing lines between polynomial time solvability, pseudo-polynomial time solvability, and strong NP-completeness of subset-sum related problems.",
"The problem to find the coordinates of n points on a line such that the pairwise distances of the points form a given multi-set of (n 2 ) distances is known as Partial Digest problem, which occurs for instance in DNA physical mapping and de novo sequencing of proteins. Although Partial Digest was – as a combinatorial problem – already proposed in the 1930’s, its computational complexity is still unknown.",
""
]
} |
1905.02163 | 2944041982 | Combining CNN with CRF for modeling dependencies between pixel labels is a popular research direction. This task is far from trivial, especially if end-to-end training is desired. In this paper, we propose a novel simple approach to CNN+CRF combination. In particular, we propose to simulate a CRF regularizer with a trainable module that has standard CNN architecture. We call this module a CRF Simulator. We can automatically generate an unlimited amount of ground truth for training such CRF Simulator without any user interaction, provided we have an efficient algorithm for optimization of the actual CRF regularizer. After our CRF Simulator is trained, it can be directly incorporated as part of any larger CNN architecture, enabling a seamless end-to-end training. In particular, the other modules can learn parameters that are more attuned to the performance of the CRF Simulator module. We demonstrate the effectiveness of our approach on the task of salient object segmentation regularized with the standard binary CRF energy. In contrast to previous work we do not need to develop and implement the complex mechanics of optimizing a specific CRF as part of CNN. In fact, our approach can be easily extended to other CRF energies, including multi-label. To the best of our knowledge we are the first to study the question of whether the output of CNNs can have regularization properties of CRFs. | The work in @cite_22 was among the first to use a CRF for post-processing of the results obtained with CNN. In particular, they use CNNs as a multi-scale feature extractor over superpixels. They form a superpixel-based CRF, and obtain the final segmentation using the expansion algorithm from @cite_25 . The work in @cite_15 uses the Fully Convolutional Networks (FCN) from @cite_27 to learn the unary terms for a Gaussian edge full-CRF @cite_33 . Then mean field annealing is used for CRF inference as a post-processing step. 
While simple, these approaches do not support end-to-end training. The CRF parameters are not learned from the training data together with CNN parameters. | {
"cite_N": [
"@cite_22",
"@cite_33",
"@cite_27",
"@cite_15",
"@cite_25"
],
"mid": [
"2022508996",
"",
"2395611524",
"1923697677",
"2143516773"
],
"abstract": [
"Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.",
"",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, improve on the previous best result in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional networks achieve improved segmentation of PASCAL VOC (30 relative improvement to 67.2 mean IU on 2012), NYUDv2, SIFT Flow, and PASCAL-Context, while inference takes one tenth of a second for a typical image.",
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"Many tasks in computer vision involve assigning a label (such as disparity) to every pixel. A common constraint is that the labels should vary smoothly almost everywhere while preserving sharp discontinuities that may exist, e.g., at object boundaries. These tasks are naturally stated in terms of energy minimization. The authors consider a wide class of energies with various smoothness constraints. Global minimization of these energy functions is NP-hard even in the simplest discontinuity-preserving case. Therefore, our focus is on efficient approximation algorithms. We present two algorithms based on graph cuts that efficiently find a local minimum with respect to two types of large moves, namely expansion moves and swap moves. These moves can simultaneously change the labels of arbitrarily large sets of pixels. In contrast, many standard algorithms (including simulated annealing) use small moves where only one pixel changes its label at a time. Our expansion algorithm finds a labeling within a known factor of the global minimum, while our swap algorithm handles more general energy functions. Both of these algorithms allow important cases of discontinuity preserving energies. We experimentally demonstrate the effectiveness of our approach for image restoration, stereo and motion. On real data with ground truth, we achieve 98 percent accuracy."
]
} |
1905.02163 | 2944041982 | Combining CNN with CRF for modeling dependencies between pixel labels is a popular research direction. This task is far from trivial, especially if end-to-end training is desired. In this paper, we propose a novel simple approach to CNN+CRF combination. In particular, we propose to simulate a CRF regularizer with a trainable module that has standard CNN architecture. We call this module a CRF Simulator. We can automatically generate an unlimited amount of ground truth for training such CRF Simulator without any user interaction, provided we have an efficient algorithm for optimization of the actual CRF regularizer. After our CRF Simulator is trained, it can be directly incorporated as part of any larger CNN architecture, enabling a seamless end-to-end training. In particular, the other modules can learn parameters that are more attuned to the performance of the CRF Simulator module. We demonstrate the effectiveness of our approach on the task of salient object segmentation regularized with the standard binary CRF energy. In contrast to previous work we do not need to develop and implement the complex mechanics of optimizing a specific CRF as part of CNN. In fact, our approach can be easily extended to other CRF energies, including multi-label. To the best of our knowledge we are the first to study the question of whether the output of CNNs can have regularization properties of CRFs. | While powerful, the RNN-CNN approach requires hand-designing a special architecture for each new CRF one wishes to model. This is a very difficult task, and thus far, other than the CRF model with mean field optimization, there are few other examples. The only other example we are aware of is the work of @cite_28 , which describes how to implement a submodular CRF layer within a CNN architecture. Their approach is technically difficult and is limited to submodular functions. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2751681791"
],
"abstract": [
"Can we incorporate discrete optimization algorithms within modern machine learning models? For example, is it possible to use in deep architectures a layer whose output is the minimal cut of a parametrized graph? Given that these models are trained end-to-end by leveraging gradient information, the introduction of such layers seems very challenging due to their non-continuous output. In this paper we focus on the problem of submodular minimization, for which we show that such layers are indeed possible. The key idea is that we can continuously relax the output without sacrificing guarantees. We provide an easily computable approximation to the Jacobian complemented with a complete theoretical analysis. Finally, these contributions let us experimentally learn probabilistic log-supermodular models via a bi-level variational inference formulation."
]
} |
1905.02163 | 2944041982 | Combining CNN with CRF for modeling dependencies between pixel labels is a popular research direction. This task is far from trivial, especially if end-to-end training is desired. In this paper, we propose a novel simple approach to CNN+CRF combination. In particular, we propose to simulate a CRF regularizer with a trainable module that has standard CNN architecture. We call this module a CRF Simulator. We can automatically generate an unlimited amount of ground truth for training such CRF Simulator without any user interaction, provided we have an efficient algorithm for optimization of the actual CRF regularizer. After our CRF Simulator is trained, it can be directly incorporated as part of any larger CNN architecture, enabling a seamless end-to-end training. In particular, the other modules can learn parameters that are more attuned to the performance of the CRF Simulator module. We demonstrate the effectiveness of our approach on the task of salient object segmentation regularized with the standard binary CRF energy. In contrast to previous work we do not need to develop and implement the complex mechanics of optimizing a specific CRF as part of CNN. In fact, our approach can be easily extended to other CRF energies, including multi-label. To the best of our knowledge we are the first to study the question of whether the output of CNNs can have regularization properties of CRFs. | In @cite_32 , they model MRF potentials as deep features using a structured learning framework. In this framework, computing exact gradient descent updates is computationally infeasible, and they resort to various approximations, such as using local belief functions instead of the true marginals. While theoretically interesting and amenable to end-to-end training, their approach relies on a number of approximations and is complex to implement. | {
"cite_N": [
"@cite_32"
],
"mid": [
"2950774024"
],
"abstract": [
"Many problems in real-world applications involve predicting several random variables which are statistically related. Markov random fields (MRFs) are a great mathematical tool to encode such relationships. The goal of this paper is to combine MRFs with deep learning algorithms to estimate complex representations while taking into account the dependencies between the output random variables. Towards this goal, we propose a training algorithm that is able to learn structured models jointly with deep features that form the MRF potentials. Our approach is efficient as it blends learning and inference and makes use of GPU acceleration. We demonstrate the effectiveness of our algorithm in the tasks of predicting words from noisy images, as well as multi-class classification of Flickr photographs. We show that joint learning of the deep features and the MRF parameters results in significant performance gains."
]
} |
1905.02163 | 2944041982 | Combining CNN with CRF for modeling dependencies between pixel labels is a popular research direction. This task is far from trivial, especially if end-to-end training is desired. In this paper, we propose a novel simple approach to CNN+CRF combination. In particular, we propose to simulate a CRF regularizer with a trainable module that has standard CNN architecture. We call this module a CRF Simulator. We can automatically generate an unlimited amount of ground truth for training such CRF Simulator without any user interaction, provided we have an efficient algorithm for optimization of the actual CRF regularizer. After our CRF Simulator is trained, it can be directly incorporated as part of any larger CNN architecture, enabling a seamless end-to-end training. In particular, the other modules can learn parameters that are more attuned to the performance of the CRF Simulator module. We demonstrate the effectiveness of our approach on the task of salient object segmentation regularized with the standard binary CRF energy. In contrast to previous work we do not need to develop and implement the complex mechanics of optimizing a specific CRF as part of CNN. In fact, our approach can be easily extended to other CRF energies, including multi-label. To the best of our knowledge we are the first to study the question of whether the output of CNNs can have regularization properties of CRFs. | In @cite_26 , they also proposed a method for CRF-CNN training based on structured learning. As in @cite_32 , the main difficulty of this approach is efficiently back-propagating through the CRF module. For each specific CRF model, a new approach has to be developed from scratch. | {
"cite_N": [
"@cite_26",
"@cite_32"
],
"mid": [
"2559178909",
"2950774024"
],
"abstract": [
"We propose a novel and principled hybrid CNN+CRF model for stereo estimation. Our model allows to exploit the advantages of both, convolutional neural networks (CNNs) and conditional random fields (CRFs) in an unified approach. The CNNs compute expressive features for matching and distinctive color edges, which in turn are used to compute the unary and binary costs of the CRF. For inference, we apply a recently proposed highly parallel dual block descent algorithm which only needs a small fixed number of iterations to compute a high-quality approximate minimizer. As the main contribution of the paper, we propose a theoretically sound method based on the structured output support vector machine (SSVM) to train the hybrid CNN+CRF model on large-scale data end-to-end. Our trained models perform very well despite the fact that we are using shallow CNNs and do not apply any kind of post-processing to the final output of the CRF. We evaluate our combined models on challenging stereo benchmarks such as Middlebury 2014 and Kitti 2015 and also investigate the performance of each individual component.",
"Many problems in real-world applications involve predicting several random variables which are statistically related. Markov random fields (MRFs) are a great mathematical tool to encode such relationships. The goal of this paper is to combine MRFs with deep learning algorithms to estimate complex representations while taking into account the dependencies between the output random variables. Towards this goal, we propose a training algorithm that is able to learn structured models jointly with deep features that form the MRF potentials. Our approach is efficient as it blends learning and inference and makes use of GPU acceleration. We demonstrate the effectiveness of our algorithm in the tasks of predicting words from noisy images, as well as multi-class classification of Flickr photographs. We show that joint learning of the deep features and the MRF parameters results in significant performance gains."
]
} |
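The CRF Simulator row above hinges on one idea: given an exact optimizer for the actual CRF energy, one can generate unlimited (input, regularized output) training pairs for a CNN module. A minimal sketch of that data-generation loop, using a toy binary chain CRF solved exactly by Viterbi dynamic programming (the function names, the Potts pairwise term, and the chain topology are illustrative assumptions, not the paper's actual grid CRF):

```python
import random

def solve_chain_crf(unary, lam=1.0):
    """Exact MAP labeling for a binary chain CRF via Viterbi DP.

    unary[i][k] is the cost of assigning label k to node i; `lam` is a
    Potts penalty paid whenever neighboring nodes take different labels.
    """
    n = len(unary)
    cost = [list(unary[0])] + [[0.0, 0.0] for _ in range(n - 1)]
    back = [[0, 0] for _ in range(n)]
    for i in range(1, n):
        for k in (0, 1):
            # Cost of reaching label k at node i from each predecessor label.
            prev = [cost[i - 1][j] + (lam if j != k else 0.0) for j in (0, 1)]
            back[i][k] = 0 if prev[0] <= prev[1] else 1
            cost[i][k] = unary[i][k] + min(prev)
    # Backtrack the optimal labeling.
    labels = [0] * n
    labels[-1] = 0 if cost[-1][0] <= cost[-1][1] else 1
    for i in range(n - 1, 0, -1):
        labels[i - 1] = back[i][labels[i]]
    return labels

def make_training_pair(n=16, lam=1.0):
    """Random unaries in, exactly regularized labeling out -- an
    unlimited source of (input, target) pairs for a CRF Simulator."""
    unary = [[random.random(), random.random()] for _ in range(n)]
    return unary, solve_chain_crf(unary, lam)
```

With `lam=0` the solver just picks the per-node cheapest label; increasing `lam` smooths isolated flips away, which is exactly the regularization behavior the simulator CNN is trained to mimic.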
1905.02163 | 2944041982 | Combining CNN with CRF for modeling dependencies between pixel labels is a popular research direction. This task is far from trivial, especially if end-to-end training is desired. In this paper, we propose a novel simple approach to CNN+CRF combination. In particular, we propose to simulate a CRF regularizer with a trainable module that has standard CNN architecture. We call this module a CRF Simulator. We can automatically generate an unlimited amount of ground truth for training such CRF Simulator without any user interaction, provided we have an efficient algorithm for optimization of the actual CRF regularizer. After our CRF Simulator is trained, it can be directly incorporated as part of any larger CNN architecture, enabling a seamless end-to-end training. In particular, the other modules can learn parameters that are more attuned to the performance of the CRF Simulator module. We demonstrate the effectiveness of our approach on the task of salient object segmentation regularized with the standard binary CRF energy. In contrast to previous work we do not need to develop and implement the complex mechanics of optimizing a specific CRF as part of CNN. In fact, our approach can be easily extended to other CRF energies, including multi-label. To the best of our knowledge we are the first to study the question of whether the output of CNNs can have regularization properties of CRFs. | An alternative approach is to incorporate regularization directly into the loss function. For example, the approach in @cite_8 incorporates normalized cut regularization into the loss function for the problem of weakly supervised learning. However, incorporating length regularization into the loss during fully supervised learning surprisingly leads to inferior performance compared with the unregularized loss function (private communication with the authors, also confirmed by our experiments). | {
"cite_N": [
"@cite_8"
],
"mid": [
"2963198662"
],
"abstract": [
"Most recent semantic segmentation methods train deep convolutional neural networks with fully annotated masks requiring pixel-accuracy for good quality training. Common weakly-supervised approaches generate full masks from partial input (e.g. scribbles or seeds) using standard interactive segmentation methods as preprocessing. But, errors in such masks result in poorer training since standard loss functions (e.g. cross-entropy) do not distinguish seeds from potentially mislabeled other pixels. Inspired by the general ideas in semi-supervised learning, we address these problems via a new principled loss function evaluating network output with criteria standard in \"shallow\" segmentation, e.g. normalized cut. Unlike prior work, the cross entropy part of our loss evaluates only seeds where labels are known while normalized cut softly evaluates consistency of all pixels. We focus on normalized cut loss where dense Gaussian kernel is efficiently implemented in linear time by fast Bilateral filtering. Our normalized cut loss approach to segmentation brings the quality of weakly-supervised training significantly closer to fully supervised methods."
]
} |
1905.02219 | 2953308912 | In this work, we describe practical lessons we have learned from successfully using contextual bandits (CBs) to improve key business metrics of the Microsoft Virtual Agent for customer support. While our current use cases focus on single-step reinforcement learning (RL), mostly in the domain of natural language processing and information retrieval, we believe many of our findings are generally applicable. Through this article, we highlight certain issues that RL practitioners may encounter in similar types of applications, as well as offer practical solutions to these challenges. | While there exist many high-quality open source implementations of RL algorithms such as OpenAI Baselines @cite_7 and Dopamine @cite_3 , our focus is not on setups where the environment is a simulator, but rather the real world. The projects that are closest to our goals are the Decision Service @cite_13 , Horizon @cite_8 , and RLlib @cite_10 . | {
"cite_N": [
"@cite_13",
"@cite_7",
"@cite_8",
"@cite_3",
"@cite_10"
],
"mid": [
"2614208603",
"",
"2898621204",
"2905342215",
"2779040504"
],
"abstract": [
"Applications and systems are constantly faced with decisions that require picking from a set of actions based on contextual information. Reinforcement-based learning algorithms such as contextual bandits can be very effective in these settings, but applying them in practice is fraught with technical debt, and no general system exists that supports them completely. We address this and create the first general system for contextual learning, called the Decision Service. Existing systems often suffer from technical debt that arises from issues like incorrect data collection and weak debuggability, issues we systematically address through our ML methodology and system abstractions. The Decision Service enables all aspects of contextual bandit learning using four system abstractions which connect together in a loop: explore (the decision space), log, learn, and deploy. Notably, our new explore and log abstractions ensure the system produces correct, unbiased data, which our learner uses for online learning and to enable real-time safeguards, all in a fully reproducible manner. The Decision Service has a simple user interface and works with a variety of applications: we present two live production deployments for content recommendation that achieved click-through improvements of 25-30%, another with an 18% revenue lift in the landing page, and ongoing applications in tech support and machine failure handling. The service makes real-time decisions and learns continuously and scalably, while significantly lowering technical debt.",
"",
"In this paper we present Horizon, Facebook's open source applied reinforcement learning (RL) platform. Horizon is an end-to-end platform designed to solve industry applied RL problems where datasets are large (millions to billions of observations), the feedback loop is slow (vs. a simulator), and experiments must be done with care because they don't run in a simulator. Unlike other RL platforms, which are often designed for fast prototyping and experimentation, Horizon is designed with production use cases as top of mind. The platform contains workflows to train popular deep RL algorithms and includes data preprocessing, feature transformation, distributed training, counterfactual policy evaluation, optimized serving, and a model-based data understanding tool. We also showcase and describe real examples where reinforcement learning models trained with Horizon significantly outperformed and replaced supervised learning systems at Facebook.",
"Deep reinforcement learning (deep RL) research has grown significantly in recent years. A number of software offerings now exist that provide stable, comprehensive implementations for benchmarking. At the same time, recent deep RL research has become more diverse in its goals. In this paper we introduce Dopamine, a new research framework for deep RL that aims to support some of that diversity. Dopamine is open-source, TensorFlow-based, and provides compact and reliable implementations of some state-of-the-art deep RL agents. We complement this offering with a taxonomy of the different research objectives in deep RL research. While by no means exhaustive, our analysis highlights the heterogeneity of research in the field, and the value of frameworks such as ours.",
"Reinforcement learning (RL) algorithms involve the deep nesting of distinct components, where each component typically exhibits opportunities for distributed computation. Current RL libraries offer parallelism at the level of the entire program, coupling all the components together and making existing implementations difficult to extend, combine, and reuse. We argue for building composable RL components by encapsulating parallelism and resource requirements within individual components, which can be achieved by building on top of a flexible task-based programming model. We demonstrate this principle by building Ray RLLib on top of Ray and show that we can implement a wide range of state-of-the-art algorithms by composing and reusing a handful of standard components. This composability does not come at the cost of performance --- in our experiments, RLLib matches or exceeds the performance of highly optimized reference implementations. Ray RLLib is available as part of Ray at this https URL"
]
} |
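The contextual-bandit row above revolves around the explore/log/learn loop: the policy must both explore and record the probability (propensity) of each chosen action so that logged data stays usable for unbiased off-policy learning. A minimal epsilon-greedy sketch (illustrative only; the Decision Service, Horizon, and RLlib all use far richer policies, and the context is ignored here for brevity):

```python
import random

class EpsilonGreedyBandit:
    """Minimal epsilon-greedy bandit with propensity logging."""

    def __init__(self, n_actions, epsilon=0.1):
        self.n_actions = n_actions
        self.epsilon = epsilon
        # Running mean-reward estimate per action.
        self.counts = [0] * n_actions
        self.values = [0.0] * n_actions

    def choose(self, context=None):
        """Return (action, propensity). Logging the propensity is what
        makes the collected data correct for off-policy evaluation."""
        best = max(range(self.n_actions), key=lambda a: self.values[a])
        if random.random() < self.epsilon:
            action = random.randrange(self.n_actions)
        else:
            action = best
        # Probability this policy assigns to the chosen action:
        # uniform exploration mass, plus the greedy mass if it is best.
        propensity = self.epsilon / self.n_actions
        if action == best:
            propensity += 1.0 - self.epsilon
        return action, propensity

    def update(self, action, reward):
        """Incremental mean update for the chosen action."""
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]
```

The (context, action, propensity, reward) tuples logged by `choose`/`update` correspond to the explore and log abstractions the Decision Service abstract describes.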
1905.02250 | 2949372430 | Undetectable wireless transmissions are fundamental to avoid eavesdroppers. To address this issue, wireless steganography hides covert information inside primary information by slightly modifying the transmitted waveform such that primary information will still be decodable, while covert information will be seen as noise by agnostic receivers. Since the addition of covert information inevitably decreases the SNR of the primary transmission, key challenges in wireless steganography are: i) to assess the impact of the covert channel on the primary channel as a function of different channel conditions; and ii) to make sure that the covert channel is undetectable. Existing approaches are protocol-specific, also we notice that existing wireless technologies rely on phase-keying modulations that in most cases do not use the channel up to its Shannon capacity. Therefore, the residual capacity can be leveraged to implement a wireless system based on a pseudo-noise asymmetric shift keying (PN-ASK) modulation, where covert symbols are mapped by shifting the amplitude of primary symbols. This way, covert information will be undetectable, since a receiver expecting phase-modulated symbols will see their shift in amplitude as an effect of channel path loss degradation. We first investigate the SER of PN-ASK as a function of the channel; then, we find the optimal PN-ASK parameters that optimize primary and covert throughput under different channel condition. We evaluate the throughput performance and undetectability of PN-ASK through extensive simulations and on an experimental testbed based on USRP N210 software-defined radios. We show that PN-ASK improves the throughput by more than 8x with respect to prior art. Finally, we demonstrate through experiments that PN-ASK is able to transmit covert data on top of IEEE 802.11g frames, which are correctly decoded by an off-the-shelf laptop WiFi. 
| The application of steganography to design covert wireless communication systems has received some attention over the last few years @cite_21 @cite_4 @cite_3 @cite_18 @cite_19 . However, only a few works have focused on the design of general-purpose, efficient, and undetectable covert wireless communication systems. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_21",
"@cite_3",
"@cite_19"
],
"mid": [
"2127907890",
"1733711556",
"2031029123",
"1978627933",
""
],
"abstract": [
"In many applications, wireless sensor networks need to secure information. Actual researchs found efficient solutions for this kind of network, principally by using cryptography to secure the data transfer. However an encrypted information send by the network can be sufficient to preventan attacker, who eavesdrops the network, that somethingimportant has been detected. To avoid this situation, we propose another way to secure wireless sensor networks by using steganography, specifically by hiding data in the MAC layer of the 802.15.4 protocol. We show that this solution can be an energy-efficient way with a good latency to hide data in a wireless sensor network.",
"Network steganography is the art of hiding secret information within innocent network transmissions. Recent findings indicate that novel malware is increasingly using network steganography. Similarly, other malicious activities can profit from network steganography, such as data leakage or the exchange of pedophile data. This paper provides an introduction to network steganography and highlights its potential application for harmful purposes. We discuss the issues related to countering network steganography in practice and provide an outlook on further research directions and problems.",
"Methods for embedding secret data are more sophisticated than their ancient predecessors, but the basic principles remain unchanged.",
"The article discusses basic principles of network steganography, which is a comparatively new research subject in the area of information hiding, followed by a concise overview and classification of network steganographic methods and techniques.",
""
]
} |
1905.02250 | 2949372430 | Undetectable wireless transmissions are fundamental to avoid eavesdroppers. To address this issue, wireless steganography hides covert information inside primary information by slightly modifying the transmitted waveform such that primary information will still be decodable, while covert information will be seen as noise by agnostic receivers. Since the addition of covert information inevitably decreases the SNR of the primary transmission, key challenges in wireless steganography are: i) to assess the impact of the covert channel on the primary channel as a function of different channel conditions; and ii) to make sure that the covert channel is undetectable. Existing approaches are protocol-specific, also we notice that existing wireless technologies rely on phase-keying modulations that in most cases do not use the channel up to its Shannon capacity. Therefore, the residual capacity can be leveraged to implement a wireless system based on a pseudo-noise asymmetric shift keying (PN-ASK) modulation, where covert symbols are mapped by shifting the amplitude of primary symbols. This way, covert information will be undetectable, since a receiver expecting phase-modulated symbols will see their shift in amplitude as an effect of channel path loss degradation. We first investigate the SER of PN-ASK as a function of the channel; then, we find the optimal PN-ASK parameters that optimize primary and covert throughput under different channel condition. We evaluate the throughput performance and undetectability of PN-ASK through extensive simulations and on an experimental testbed based on USRP N210 software-defined radios. We show that PN-ASK improves the throughput by more than 8x with respect to prior art. Finally, we demonstrate through experiments that PN-ASK is able to transmit covert data on top of IEEE 802.11g frames, which are correctly decoded by an off-the-shelf laptop WiFi. 
| Classen et al. @cite_13 analyze different covert channels over IEEE 802.11 networks, and show that it is feasible to transmit covert information on top of "redundant" information such as the short and long training sequences. Similarly, the authors of @cite_9 , @cite_2 and @cite_16 encode covert information by leveraging, respectively, the cyclic prefix of OFDM symbols, the OFDM frame padding mechanisms and the redundancy introduced by error correction coding. Direct sequence spread spectrum (DSSS) steganography over IEEE 802.15.4 communications has been investigated in @cite_20 , where covert information is effectively transmitted by intentionally generating errors in the DSSS sequence. However, the evaluation is only theoretical and no experiments on a practical testbed were conducted. Power allocation over a set of subcarriers is used in @cite_15 to transmit covert data over AWGN channels. However, the authors conclude that such an approach achieves zero-rate transmission when a large number of subcarriers is considered. | {
"cite_N": [
"@cite_9",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_20"
],
"mid": [
"2022971840",
"2952236795",
"581334404",
"",
"789579324",
""
],
"abstract": [
"This paper presents a proposal of covert steganographic channels in high-speed IEEE 802.11n networks. The method is based on the modification of cyclic prefixes in OFDM (Orthogonal Frequency-Division Multiplexing) symbols. This proposal provides the highest hidden transmission known in the state of the art. This paper includes theoretical analysis and simulation results of the presented steganographic system performance. The simulation performance was compared with other known approaches in the literature.",
"This paper presents a new steganographic method called WiPad (Wireless Padding). It is based on the insertion of hidden data into the padding of frames at the physical layer of WLANs (Wireless Local Area Networks). A performance analysis based on a Markov model, previously introduced and validated by the authors in [10], is provided for the method in relation to the IEEE 802.11a/g standards. Its results prove that maximum steganographic bandwidth for WiPad is as high as 1.1 Mbit/s for data frames and 0.44 Mbit/s for acknowledgment (ACK) frames. To the authors' best knowledge this is the most capacious of all the known steganographic network channels.",
"Widely deployed encryption-based security prevents unauthorized decoding, but does not ensure undetectability of communication. However, covert, or low probability of detection/intercept communication is crucial in many scenarios ranging from covert military operations and the organization of social unrest, to privacy protection for users of wireless networks. In addition, encrypted data or even just the transmission of a signal can arouse suspicion, and even the most theoretically robust encryption can often be defeated by a determined adversary using non-computational methods such as side-channel analysis. Various covert communication techniques have been developed to address these concerns, including steganography for finite-alphabet noiseless applications and spread-spectrum systems for wireless communications. After reviewing these covert communication systems, this article discusses new results on the fundamental limits of their capabilities, and provides a vision for the future of such systems as well.",
"",
"Wireless covert channels promise to exfiltrate information with high bandwidth by circumventing traditional access control mechanisms. Ideally, they are only accessible by the intended recipient and---for regular system users/operators---indistinguishable from normal operation. While a number of theoretical and simulation studies exist in literature, the practical aspects of WiFi covert channels are not well understood. Yet, it is particularly the practical design and implementation aspect of wireless systems that provides attackers with the latitude to establish covert channels: the ability to operate under adverse conditions and to tolerate a high amount of signal variations. Moreover, covert physical receivers do not have to be addressed within wireless frames, but can simply eavesdrop on the transmission. In this work, we analyze the possibilities to establish covert channels in WiFi systems with emphasis on exploiting physical layer characteristics. We discuss design alternatives for selected covert channel approaches and study their feasibility in practice. By means of an extensive performance analysis, we compare the covert channel bandwidth. We further evaluate the possibility of revealing the introduced covert channels based on different detection capabilities.",
""
]
} |
1905.02250 | 2949372430 | Undetectable wireless transmissions are fundamental to avoid eavesdroppers. To address this issue, wireless steganography hides covert information inside primary information by slightly modifying the transmitted waveform such that primary information will still be decodable, while covert information will be seen as noise by agnostic receivers. Since the addition of covert information inevitably decreases the SNR of the primary transmission, key challenges in wireless steganography are: i) to assess the impact of the covert channel on the primary channel as a function of different channel conditions; and ii) to make sure that the covert channel is undetectable. Existing approaches are protocol-specific, also we notice that existing wireless technologies rely on phase-keying modulations that in most cases do not use the channel up to its Shannon capacity. Therefore, the residual capacity can be leveraged to implement a wireless system based on a pseudo-noise asymmetric shift keying (PN-ASK) modulation, where covert symbols are mapped by shifting the amplitude of primary symbols. This way, covert information will be undetectable, since a receiver expecting phase-modulated symbols will see their shift in amplitude as an effect of channel path loss degradation. We first investigate the SER of PN-ASK as a function of the channel; then, we find the optimal PN-ASK parameters that optimize primary and covert throughput under different channel condition. We evaluate the throughput performance and undetectability of PN-ASK through extensive simulations and on an experimental testbed based on USRP N210 software-defined radios. We show that PN-ASK improves the throughput by more than 8x with respect to prior art. Finally, we demonstrate through experiments that PN-ASK is able to transmit covert data on top of IEEE 802.11g frames, which are correctly decoded by an off-the-shelf laptop WiFi. 
| The closest work to ours is @cite_22 , where covert information is modulated onto WiFi QPSK primary symbols so that the symbols are seen as a "dirty" QPSK modulation at the receiver's side (see Fig. ). However, some design choices in @cite_22 make the proposed scheme less than fully general. First, the covert constellations will overlap in case of higher-order modulations (e.g., 16-QPSK), which inevitably results in throughput loss in both primary and covert channels. Conversely, we encode covert information by shifting the amplitude of a primary symbol, which does not cause overlap in higher-order modulations. Furthermore, the authors do not offer any mathematical analysis of the proposed scheme. Finally, we show through experiments in Section that PN-ASK achieves 8x the throughput of @cite_22 under the same conditions. | {
"cite_N": [
"@cite_22"
],
"mid": [
"1738992959"
],
"abstract": [
"In this paper we propose a novel approach to implement high capacity, covert channel by encoding covert information in the physical layer of common wireless communication protocols. We call our technique Dirty Constellation because we hide the covert messages within a \"dirty\" constellation that mimics noise commonly imposed by hardware imperfections and channel conditions. The cover traffic in this method is the baseband modulation constellation. We leverage the variability in the wireless channel and hardware conditions to encode the covert channel. Packet sharing techniques and pre-distortion of the modulated symbols of a decoy packet allows the transmission of a secondary covert message while making it statistically undetectable to an adversary. We demonstrate the technique by implementing it in hardware, on top of an 802.11a/g PHY layer, using a software defined radio and analyze the undetectability of the scheme through a variety of common radio measurements and statistical tests."
]
} |
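The PN-ASK rows above describe the core trick: covert bits are carried by scaling the magnitude of phase-modulated primary symbols, so a PSK receiver (which decodes phase only) sees the shift as path-loss variation. A toy baseband sketch of embed/extract (the two-level mapping, `delta`, and the fixed threshold are illustrative assumptions, not the paper's optimized parameters or pseudo-noise sequence):

```python
import cmath

def pn_ask_embed(primary_symbols, covert_bits, delta=0.2):
    """Embed one covert bit per unit-amplitude PSK symbol by scaling
    its magnitude up (bit 1) or down (bit 0). Phase is untouched, so
    the primary PSK stream remains decodable."""
    out = []
    for s, b in zip(primary_symbols, covert_bits):
        scale = 1.0 + delta if b else 1.0 - delta
        out.append(s * scale)
    return out

def pn_ask_extract(rx_symbols, threshold=1.0):
    """Covert receiver: threshold the received magnitude. In practice
    the threshold would be estimated from channel state, not fixed."""
    return [1 if abs(s) > threshold else 0 for s in rx_symbols]
```

For example, embedding into a QPSK stream leaves every symbol's phase intact while the covert receiver recovers the bits from the amplitudes, which is why an agnostic PSK demodulator cannot distinguish the covert channel from fading.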
1905.02082 | 2942668087 | Mapping and localization are essential capabilities of robotic systems. Although the majority of mapping systems focus on static environments, the deployment in real-world situations requires them to handle dynamic objects. In this paper, we propose an approach for an RGB-D sensor that is able to consistently map scenes containing multiple dynamic elements. For localization and mapping, we employ an efficient direct tracking on the truncated signed distance function (TSDF) and leverage color information encoded in the TSDF to estimate the pose of the sensor. The TSDF is efficiently represented using voxel hashing, with most computations parallelized on a GPU. For detecting dynamics, we exploit the residuals obtained after an initial registration, together with the explicit modeling of free space in the model. We evaluate our approach on existing datasets, and provide a new dataset showing highly dynamic scenes. These experiments show that our approach often surpasses other state-of-the-art dense SLAM methods. We make available our dataset with the ground truth for both the trajectory of the RGB-D sensor obtained by a motion capture system and the model of the static environment using a high-precision terrestrial laser scanner. Finally, we release our approach as open source code. | With the advent of inexpensive RGB-D cameras, many approaches for mapping using such sensors were proposed @cite_17 @cite_1 . The seminal paper of Newcombe et al. @cite_9 showed the prospects of TSDF-based RGB-D mapping by generating accurate, highly detailed maps using only depth information. It paved the way for several improvements increasing the versatility and fidelity of RGB-D mapping. As the approach relies on a fixed voxel grid, volumes that can be mapped are limited. Subsequent approaches explore compression of this grid. For instance, Steinbrücker et al. @cite_16 use an octree instead of a voxel grid.
Nießner et al. @cite_2 , on the other hand, propose to only allocate voxel blocks close to the mapped surface, and address them in constant time via hashing. Kähler et al. @cite_10 extend the idea of voxel hashing by using a hierarchy of voxel blocks with different resolutions. To alleviate the need for raycasting to generate the model image for registration, Canelhas et al. @cite_3 and Bylow et al. @cite_18 propose to directly exploit the TSDF for evaluation of the residuals and computation of the Jacobians within the error minimization. | {
"cite_N": [
"@cite_18",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_16",
"@cite_10",
"@cite_17"
],
"mid": [
"2243794092",
"1987648924",
"2805202215",
"1969291828",
"2071906076",
"2013345945",
"2275738541",
""
],
"abstract": [
"The ability to quickly acquire 3D models is an essential capability needed in many disciplines including robotics, computer vision, geodesy, and architecture. In this paper we present a novel method for real-time camera tracking and 3D reconstruction of static indoor environments using an RGB-D sensor. We show that by representing the geometry with a signed distance function (SDF), the camera pose can be efficiently estimated by directly minimizing the error of the depth images on the SDF. As the SDF contains the distances to the surface for each voxel, the pose optimization can be carried out extremely fast. By iteratively estimating the camera poses and integrating the RGB-D data in the voxel grid, a detailed reconstruction of an indoor environment can be achieved. We present reconstructions of several rooms using a hand-held sensor and from onboard an autonomous quadrocopter. Our extensive evaluation on publicly available benchmark data shows that our approach is more accurate and robust than the iterated closest point algorithm (ICP) used by KinectFusion, and yields often a comparable accuracy at much higher speed to feature-based bundle adjustment methods such as RGB-D SLAM for up to medium-sized scenes.",
"We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR), in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.",
"",
"Ego-motion estimation and environment mapping are two recurring problems in the field of robotics. In this work we propose a simple on-line method for tracking the pose of a depth camera in six degrees of freedom and simultaneously maintaining an updated 3D map, represented as a truncated signed distance function. The distance function representation implicitly encodes surfaces in 3D-space and is used directly to define a cost function for accurate registration of new data. The proposed algorithm is highly parallel and achieves good accuracy compared to state of the art methods. It is suitable for reconstructing single household items, workspace environments and small rooms at near real-time rates, making it practical for use on modern CPU hardware.",
"Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.",
"In this paper we propose a novel volumetric multi-resolution mapping system for RGB-D images that runs on a standard CPU in real-time. Our approach generates a textured triangle mesh from a signed distance function that it continuously updates as new RGB-D images arrive. We propose to use an octree as the primary data structure which allows us to represent the scene at multiple scales. Furthermore, it allows us to grow the reconstruction volume dynamically. As most space is either free or unknown, we allocate and update only those voxels that are located in a narrow band around the observed surface. In contrast to a regular grid, this approach saves enormous amounts of memory and computation time. The major challenge is to generate and maintain a consistent triangle mesh, as neighboring cells in the octree are more difficult to find and may have different resolutions. To remedy this, we present in this paper a novel algorithm that keeps track of these dependencies, and efficiently updates corresponding parts of the triangle mesh. In our experiments, we demonstrate the real-time capability on a large set of RGB-D sequences. As our approach does not require a GPU, it is well suited for applications on mobile or flying robots with limited computational resources.",
"Many modern 3D reconstruction methods accumulate information volumetrically using truncated signed distance functions. While this usually imposes a regular grid with fixed voxel size, not all parts of a scene necessarily need to be represented at the same level of detail. For example, a flat table needs less detail than a highly structured keyboard on it. We introduce a novel representation for the volumetric 3D data that uses hash functions rather than trees for accessing individual blocks of the scene, but which still provides different resolution levels. We show that our data structure provides efficient access and manipulation functions that can be very well parallelised, and also describe an automatic way of choosing appropriate resolutions for different parts of the scene. We embed the novel representation in a system for simultaneous localization and mapping from RGB-D imagery and also investigate the implications of the irregular grid on interpolation routines. Finally, we evaluate our system in experiments, demonstrating state-of-the-art representation accuracy at typical frame-rates around 100 Hz, along with 40% memory savings.",
""
]
} |
1905.02082 | 2942668087 | Mapping and localization are essential capabilities of robotic systems. Although the majority of mapping systems focus on static environments, the deployment in real-world situations requires them to handle dynamic objects. In this paper, we propose an approach for an RGB-D sensor that is able to consistently map scenes containing multiple dynamic elements. For localization and mapping, we employ an efficient direct tracking on the truncated signed distance function (TSDF) and leverage color information encoded in the TSDF to estimate the pose of the sensor. The TSDF is efficiently represented using voxel hashing, with most computations parallelized on a GPU. For detecting dynamics, we exploit the residuals obtained after an initial registration, together with the explicit modeling of free space in the model. We evaluate our approach on existing datasets, and provide a new dataset showing highly dynamic scenes. These experiments show that our approach often surpasses other state-of-the-art dense SLAM methods. We make available our dataset with the ground truth for both the trajectory of the RGB-D sensor obtained by a motion capture system and the model of the static environment using a high-precision terrestrial laser scanner. Finally, we release our approach as open source code. | Besides using a TSDF, another popular representation of the model is surfels, which are disks with a normal and a radius. Keller et al. @cite_14 use surfels to represent the model of the environment. Whelan et al. @cite_15 extend the approach with a deformation graph, which allows for long-range corrections of the map via loop closures. | {
"cite_N": [
"@cite_15",
"@cite_14"
],
"mid": [
"2250172176",
"2065906272"
],
"abstract": [
"We present a novel approach to real-time dense visual SLAM. Our system is capable of capturing comprehensive dense globally consistent surfel-based maps of room scale environments explored using an RGB-D camera in an incremental online fashion, without pose graph optimisation or any postprocessing steps. This is accomplished by using dense frame-to-model camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimisations as often as possible to stay close to the mode of the map distribution, while utilising global loop closure to recover from arbitrary drift and maintain global consistency.",
"Real-time or online 3D reconstruction has wide applicability and receives further interest due to availability of consumer depth cameras. Typical approaches use a moving sensor to accumulate depth measurements into a single model which is continuously refined. Designing such systems is an intricate balance between reconstruction quality, speed, spatial scale, and scene assumptions. Existing online methods either trade scale to achieve higher quality reconstructions of small objects/scenes. Or handle larger scenes by trading real-time performance and/or quality, or by limiting the bounds of the active reconstruction. Additionally, many systems assume a static scene, and cannot robustly handle scene motion or reconstructions that evolve to reflect scene changes. We address these limitations with a new system for real-time dense reconstruction with equivalent quality to existing online methods, but with support for additional spatial scale and robustness in dynamic scenes. Our system is designed around a simple and flat point-based representation, which directly works with the input acquired from range depth sensors, without the overhead of converting between representations. The use of points enables speed and memory efficiency, directly leveraging the standard graphics pipeline for all central operations, i.e., camera pose estimation, data association, outlier removal, fusion of depth maps into a single denoised model, and detection and update of dynamic objects. We conclude with qualitative and quantitative results that highlight robust tracking and high quality reconstructions of a diverse set of scenes at varying scales."
]
} |
1905.02082 | 2942668087 | Mapping and localization are essential capabilities of robotic systems. Although the majority of mapping systems focus on static environments, the deployment in real-world situations requires them to handle dynamic objects. In this paper, we propose an approach for an RGB-D sensor that is able to consistently map scenes containing multiple dynamic elements. For localization and mapping, we employ an efficient direct tracking on the truncated signed distance function (TSDF) and leverage color information encoded in the TSDF to estimate the pose of the sensor. The TSDF is efficiently represented using voxel hashing, with most computations parallelized on a GPU. For detecting dynamics, we exploit the residuals obtained after an initial registration, together with the explicit modeling of free space in the model. We evaluate our approach on existing datasets, and provide a new dataset showing highly dynamic scenes. These experiments show that our approach often surpasses other state-of-the-art dense SLAM methods. We make available our dataset with the ground truth for both the trajectory of the RGB-D sensor obtained by a motion capture system and the model of the static environment using a high-precision terrestrial laser scanner. Finally, we release our approach as open source code. | An alternative approach was proposed by Della Corte et al. @cite_5. Their approach registers two consecutive RGB-D frames directly upon each other by minimizing the photometric error. They integrate multiple cues such as depth, color and normal information in a unified way. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2963463264"
],
"abstract": [
"The ability to build maps is a key functionality for the majority of mobile robots. A central ingredient to most mapping systems is the registration or alignment of the recorded sensor data. In this paper, we present a general methodology for photometric registration that can deal with multiple different cues. We provide examples for registering RGBD as well as 3D LIDAR data. In contrast to popular point cloud registration approaches such as ICP our method does not rely on explicit data association and exploits multiple modalities such as raw range and image data streams. Color, depth, and normal information are handled in a uniform manner and the registration is obtained by minimizing the pixel-wise difference between two multi-channel images. We developed a flexible and general framework and implemented our approach inside that framework. We also released our implementation as open source C++ code. The experiments show that our approach allows for an accurate registration of the sensor data without requiring an explicit data association or model-specific adaptations to datasets or sensors. Our approach exploits the different cues in a natural and consistent way and the registration can be done at framerate for a typical range or imaging sensor."
]
} |
1905.02265 | 2947619145 | In order to train a computer agent to play a text-based computer game, we must represent each hidden state of the game. A Long Short-Term Memory (LSTM) model running over observed texts is a common choice for state construction. However, a normal Deep Q-learning Network (DQN) for such an agent requires millions of steps of training or more to converge. As such, an LSTM-based DQN can take tens of days to finish the training process. Though we can use a Convolutional Neural Network (CNN) as a text-encoder to construct states much faster than the LSTM, doing so without an understanding of the syntactic context of the words being analyzed can slow convergence. In this paper, we use a fast CNN to encode position- and syntax-oriented structures extracted from observed texts as states. We additionally augment the reward signal in a universal and practical manner. Together, we show that our improvements can not only speed up the process by one order of magnitude but also learn a superior agent. | Several works @cite_17 @cite_13 @cite_12 @cite_9 @cite_4 @cite_6 @cite_1 also build agents for text-based games based on the DQN approach designed for action video games @cite_15. One key consideration when learning to play text-based games is how to represent game states. Instead of using trajectories, @cite_4 @cite_5 use different methods to represent states. Some games allow the use of the special actions look and inventory to describe the current environment and the player's belongings, and use the combination of the two instead of the trajectory as states. Our method is more generalized, and avoids the use of look and inventory at every step, which are extra steps that, in certain games (e.g. games with fighting), could lead to a dead state. | {
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2604468927",
"2803814120",
"2614188047",
"2810346659",
"2963771109",
"2145339207",
"2964179661",
"2963167310",
"1934909785"
],
"abstract": [
"",
"Text-based games are suitable test-beds for designing agents that can learn by interaction with the environment in the form of natural language text. Very recently, deep reinforcement learning based agents have been successfully applied for playing text-based games. In this paper, we explore the possibility of designing a single agent to play several text-based games and of expanding the agent's vocabulary using the vocabulary of agents trained for multiple games. To this extent, we explore the application of the recently proposed policy distillation method for video games to the text-based game setting. We also use text-based games as a test-bed to analyze and hence understand the policy distillation approach in detail.",
"The domain of text-based adventure games has been recently established as a new challenge of creating the agent that is both able to understand natural language, and acts intelligently in text-described environments. In this paper, we present our approach to tackle the problem. Our agent, named Golovin, takes advantage of the limited game domain. We use genre-related corpora (including fantasy books and decompiled games) to create language models suitable to this domain. Moreover, we embed mechanisms that allow us to specify, and separately handle, important tasks as fighting opponents, managing inventory, and navigating on the game map. We validated usefulness of these mechanisms, measuring agent's performance on the set of 50 interactive fiction games. Finally, we show that our agent plays on a level comparable to the winner of the last year Text-Based Adventure AI Competition.",
"We introduce TextWorld, a sandbox learning environment for the training and evaluation of RL agents on text-based games. TextWorld is a Python library that handles interactive play-through of text games, as well as backend functions like state tracking and reward assignment. It comes with a curated list of games whose features and challenges we have analyzed. More significantly, it enables users to handcraft or automatically generate new games. Its generative mechanisms give precise control over the difficulty, scope, and language of constructed games, and can be used to relax challenges inherent to commercial text games like partial observability and sparse rewards. By generating sets of varied but similar games, TextWorld can also be used to study generalization and transfer learning. We cast text-based games in the Reinforcement Learning formalism, use our framework to develop a set of benchmark games, and evaluate several baseline agents on this set and the curated list.",
"Learning how to act when there are many available actions in each state is a challenging task for Reinforcement Learning (RL) agents, especially when many of the actions are redundant or irrelevant. In such cases, it is easier to learn which actions not to take. In this work, we propose the Action-Elimination Deep Q-Network (AE-DQN) architecture that combines a Deep RL algorithm with an Action Elimination Network (AEN) that eliminates sub-optimal actions. The AEN is trained to predict invalid actions, supervised by an external elimination signal provided by the environment. Simulations demonstrate a considerable speedup and added robustness over vanilla DQN in text-based games with over a thousand discrete actions.",
"An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.",
"This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games. Termed a deep reinforcement relevance network (DRRN), the architecture represents action and state spaces with separate embedding vectors, which are combined with an interaction function to approximate the Q-function in reinforcement learning. We evaluate the DRRN on two popular text games, showing superior performance over other deep Q-learning architectures. Experiments with paraphrased action descriptions show that the model is extracting meaning rather than simply memorizing strings of text.",
"",
"In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-of-words and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds demonstrating the importance of learning expressive representations."
]
} |
1905.02265 | 2947619145 | In order to train a computer agent to play a text-based computer game, we must represent each hidden state of the game. A Long Short-Term Memory (LSTM) model running over observed texts is a common choice for state construction. However, a normal Deep Q-learning Network (DQN) for such an agent requires millions of steps of training or more to converge. As such, an LSTM-based DQN can take tens of days to finish the training process. Though we can use a Convolutional Neural Network (CNN) as a text-encoder to construct states much faster than the LSTM, doing so without an understanding of the syntactic context of the words being analyzed can slow convergence. In this paper, we use a fast CNN to encode position- and syntax-oriented structures extracted from observed texts as states. We additionally augment the reward signal in a universal and practical manner. Together, we show that our improvements can not only speed up the process by one order of magnitude but also learn a superior agent. | Text-based games have a much larger action space to explore than video games of the type evaluated previously @cite_15, which means that the naive application of the DQN leads to slow or even failing convergence. To reduce the action space, action elimination methods that use both reinforcement learning and NLP-related motivation have been applied. @cite_5 use an action-elimination DQN framework with mathematical bounds to remove unlikely actions, an orthogonal improvement to ours that could be incorporated in future work. @cite_4 explore affordance by using Word2Vec @cite_0 to generate reasonable actions from words, learning, e.g., that eat apple is more reasonable than eat wheel. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_15",
"@cite_4"
],
"mid": [
"2153579005",
"2963771109",
"2145339207",
"2604468927"
],
"abstract": [
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"Learning how to act when there are many available actions in each state is a challenging task for Reinforcement Learning (RL) agents, especially when many of the actions are redundant or irrelevant. In such cases, it is easier to learn which actions not to take. In this work, we propose the Action-Elimination Deep Q-Network (AE-DQN) architecture that combines a Deep RL algorithm with an Action Elimination Network (AEN) that eliminates sub-optimal actions. The AEN is trained to predict invalid actions, supervised by an external elimination signal provided by the environment. Simulations demonstrate a considerable speedup and added robustness over vanilla DQN in text-based games with over a thousand discrete actions.",
"An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.",
""
]
} |