Columns:
aid: string, length 9–15
mid: string, length 7–10
abstract: string, length 78–2.56k
related_work: string, length 92–1.77k
ref_abstract: dict
1811.04387
2900179308
Convolutional networks have achieved great success in various vision tasks. This is mainly due to a considerable amount of research on network structure. In this study, instead of focusing on architectures, we focus on the convolution unit itself. The existing convolution unit has a fixed shape and is limited to observing restricted receptive fields. In an earlier work, we proposed the active convolution unit (ACU), which can freely define its shape and learn it by itself. In this paper, we present a detailed analysis of the proposed unit and show that it is an efficient representation of a sparse weight convolution. Furthermore, we expand the unit to a grouped ACU, which can observe multiple receptive fields in one layer. We found that the performance of a naive grouped convolution degrades as the number of groups increases; however, the proposed unit retains accuracy even though the number of parameters is reduced. Based on this result, we suggest a depthwise ACU, and various experiments show that our unit is efficient and can replace the existing convolutions.
Dilated convolution @cite_4 @cite_25 was proposed to enhance the resolution of the result and reduce postprocessing in semantic segmentation tasks. In our previous work @cite_47 , we introduced the ACU, which is a generalization of the naive convolution. By introducing position parameters, a convolution of any shape can be defined, and that shape can be learned through standard backpropagation.
{ "cite_N": [ "@cite_47", "@cite_4", "@cite_25" ], "mid": [ "", "1923697677", "2286929393" ], "abstract": [ "", "Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.", "State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. 
The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy." ] }
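The sparse-weight view of dilated convolution discussed above can be sketched in a few lines of numpy. This is a minimal illustration under an assumed "valid" padding, not the authors' code: dilating a 3x3 kernel with rate 2 yields an equivalent sparse 5x5 kernel, enlarging the receptive field without adding any weights.

```python
import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2-D cross-correlation with a dense kernel.
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def dilate_kernel(kernel, rate):
    # Insert (rate - 1) zeros between kernel taps: a 3x3 kernel with
    # rate 2 becomes a sparse 5x5 kernel holding the same 9 weights.
    kh, kw = kernel.shape
    sparse = np.zeros(((kh - 1) * rate + 1, (kw - 1) * rate + 1))
    sparse[::rate, ::rate] = kernel
    return sparse

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))

# Dilated 3x3 conv expressed as a dense conv with a sparse 5x5 kernel:
# the receptive field grows from 3 to 5 with no new parameters.
out = conv2d(image, dilate_kernel(kernel, 2))
print(out.shape)  # (4, 4)
```

The ACU generalizes this further by making the tap positions themselves continuous, learnable parameters rather than a fixed integer grid.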
1811.04387
2900179308
Convolutional networks have achieved great success in various vision tasks. This is mainly due to a considerable amount of research on network structure. In this study, instead of focusing on architectures, we focus on the convolution unit itself. The existing convolution unit has a fixed shape and is limited to observing restricted receptive fields. In an earlier work, we proposed the active convolution unit (ACU), which can freely define its shape and learn it by itself. In this paper, we present a detailed analysis of the proposed unit and show that it is an efficient representation of a sparse weight convolution. Furthermore, we expand the unit to a grouped ACU, which can observe multiple receptive fields in one layer. We found that the performance of a naive grouped convolution degrades as the number of groups increases; however, the proposed unit retains accuracy even though the number of parameters is reduced. Based on this result, we suggest a depthwise ACU, and various experiments show that our unit is efficient and can replace the existing convolutions.
Beyond extending receptive areas, there have been many attempts to combine multiple fields. Spatial pyramid pooling @cite_35 was suggested for integrating receptive fields at different scales. GoogLeNet @cite_34 @cite_7 @cite_37 introduced the Inception module, composed of convolutions of multiple sizes; this can create better features using fewer parameters. In segmentation tasks, DeepLab @cite_8 @cite_26 used the atrous spatial pyramid pooling layer, which applies multiple filters with different dilation rates. All of this research relies on multiple operations; no unified component was built that can view multiple receptive fields at different scales.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_26", "@cite_7", "@cite_8", "@cite_34" ], "mid": [ "2179352600", "2949605076", "2630837129", "2950179405", "2412782625", "2274287116" ], "abstract": [ "Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.", "Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2 top-1 and 5.6 top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. 
With an ensemble of 4 models and multi-crop evaluation, we report 3.5 top-5 error on the validation set (3.6 error on the test set) and 17.3 top-1 error on the validation set.", "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. 
One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. 
All of our code is made publicly available online.", "Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge" ] }
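As a rough sketch of why multi-branch modules like Inception can "create better features using fewer parameters", a plain-Python weight count compares a dense 5x5 convolution with a 1x1-bottleneck branch. The channel sizes here are illustrative assumptions, not GoogLeNet's actual configuration:

```python
def conv_params(c_in, c_out, k):
    # Weight count of a dense k x k convolution layer (biases ignored).
    return c_in * c_out * k * k

c_in, c_out = 256, 256

# One dense 5x5 convolution.
single = conv_params(c_in, c_out, 5)  # 1,638,400 weights

# Inception-style branch: a 1x1 bottleneck down to 64 channels,
# followed by the 5x5 convolution on the reduced representation.
branch = conv_params(c_in, 64, 1) + conv_params(64, c_out, 5)

print(single, branch)  # the bottlenecked branch is roughly 4x smaller
```

Stacking several such branches with different kernel sizes side by side, then concatenating their outputs, is what lets one module observe multiple receptive fields at once.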
1811.04387
2900179308
Convolutional networks have achieved great success in various vision tasks. This is mainly due to a considerable amount of research on network structure. In this study, instead of focusing on architectures, we focus on the convolution unit itself. The existing convolution unit has a fixed shape and is limited to observing restricted receptive fields. In an earlier work, we proposed the active convolution unit (ACU), which can freely define its shape and learn it by itself. In this paper, we present a detailed analysis of the proposed unit and show that it is an efficient representation of a sparse weight convolution. Furthermore, we expand the unit to a grouped ACU, which can observe multiple receptive fields in one layer. We found that the performance of a naive grouped convolution degrades as the number of groups increases; however, the proposed unit retains accuracy even though the number of parameters is reduced. Based on this result, we suggest a depthwise ACU, and various experiments show that our unit is efficient and can replace the existing convolutions.
Depthwise (or channelwise) convolution is a special case of grouped convolution in which the numbers of input and output channels equal the number of groups. This implies that each output channel is calculated using only the corresponding input channel. Xception @cite_48 uses separable convolution, which first applies a depthwise convolution, followed by a pointwise convolution. This operation reduces the number of weight parameters efficiently, and the corresponding network achieves better results with fewer parameters. MobileNet @cite_19 @cite_22 also employs depthwise convolution to reduce network size and run fast on embedded devices.
{ "cite_N": [ "@cite_19", "@cite_48", "@cite_22" ], "mid": [ "2612445135", "2951583185", "" ], "abstract": [ "We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.", "We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. 
Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.", "" ] }
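The parameter saving from depthwise separable convolution can be sketched with a simple weight count. The channel and kernel sizes below are illustrative assumptions (square kernels, no biases), not any specific network's configuration:

```python
def standard_conv_params(c_in, c_out, k):
    # Dense convolution: every output channel mixes all input channels.
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    # Depthwise: one k x k filter per input channel (groups == channels),
    # then a 1x1 pointwise convolution mixes the channels.
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out, k = 128, 128, 3
std = standard_conv_params(c_in, c_out, k)   # 147456
sep = separable_conv_params(c_in, c_out, k)  # 1152 + 16384 = 17536
print(std, sep, round(std / sep, 1))         # 147456 17536 8.4
```

An 8x reduction at the same channel width is the kind of saving that lets MobileNet-style networks run on embedded devices.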
1811.04454
2899757670
Generating paraphrases, that is, different variations of a sentence conveying the same meaning, is an important yet challenging task in NLP. Automatically generating paraphrases has utility in many NLP tasks such as question answering, information retrieval, and conversational systems, to name a few. In this paper, we introduce iterative refinement of generated paraphrases within a VAE-based generation framework. Current sequence generation models lack the capability to (1) make improvements once the sentence is generated and (2) rectify errors made while decoding. We propose a technique to iteratively refine the output using multiple decoders, each one attending to the output sentence generated by the previous decoder. We improve on current state-of-the-art results significantly, with over 9 and 28 point absolute increases in METEOR scores on the Quora question pairs and MSCOCO datasets, respectively. We also show qualitatively through examples that our re-decoding approach generates better paraphrases than a single decoder by rectifying errors and making improvements in paraphrase structure, inducing variations, and introducing new but semantically coherent information.
There has also been some work on improving paraphrase generation models inspired by machine translation. It has been shown that paraphrase pairs obtained from back-translated texts of bilingual machine translation corpora match the quality of manually written English paraphrase pairs @cite_13 . Work has also been done on syntactically controlled paraphrase generation, where the parse-tree template of the paraphrase to be generated is given as an additional input @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_13" ], "mid": [ "2798139452", "2622000134" ], "abstract": [ "We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoder-decoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) \"fool\" pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.", "We consider the problem of learning general-purpose, paraphrastic sentence embeddings in the setting of (2016b). We use neural machine translation to generate sentential paraphrases via back-translation of bilingual sentence pairs. We evaluate the paraphrase pairs by their ability to serve as training data for learning paraphrastic sentence embeddings. We find that the data quality is stronger than prior work based on bitext and on par with manually-written English paraphrase pairs, with the advantage that our approach can scale up to generate large training sets for many languages and domains. We experiment with several language pairs and data sources, and develop a variety of data filtering techniques. 
In the process, we explore how neural machine translation output differs from human-written sentences, finding clear differences in length, the amount of repetition, and the use of rare words." ] }
1811.04164
2949296737
Recent deep learning models have shown improved results in natural language generation (NLG) when sufficient annotated data is provided. However, modest training data may harm such models' performance. Thus, how to build a generator that can utilize as much knowledge as possible from low-resource data is a crucial issue in NLG. This paper presents a variational neural generation model to tackle the NLG problem of having a limited labeled dataset, in which we integrate variational inference into an encoder-decoder generator and introduce a novel auxiliary autoencoding with an effective training procedure. Experiments showed that the proposed methods not only outperform previous models when a sufficient training dataset is available but also work acceptably well when the training data is scarce.
Recently, RNN-based generators have shown improved results on NLG problems in task-oriented dialogue systems, with a variety of proposed methods such as HLSTM @cite_6 , SCLSTM @cite_1 , and especially RNN encoder-decoder models integrating an attention mechanism, such as Enc-Dec @cite_12 and RALSTM @cite_3 . However, such models have proved to work well only when sufficient in-domain data is provided, since a modest dataset may harm the models' performance.
{ "cite_N": [ "@cite_3", "@cite_1", "@cite_12", "@cite_6" ], "mid": [ "2620635248", "", "2950067852", "2951575317" ], "abstract": [ "Natural language generation (NLG) is a critical component in a spoken dialogue system. This paper presents a Recurrent Neural Network based Encoder-Decoder architecture, in which an LSTM-based decoder is introduced to select, aggregate semantic elements produced by an attention mechanism over the input elements, and to produce the required utterances. The proposed generator can be jointly trained both sentence planning and surface realization to produce natural language sentences. The proposed model was extensively evaluated on four different NLG datasets. The experimental results showed that the proposed generators not only consistently outperform the previous methods across all the NLG domains but also show an ability to generalize from a new, unseen domain and learn from multi-domain datasets.", "", "In this paper, we explore the inclusion of latent random variables into the dynamic hidden state of a recurrent neural network (RNN) by combining elements of the variational autoencoder. We argue that through the use of high-level latent random variables, the variational RNN (VRNN)1 can model the kind of variability observed in highly structured sequential data such as natural speech. We empirically evaluate the proposed model against related sequential models on four speech datasets and one handwriting dataset. Our results show the important roles that latent random variables can play in the RNN dynamic hidden state.", "In this paper we explore the effect of architectural choices on learning a Variational Autoencoder (VAE) for text generation. In contrast to the previously introduced VAE model for text where both the encoder and decoder are RNNs, we propose a novel hybrid architecture that blends fully feed-forward convolutional and deconvolutional components with a recurrent language model. 
Our architecture exhibits several attractive properties such as faster run time and convergence, ability to better handle long sequences and, more importantly, it helps to avoid some of the major difficulties posed by training VAE models on textual data." ] }
1811.04164
2949296737
Recent deep learning models have shown improved results in natural language generation (NLG) when sufficient annotated data is provided. However, modest training data may harm such models' performance. Thus, how to build a generator that can utilize as much knowledge as possible from low-resource data is a crucial issue in NLG. This paper presents a variational neural generation model to tackle the NLG problem of having a limited labeled dataset, in which we integrate variational inference into an encoder-decoder generator and introduce a novel auxiliary autoencoding with an effective training procedure. Experiments showed that the proposed methods not only outperform previous models when a sufficient training dataset is available but also work acceptably well when the training data is scarce.
In this context, one can think of a potential solution in which domain adaptation learning is utilized. The source domain, in this scenario, typically contains a sufficient amount of annotated data so that a model can be built efficiently, while there is often little or no labeled data in the target domain. Prior approaches include a phrase-based statistical generator @cite_7 using graphical models and active learning, and a multi-domain procedure @cite_8 via data counterfeiting and discriminative training. However, the question remains of how to build a generator that can work well directly on a scarce dataset.
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "2161181481", "2951718774" ], "abstract": [ "Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank a set of generated utterances, or (b) using statistics to inform the generation decision process. Both approaches rely on the existence of a handcrafted generator, which limits their scalability to new domains. This paper presents Bagel, a statistical language generator which uses dynamic Bayesian networks to learn from semantically-aligned data produced by 42 untrained annotators. A human evaluation shows that Bagel can generate natural and informative utterances from unseen inputs in the information presentation domain. Additionally, generation performance on sparse datasets is improved significantly by using certainty-based active learning, yielding ratings close to the human gold standard with a fraction of the data.", "Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains. Therefore, it is important to leverage existing resources and exploit similarities between domains to facilitate domain adaptation. In this paper, we propose a procedure to train multi-domain, Recurrent Neural Network-based (RNN) language generators via multiple adaptation steps. In this procedure, a model is first trained on counterfeited data synthesised from an out-of-domain dataset, and then fine tuned on a small set of in-domain utterances with a discriminative objective function. Corpus-based evaluation results show that the proposed procedure can achieve competitive performance in terms of BLEU score and slot error rate while significantly reducing the data needed to train generators in new, unseen domains. 
In subjective testing, human judges confirm that the procedure greatly improves generator performance when only a small amount of data is available in the domain." ] }
1811.04164
2949296737
Recent deep learning models have shown improved results in natural language generation (NLG) when sufficient annotated data is provided. However, modest training data may harm such models' performance. Thus, how to build a generator that can utilize as much knowledge as possible from low-resource data is a crucial issue in NLG. This paper presents a variational neural generation model to tackle the NLG problem of having a limited labeled dataset, in which we integrate variational inference into an encoder-decoder generator and introduce a novel auxiliary autoencoding with an effective training procedure. Experiments showed that the proposed methods not only outperform previous models when a sufficient training dataset is available but also work acceptably well when the training data is scarce.
Neural variational frameworks for generative models of text have been studied extensively. A recurrent latent variable model for sequential data was proposed that integrates latent random variables into the hidden state of an RNN. A hierarchical multiscale recurrent neural network was proposed to learn both hierarchical and temporal representations @cite_10 , and a variational autoencoder was presented for unsupervised generative language modeling. A deep conditional generative model was proposed for structured output prediction, and a variational neural machine translation model was introduced that incorporates a continuous latent variable to model the underlying semantics of sentence pairs. To address the exposure-bias problem @cite_9 , a purely convolutional and deconvolutional seq2seq autoencoder was proposed, as were a dilated CNN decoder in a latent-variable model and a hybrid VAE architecture with convolutional and deconvolutional components.
{ "cite_N": [ "@cite_9", "@cite_10" ], "mid": [ "2950304420", "2510842514" ], "abstract": [ "Recurrent Neural Networks can be trained to produce sequences of tokens given some input, as exemplified by recent results in machine translation and image captioning. The current approach to training them consists of maximizing the likelihood of each token in the sequence given the current (recurrent) state and the previous token. At inference, the unknown previous token is then replaced by a token generated by the model itself. This discrepancy between training and inference can yield errors that can accumulate quickly along the generated sequence. We propose a curriculum learning strategy to gently change the training process from a fully guided scheme using the true previous token, towards a less guided scheme which mostly uses the generated token instead. Experiments on several sequence prediction tasks show that this approach yields significant improvements. Moreover, it was used successfully in our winning entry to the MSCOCO image captioning challenge, 2015.", "Learning both hierarchical and temporal representation has been among the long-standing challenges of recurrent neural networks. Multiscale recurrent neural networks have been considered as a promising approach to resolve this issue, yet there has been a lack of empirical evidence showing that this type of models can actually capture the temporal dependencies by discovering the latent hierarchical structure of the sequence. In this paper, we propose a novel multiscale approach, called the hierarchical multiscale recurrent neural networks, which can capture the latent hierarchical structure in the sequence by encoding the temporal dependencies with different timescales using a novel update mechanism. We show some evidence that our proposed multiscale architecture can discover underlying hierarchical structure in the sequences without using explicit boundary information. 
We evaluate our proposed model on character-level language modelling and handwriting sequence modelling." ] }
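The variational machinery shared by the models surveyed above rests on two pieces: the reparameterization trick, which makes the sampling step differentiable, and a KL regularizer pulling the posterior toward the prior. A minimal numpy sketch, assuming a diagonal-Gaussian posterior and not reproducing any specific paper's model:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps: sampling is rewritten so gradients can
    # flow through mu and log_var, the core trick behind VAE training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu = np.zeros(4)
log_var = np.zeros(4)
z = reparameterize(mu, log_var, rng)
print(z.shape, kl_to_standard_normal(mu, log_var))  # (4,) 0.0
```

In an encoder-decoder generator, the encoder would predict `mu` and `log_var` from the input, and the sampled `z` would condition the decoder; the KL term is added to the reconstruction loss.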
1811.04115
2900141458
Online video advertising gives content providers the ability to deliver compelling content, reach a growing audience, and generate additional revenue from online media. Recently, advertising strategies have been designed to look for original advert(s) in a video frame and replace them with new adverts. These strategies, popularly known as product placement or embedded marketing, greatly help marketing agencies reach a wider audience. However, in the existing literature, such detection of candidate frames in a video sequence for the purpose of advert integration is done manually. In this paper, we propose a deep-learning architecture called ADNet that automatically detects the presence of advertisements in video frames. Our approach is the first of its kind to automatically detect the presence of adverts in a video frame, and it achieves state-of-the-art results on a public dataset.
The existing works in advert detection primarily revolve around the detection of advertisement clips in a video sequence. Recently, the authors of @cite_13 proposed a novel solution for automatically understanding advertisement content and analyzing the sentiments around it. The authors of @cite_12 worked on the detection of logos in television commercials, using a combination of audio and video features. Acoustic match profiles are also exploited in @cite_14 to accurately locate the adverts in the video sequence. These works on advert detection do not consider identifying a particular object in the video scene and subsequently integrating a new advert into the same scene. However, commercial companies, viz. Mirriad, use patented technology to insert new objects into a video scene.
{ "cite_N": [ "@cite_14", "@cite_13", "@cite_12" ], "mid": [ "2120452880", "2963037330", "" ], "abstract": [ "In this paper, we propose a method for detecting and precisely segmenting repeated sections of broadcast streams. This method allows advertisements to be removed and replaced with new ads in redistributed television material. The detection stage starts from acoustic matches and validates the hypothesized matches using the visual channel. Finally, the precise segmentation uses fine-grain acoustic match profiles to determine start and end-points. The approach is both efficient and robust to broadcast noise and differences in broadcaster signals. Our final result is nearly perfect, with better than 99 precision, at a recall rate of 95 for repeated advertisements.", "There is more to images than their objective physical content: for example, advertisements are created to persuade a viewer to take a certain action. We propose the novel problem of automatic advertisement understanding. To enable research on this problem, we create two datasets: an image dataset of 64,832 image ads, and a video dataset of 3,477 ads. Our data contains rich annotations encompassing the topic and sentiment of the ads, questions and answers describing what actions the viewer is prompted to take and the reasoning that the ad presents to persuade the viewer (What should I do according to this ad, and why should I do it?), and symbolic references ads make (e.g. a dove symbolizes peace). We also analyze the most common persuasive strategies ads use, and the capabilities that computer vision systems should have to understand these strategies. We present baseline classification results for several prediction tasks, including automatically answering questions about the messages of the ads.", "" ] }
1811.04115
2900141458
Online video advertising gives content providers the ability to deliver compelling content, reach a growing audience, and generate additional revenue from online media. Recently, advertising strategies have been designed to look for original advert(s) in a video frame and replace them with new adverts. These strategies, popularly known as product placement or embedded marketing, greatly help marketing agencies to reach out to a wider audience. However, in the existing literature, such detection of candidate frames in a video sequence for the purpose of advert integration is done manually. In this paper, we propose a deep-learning architecture called ADNet that automatically detects the presence of advertisements in video frames. Our approach is the first of its kind that automatically detects the presence of adverts in a video frame, and achieves state-of-the-art results on a public dataset.
In the area of sport analytics, a few works have addressed the detection of billboards on soccer fields. The authors of @cite_0 attempted to localize the position of on-field billboards using template matching techniques. The work in @cite_4 used the Hough transform for advertisement detection in sport TV. The authors of @cite_9 used photometric invariant features for billboard tracking in soccer video scenes. These approaches mainly relied on photogrammetry and traditional signal processing; neural networks were not fully exploited. Owing to the recent developments in the area of artificial intelligence, we propose an AI-driven advert detection neural network in this paper.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_4" ], "mid": [ "2069539193", "1984786355", "2039203387" ], "abstract": [ "Billboards are placed on the sides of a soccer field for advertisement during match telecast. Unlike regular commercials, which are introduced during a break, on-field billboards appear on the TV screen at uncertain time instances, in different sizes, and also for different durations. Automated processing of soccer telecasts for detection and analysis of such billboards can provide important information on the effectiveness of this mode of advertising. We propose a method in which shot boundaries are first identified and the type of each shot is determined. Frames within each shot are then segmented to locate possible regions of interests (ROIs) - locations in a frame where billboards are potentially present. Finally, we use a combination of local and global features for detecting individual billboards by matching with a set of given templates.", "In this paper we present a system for the localisation and tracking of billboards in streamed soccer matches. The application area for this research is the delivery of customised content to end users. When international soccer matches are broadcast, the diversity of the audience is very large and advertisers would like to be able to adapt the billboards to the different audiences. By replacing the billboards in the video stream this can be achieved. In order to build a more robust system, photometric invariant features are used. These colour features are less susceptible to the changes in illumination. Sensor noise is dealt with through variable kernel density estimation.", "Precise visibility measuring of billboard advertising is a key element for organizers and broadcasters to make cost effective their sport live relay. However, this activity currently is very manpower and time consuming as it is manually processed for the moment. 
In this paper we describe a technique for detection of commercial advertisement in sport TV. Based on some a priori knowledge of sport field and commercial advertisement, our technique makes use of fast Hough transform and text's geometry features in order to extract advertisement from sport TV images. Our experiments show that our technique achieves more than 90 accuracy rate." ] }
1811.04407
2899742573
Visual attention serves as a feature selection mechanism in the perceptual system. Motivated by Broadbent's leaky filter model of selective attention, we evaluate how such a mechanism could be implemented and how it affects the learning process of deep reinforcement learning. We visualize and analyze the feature maps of DQN on a toy problem, Catch, and propose an approach to combine visual selective attention with deep reinforcement learning. We experiment with optical flow-based attention and A2C on Atari games. Experimental results show that visual selective attention can lead to improvements in terms of sample efficiency on the tested games. An intriguing relation between attention and batch normalization is also discovered.
Several works implement visual attention using the glimpse network in combination with recurrent models @cite_22 @cite_9 . This approach was later extended to deep reinforcement learning in the Atari games domain, either using recurrent models @cite_8 or glimpse sensors @cite_6 . However, these approaches only attempt to integrate visual attention at the input level. Therefore, they resemble foveal vision at the retinal level rather than visual selective attention, which might also exist in the deeper structure of the network. These pioneering works showed promising results, which encourages us to further study how visual attention could benefit deep reinforcement learning.
{ "cite_N": [ "@cite_9", "@cite_22", "@cite_6", "@cite_8" ], "mid": [ "2951527505", "2141399712", "2810136571", "2195446438" ], "abstract": [ "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.", "We describe a model based on a Boltzmann machine with third-order connections that can learn how to accumulate information about a shape over several fixations. The model uses a retina that only has enough high resolution pixels to cover a small area of the image, so it must decide on a sequence of fixations and it must combine the \"glimpse\" at each fixation with the location of the fixation before integrating the information with information from other glimpses of the same object. We evaluate this model on a synthetic dataset and two image classification datasets, showing that it can perform at least as well as a model trained on whole images.", "The paper explores the use of reinforcement learning in tasks that involve image-like (but not necessarily raw-pixel) state representations. 
It proposes a visual attention operator, related to the concept of visual attention presented in previous works. The operator is tested on the game of Pac-Man, where it allows the use of the same agent with various game layouts, no matter what their dimensions (and thus the size of the image-like representation) are. It is shown that the approach is able to summarize information present in the original representation into a fixed-size glimpse and that the approach is able to outperform several more direct approaches.", "A deep learning approach to reinforcement learning led to a general learner able to train on visual input to play a variety of arcade games at the human and superhuman levels. Its creators at the Google DeepMind's team called the approach: Deep Q-Network (DQN). We present an extension of DQN by \"soft\" and \"hard\" attention mechanisms. Tests of the proposed Deep Attention Recurrent Q-Network (DARQN) algorithm on multiple Atari 2600 games show level of performance superior to that of DQN. Moreover, built-in attention mechanisms allow a direct online monitoring of the training process by highlighting the regions of the game screen the agent is focusing on when making decisions." ] }
1811.04407
2899742573
Visual attention serves as a feature selection mechanism in the perceptual system. Motivated by Broadbent's leaky filter model of selective attention, we evaluate how such a mechanism could be implemented and how it affects the learning process of deep reinforcement learning. We visualize and analyze the feature maps of DQN on a toy problem, Catch, and propose an approach to combine visual selective attention with deep reinforcement learning. We experiment with optical flow-based attention and A2C on Atari games. Experimental results show that visual selective attention can lead to improvements in terms of sample efficiency on the tested games. An intriguing relation between attention and batch normalization is also discovered.
Understanding the features learned by deep RL is critical for our goal, since we aim at combining attention with learned features. Some studies visualize and analyze the properties of the policy learned by a deep Q-network (DQN) @cite_21 based on t-SNE and SAMDP @cite_25 . Others use perturbation to extract the visual features the agent attended to @cite_31 . However, most of these works focused on either the input layer or the last several fully connected layers. Thus, the feature maps in the middle layers and their relation to visual inputs and task reward require further investigation.
{ "cite_N": [ "@cite_31", "@cite_21", "@cite_25" ], "mid": [ "2765615734", "2145339207", "2950708852" ], "abstract": [ "Deep reinforcement learning (deep RL) agents have achieved remarkable success in a broad range of game-playing and continuous control tasks. While these agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study in three Atari 2600 environments. In particular, we focus on understanding agents in terms of their visual attentional patterns during decision making. To this end, we introduce a method for generating rich saliency maps and use it to explain 1) what strong agents attend to 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during the learning phase. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Our techniques are general and, though we focus on Atari, our long-term objective is to produce tools that explain any deep RL policy.", "An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.", "In recent years there is a growing interest in using deep representations for reinforcement learning. In this paper, we present a methodology and tools to analyze Deep Q-networks (DQNs) in a non-blind matter. Using our tools we reveal that the features learned by DQNs aggregate the state space in a hierarchical fashion, explaining its success. 
Moreover we are able to understand and describe the policies learned by DQNs for three different Atari2600 games and suggest ways to interpret, debug and optimize of deep neural networks in Reinforcement Learning." ] }
1811.04407
2899742573
Visual attention serves as a feature selection mechanism in the perceptual system. Motivated by Broadbent's leaky filter model of selective attention, we evaluate how such a mechanism could be implemented and how it affects the learning process of deep reinforcement learning. We visualize and analyze the feature maps of DQN on a toy problem, Catch, and propose an approach to combine visual selective attention with deep reinforcement learning. We experiment with optical flow-based attention and A2C on Atari games. Experimental results show that visual selective attention can lead to improvements in terms of sample efficiency on the tested games. An intriguing relation between attention and batch normalization is also discovered.
Hypothetically, introducing visual attention into deep RL can be thought of as integrating a different source of visual information into an existing neural network. Similar problems have been studied extensively by the computer vision community. For example, multiplicative fusion with optical flow in CNNs has been proposed for action recognition @cite_28 . Further fusion methods in two-stream networks were discussed in @cite_24 . As for video prediction, action-conditional architectures also involve a multiplicative structure to combine action information @cite_29 .
{ "cite_N": [ "@cite_28", "@cite_29", "@cite_24" ], "mid": [ "2401154299", "2118688707", "2342662179" ], "abstract": [ "Although deep convolutional neural networks (CNNs) have shown remarkable results for feature learning and prediction tasks, many recent studies have demonstrated improved performance by incorporating additional handcrafted features or by fusing predictions from multiple CNNs. Usually, these combinations are implemented via feature concatenation or by averaging output prediction scores from several CNNs. In this paper, we present new approaches for combining different sources of knowledge in deep learning. First, we propose feature amplification, where we use an auxiliary, hand-crafted, feature (e.g. optical flow) to perform spatially varying soft-gating on intermediate CNN feature maps. Second, we present a spatially varying multiplicative fusion method for combining multiple CNNs trained on different sources that results in robust prediction by amplifying or suppressing the feature activations based on their agreement. We test these methods in the context of action recognition where information from spatial and temporal cues is useful, obtaining results that are comparable with state-of-the-art methods and outperform methods using only CNNs and optical flow features.", "Motivated by vision-based reinforcement learning (RL) problems, in particular Atari games from the recent benchmark Aracade Learning Environment (ALE), we consider spatio-temporal prediction problems where future (image-)frames are dependent on control variables or actions as well as previous frames. While not composed of natural scenes, frames in Atari games are high-dimensional in size, can involve tens of objects with one or more objects being controlled by the actions directly and many other objects being influenced indirectly, can involve entry and departure of objects, and can involve deep partial observability. 
We propose and evaluate two deep neural network architectures that consist of encoding, action-conditional transformation, and decoding layers based on convolutional neural networks and recurrent neural networks. Experimental results show that the proposed architectures are able to generate visually-realistic frames that are also useful for control over approximately 100-step action-conditional futures in some games. To the best of our knowledge, this paper is the first to make and evaluate long-term predictions on high-dimensional video conditioned by control inputs.", "Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results." ] }
1811.04457
2954959208
Talbot-Lau X-ray phase-contrast imaging is a novel imaging modality, which provides not only an X-ray absorption image, but additionally a differential phase image and a dark-field image. The dark-field image is related to small angle scattering and has an interesting property when scanning oriented structures: the recorded signal depends on the relative orientation of the structure in the imaging system. Exactly this property allows one to draw conclusions about the orientation and to reconstruct the structure. However, the reconstruction is a complex, non-trivial challenge. A lot of research has been conducted towards this goal in recent years and several reconstruction algorithms have been proposed. A key step of the reconstruction algorithm is the inversion of a forward projection model. Up until now, only 2-D projection models are available, which effectively limits the scanning trajectory to a 2-D plane. To obtain true 3-D information, this limitation requires combining several 2-D scans, which leads to quite complex, impractical acquisition schemes. Furthermore, it is not possible with these models to use 3-D trajectories that might allow simpler protocols, for example a helical trajectory. To address these limitations, we propose in this work a very general 3-D projection model. Our projection model defines the dark-field signal dependent on an arbitrarily chosen ray and sensitivity direction. We derive the projection model under the assumption that the observed scatter distribution has a Gaussian shape. We theoretically show the consistency of our model with more constrained existing 2-D models. Furthermore, we experimentally show the compatibility of our model with dark-field measurements of two matchsticks. We believe that this 3-D projection model is an important step towards more flexible trajectories and imaging protocols that are much better applicable in practice.
This orientation-dependency of the dark-field signal introduces a notable difference from traditional X-ray computed tomography. The well-known filtered backprojection (FBP) algorithm is able to reconstruct the signal in a voxel by solving a linear system of equations. However, the use of FBP requires that the signal from a voxel be constant, and thus independent of the viewing direction. To this end, Schaff et al. @cite_1 proposed to align the grating bars parallel to a 2-D trajectory. In this case, the sensitivity direction is perpendicular to the imaged plane, and the signal in each voxel is constant.
{ "cite_N": [ "@cite_1" ], "mid": [ "2621679013" ], "abstract": [ "Dark-field imaging is a scattering-based X-ray imaging method that can be performed with laboratory X-ray tubes. The possibility to obtain information about unresolvable structures has already seen a lot of interest for both medical and material science applications. Unlike conventional X-ray attenuation, orientation dependent changes of the dark-field signal can be used to reveal microscopic structural orientation. To date, reconstruction of the three-dimensional dark-field signal requires dedicated, highly complex algorithms and specialized acquisition hardware. This severely hinders the possible application of orientation-dependent dark-field tomography. In this paper, we show that it is possible to perform this kind of dark-field tomography with common Talbot-Lau interferometer setups by reducing the reconstruction to several smaller independent problems. This allows for the reconstruction to be performed with commercially available software and our findings will therefore help pave the way for a straightforward implementation of orientation-dependent dark-field tomography." ] }
1811.04350
2900263002
We tackle the blackbox issue of deep neural networks in the setting of reinforcement learning (RL), where neural agents learn towards maximizing reward gains in an uncontrollable way. Such a learning approach is risky when the interacting environment includes an expanse of state space, because it is then almost impossible to foresee all unwanted outcomes and penalize them with negative rewards beforehand. Unlike the reverse analysis of learned neural features in previous works, our proposed method tackles the blackbox issue by encouraging an RL policy network to learn interpretable latent features through an implementation of a disentangled representation learning method. Toward this end, our method allows an RL agent to understand self-efficacy by distinguishing its influences from uncontrollable environmental factors, which closely resembles the way humans understand their scenes. Our experimental results show that the learned latent factors not only are interpretable, but also enable modeling the distribution of the entire visited state space with a specific action condition. Our experiments show that this characteristic of the proposed structure can lead to ex post facto governance for desired behaviors of RL agents.
Attempts to open the blackbox of DNNs and to understand the inner workings of neural networks have been made in many recent works @cite_7 @cite_8 @cite_5 @cite_12 . Their inherent learning phenomena are reverse-analyzed by observing the resultant learned understructure. While the training process has also been analytically interpreted via information theory @cite_18 @cite_34 , it is still challenging to anticipate how and why high-level features in neural models are learned in a certain way before training them. Since learning a disentangled representation encourages its interpretability @cite_35 @cite_14 , it has previously been reported that features of convolutional neural networks (CNNs) can also be learned in a visually explainable way @cite_28 through disentangled representation learning.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_14", "@cite_7", "@cite_8", "@cite_28", "@cite_5", "@cite_34", "@cite_12" ], "mid": [ "", "2593634001", "2753738274", "2581126224", "2952186574", "2963374347", "2611430843", "2785885194", "2765615734" ], "abstract": [ "", "Despite their great success, there is still no comprehensive theoretical understanding of learning with Deep Neural Networks (DNNs) or their inner organization. Previous work proposed to analyze DNNs in the ; i.e., the plane of the Mutual Information values that each layer preserves on the input and output variables. They suggested that the goal of the network is to optimize the Information Bottleneck (IB) tradeoff between compression and prediction, successively, for each layer. In this work we follow up on this idea and demonstrate the effectiveness of the Information-Plane visualization of DNNs. Our main results are: (i) most of the training epochs in standard DL are spent on ph compression of the input to efficient representation and not on fitting the training labels. (ii) The representation compression phase begins when the training errors becomes small and the Stochastic Gradient Decent (SGD) epochs change from a fast drift to smaller training error into a stochastic relaxation, or random diffusion, constrained by the training error value. (iii) The converged layers lie on or very close to the Information Bottleneck (IB) theoretical bound, and the maps from the input to any hidden layer and from this hidden layer to the output satisfy the IB self-consistent equations. This generalization through noise mechanism is unique to Deep Neural Networks and absent in one layer networks. (iv) The training time is dramatically reduced when adding more hidden layers. Thus the main advantage of the hidden layers is computational. 
This can be explained by the reduced relaxation time, as this it scales super-linearly (exponentially for simple diffusion) with the information compression from the previous layer.", "Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyper parameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.", "This book provides awareness of the risks and benefits of driverless cars. The book shows how new advances in software and robotics are destroying barriers that have confined self-driving cars for decades. 
A new kind of artificial intelligence software called deep learning enables cars rapid and accurate visual perception so that human drivers can relax and take their eyes off the road. Driverless cars will offer billions of people all over the world a safer, cleaner, and more convenient mode of transportation. Although the technology is just about ready, car companies and policy makers are not.", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "This paper reviews recent studies in understanding neural-network representations and learning neural networks with interpretable disentangled middle-layer representations. Although deep neural networks have exhibited superior performance in various tasks, interpretability is always Achilles’ heel of deep neural networks. At present, deep neural networks obtain high discrimination power at the cost of a low interpretability of their black-box representations. We believe that high model interpretability may help people break several bottlenecks of deep learning, e.g., learning from a few annotations, learning via human–computer communications at the semantic level, and semantically debugging network representations. 
We focus on convolutional neural networks (CNNs), and revisit the visualization of CNN representations, methods of diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. Finally, we discuss prospective trends in explainable artificial intelligence.", "As part of a complete software stack for autonomous driving, NVIDIA has created a neural-network-based system, known as PilotNet, which outputs steering angles given images of the road ahead. PilotNet is trained using road images paired with the steering angles generated by a human driving a data-collection car. It derives the necessary domain knowledge by observing human drivers. This eliminates the need for human engineers to anticipate what is important in an image and foresee all the necessary rules for safe driving. Road tests demonstrated that PilotNet can successfully perform lane keeping in a wide variety of driving conditions, regardless of whether lane markings are present or not. The goal of the work described here is to explain what PilotNet learns and how it makes its decisions. To this end we developed a method for determining which elements in the road image most influence PilotNet's steering decision. Results show that PilotNet indeed learns to recognize relevant objects on the road. In addition to learning the obvious features such as lane markings, edges of roads, and other cars, PilotNet learns more subtle features that would be hard to anticipate and program by engineers, for example, bushes lining the edge of the road and atypical vehicle classes.", "The practical successes of deep neural networks have not been matched by theoretical progress that satisfyingly explains their behavior. 
In this work, we study the information bottleneck (IB) theory of deep learning, which makes three specific claims: first, that deep networks undergo two distinct phases consisting of an initial fitting phase and a subsequent compression phase; second, that the compression phase is causally related to the excellent generalization performance of deep networks; and third, that the compression phase occurs due to the diffusion-like behavior of stochastic gradient descent. Here we show that none of these claims hold true in the general case. Through a combination of analytical results and simulation, we demonstrate that the information plane trajectory is predominantly a function of the neural nonlinearity employed: double-sided saturating nonlinearities like tanh yield a compression phase as neural activations enter the saturation regime, but linear activation functions and single-sided saturating nonlinearities like the widely used ReLU in fact do not. Moreover, we find that there is no evident causal connection between compression and generalization: networks that do not compress are still capable of generalization, and vice versa. Next, we show that the compression phase, when it exists, does not arise from stochasticity in training by demonstrating that we can replicate the IB findings using full batch gradient descent rather than stochastic gradient descent. Finally, we show that when an input domain consists of a subset of task-relevant and task-irrelevant information, hidden representations do compress the task-irrelevant information, although the overall information about the input may monotonically increase with training time, and that this compression happens concurrently with the fitting process rather than during a subsequent compression period.", "Deep reinforcement learning (deep RL) agents have achieved remarkable success in a broad range of game-playing and continuous control tasks. 
While these agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study in three Atari 2600 environments. In particular, we focus on understanding agents in terms of their visual attentional patterns during decision making. To this end, we introduce a method for generating rich saliency maps and use it to explain 1) what strong agents attend to 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during the learning phase. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Our techniques are general and, though we focus on Atari, our long-term objective is to produce tools that explain any deep RL policy." ] }
1811.04350
2900263002
We tackle the blackbox issue of deep neural networks in the setting of reinforcement learning (RL), where neural agents learn towards maximizing reward gains in an uncontrollable way. Such a learning approach is risky when the interacting environment includes an expanse of state space, because it is then almost impossible to foresee all unwanted outcomes and penalize them with negative rewards beforehand. Unlike reverse analysis of learned neural features from previous works, our proposed method tackles the blackbox issue by encouraging an RL policy network to learn interpretable latent features through an implementation of a disentangled representation learning method. Toward this end, our method allows an RL agent to understand self-efficacy by distinguishing its influences from uncontrollable environmental factors, which closely resembles the way humans understand their scenes. Our experimental results show that the learned latent factors not only are interpretable, but also enable modeling the distribution of the entire visited state space with a specific action condition. Our experiments also show that this characteristic of the proposed structure can lead to ex post facto governance for desired behaviors of RL agents.
Prospection of future states conditioned by current actions is meaningful to RL agents in many ways, and action-conditional (variational) autoencoders are learned to predict subsequent states in the works of @cite_37 @cite_16 @cite_24 . DARLA @cite_15 utilizes disentangled latent representations for cross-domain zero-shot adaptations. It aims to prove its representation power in multiple similar but different environments. Our model may also look similar to conditional generative models like Conditional Variational Autoencoders (CVAE) @cite_19 and InfoGAN @cite_40 , but these models are not directly applicable to RL domains.
{ "cite_N": [ "@cite_37", "@cite_24", "@cite_19", "@cite_40", "@cite_15", "@cite_16" ], "mid": [ "2795843265", "2751258126", "2188365844", "2434741482", "2739083961", "2118688707" ], "abstract": [ "We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment. An interactive version of this paper is available at https: worldmodels.github.io", "It has been postulated that a good representation is one that disentangles the underlying explanatory factors of variation. However, it remains an open question what kind of training framework could potentially achieve that. Whereas most previous work focuses on the static setting (e.g., with images), we postulate that some of the causal factors could be discovered if the learner is allowed to interact with its environment. The agent can experiment with different actions and observe their effects. More specifically, we hypothesize that some of these factors correspond to aspects of the environment which are independently controllable, i.e., that there exists a policy and a learnable feature for each such aspect of the environment, such that this policy can yield changes in that feature with minimal changes to other features that explain the statistical variations in the observed data. 
We propose a specific objective function to find such factors and verify experimentally that it can indeed disentangle independently controllable aspects of the environment without any extrinsic reward signal.", "Supervised deep learning has been successfully applied to many recognition problems. Although it can approximate a complex many-to-one function well when a large amount of training data is provided, it is still challenging to model complex structured output representations that effectively perform probabilistic inference and make diverse predictions. In this work, we develop a deep conditional generative model for structured output prediction using Gaussian latent variables. The model is trained efficiently in the framework of stochastic gradient variational Bayes, and allows for fast prediction using stochastic feed-forward inference. In addition, we provide novel strategies to build robust structured prediction algorithms, such as input noise-injection and multi-scale prediction objective at training. In experiments, we demonstrate the effectiveness of our proposed algorithm in comparison to the deterministic deep neural network counterparts in generating diverse but realistic structured output predictions using stochastic inference. Furthermore, the proposed training methods are complimentary, which leads to strong pixel-level object segmentation and semantic labeling performance on Caltech-UCSD Birds 200 and the subset of Labeled Faces in the Wild dataset.", "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. 
We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence or absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.", "Domain adaptation is an important open problem in deep reinforcement learning (RL). In many scenarios of interest data is hard to obtain, so agents may learn a source policy in a setting where data is readily available, with the hope that it generalises well to the target domain. We propose a new multi-stage RL agent, DARLA (DisentAngled Representation Learning Agent), which learns to see before learning to act. DARLA's vision is based on learning a disentangled representation of the observed environment. Once DARLA can see, it is able to acquire source policies that are robust to many domain shifts - even with no access to the target domain. DARLA significantly outperforms conventional baselines in zero-shot domain adaptation scenarios, an effect that holds across a variety of RL environments (Jaco arm, DeepMind Lab) and base RL algorithms (DQN, A3C and EC).", "Motivated by vision-based reinforcement learning (RL) problems, in particular Atari games from the recent benchmark Arcade Learning Environment (ALE), we consider spatio-temporal prediction problems where future (image-)frames are dependent on control variables or actions as well as previous frames. 
While not composed of natural scenes, frames in Atari games are high-dimensional in size, can involve tens of objects with one or more objects being controlled by the actions directly and many other objects being influenced indirectly, can involve entry and departure of objects, and can involve deep partial observability. We propose and evaluate two deep neural network architectures that consist of encoding, action-conditional transformation, and decoding layers based on convolutional neural networks and recurrent neural networks. Experimental results show that the proposed architectures are able to generate visually-realistic frames that are also useful for control over approximately 100-step action-conditional futures in some games. To the best of our knowledge, this paper is the first to make and evaluate long-term predictions on high-dimensional video conditioned by control inputs." ] }
1811.04383
2899689262
This work explores adaptations of successful multi-armed bandits policies to the online contextual bandits scenario with binary rewards using binary classification algorithms such as logistic regression as black-box oracles. Some of these adaptations are achieved through bootstrapping or approximate bootstrapping, while others rely on other forms of randomness, resulting in more scalable approaches than previous works, and the ability to work with any type of classification algorithm. In particular, the Adaptive-Greedy algorithm shows a lot of promise, in many cases achieving better performance than upper confidence bound and Thompson sampling strategies, at the expense of more hyperparameters to tune.
The contextual bandits setting has been studied as different variations of the problem formulation, some differing a lot from the one presented here such as the bandits with “expert advice” in @cite_14 and @cite_3 , and some presenting a similar scenario in which the rewards are assumed to be continuous (usually in the range @math ) and the reward-generating functions linear @cite_31 @cite_25 . Particularly, @cite_31 , which as its name suggests uses a linear function estimator with an upper bound on the expected rewards (one estimator per arm, all independent of each other), has proved to be a popular approach and many works build upon it in variations of its proposed scenario, such as when adding similarity information @cite_5 @cite_13 .
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_5", "@cite_31", "@cite_13", "@cite_25" ], "mid": [ "2116067849", "2077902449", "2950978108", "", "2340290367", "2112420033" ], "abstract": [ "In the multi-armed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing the arm believed to give the best payoff). Past solutions for the bandit problem have almost always relied on assumptions about the statistics of the slot machines. In this work, we make no statistical assumptions whatsoever about the nature of the process generating the payoffs of the slot machines. We give a solution to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs. In a sequence of T plays, we prove that the expected per-round payoff of our algorithm approaches that of the best arm at the rate O(T^(-1/3)), and we give an improved rate of convergence when the best arm has fairly low payoff. We also consider a setting in which the player has a team of \"experts\" advising him on which arm to play; here, we give a strategy that will guarantee expected payoff close to that of the best expert. Finally, we apply our result to the problem of learning to play an unknown repeated matrix game against an all-powerful adversary.", "In the multiarmed bandit problem, a gambler must decide which arm of K nonidentical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing the arm believed to give the best payoff). 
Past solutions for the bandit problem have almost always relied on assumptions about the statistics of the slot machines. In this work, we make no statistical assumptions whatsoever about the nature of the process generating the payoffs of the slot machines. We give a solution to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs. In a sequence of T plays, we prove that the per-round payoff of our algorithm approaches that of the best arm at the rate O(T^(-1/2)). We show by a matching lower bound that this is the best possible. We also prove that our algorithm approaches the per-round payoff of any set of strategies at a similar rate: if the best strategy is chosen from a pool of N strategies, then our algorithm approaches the per-round payoff of the strategy at the rate O((log N)^(1/2) T^(-1/2)). Finally, we apply our results to the problem of playing an unknown repeated matrix game. We show that our algorithm approaches the minimax payoff of the unknown game at the rate O(T^(-1/2)).", "Multi-armed bandit problems are receiving a great deal of attention because they adequately formalize the exploration-exploitation trade-offs arising in several industrially relevant applications, such as online advertisement and, more generally, recommendation systems. In many cases, however, these applications have a strong social component, whose integration in the bandit algorithm could lead to a dramatic performance increase. For instance, we may want to serve content to a group of users by taking advantage of an underlying network of social relationships among them. In this paper, we introduce novel algorithmic approaches to the solution of such networked bandit problems. More specifically, we design and analyze a global strategy which allocates a bandit algorithm to each network node (user) and allows it to \"share\" signals (contexts and payoffs) with the neighboring nodes. 
We then derive two more scalable variants of this strategy based on different ways of clustering the graph nodes. We experimentally compare the algorithm and its variants to state-of-the-art methods for contextual bandits that do not use the relational information. Our experiments, carried out on synthetic and real-world datasets, show a marked increase in prediction performance obtained by exploiting the network structure.", "", "Contextual bandit algorithms provide principled online learning solutions to find optimal trade-offs between exploration and exploitation with companion side-information. They have been extensively used in many important practical scenarios, such as display advertising and content recommendation. A common practice estimates the unknown bandit parameters pertaining to each user independently. This unfortunately ignores dependency among users and thus leads to suboptimal solutions, especially for the applications that have strong social components. In this paper, we develop a collaborative contextual bandit algorithm, in which the adjacency graph among users is leveraged to share context and payoffs among neighboring users while online updating. We rigorously prove an improved upper regret bound of the proposed collaborative bandit algorithm comparing to conventional independent bandit algorithms. Extensive experiments on both synthetic and three large-scale real-world datasets verified the improvement of our proposed algorithm against several state-of-the-art contextual bandit algorithms.", "Personalized web services strive to adapt their services (advertisements, news articles, etc.) to individual users by making use of both content and user information. Despite a few recent advances, this problem remains challenging for at least two reasons. First, web service is featured with dynamically changing pools of content, rendering traditional collaborative filtering methods inapplicable. 
Second, the scale of most web services of practical interest calls for solutions that are both fast in learning and computation. In this work, we model personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy based on user-click feedback to maximize total user clicks. The contributions of this work are three-fold. First, we propose a new, general contextual bandit algorithm that is computationally efficient and well motivated from learning theory. Second, we argue that any bandit algorithm can be reliably evaluated offline using previously recorded random traffic. Finally, using this offline evaluation method, we successfully applied our new algorithm to a Yahoo! Front Page Today Module dataset containing over 33 million events. Results showed a 12.5% click lift compared to a standard context-free bandit algorithm, and the advantage becomes even greater when data gets more scarce." ] }
1811.04383
2899689262
This work explores adaptations of successful multi-armed bandits policies to the online contextual bandits scenario with binary rewards using binary classification algorithms such as logistic regression as black-box oracles. Some of these adaptations are achieved through bootstrapping or approximate bootstrapping, while others rely on other forms of randomness, resulting in more scalable approaches than previous works, and the ability to work with any type of classification algorithm. In particular, the Adaptive-Greedy algorithm shows a lot of promise, in many cases achieving better performance than upper confidence bound and Thompson sampling strategies, at the expense of more hyperparameters to tune.
Approaches taking a supervised learning algorithm as an oracle for a similar setting as presented here but with continuous rewards have been studied before @cite_0 @cite_1 , in which these oracles are fit to the covariates and rewards from each arm separately, and the same strategies from multi-armed bandits have also resulted in good strategies in this setting. Other related problems such as building an optimal oracle or policy with data collected from a past policy have also been studied @cite_2 @cite_20 @cite_24 , but this work only focuses on online policies that start from scratch and continue ad-infinitum.
{ "cite_N": [ "@cite_1", "@cite_0", "@cite_24", "@cite_2", "@cite_20" ], "mid": [ "2807644309", "2790576010", "2951249115", "", "1998427280" ], "abstract": [ "Contextual bandit algorithms are essential for solving many real-world interactive machine learning problems. Despite multiple recent successes on statistically and computationally efficient methods, the practical behavior of these algorithms is still poorly understood. We leverage the availability of large numbers of supervised learning datasets to compare and empirically optimize contextual bandit algorithms, focusing on practical methods that learn by relying on optimization oracles from supervised learning. We find that a recent method (, 2018) using optimism under uncertainty works the best overall. A surprisingly close second is a simple greedy baseline that only explores implicitly through the diversity of contexts, followed by a variant of Online Cover (, 2014) which tends to be more conservative but robust to problem specification by design. Along the way, we also evaluate and improve several internal components of contextual bandit algorithm design. Overall, this is a thorough study and review of contextual bandit methodology.", "A major challenge in contextual bandits is to design general-purpose algorithms that are both practically useful and theoretically well-founded. We present a new technique that has the empirical and computational advantages of realizability-based approaches combined with the flexibility of agnostic methods. Our algorithms leverage the availability of a regression oracle for the value-function class, a more realistic and reasonable oracle than the classification oracles over policies typically assumed by agnostic methods. Our approach generalizes both UCB and LinUCB to far more expressive possible model classes and achieves low regret under certain distributional assumptions. 
In an extensive empirical evaluation, compared to both realizability-based and agnostic baselines, we find that our approach typically gives comparable or superior results.", "We present a new algorithm for the contextual bandit learning problem, where the learner repeatedly takes one of @math actions in response to the observed context, and observes the reward only for that chosen action. Our method assumes access to an oracle for solving fully supervised cost-sensitive classification problems and achieves the statistically optimal regret guarantee with only @math oracle calls across all @math rounds, where @math is the number of policies in the policy class we compete against. By doing so, we obtain the most practical contextual bandit learning algorithm amongst approaches that work for general policy classes. We further conduct a proof-of-concept experiment which demonstrates the excellent computational and prediction performance of (an online variant of) our algorithm relative to several baselines.", "", "We study sequential decision making in environments where rewards are only partially observed, but can be modeled as a function of observed contexts and the chosen action by the decision maker. This setting, known as contextual bandits, encompasses a wide variety of applications such as health care, content recommendation and Internet advertising. A central task is evaluation of a new policy given historic data consisting of contexts, actions and received rewards. The key challenge is that the past data typically does not faithfully represent proportions of actions taken by a new policy. Previous approaches rely either on models of rewards or models of the past policy. The former are plagued by a large bias whereas the latter have a large variance. In this work, we leverage the strengths and overcome the weaknesses of the two approaches by applying the doubly robust estimation technique to the problems of policy evaluation and optimization. 
We prove that this approach yields accurate value estimates when we have either a good (but not necessarily consistent) model of rewards or a good (but not necessarily consistent) model of past policy. Extensive empirical comparison demonstrates that the doubly robust estimation uniformly improves over existing techniques, achieving both lower variance in value estimation and better policies. As such, we expect the doubly robust approach to become common practice in policy evaluation and optimization." ] }
1811.04239
2900439503
Recognizing sEMG (Surface Electromyography) signals belonging to a particular action (e.g., lateral arm raise) automatically is a challenging task, as EMG signals themselves have a lot of variations even for the same action due to several factors. To overcome this issue, there should be a proper separation which indicates similar patterns repetitively for a particular action in raw signals. A repetitive pattern is not always matched because the same action can be carried out with different time durations. Thus, a depth sensor (Kinect) was used for pattern identification, where three joint angles were recorded continuously while recording sEMG signals; these angles are clearly separable for a particular action. To segment out a repetitive pattern in angle data, the MDTW (Moving Dynamic Time Warping) approach is introduced. This technique allows retrieving suspected motions of interest from raw signals. MDTW is based on the DTW algorithm, but it moves through the whole dataset in a pre-defined manner and is capable of picking up almost all the suspected segments inside a given dataset in an optimal way. Elevated bicep curl and lateral arm raise movements are taken as motions of interest to show how the proposed technique can be employed to achieve auto identification and labelling. The full implementation is available at https://github.com/GPrathap/OpenBCIPython.
A Kinect-based hand movement detection algorithm was developed by @cite_9 , and both EEG and EMG signals were acquired at the same time for the experiment. Kinect was used to detect two different classes of output, the open hand and closed hand. The capabilities of Kinect were under-utilized because the device was used only to detect two events. @cite_23 developed a Human Machine Interface (HMI) using both sEMG and Microsoft Kinect inputs. The architecture is designed to feed the algorithm with either Kinect data or sEMG data from the upper hand to control a human-sized service robot. Though the authors have used both sEMG and Kinect, the idea of training the sEMG data with the help of Kinect is not practised. @cite_22 have conducted an experiment on multichannel sEMG in clinical gait analysis using a portable sEMG device and multiple cameras to analyse the pattern of sEMG along with the kinematics and kinetics of the body. The authors were able to extract the body angles and correlate them with the sEMG data.
{ "cite_N": [ "@cite_9", "@cite_22", "@cite_23" ], "mid": [ "2059812677", "2009026445", "2003736918" ], "abstract": [ "Monitoring and interpreting (sub)cortical reorganization after stroke may be useful for selecting therapies and improving rehabilitation outcome. To develop computational models that predict behavioral motor improvement from changing brain activation pattern, we are currently working on the implementation of a clinically feasible experimental set-up, which enables recording high quality electroencephalography (EEG) signals during inpatient rehabilitation of upper and lower limbs. The major drawback of current experimental paradigms is the cue-guided repetitive design and the lack of functional movements. In this paper, we assess the usability of the Kinect device (Microsoft Inc., Redmond, WA, USA) for tracking self-paced hand opening and closing movements. Three able-bodied volunteers performed self-paced right hand open-close movement sequences while EEG was recorded from sensorimotor areas and electromyography (EMG) from the right arm from extensor carpi radialis and flexor carpi radialis muscles. The results of the study suggest that the Kinect device allows generation of trigger information that is comparable to the information that can be obtained from EMG.", "Abstract Background Application of surface electromyography (SEMG) to the clinical evaluation of neuromuscular disorders can provide relevant “diagnostic” contributions in terms of nosological classification, localization of focal impairments, detection of pathophysiological mechanisms, and functional assessment. Methods The present review article elaborates on: (i) the technical aspects of the myoelectric signals acquisition within a protocol of clinical gait analysis (multichannel recording, surface vs. 
deep probes, electrode placing, encumbrance effects), (ii) the sequence of procedures for the subsequent data processing (filtering, averaging, normalization, repeatability control), and (iii) a set of feasible strategies for the final extraction of clinically useful information. Findings Relevant examples of SEMG application to functional diagnosis are provided. Interpretation Emphasis is given to the key role of SEMG along with kinematic and kinetic analysis, for non-invasive assessment of relevant pathophysiological mechanisms potentially hindering the gait function, such as changes in passive muscle–tendon properties (peripheral non-neural component), paresis, spasticity, and loss of selectivity of motor output in functionally antagonist muscles.", "Human-robot control interfaces have received increased attention during the past decades, since the introduction of robots in everyday life. In this paper, a novel Human-Machine Interface (HMI) is developed, which contains two components. One is based on the surface electromyography (sEMG) signal, which is from the human upper limb and the other is based on the Microsoft Kinect sensor. The proposed interface allows the user to control in real time a mobile humanoid robot arm in 3-D space, using upper limb motion estimation based only on sEMG recordings and the Kinect input. The efficiency of the method is verified using real-time experiments, including random arm motions in the 3-D space with variable hand speed profiles." ] }
1811.04374
2949917172
We present an empirical study of applying deep Convolutional Neural Networks (CNN) to the task of fashion and apparel image classification to improve meta-data enrichment of e-commerce applications. Five different CNN architectures were analyzed using clean and pre-trained models. The models were evaluated on three different tasks (person detection, product classification, and gender classification) on two small- and large-scale datasets.
Recently, CBIR has experienced remarkable progress in the field of image recognition by adopting methods from the area of deep learning using convolutional neural networks (CNNs). A full review of deep learning and convolutional neural networks is provided by @cite_6 . Neural networks and CNNs are not new technologies, but with early successes such as LeNet @cite_9 , it is only recently that they have shown competitive results for tasks such as the ILSVRC2012 image classification challenge @cite_1 . With this remarkable reduction in a previously stalling error-rate, there has been an explosion of interest in CNNs. Many new architectures and approaches were presented, such as @cite_8 and @cite_10 . Neural networks have also been applied to metric learning @cite_0 with applications in image similarity estimation and visual search. Recently, two datasets have been published: the MVC Dataset @cite_4 for view-invariant clothing retrieval (161,638 images) and the DeepFashion Dataset @cite_12 with 800,000 annotated real-life images.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_10", "@cite_12" ], "mid": [ "2410358280", "2950179405", "", "", "1994002998", "2158602558", "2949650786", "2471768434" ], "abstract": [ "Clothing retrieval and clothing style recognition are important and practical problems. They have drawn a lot of attention in recent years. However, the clothing photos collected in existing datasets are mostly of front- or near-front view. There are no datasets designed to study the influences of different viewing angles on clothing retrieval performance. To address view-invariant clothing retrieval problem properly, we construct a challenge clothing dataset, called Multi-View Clothing dataset. This dataset not only has four different views for each clothing item, but also provides 264 attributes for describing clothing appearance. We adopt a state-of-the-art deep learning method to present baseline results for the attribute prediction and clothing retrieval performance. We also evaluate the method on a more difficult setting, cross-view exact clothing item retrieval. Our dataset will be made publicly available for further studies towards view-invariant clothing retrieval.", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. 
One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "", "", "The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. Source code and models to reproduce the experiments in the paper is made publicly available.", "Metric and kernel learning arise in several machine learning applications. However, most existing metric learning algorithms are limited to learning metrics over low-dimensional data, while existing kernel learning algorithms are often limited to the transductive setting and do not generalize to new data points. In this paper, we study the connections between metric learning and kernel learning that arise when studying metric learning as a linear transformation learning problem. 
In particular, we propose a general optimization framework for learning metrics via linear transformations, and analyze in detail a special case of our framework: that of minimizing the LogDet divergence subject to linear constraints. We then propose a general regularized framework for learning a kernel matrix, and show it to be equivalent to our metric learning framework. Our theoretical connections between metric and kernel learning have two main consequences: 1) the learned kernel matrix parameterizes a linear transformation kernel function and can be applied inductively to new data points, 2) our result yields a constructive method for kernelizing most existing Mahalanobis metric learning formulations. We demonstrate our learning approach by applying it to large-scale real world problems in computer vision, text mining and semi-supervised kernel dimensionality reduction.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. 
Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Recent advances in clothes recognition have been driven by the construction of clothes datasets. Existing datasets are limited in the amount of annotations and are difficult to cope with the various challenges in real-world applications. In this work, we introduce DeepFashion1, a large-scale clothes dataset with comprehensive annotations. It contains over 800,000 images, which are richly annotated with massive attributes, clothing landmarks, and correspondence of images taken under different scenarios including store, street snapshot, and consumer. Such rich annotations enable the development of powerful algorithms in clothes recognition and facilitating future researches. To demonstrate the advantages of DeepFashion, we propose a new deep model, namely FashionNet, which learns clothing features by jointly predicting clothing attributes and landmarks. The estimated landmarks are then employed to pool or gate the learned features. It is optimized in an iterative manner. Extensive experiments demonstrate the effectiveness of FashionNet and the usefulness of DeepFashion." ] }
1811.04210
2949210751
We propose DecaProp (Densely Connected Attention Propagation), a new densely connected neural architecture for reading comprehension (RC). There are two distinct characteristics of our model. Firstly, our model densely connects all pairwise layers of the network, modeling relationships between passage and query across all hierarchical levels. Secondly, the dense connectors in our network are learned via attention instead of standard residual skip-connectors. To this end, we propose novel Bidirectional Attention Connectors (BAC) for efficiently forging connections throughout the network. We conduct extensive experiments on four challenging RC benchmarks. Our proposed approach achieves state-of-the-art results on all four, outperforming existing baselines by up to @math in absolute F1 score.
Our work is concerned with densely connected networks aimed at improving information flow. While most works are concerned with computer vision tasks or general machine learning, there are several notable works in the NLP domain. @cite_1 proposed Densely Connected BiLSTMs for standard text classification tasks. proposed a co-stacking residual affinity mechanism that includes all pairwise layers of a text matching model in the affinity matrix calculation. In the RC domain, DCN+ used Residual Co-Attention encoders. QANet used residual self-attentive convolution encoders. While the usage of highway residual networks is not an uncommon sight in NLP, the usage of bidirectional attention as a skip-connector is new. Moreover, our work introduces new cross-hierarchical connections, which help to increase the number of interaction interfaces between @math .
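The attention-as-connector idea can be illustrated with plain scaled dot-product attention that re-expresses one sequence at another sequence's positions. This is a minimal numpy sketch, not the paper's Bidirectional Attention Connector; all dimensions are arbitrary.

```python
import numpy as np

def attend(query, key, value):
    """Scaled dot-product attention: summarize `value` at each `query` position."""
    scores = query @ key.T / np.sqrt(query.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ value

passage = np.random.randn(7, 4)    # 7 passage positions, hidden size 4
query_seq = np.random.randn(5, 4)  # 5 query positions
# Query information aligned to every passage position (the other direction
# works symmetrically), usable as an attention-based skip-connection signal.
connected = attend(passage, query_seq, query_seq)
print(connected.shape)  # (7, 4)
```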
{ "cite_N": [ "@cite_1" ], "mid": [ "2787277569" ], "abstract": [ "Deep neural networks have recently been shown to achieve highly competitive performance in many computer vision tasks due to their abilities of exploring in a much larger hypothesis space. However, since most deep architectures like stacked RNNs tend to suffer from the vanishing-gradient and overfitting problems, their effects are still understudied in many NLP tasks. Inspired by this, we propose a novel multi-layer RNN model called densely connected bidirectional long short-term memory (DC-Bi-LSTM) in this paper, which essentially represents each layer by the concatenation of its hidden state and all preceding layers' hidden states, followed by recursively passing each layer's representation to all subsequent layers. We evaluate our proposed model on five benchmark datasets of sentence classification. DC-Bi-LSTM with depth up to 20 can be successfully trained and obtain significant improvements over the traditional Bi-LSTM with the same or even less parameters. Moreover, our model has promising performance compared with the state-of-the-art approaches." ] }
1811.04352
2948754022
Pinyin-to-character (P2C) conversion is the core component of a pinyin-based Chinese input method engine (IME). However, the conversion is seriously compromised by the ambiguities of Chinese characters corresponding to pinyin as well as the predefined fixed vocabularies. To alleviate such inconveniences, we propose a neural P2C conversion model augmented by an online updated vocabulary with a sampling mechanism to support open vocabulary learning while the IME is working. Our experiments show that the proposed method outperforms commercial IMEs and state-of-the-art traditional models on a standard corpus and a true inputting history dataset in terms of multiple metrics, and thus the online updated vocabulary indeed helps our IME effectively follow user inputting behavior.
To effectively utilize words for IMEs, many natural language processing (NLP) techniques have been applied. @cite_22 introduced a joint maximum n-gram model with syllabification for grapheme-to-phoneme conversion. @cite_24 used a trigram language model and incorporated word segmentation to convert pinyin sequences to Chinese word sequences. @cite_26 proposed an iterative algorithm to discover unseen words in a corpus for building a Chinese language model. @cite_28 described a method for enlarging the vocabulary that can capture context information. However, all the above methods require a predefined fixed vocabulary.
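The trigram-plus-segmentation line of work above amounts to scoring candidate character sequences with an n-gram language model and decoding with Viterbi search. Below is a toy Python sketch, not any cited system; the candidate table and all probabilities are made up for illustration.

```python
import math

# Hypothetical candidate characters per pinyin syllable.
CANDIDATES = {
    "ma": ["妈", "马", "吗"],
    "shang": ["上", "商"],
}

# Hypothetical bigram log-probabilities; "<s>" marks sentence start.
BIGRAM = {
    ("<s>", "妈"): math.log(0.5), ("<s>", "马"): math.log(0.4), ("<s>", "吗"): math.log(0.1),
    ("马", "上"): math.log(0.8), ("马", "商"): math.log(0.2),
    ("妈", "上"): math.log(0.3), ("妈", "商"): math.log(0.1),
}

def p2c(pinyins):
    """Viterbi search for the best character sequence given pinyin syllables."""
    beams = {"<s>": (0.0, [])}  # last char -> (log score, best path)
    for syl in pinyins:
        nxt = {}
        for prev, (score, path) in beams.items():
            for ch in CANDIDATES[syl]:
                s = score + BIGRAM.get((prev, ch), math.log(1e-6))  # unseen-bigram floor
                if ch not in nxt or s > nxt[ch][0]:
                    nxt[ch] = (s, path + [ch])
        beams = nxt
    return "".join(max(beams.values())[1])

print(p2c(["ma", "shang"]))  # 马上: "马上" wins because P(上|马) is high in the toy table
```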
{ "cite_N": [ "@cite_24", "@cite_28", "@cite_26", "@cite_22" ], "mid": [ "2123672980", "2142499170", "2393578570", "67332896" ], "abstract": [ "Chinese input is one of the key challenges for Chinese PC users. This paper proposes a statistical approach to Pinyin-based Chinese input. This approach uses a trigram-based language model and a statistically based segmentation. Also, to deal with real input, it includes a typing model which enables spelling correction in sentence-based Pinyin input, and a spelling model for English which enables modeless Pinyin input.", "The noisy channel model approach is successfully applied to various natural language processing tasks. Currently the main research focus of this approach is adaptation methods, how to capture characteristics of words and expressions in a target domain given example sentences in that domain. As a solution we describe a method enlarging the vocabulary of a language model to an almost infinite size and capturing their context information. Especially the new method is suitable for languages in which words are not delimited by whitespace. We applied our method to a phoneme-to-text transcription task in Japanese and reduced about 10% of the errors in the results of an existing method.", "The lexicon quality affects the performance of a Chinese language model directly. However, the lexicon compilation is separated from Chinese language modeling, resulting in two severe problems: firstly, the current language models cannot achieve the optimal performance due to the limitation of lexicon scale; secondly, it is hard to apply the current language models to special areas due to the absence of a lexicon. This paper aims to improve the performance of the Chinese language model by constructing an optimal lexicon. Meanwhile, it can self-adapt to the area of the training corpus automatically. 
Firstly, this paper combines the lexicon compilation with Chinese language modeling and proposes an iterative algorithm framework. Under this framework, it proposes the concept of character lexical significance (CLS) to describe the Chinese lexical principle. Together with the statistical features, a multi-feature based algorithm is proposed for Chinese lexicon construction. Finally, it proposes two heuristic rules to adjust the parameters so as to self-adapt to the area of the training corpus. From the experimental results, it is found that the system can obtain the optimal Chinese lexicon as well as a high-performance Chinese language model. Moreover, the proposed techniques can self-adapt to the area of the training corpus successfully.", "A process for producing a strong and water-resistant bond between aluminum or an aluminum-based alloy and a polysulphide material by means of a primer which is applied to the metal, characterized in that the primer is a solution of a strongly basic alkali metal compound such as alkali metal hydroxides, phosphates and carbonates. The present invention also includes articles treated by the aforementioned process." ] }
1811.04352
2948754022
Pinyin-to-character (P2C) conversion is the core component of a pinyin-based Chinese input method engine (IME). However, the conversion is seriously compromised by the ambiguities of Chinese characters corresponding to pinyin as well as the predefined fixed vocabularies. To alleviate such inconveniences, we propose a neural P2C conversion model augmented by an online updated vocabulary with a sampling mechanism to support open vocabulary learning while the IME is working. Our experiments show that the proposed method outperforms commercial IMEs and state-of-the-art traditional models on a standard corpus and a true inputting history dataset in terms of multiple metrics, and thus the online updated vocabulary indeed helps our IME effectively follow user inputting behavior.
For either pinyin-to-character conversion for Chinese IMEs or kana-to-kanji conversion for Japanese IMEs, a few language model training methods have been developed. @cite_4 proposed a probabilistic language model for IMEs. @cite_12 presented an online discriminative training method. @cite_17 proposed a statistical model using the frequent nearby set of the target word. @cite_1 used collocations and k-means clustering to improve the n-pos model for Japanese IMEs. @cite_29 put forward a PTC framework based on support vector machines. @cite_16 and @cite_27 respectively applied statistical machine translation (SMT) to Japanese pronunciation prediction and Chinese P2C tasks. @cite_9 @cite_19 regarded P2C as a translation between two languages and solved it in a neural machine translation framework.
{ "cite_N": [ "@cite_4", "@cite_29", "@cite_9", "@cite_1", "@cite_19", "@cite_27", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "", "2376198360", "2398174936", "2252213328", "2804439688", "2178437762", "100723821", "2160988762", "" ], "abstract": [ "", "In order to overcome the difficulty in fusing more features into n-gram, a Pinyin-to-Character conversion model based on Support Vector Machines (SVM) is proposed in this paper, providing the ability of integrating more statistical information. Meanwhile, the excellent generalization performance effectively overcomes the overfitting problem existing in the traditional model, and the soft margin strategy overcomes the noise problem to some extent in the corpus. Furthermore, rough set theory is applied to extract complicated and long distance features, which are fused into the SVM model as a new kind of feature, and solve the problem that traditional models suffer from in fusing long distance dependency. The experimental result showed that this SVM Pinyin-to-Character conversion model achieved 1.2% higher precision than the trigram model, which adopted an absolute smoothing algorithm; moreover, the SVM model with long distance features achieved 1.6% higher accuracy.", "Neural network language models (NNLMs) have been shown to outperform traditional n-gram language models. However, the high computational cost of NNLMs becomes the main obstacle to directly integrating them into a pinyin IME that normally requires a real-time response. In this paper, an efficient solution is proposed by converting NNLMs into back-off n-gram language models, and we integrate the converted NNLM into a pinyin IME. Our experimental results show that the proposed method gives better decoding predictive performance for pinyin IME with satisfactory efficiency.", "Kana-Kanji conversion is known as one of the representative applications of Natural Language Processing (NLP) for the Japanese language. 
The N-pos model, presenting the probability of a Kanji candidate sequence by the product of bi-gram Part-of-Speech (POS) probabilities and POS-to-word emission probabilities, has been successfully applied in a number of well-known Japanese Input Method Editor (IME) systems. However, since N-pos model is an approximation of n-gram word-based language model, important word-to-word collocation information are lost during this compression and lead to a drop of the conversion accuracies. In order to overcome this problem, we propose ways to improve current N-pos model. One way is to append the highfrequency collocations and the other way is to sub-categorize the huge POS sets to make them more representative. Experiments on large-scale data verified our proposals.", "", "This paper introduces a new approach to solve the Chinese Pinyin-to-character (PTC) conversion problem. The conversion from Chinese Pinyin to Chinese character can be regarded as a transformation between two different languages (from the Latin writing system of Chinese Pinyin to the character form of Chinese,Hanzi), which can be naturally solved by machine translation framework. PTC problem is usually regarded as a sequence labeling problem, however, it is more difficult than any other general sequence labeling problems, since it requires a large label set of all Chinese characters for the labeling task. The essential difficulty of the task lies in the high degree of ambiguities of Chinese characters corresponding to Pinyins. Our approach is novel in that it effectively combines the features of continuous source sequence and target sequence. The experimental results show that the proposed approach is much faster, besides, we got a better result and outperformed the existing sequence labeling approaches.", "This paper addresses the problem of predicting the pronunciation of Japanese text. 
The difficulty of this task lies in the high degree of ambiguity in the pronunciation of Japanese characters and words. Previous approaches have either considered the task as a word-level classification problem based on a dictionary, which does not fare well in handling out-of-vocabulary (OOV) words; or solely focused on the pronunciation prediction of OOV words without considering the contextual disambiguation of word pronunciations in text. In this paper, we propose a unified approach within the framework of phrasal statistical machine translation (SMT) that combines the strengths of the dictionary-based and substring-based approaches. Our approach is novel in that we combine word- and character-based pronunciations from a dictionary within an SMT framework: the former captures the idiosyncratic properties of word pronunciation, while the latter provides the flexibility to predict the pronunciation of OOV words. We show that based on an extensive evaluation on various test sets, our model significantly outperforms the previous state-of-the-art systems, achieving around 90% accuracy in most domains.", "We present a discriminative structure-prediction model for the letter-to-phoneme task, a crucial step in text-to-speech processing. Our method encompasses three tasks that have been previously handled separately: input segmentation, phoneme prediction, and sequence modeling. The key idea is online discriminative training, which updates parameters according to a comparison of the current system output to the desired output, allowing us to train all of our components together. By folding the three steps of a pipeline approach into a unified dynamic programming framework, we are able to achieve substantial performance gains. Our results surpass the current state-of-the-art on six publicly available data sets representing four different languages.", "" ] }
1811.04352
2948754022
Pinyin-to-character (P2C) conversion is the core component of a pinyin-based Chinese input method engine (IME). However, the conversion is seriously compromised by the ambiguities of Chinese characters corresponding to pinyin as well as the predefined fixed vocabularies. To alleviate such inconveniences, we propose a neural P2C conversion model augmented by an online updated vocabulary with a sampling mechanism to support open vocabulary learning while the IME is working. Our experiments show that the proposed method outperforms commercial IMEs and state-of-the-art traditional models on a standard corpus and a true inputting history dataset in terms of multiple metrics, and thus the online updated vocabulary indeed helps our IME effectively follow user inputting behavior.
All the above mentioned work, however, still relies on a predefined fixed vocabulary, and IME users have no chance to refine their own dictionary online. @cite_3 is most closely related to this work, as it also offers an online mechanism to adaptively update the user vocabulary. The key difference between the two works is that, to the best of our knowledge, this work presents the first neural solution with online vocabulary adaptation.
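The online vocabulary idea can be sketched as adjusting word likelihoods from prediction/choice feedback. This is a toy illustration of that feedback loop, not @cite_3's actual algorithm; the boost and decay values are arbitrary.

```python
from collections import defaultdict

class OnlineVocab:
    """Toy online vocabulary: boost words the user actually picks,
    decay words the engine predicted but the user rejected."""
    def __init__(self, boost=1.0, decay=0.5):
        self.likelihood = defaultdict(float)  # word -> likelihood score
        self.boost, self.decay = boost, decay

    def update(self, predicted, chosen):
        if predicted != chosen:  # prediction rejected: penalize it
            self.likelihood[predicted] = max(0.0, self.likelihood[predicted] - self.decay)
        self.likelihood[chosen] += self.boost  # user choice reinforces the word

vocab = OnlineVocab()
vocab.update(predicted="马上", chosen="码上")  # a new user word enters the vocabulary
vocab.update(predicted="码上", chosen="码上")
print(vocab.likelihood["码上"])  # 2.0
```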
{ "cite_N": [ "@cite_3" ], "mid": [ "2773260465" ], "abstract": [ "Chinese input methods are used to convert pinyin sequence or other Latin encoding systems into Chinese character sentences. For more effective pinyin-to-character conversion, typical Input Method Engines (IMEs) rely on a predefined vocabulary that demands manually maintenance on schedule. For the purpose of removing the inconvenient vocabulary setting, this work focuses on automatic wordhood acquisition by fully considering that Chinese inputting is a free human-computer interaction procedure. Instead of strictly defining words, a loose word likelihood is introduced for measuring how likely a character sequence can be a user-recognized word with respect to using IME. Then an online algorithm is proposed to adjust the word likelihood or generate new words by comparing user true choice for inputting and the algorithm prediction. The experimental results show that the proposed solution can agilely adapt to diverse typings and demonstrate performance approaching highly-optimized IME with fixed vocabulary." ] }
1811.03736
2900050378
In this paper, we proposed an integrated model of semantic-aware and contrast-aware saliency combining both bottom-up and top-down cues for effective saliency estimation and eye fixation prediction. The proposed model processes visual information using two pathways. The first pathway aims to capture the attractive semantic information in images, especially for the presence of meaningful objects and object parts such as human faces. The second pathway is based on multi-scale on-line feature learning and information maximization, which learns an adaptive sparse representation for the input and discovers the high contrast salient patterns within the image context. The two pathways characterize both long-term and short-term attention cues and are integrated dynamically using maxima normalization. We investigate two different implementations of the semantic pathway including an End-to-End deep neural network solution and a dynamic feature integration solution, resulting in the SCA and SCAFI model respectively. Experimental results on artificial images and 5 popular benchmark datasets demonstrate the superior performance and better plausibility of the proposed model over both classic approaches and recent deep models.
Hou @cite_40 proposed a highly efficient saliency detection algorithm by exploring the spectral residual (SR) in the frequency domain. SR highlights the salient regions by manipulating the amplitude spectrum of the image's Fourier transform. Inspired by @cite_40 , Guo @cite_44 achieved fast and robust spatio-temporal saliency detection by using the phase spectrum of the Quaternion Fourier Transform. As a theoretical revision of SR, @cite_56 proposed a new visual descriptor named Image Signature based on the Discrete Fourier Transform of images, which was shown to be more effective in explaining saccadic eye movements and change blindness of human vision.
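The spectral residual idea is compact enough to sketch directly: subtract a local average from the log-amplitude spectrum, then invert the transform with the original phase. A rough numpy version follows; the 3x3 box filter and smoothing are simplifications, not @cite_40's exact settings.

```python
import numpy as np

def box3(x):
    """3x3 box filter with edge padding (stand-in for the local average)."""
    p = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def spectral_residual_saliency(img):
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - box3(log_amp)  # the "spectral residual"
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return box3(sal)  # mild spatial smoothing of the saliency map

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0  # a bright square should pop out as salient
sal = spectral_residual_saliency(img)
print(sal.shape)  # (32, 32)
```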
{ "cite_N": [ "@cite_44", "@cite_40", "@cite_56" ], "mid": [ "2170869852", "2146103513", "2037328649" ], "abstract": [ "Salient areas in natural scenes are generally regarded as the candidates of attention focus in human eyes, which is the key stage in object detection. In computer vision, many models have been proposed to simulate the behavior of eyes such as SaliencyToolBox (STB), neuromorphic vision toolkit (NVT) and etc., but they demand high computational cost and their remarkable results mostly rely on the choice of parameters. Recently a simple and fast approach based on Fourier transform called spectral residual (SR) was proposed, which used SR of the amplitude spectrum to obtain the saliency map. The results are good, but the reason is questionable.", "The ability of human visual system to detect visual saliency is extraordinarily fast and reliable. However, computational modeling of this basic intelligent behavior still remains a challenge. This paper presents a simple method for the visual saliency detection. Our model is independent of features, categories, or other forms of prior knowledge of the objects. By analyzing the log-spectrum of an input image, we extract the spectral residual of an image in spectral domain, and propose a fast method to construct the corresponding saliency map in spatial domain. We test this model on both natural pictures and artificial images such as psychological patterns. The result indicate fast and robust saliency detection of our method.", "We introduce a simple image descriptor referred to as the image signature. We show, within the theoretical framework of sparse signal mixing, that this quantity spatially approximates the foreground of an image. We experimentally investigate whether this approximate foreground overlaps with visually conspicuous image locations by developing a saliency algorithm based on the image signature. 
This saliency algorithm predicts human fixation points best among competitors on the Bruce and Tsotsos [1] benchmark data set and does so in much shorter running time. In a related experiment, we demonstrate with a change blindness data set that the distance between images induced by the image signature is closer to human perceptual distance than can be achieved using other saliency algorithms, pixel-wise, or GIST [2] descriptor methods." ] }
1811.03736
2900050378
In this paper, we proposed an integrated model of semantic-aware and contrast-aware saliency combining both bottom-up and top-down cues for effective saliency estimation and eye fixation prediction. The proposed model processes visual information using two pathways. The first pathway aims to capture the attractive semantic information in images, especially for the presence of meaningful objects and object parts such as human faces. The second pathway is based on multi-scale on-line feature learning and information maximization, which learns an adaptive sparse representation for the input and discovers the high contrast salient patterns within the image context. The two pathways characterize both long-term and short-term attention cues and are integrated dynamically using maxima normalization. We investigate two different implementations of the semantic pathway including an End-to-End deep neural network solution and a dynamic feature integration solution, resulting in the SCA and SCAFI model respectively. Experimental results on artificial images and 5 popular benchmark datasets demonstrate the superior performance and better plausibility of the proposed model over both classic approaches and recent deep models.
In addition to the above models, there are many other insightful works that detect visual saliency using different types of measures, e.g. Bayesian Surprise @cite_6 , Center-Surround Discriminant Power @cite_52 , Short-Term Self-Information @cite_25 , Spatially Weighted Dissimilarity @cite_8 , Site Entropy Rate @cite_32 , Rarity @cite_57 and Self-Resemblance @cite_17 .
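Several of these measures (e.g. self-information and rarity) score a location by how improbable its local features are under the image's own statistics. A toy intensity-histogram version, purely illustrative and not any cited model:

```python
import numpy as np

def self_information_saliency(img, bins=16):
    """Rarity-style saliency: rarer intensity values get higher -log p scores."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    idx = np.clip(np.digitize(img, edges[1:-1]), 0, bins - 1)  # bin index per pixel
    return -np.log(p[idx] + 1e-8)

img = np.zeros((8, 8))
img[4, 4] = 1.0  # one rare bright pixel in a uniform background
sal = self_information_saliency(img)
print(sal[4, 4] > sal[0, 0])  # True: the rare pixel is the most salient
```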
{ "cite_N": [ "@cite_8", "@cite_32", "@cite_52", "@cite_6", "@cite_57", "@cite_25", "@cite_17" ], "mid": [ "2116724443", "2158987471", "", "", "1989779308", "1976782722", "2034436892" ], "abstract": [ "In this paper, a new visual saliency detection method is proposed based on the spatially weighted dissimilarity. We measured the saliency by integrating three elements as follows: the dissimilarities between image patches, which were evaluated in the reduced dimensional space, the spatial distance between image patches and the central bias. The dissimilarities were inversely weighted based on the corresponding spatial distance. A weighting mechanism, indicating a bias for human fixations to the center of the image, was employed. The principal component analysis (PCA) was the dimension reducing method used in our system. We extracted the principal components (PCs) by sampling the patches from the current image. Our method was compared with four saliency detection approaches using three image datasets. Experimental results show that our method outperforms current state-of-the-art methods on predicting human fixations.", "In this paper, we propose a new computational model for visual saliency derived from the information maximization principle. The model is inspired by a few well acknowledged biological facts. To compute the saliency spots of an image, the model first extracts a number of sub-band feature maps using learned sparse codes. It adopts a fully-connected graph representation for each feature map, and runs random walks on the graphs to simulate the signal information transmission among the interconnected neurons. We propose a new visual saliency measure called Site Entropy Rate (SER) to compute the average information transmitted from a node (neuron) to all the others during the random walk on the graphs network. This saliency definition also explains the center-surround mechanism from computation aspect. 
We further extend our model to spatial-temporal domain so as to detect salient spots in videos. To evaluate the proposed model, we do extensive experiments on psychological stimuli, two well known image data sets, as well as a public video dataset. The experiments demonstrate encouraging results that the proposed model achieves the state-of-the-art performance of saliency detection in both still images and videos.", "", "", "Abstract For the last decades, computer-based visual attention models aiming at automatically predicting human gaze on images or videos have exponentially increased. Even if several families of methods have been proposed and a lot of words like centre-surround difference, contrast, rarity, novelty, redundancy, irregularity, surprise or compressibility have been used to define those models, they are all based on the same and unique idea of information innovation in a given context . In this paper, we propose a novel saliency prediction model, called RARE2012, which selects information worthy of attention based on multi-scale spatial rarity. RARE2012 is then evaluated using two complementary metrics, the Normalized Scanpath Saliency (NSS) and the Area Under the Receiver Operating Characteristic (AUROC) against 13 recently published saliency models. It is shown to be the best for NSS metric and second best for AUROC metric on three publicly available datasets (Toronto, Koostra and Jian Li). Finally, based on an additional comparative statistical analysis and the effect-size Hedge' g ⁎ measure, RARE2012 outperforms, at least slightly, the other models while considering both metrics on the three databases as a whole.", "Representation and measurement are two important issues for saliency models. Different with previous works that learnt sparse features from large scale natural statistics, we propose to learn features from short-term statistics of single images. 
For saliency measurement, we define background firing rate (BFR) for each sparse feature, and then we propose to use feature activation rate (FAR) to measure the bottom-up visual saliency. The proposed FAR measure is biological plausible and easy to compute, also with satisfied performance. Experiments on human eye fixations and psychological patterns demonstrate the effectiveness and robustness of our proposed method.", "We present a novel unified framework for both static and space -time saliency detection. Our method is a bottom-up approach and computes so-called local regression kernels (i.e., local descriptors) from the given image (or a video), which measure the likeness of a pixel (or voxel) to its surroundings. Visual saliency is then computed using the said “self-resemblance” measure. The framework results in a saliency map where each pixel (or voxel) indicates the statistical li kelihood of saliency of a feature matrix given its surrounding feature matrices. As a similarity measure, matrix cosine similarity (a generalization of cosine similarity) is employed. State of the art performance is demonstrated on commonly used human eye fixation data (static scenes [5] and dynamic scenes [16]) and some psychological patterns." ] }
1811.03736
2900050378
In this paper, we proposed an integrated model of semantic-aware and contrast-aware saliency combining both bottom-up and top-down cues for effective saliency estimation and eye fixation prediction. The proposed model processes visual information using two pathways. The first pathway aims to capture the attractive semantic information in images, especially for the presence of meaningful objects and object parts such as human faces. The second pathway is based on multi-scale on-line feature learning and information maximization, which learns an adaptive sparse representation for the input and discovers the high contrast salient patterns within the image context. The two pathways characterize both long-term and short-term attention cues and are integrated dynamically using maxima normalization. We investigate two different implementations of the semantic pathway including an End-to-End deep neural network solution and a dynamic feature integration solution, resulting in the SCA and SCAFI model respectively. Experimental results on artificial images and 5 popular benchmark datasets demonstrate the superior performance and better plausibility of the proposed model over both classic approaches and recent deep models.
Provided with enough training data, deep models can achieve ground-breaking performance far better than traditional methods, sometimes even outperforming humans. The ensembles of Deep Networks @cite_55 is the first attempt at modeling saliency with deep models, combining three different convnet layers using a linear classifier. Different from this approach, recent models such as @cite_54 , @cite_27 and @cite_31 integrate pre-trained layers from large-scale CNN models. In particular, @cite_27 , @cite_53 and @cite_13 use networks pre-trained on ImageNet to initialize their convolutional layers, and then train the remaining layers on ground-truth saliency maps generated from human eye-fixation data.
{ "cite_N": [ "@cite_55", "@cite_54", "@cite_53", "@cite_27", "@cite_31", "@cite_13" ], "mid": [ "2078903912", "1946606198", "2210809762", "2212216676", "2472782738", "2288514685" ], "abstract": [ "Saliency prediction typically relies on hand-crafted (multiscale) features that are combined in different ways to form a \"master\" saliency map, which encodes local image conspicuity. Recent improvements to the state of the art on standard benchmarks such as MIT1003 have been achieved mostly by incrementally adding more and more hand-tuned features (such as car or face detectors) to existing models. In contrast, we here follow an entirely automatic data-driven approach that performs a large-scale search for optimal features. We identify those instances of a richly-parameterized bio-inspired model family (hierarchical neuromorphic networks) that successfully predict image saliency. Because of the high dimensionality of this parameter space, we use automated hyperparameter optimization to efficiently guide the search. The optimal blend of such multilayer features combined with a simple linear classifier achieves excellent performance on several image saliency benchmarks. Our models outperform the state of the art on MIT1003, on which features and classifiers are learned. Without additional training, these models generalize well to two other image saliency data sets, Toronto and NUSEF, despite their different image content. Finally, our algorithm scores best of all the 23 models evaluated to date on the MIT300 saliency challenge, which uses a hidden test set to facilitate an unbiased comparison.", "Recent results suggest that state-of-the-art saliency models perform far from optimal in predicting fixations. This lack in performance has been attributed to an inability to model the influence of high-level image features such as objects. 
Recent seminal advances in applying deep neural networks to tasks like object recognition suggests that they are able to capture this kind of structure. However, the enormous amount of training data necessary to train these networks makes them difficult to apply directly to saliency prediction. We present a novel way of reusing existing neural networks that have been pretrained on the task of object recognition in models of fixation prediction. Using the well-known network of (2012), we come up with a new saliency model that significantly outperforms all state-of-the-art models on the MIT Saliency Benchmark. We show that the structure of this network allows new insights in the psychophysics of fixation selection and potentially their neural implementation. To train our network, we build on recent work on the modeling of saliency as point processes.", "Understanding and predicting the human visual attentional mechanism is an active area of research in the fields of neuroscience and computer vision. In this work, we propose DeepFix, a first-of-its-kind fully convolutional neural network for accurate saliency prediction. Unlike classical works which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts saliency map in an end-to-end manner. DeepFix is designed to capture semantics at multiple scales while taking global context into account using network layers with very large receptive fields. Generally, fully convolutional nets are spatially invariant which prevents them from modeling location dependent patterns (e.g. centre-bias). Our network overcomes this limitation by incorporating a novel Location Biased Convolutional layer. 
We evaluate our model on two challenging eye fixation datasets -- MIT300, CAT2000 and show that it outperforms other recent approaches by a significant margin.", "Saliency in Context (SALICON) is an ongoing effort that aims at understanding and predicting visual attention. Conventional saliency models typically rely on low-level image statistics to predict human fixations. While these models perform significantly better than chance, there is still a large gap between model prediction and human behavior. This gap is largely due to the limited capability of models in predicting eye fixations with strong semantic content, the so-called semantic gap. This paper presents a focused study to narrow the semantic gap with an architecture based on Deep Neural Network (DNN). It leverages the representational power of high-level semantics encoded in DNNs pretrained for object recognition. Two key components are fine-tuning the DNNs fully convolutionally with an objective function based on the saliency evaluation metrics, and integrating information at different image scales. We compare our method with 14 saliency models on 6 public eye tracking benchmark datasets. Results demonstrate that our DNNs can automatically learn features particularly for saliency prediction that surpass by a big margin the state-of-the-art. In addition, our model ranks top to date under all seven metrics on the MIT300 challenge set.", "In this paper we consider the problem of visual saliency modeling, including both human gaze prediction and salient object segmentation. The overarching goal of the paper is to identify high level considerations relevant to deriving more sophisticated visual saliency models. A deep learning model based on fully convolutional networks (FCNs) is presented, which shows very favorable performance across a wide variety of benchmarks relative to existing proposals. 
We also demonstrate that the manner in which training data is selected, and ground truth treated is critical to resulting model behaviour. Recent efforts have explored the relationship between human gaze and salient objects, and we also examine this point further in the context of FCNs. Close examination of the proposed and alternative models serves as a vehicle for identifying problems important to developing more comprehensive models going forward.", "The prediction of salient areas in images has been traditionally addressed with hand-crafted features based on neuroscience principles. This paper, however, addresses the problem with a completely data-driven approach by training a convolutional neural network (convnet). The learning process is formulated as a minimization of a loss function that measures the Euclidean distance of the predicted saliency map with the provided ground truth. The recent publication of large datasets of saliency prediction has provided enough data to train end-to-end architectures that are both fast and accurate. Two designs are proposed: a shallow convnet trained from scratch, and a another deeper solution whose first three layers are adapted from another network trained for classification. To the authors' knowledge, these are the first end-to-end CNNs trained and tested for the purpose of saliency prediction." ] }
1811.03736
2900050378
In this paper, we proposed an integrated model of semantic-aware and contrast-aware saliency combining both bottom-up and top-down cues for effective saliency estimation and eye fixation prediction. The proposed model processes visual information using two pathways. The first pathway aims to capture the attractive semantic information in images, especially for the presence of meaningful objects and object parts such as human faces. The second pathway is based on multi-scale on-line feature learning and information maximization, which learns an adaptive sparse representation for the input and discovers the high contrast salient patterns within the image context. The two pathways characterize both long-term and short-term attention cues and are integrated dynamically using maxima normalization. We investigate two different implementations of the semantic pathway including an End-to-End deep neural network solution and a dynamic feature integration solution, resulting in the SCA and SCAFI model respectively. Experimental results on artificial images and 5 popular benchmark datasets demonstrate the superior performance and better plausibility of the proposed model over both classic approaches and recent deep models.
Benefiting from the powerful visual representations embedded in pre-trained networks, the above models significantly outperform traditional methods on the eye-fixation prediction task on almost all benchmark datasets. In this paper, we mainly compared our model with @cite_27 , @cite_4 and @cite_13 . @cite_27 (http://salicon.net/demo#) and @cite_4 (https://deepgaze.bethgelab.org) provide on-line Web services that receive image submissions and generate saliency maps. @cite_13 (https://github.com/imatge-upc/saliency-2016-cvpr) is a fully open-source End-to-End model with good computational efficiency and state-of-the-art performance.
{ "cite_N": [ "@cite_13", "@cite_27", "@cite_4" ], "mid": [ "2288514685", "2212216676", "2529173830" ], "abstract": [ "The prediction of salient areas in images has been traditionally addressed with hand-crafted features based on neuroscience principles. This paper, however, addresses the problem with a completely data-driven approach by training a convolutional neural network (convnet). The learning process is formulated as a minimization of a loss function that measures the Euclidean distance of the predicted saliency map with the provided ground truth. The recent publication of large datasets of saliency prediction has provided enough data to train end-to-end architectures that are both fast and accurate. Two designs are proposed: a shallow convnet trained from scratch, and a another deeper solution whose first three layers are adapted from another network trained for classification. To the authors' knowledge, these are the first end-to-end CNNs trained and tested for the purpose of saliency prediction.", "Saliency in Context (SALICON) is an ongoing effort that aims at understanding and predicting visual attention. Conventional saliency models typically rely on low-level image statistics to predict human fixations. While these models perform significantly better than chance, there is still a large gap between model prediction and human behavior. This gap is largely due to the limited capability of models in predicting eye fixations with strong semantic content, the so-called semantic gap. This paper presents a focused study to narrow the semantic gap with an architecture based on Deep Neural Network (DNN). It leverages the representational power of high-level semantics encoded in DNNs pretrained for object recognition. Two key components are fine-tuning the DNNs fully convolutionally with an objective function based on the saliency evaluation metrics, and integrating information at different image scales. 
We compare our method with 14 saliency models on 6 public eye tracking benchmark datasets. Results demonstrate that our DNNs can automatically learn features particularly for saliency prediction that surpass by a big margin the state-of-the-art. In addition, our model ranks top to date under all seven metrics on the MIT300 challenge set.", "Here we present DeepGaze II, a model that predicts where people look in images. The model uses the features from the VGG-19 deep neural network trained to identify objects in images. Contrary to other saliency models that use deep features, here we use the VGG features for saliency prediction with no additional fine-tuning (rather, a few readout layers are trained on top of the VGG features to predict saliency). The model is therefore a strong test of transfer learning. After conservative cross-validation, DeepGaze II explains about 87 of the explainable information gain in the patterns of fixations and achieves top performance in area under the curve metrics on the MIT300 hold-out benchmark. These results corroborate the finding from DeepGaze I (which explained 56 of the explainable information gain), that deep features trained on object recognition provide a versatile feature space for performing related visual tasks. We explore the factors that contribute to this success and present several informative image examples. A web service is available to compute model predictions at this http URL." ] }
1811.04091
2900128981
The task of multiple people tracking in monocular videos is challenging because of the numerous difficulties involved: occlusions, varying environments, crowded scenes, camera parameters and motion. In the tracking-by-detection paradigm, most approaches adopt person re-identification techniques based on computing the pairwise similarity between detections. However, these techniques are less effective in handling long-term occlusions. By contrast, tracklet (a sequence of detections) re-identification can improve association accuracy since tracklets offer a richer set of visual appearance and spatio-temporal cues. In this paper, we propose a tracking framework that employs a hierarchical clustering mechanism for merging tracklets. To this end, tracklet re-identification is performed by utilizing a novel multi-stage deep network that can jointly reason about the visual appearance and spatio-temporal properties of a pair of tracklets, thereby providing a robust measure of affinity. Experimental results on the challenging MOT16 and MOT17 benchmarks show that our method significantly outperforms state-of-the-arts.
Most multi-object tracking approaches are based on the tracking-by-detection paradigm @cite_45 @cite_19 @cite_33 , where tracking is formulated as a data association problem between the detections extracted from a video using object detectors.
{ "cite_N": [ "@cite_19", "@cite_45", "@cite_33" ], "mid": [ "2084652104", "2607008612", "" ], "abstract": [ "We generalize the network flow formulation for multiobject tracking to multi-camera setups. In the past, reconstruction of multi-camera data was done as a separate extension. In this work, we present a combined maximum a posteriori (MAP) formulation, which jointly models multicamera reconstruction as well as global temporal data association. A flow graph is constructed, which tracks objects in 3D world space. The multi-camera reconstruction can be efficiently incorporated as additional constraints on the flow graph without making the graph unnecessarily large. The final graph is efficiently solved using binary linear programming. On the PETS 2009 dataset we achieve results that significantly exceed the current state of the art.", "We state a combinatorial optimization problem whose feasible solutions define both a decomposition and a node labeling of a given graph. This problem offers a common mathematical abstraction of seemingly unrelated computer vision tasks, including instance-separating semantic segmentation, articulated human body pose estimation and multiple object tracking. Conceptually, it generalizes the unconstrained integer quadratic program and the minimum cost lifted multicut problem, both of which are NP-hard. In order to find feasible solutions efficiently, we define two local search algorithms that converge monotonously to a local optimum, offering a feasible solution at any time. To demonstrate the effectiveness of these algorithms in tackling computer vision tasks, we apply them to instances of the problem that we construct from published data, using published algorithms. We report state-of-the-art application-specific accuracy in the three above-mentioned applications.", "" ] }
1811.04091
2900128981
The task of multiple people tracking in monocular videos is challenging because of the numerous difficulties involved: occlusions, varying environments, crowded scenes, camera parameters and motion. In the tracking-by-detection paradigm, most approaches adopt person re-identification techniques based on computing the pairwise similarity between detections. However, these techniques are less effective in handling long-term occlusions. By contrast, tracklet (a sequence of detections) re-identification can improve association accuracy since tracklets offer a richer set of visual appearance and spatio-temporal cues. In this paper, we propose a tracking framework that employs a hierarchical clustering mechanism for merging tracklets. To this end, tracklet re-identification is performed by utilizing a novel multi-stage deep network that can jointly reason about the visual appearance and spatio-temporal properties of a pair of tracklets, thereby providing a robust measure of affinity. Experimental results on the challenging MOT16 and MOT17 benchmarks show that our method significantly outperforms state-of-the-arts.
Data association can be performed either on individual detections @cite_45 @cite_1 , or on a set of confident, short tracklets @cite_7 @cite_62 , which are generated by first performing low-level data association to group detections. A well-known formulation of the tracking-by-detection paradigm represents each detection as a node in a graph, with edges encoding the likelihood that the connected detections belong to the same person. This data association problem can be solved using Conditional Random Field inference @cite_25 , network flow optimization @cite_40 @cite_56 , maximum multi-clique optimization @cite_42 , greedy algorithms @cite_26 , or subgraph decomposition @cite_28 .
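As a minimal illustrative sketch of the graph-based association idea surveyed above (a greedy baseline in the spirit of the greedy algorithms mentioned, not the method of any cited work), consecutive-frame detections can be linked by repeatedly matching the highest-affinity unmatched pair, with IoU as the edge weight. Box format and the `min_iou` threshold are illustrative assumptions.

```python
# Illustrative sketch: greedy data association between detections of two
# consecutive frames. Boxes are (x1, y1, x2, y2); affinity is IoU.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def greedy_associate(prev_boxes, curr_boxes, min_iou=0.3):
    """Repeatedly link the highest-affinity unmatched pair (greedy matching)."""
    pairs = sorted(
        ((iou(p, c), i, j)
         for i, p in enumerate(prev_boxes)
         for j, c in enumerate(curr_boxes)),
        reverse=True)
    used_p, used_c, matches = set(), set(), []
    for score, i, j in pairs:
        if score < min_iou:
            break  # remaining pairs are even weaker edges
        if i not in used_p and j not in used_c:
            used_p.add(i)
            used_c.add(j)
            matches.append((i, j))
    return matches
```

The same affinity matrix could instead be fed to an optimal assignment or min-cost-flow solver; the greedy pass is just the simplest instance of the paradigm.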
{ "cite_N": [ "@cite_62", "@cite_26", "@cite_7", "@cite_28", "@cite_42", "@cite_1", "@cite_56", "@cite_40", "@cite_45", "@cite_25" ], "mid": [ "2805447362", "", "", "2007352603", "1932380673", "2509412582", "", "", "2607008612", "2035153336" ], "abstract": [ "In this paper, we propose to exploit the interactions between non-associable tracklets to facilitate multi-object tracking. We introduce two types of tracklet interactions, close interaction and distant interaction. The close interaction imposes physical constraints between two temporally overlapping tracklets, and more importantly, allows us to learn local classifiers to distinguish targets that are close to each other in the spatiotemporal domain. The distant interaction, on the other hand, accounts for the higher order motion and appearance consistency between two temporally isolated tracklets. Our approach is modeled as a binary labeling problem and solved using the efficient quadratic pseudo-Boolean optimization. It yields promising tracking performance on the challenging PETS09 and MOT16 dataset.", "", "", "Tracking multiple targets in a video, based on a finite set of detection hypotheses, is a persistent problem in computer vision. A common strategy for tracking is to first select hypotheses spatially and then to link these over time while maintaining disjoint path constraints [14, 15, 24]. In crowded scenes multiple hypotheses will often be similar to each other making selection of optimal links an unnecessary hard optimization problem due to the sequential treatment of space and time. Embracing this observation, we propose to link and cluster plausible detections jointly across space and time. Specifically, we state multi-target tracking as a Minimum Cost Subgraph Multicut Problem. Evidence about pairs of detection hypotheses is incorporated whether the detections are in the same frame, neighboring frames or distant frames. This facilitates long-range re-identification and within-frame clustering. 
Results for published benchmark sequences demonstrate the superiority of this approach.", "Data association is the backbone to many multiple object tracking (MOT) methods. In this paper we formulate data association as a Generalized Maximum Multi Clique problem (GMMCP). We show that this is the ideal case of modeling tracking in real world scenario where all the pairwise relationships between targets in a batch of frames are taken into account. Previous works assume simplified version of our tracker either in problem formulation or problem optimization. However, we propose a solution using GMMCP where no simplification is assumed in either steps. We show that the NP hard problem of GMMCP can be formulated through Binary-Integer Program where for small and medium size MOT problems the solution can be found efficiently. We further propose a speed-up method, employing Aggregated Dummy Nodes for modeling occlusion and miss-detection, which reduces the size of the input graph without using any heuristics. We show that, using the speedup method, our tracker lends itself to real-time implementation which is plausible in many applications. We evaluated our tracker on six challenging sequences of Town Center, TUD-Crossing, TUD-Stadtmitte, Parking-lot 1, Parking-lot 2 and Parking-lot pizza and show favorable improvement against state of art.", "Addressing the problem of Joint segmentation, reconstruction and tracking of multiple targets from multi-view videos.Casting the problem as data association among extracted superpixels from images.Optimizing a flow graph to solve the global data association in order to segment and reconstruct targets.Fast obtaining the solution of graph by performing two stages of optimization.Conduction experimental results on known public datasets and analyzing the proposed algorithm. Tracking of multiple targets in a crowded environment using tracking by detection algorithms has been investigated thoroughly. 
Although these techniques are quite successful, they suffer from the loss of much detailed information about targets in detection boxes, which is highly desirable in many applications like activity recognition. To address this problem, we propose an approach that tracks superpixels instead of detection boxes in multi-view video sequences. Specifically, we first extract superpixels from detection boxes and then associate them within each detection box, over several views and time steps that lead to a combined segmentation, reconstruction, and tracking of superpixels. We construct a flow graph and incorporate both visual and geometric cues in a global optimization framework to minimize its cost. Hence, we simultaneously achieve segmentation, reconstruction and tracking of targets in video. Experimental results confirm that the proposed approach outperforms state-of-the-art techniques for tracking while achieving comparable results in segmentation.", "", "", "We state a combinatorial optimization problem whose feasible solutions define both a decomposition and a node labeling of a given graph. This problem offers a common mathematical abstraction of seemingly unrelated computer vision tasks, including instance-separating semantic segmentation, articulated human body pose estimation and multiple object tracking. Conceptually, it generalizes the unconstrained integer quadratic program and the minimum cost lifted multicut problem, both of which are NP-hard. In order to find feasible solutions efficiently, we define two local search algorithms that converge monotonously to a local optimum, offering a feasible solution at any time. To demonstrate the effectiveness of these algorithms in tackling computer vision tasks, we apply them to instances of the problem that we construct from published data, using published algorithms. 
We report state-of-the-art application-specific accuracy in the three above-mentioned applications.", "We introduce an online learning approach for multitarget tracking. Detection responses are gradually associated into tracklets in multiple levels to produce final tracks. Unlike most previous approaches which only focus on producing discriminative motion and appearance models for all targets, we further consider discriminative features for distinguishing difficult pairs of targets. The tracking problem is formulated using an online learned CRF model, and is transformed into an energy minimization problem. The energy functions include a set of unary functions that are based on motion and appearance models for discriminating all targets, as well as a set of pairwise functions that are based on models for differentiating corresponding pairs of tracklets. The online CRF approach is more powerful at distinguishing spatially close targets with similar appearances, as well as in dealing with camera motions. An efficient algorithm is introduced for finding an association with low energy cost. We evaluate our approach on three public data sets, and show significant improvements compared with several state-of-art methods." ] }
1811.04091
2900128981
The task of multiple people tracking in monocular videos is challenging because of the numerous difficulties involved: occlusions, varying environments, crowded scenes, camera parameters and motion. In the tracking-by-detection paradigm, most approaches adopt person re-identification techniques based on computing the pairwise similarity between detections. However, these techniques are less effective in handling long-term occlusions. By contrast, tracklet (a sequence of detections) re-identification can improve association accuracy since tracklets offer a richer set of visual appearance and spatio-temporal cues. In this paper, we propose a tracking framework that employs a hierarchical clustering mechanism for merging tracklets. To this end, tracklet re-identification is performed by utilizing a novel multi-stage deep network that can jointly reason about the visual appearance and spatio-temporal properties of a pair of tracklets, thereby providing a robust measure of affinity. Experimental results on the challenging MOT16 and MOT17 benchmarks show that our method significantly outperforms state-of-the-arts.
By learning discriminative feature representations, deep learning has enhanced many computer vision applications such as image classification @cite_0 , video background subtraction @cite_10 , and pedestrian detection @cite_17 . In the context of tracking, Convolutional Neural Networks (CNNs) have been used to learn feature representations of targets instead of relying on heuristic, hand-crafted features @cite_37 @cite_23 @cite_16 . CNNs have also been used to model the similarity between a pair of detections @cite_18 @cite_13 . @cite_57 models appearance with temporal coherency by designing a quadruplet CNN. Adopting a different network structure, Milan et al. @cite_15 propose an end-to-end Recurrent Neural Network (RNN) for the data association problem in online multi-target tracking. They use RNNs for target state prediction, and to determine a track's birth and death in each frame.
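The pairwise-similarity models discussed above score how likely two detections are the same target. A hypothetical sketch of such a score, assuming appearance embeddings are already available (as a CNN would produce) and using an illustrative spatio-temporal gate whose `max_speed` threshold is an assumption, not a value from any cited paper:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pair_affinity(emb_a, emb_b, center_a, center_b, frame_gap,
                  max_speed=50.0):
    """Appearance similarity, zeroed when the implied motion is implausible."""
    dist = math.hypot(center_a[0] - center_b[0],
                      center_a[1] - center_b[1])
    if dist > max_speed * max(frame_gap, 1):
        return 0.0  # spatio-temporal gate: too far apart to be one target
    return cosine_similarity(emb_a, emb_b)
```

In the learned models cited above, both the embedding and the combination with spatio-temporal cues are trained end-to-end rather than hand-set as here.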
{ "cite_N": [ "@cite_13", "@cite_37", "@cite_18", "@cite_0", "@cite_57", "@cite_23", "@cite_15", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "", "1497265063", "", "", "2749203358", "1554825167", "2339473870", "2949120105", "2759692151", "2156547346" ], "abstract": [ "", "Convolutional neural network (CNN) models have demonstrated great success in various computer vision tasks including image classification and object detection. However, some equally important tasks such as visual tracking remain relatively unexplored. We believe that a major hurdle that hinders the application of CNN to visual tracking is the lack of properly labeled training data. While existing applications that liberate the power of CNN often need an enormous amount of training data in the order of millions, visual tracking applications typically have only one labeled example in the first frame of each video. We address this research issue here by pre-training a CNN offline and then transferring the rich feature hierarchies learned to online tracking. The CNN is also fine-tuned during online tracking to adapt to the appearance of the tracked target specified in the first video frame. To fit the characteristics of object tracking, we first pre-train the CNN to recognize what is an object, and then propose to generate a probability map instead of producing a simple class label. Using two challenging open benchmarks for performance evaluation, our proposed tracker has demonstrated substantial improvement over other state-of-the-art trackers.", "", "", "We propose Quadruplet Convolutional Neural Networks (Quad-CNN) for multi-object tracking, which learn to associate object detections across frames using quadruplet losses. The proposed networks consider target appearances together with their temporal adjacencies for data association. 
Unlike conventional ranking losses, the quadruplet loss enforces an additional constraint that makes temporally adjacent detections more closely located than the ones with large temporal gaps. We also employ a multi-task loss to jointly learn object association and bounding box regression for better localization. The whole network is trained end-to-end. For tracking, the target association is performed by minimax label propagation using the metric learned from the proposed network. We evaluate performance of our multi-object tracking algorithm on public MOT Challenge datasets, and achieve outstanding results.", "Deep neural networks, albeit their great success on feature learning in various computer vision tasks, are usually considered as impractical for online visual tracking, because they require very long training time and a large number of training samples. In this paper, we present an efficient and very robust tracking algorithm using a single convolutional neural network (CNN) for learning effective feature representations of the target object in a purely online manner. Our contributions are multifold. First, we introduce a novel truncated structural loss function that maintains as many training samples as possible and reduces the risk of tracking error accumulation. Second, we enhance the ordinary stochastic gradient descent approach in CNN training with a robust sample selection mechanism. The sampling mechanism randomly generates positive and negative samples from different temporal distributions, which are generated by taking the temporal relations and label noise into account. Finally, a lazy yet effective updating scheme is designed for CNN training. Equipped with this novel updating algorithm, the CNN model is robust to some long-existing difficulties in visual tracking, such as occlusion or incorrect detections, without loss of the effective adaption for significant appearance changes. 
In the experiment, our CNN tracker outperforms all compared state-of-the-art methods on two recently proposed benchmarks, which in total involve over 60 video sequences. The remarkable performance improvement over the existing trackers illustrates the superiority of the feature representations, which are learned purely online via the proposed deep learning framework.", "We present a novel approach to online multi-target tracking based on recurrent neural networks (RNNs). Tracking multiple objects in real-world scenes involves many challenges, including a) an a-priori unknown and time-varying number of targets, b) a continuous state estimation of all present targets, and c) a discrete combinatorial problem of data association. Most previous methods involve complex models that require tedious tuning of parameters. Here, we propose for the first time, an end-to-end learning approach for online multi-target tracking. Existing deep learning methods are not designed for the above challenges and cannot be trivially applied to the task. Our solution addresses all of the above points in a principled way. Experiments on both synthetic and real data show promising results obtained at 300 Hz on a standard CPU, and pave the way towards future research in this direction.", "Simple Online and Realtime Tracking (SORT) is a pragmatic approach to multiple object tracking with a focus on simple, effective algorithms. In this paper, we integrate appearance information to improve the performance of SORT. Due to this extension we are able to track objects through longer periods of occlusions, effectively reducing the number of identity switches. In spirit of the original framework we place much of the computational complexity into an offline pre-training stage where we learn a deep association metric on a large-scale person re-identification dataset. During online application, we establish measurement-to-track associations using nearest neighbor queries in visual appearance space. 
Experimental evaluation shows that our extensions reduce the number of identity switches by 45%, achieving overall competitive performance at high frame rates.", "We propose a novel approach based on deep learning for background subtraction from video sequences. A new algorithm to generate background model has been proposed. Input image patches and their corresponding background images are fed into CNN to do background subtraction. We utilized median filter to enhance the segmentation results. Experiments of Change detection results confirm the performance of the proposed approach. In this work, we present a novel background subtraction from video sequences algorithm that uses a deep Convolutional Neural Network (CNN) to perform the segmentation. With this approach, feature engineering and parameter tuning become unnecessary since the network parameters can be learned from data by training a single CNN that can handle various video scenes. Additionally, we propose a new approach to estimate background model from video sequences. For the training of the CNN, we employed randomly 5 video frames and their ground truth segmentations taken from the Change Detection challenge 2014 (CDnet 2014). We also utilized spatial-median filtering as the post-processing of the network outputs. Our method is evaluated with different data-sets, and it (so-called DeepBS) outperforms the existing algorithms with respect to the average ranking over different evaluation metrics announced in CDnet 2014. Furthermore, due to the network architecture, our CNN is capable of real time processing.", "Feature extraction, deformation handling, occlusion handling, and classification are four important components in pedestrian detection. Existing methods learn or design these components either individually or sequentially. The interaction among these components is not yet well explored. This paper proposes that they should be jointly learned in order to maximize their strengths through cooperation. 
We formulate these four components into a joint deep learning framework and propose a new deep network architecture. By establishing automatic, mutual interaction among components, the deep model achieves a 9% reduction in the average miss rate compared with the current best-performing pedestrian detection approaches on the largest Caltech benchmark dataset." ] }
1811.04091
2900128981
The task of multiple people tracking in monocular videos is challenging because of the numerous difficulties involved: occlusions, varying environments, crowded scenes, camera parameters and motion. In the tracking-by-detection paradigm, most approaches adopt person re-identification techniques based on computing the pairwise similarity between detections. However, these techniques are less effective in handling long-term occlusions. By contrast, tracklet (a sequence of detections) re-identification can improve association accuracy since tracklets offer a richer set of visual appearance and spatio-temporal cues. In this paper, we propose a tracking framework that employs a hierarchical clustering mechanism for merging tracklets. To this end, tracklet re-identification is performed by utilizing a novel multi-stage deep network that can jointly reason about the visual appearance and spatio-temporal properties of a pair of tracklets, thereby providing a robust measure of affinity. Experimental results on the challenging MOT16 and MOT17 benchmarks show that our method significantly outperforms the state of the art.
Recently, Ma et al. @cite_38 presented a framework that employs a three-step process in which tracklets are first created, then cleaved, and then reconnected using a combination of Siamese-trained CNNs, Bi-Gated Recurrent Unit (GRU) and LSTM cells. By contrast, our approach utilizes a hierarchical clustering mechanism with a single multi-stage network to compute tracklet similarity, thereby minimizing false associations in the first step and mitigating the need for tracklet cleaving and reconnection. In @cite_49 , a multi-stage network was proposed to model the appearance, motion and interaction of targets. Their network design is similar to ours, but with the key difference that our model computes the similarity between two tracklets, rather than between a tracklet and a single detection.
{ "cite_N": [ "@cite_38", "@cite_49" ], "mid": [ "2796655392", "2951063106" ], "abstract": [ "Multi-Object Tracking (MOT) is a challenging task in the complex scene such as surveillance and autonomous driving. In this paper, we propose a novel tracklet processing method to cleave and re-connect tracklets on crowd or long-term occlusion by Siamese Bi-Gated Recurrent Unit (GRU). The tracklet generation utilizes object features extracted by CNN and RNN to create the high-confidence tracklet candidates in sparse scenario. Due to mis-tracking in the generation process, the tracklets from different objects are split into several sub-tracklets by a bidirectional GRU. After that, a Siamese GRU based tracklet re-connection method is applied to link the sub-tracklets which belong to the same object to form a whole trajectory. In addition, we extract the tracklet images from existing MOT datasets and propose a novel dataset to train our networks. The proposed dataset contains more than 95160 pedestrian images. It has 793 different persons in it. On average, there are 120 images for each person with positions and sizes. Experimental results demonstrate the advantages of our model over the state-of-the-art methods on MOT16.", "We present a multi-cue metric learning framework to tackle the popular yet unsolved Multi-Object Tracking (MOT) problem. One of the key challenges of tracking methods is to effectively compute a similarity score that models multiple cues from the past such as object appearance, motion, or even interactions. This is particularly challenging when objects get occluded or share similar appearance properties with surrounding objects. To address this challenge, we cast the problem as a metric learning task that jointly reasons on multiple cues across time. Our framework learns to encode long-term temporal dependencies across multiple cues with a hierarchical Recurrent Neural Network. 
We demonstrate the strength of our approach by tracking multiple objects using their appearance, motion, and interactions. Our method outperforms previous works by a large margin on multiple publicly available datasets including the challenging MOT benchmark." ] }
1811.03823
2899920407
Strong Stackelberg equilibrium (SSE) is the standard solution concept of Stackelberg security games. As opposed to the weak Stackelberg equilibrium (WSE), the SSE assumes that the follower breaks ties in favor of the leader; this is widely acknowledged and justified by the assertion that the defender can often induce the attacker to choose a preferred action by making an infinitesimal adjustment to her strategy. Unfortunately, in security games with resource assignment constraints, the assertion might not be valid; it is possible that the defender cannot induce the desired outcome. As a result, many results claimed in the literature may be overly optimistic. To remedy this, we first formally define the utility guarantee of a defender strategy and provide examples to show that the utility of the SSE can be higher than its utility guarantee. Second, inspired by the analysis of the leader's payoff by Von Stengel and Zamir (2004), we propose a solution concept called the inducible Stackelberg equilibrium (ISE), which has the highest utility guarantee and always exists. Third, we show the conditions under which the ISE coincides with the SSE, and that in the general case, the SSE can be much worse with respect to its utility guarantee. Moreover, introducing the ISE does not invalidate existing algorithmic results, as the problem of computing an ISE polynomially reduces to that of computing an SSE. We also provide an algorithmic implementation for computing the ISE, with which our experiments unveil the empirical advantage of the ISE over the SSE.
To the best of our knowledge, Okamoto12 are the only exception to have raised the concern about the lack of inducibility in security games, though their model is a very specific type of network security game that cannot be generalized to standard security games, especially games with scheduling constraints. Moreover, the more important questions regarding the overoptimism caused by the lack of inducibility, and the algorithmic remedies needed for such overoptimism, were left unanswered (in particular, the solution algorithm proposed by Okamoto12 converges only to a local optimum, even in their setting). These questions are addressed in this paper. The concept introduced in our paper (Definition ) is inspired by one first proposed by von Stengel and Zamir in their study of general Stackelberg games. However, the focus of their work was solely on characterizing the range of the leader's utility in Stackelberg equilibria, with the aim of confirming the advantage of commitment @cite_21 @cite_22 . Some other works considered potential deviations of the attacker from the optimal response and proposed solution concepts that are robust to these deviations @cite_24 @cite_8 @cite_9 . Our work differs from this line of research in that we consider perfectly rational attackers.
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_9", "@cite_21", "@cite_24" ], "mid": [ "2166294754", "2396886722", "378366556", "1643208677", "2119249434" ], "abstract": [ "A basic model of commitment is to convert a two-player game in strategic form to a “leadership game” with the same payoffs, where one player, the leader, commits to a strategy, to which the second player always chooses a best reply. This paper studies such leadership games for games with convex strategy sets. We apply them to mixed extensions of finite games, which we analyze completely, including nongeneric games. The main result is that leadership is advantageous in the sense that, as a set, the leader's payoffs in equilibrium are at least as high as his Nash and correlated equilibrium payoffs in the simultaneous game. We also consider leadership games with three or more players, where most conclusions no longer hold.", "Illegal poaching is an international problem that leads to the extinction of species and the destruction of ecosystems. As evidenced by dangerously dwindling populations of endangered species, existing anti-poaching mechanisms are insufficient. This paper introduces the Protection Assistant for Wildlife Security (PAWS) application - a joint deployment effort done with researchers at Uganda's Queen Elizabeth National Park (QENP) with the goal of improving wildlife ranger patrols. While previous works have deployed applications with a game-theoretic approach (specifically Stackelberg Games) for counter-terrorism, wildlife crime is an important domain that promotes a wide range of new deployments. Additionally, this domain presents new research challenges and opportunities related to learning behavioral models from collected poaching data. In addressing these challenges, our first contribution is a behavioral model extension that captures the heterogeneity of poachers' decision making processes. 
Second, we provide a novel framework, PAWS-Learn, that incrementally improves the behavioral model of the poacher population with more data. Third, we develop a new algorithm, PAWS-Adapt, that adaptively improves the resource allocation strategy against the learned model of poachers. Fourth, we demonstrate PAWS's potential effectiveness when applied to patrols in QENP, where PAWS will be deployed.", "Recent deployments of Stackelberg security games (SSG) have led to two competing approaches to handle boundedly rational human adversaries: (1) integrating models of human (adversary) decision-making into the game-theoretic algorithms, and (2) applying robust optimization techniques that avoid adversary modeling. A recent algorithm (MATCH) based on the second approach was shown to outperform the leading modeling-based algorithm even in the presence of significant amount of data. Is there then any value in using human behavior models in solving SSGs? Through extensive experiments with 547 human subjects playing 11102 games in total, we emphatically answer the question in the affirmative, while providing the following key contributions: (i) we show that our algorithm, SU-BRQR, based on a novel integration of human behavior model with the subjective utility function, significantly outperforms both MATCH and its improvements; (ii) we are the first to present experimental results with security intelligence experts, and find that even though the experts are more rational than the Amazon Turk workers, SU-BRQR still outperforms an approach assuming perfect rationality (and to a more limited extent MATCH); (iii) we show the advantage of SU-BRQR in a new, large game setting and demonstrate that sufficient data enables it to improve its performance over MATCH.", "A basic model of commitment is to convert a game in strategic form into a “leadership game” where one player commits to a strategy to which the other player chooses a best response, with payoffs as in the original game. 
This paper studies subgame perfect equilibria of such leadership games for the mixed extension of a finite game, where the leader commits to a mixed strategy. In a generic two-player game, the leader payoff is unique and at least as large as any Nash payoff in the original simultaneous game. In non-generic two-player games, which are completely analyzed, the leader payoffs may form an interval, which as a set of payoffs is never worse than the Nash payoffs for the player who has the commitment power. Furthermore, the set of payoffs to the leader is also at least as good as the set of correlated equilibrium payoffs. These observations no longer hold in leadership games with three or more players. The possible payoffs to the follower are shown to be arbitrary compared to the simultaneous game or the game where the players switch their roles of leader and follower. Curiously, the follower payoff is not so arbitrary in typical", "How do we build multiagent algorithms for agent interactions with human adversaries? Stackelberg games are natural models for many important applications that involve human interaction, such as oligopolistic markets and security domains. In Stackelberg games, one player, the leader, commits to a strategy and the follower makes their decision with knowledge of the leader's commitment. Existing algorithms for Stackelberg games efficiently find optimal solutions (leader strategy), but they critically assume that the follower plays optimally. Unfortunately, in real-world applications, agents face human followers (adversaries) who --- because of their bounded rationality and limited observation of the leader strategy --- may deviate from their expected optimal response. Not taking into account these likely deviations when dealing with human adversaries can cause an unacceptable degradation in the leader's reward, particularly in security applications where these algorithms have seen real-world deployment. 
To address this crucial problem, this paper introduces three new mixed-integer linear programs (MILPs) for Stackelberg games to consider human adversaries, incorporating: (i) novel anchoring theories on human perception of probability distributions and (ii) robustness approaches for MILPs to address human imprecision. Since these new approaches consider human adversaries, traditional proofs of correctness or optimality are insufficient; instead, it is necessary to rely on empirical validation. To that end, this paper considers two settings based on real deployed security systems, and compares 6 different approaches (three new with three previous approaches), in 4 different observability conditions, involving 98 human subjects playing 1360 games in total. The final conclusion was that a model which incorporates both the ideas of robustness and anchoring achieves statistically significant better rewards and also maintains equivalent or faster solution speeds compared to existing approaches." ] }
1811.03925
2899996857
Most existing methods determine relation types only after all the entities have been recognized, thus the interaction between relation types and entity mentions is not fully modeled. This paper presents a novel paradigm to deal with relation extraction by regarding the related entities as the arguments of a relation. We apply a hierarchical reinforcement learning (HRL) framework in this paradigm to enhance the interaction between entity mentions and relation types. The whole extraction process is decomposed into a hierarchy of two-level RL policies for relation detection and entity extraction respectively, so that it is more feasible and natural to deal with overlapping relations. Our model was evaluated on public datasets collected via distant supervision, and results show that it gains better performance than existing methods and is more powerful for extracting overlapping relations.
Traditional pipelined approaches treat entity extraction and relation classification as two separate tasks @cite_2 @cite_23 @cite_26 . They first extract the token spans in the text to detect entity mentions, and then discover the relational structures between entity mentions. Although pipelined methods are flexible to build, they suffer from error propagation, since downstream modules are largely affected by the errors introduced by upstream modules.
{ "cite_N": [ "@cite_26", "@cite_23", "@cite_2" ], "mid": [ "1888005072", "281284504", "2107598941" ], "abstract": [ "This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes. In this paper, we propose a novel network embedding method called the LINE,'' which is suitable for arbitrary types of information networks: undirected, directed, and or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of the LINE is available online https: github.com tangjianpku LINE .", "Compositional embedding models build a representation (or embedding) for a linguistic structure based on its component word embeddings. We propose a Feature-rich Compositional Embedding Model (FCM) for relation extraction that is expressive, generalizes to new domains, and is easy-to-implement. The key idea is to combine both (unlexicalized) hand-crafted features with learned word embeddings. The model is able to directly tackle the difficulties met by traditional compositional embeddings models, such as handling arbitrary types of sentence annotations and utilizing global information for composition. 
We test the proposed model on two relation extraction tasks, and demonstrate that our model outperforms both previous compositional models and traditional feature rich models on the ACE 2005 relation extraction task, and the SemEval 2010 relation classification task. The combination of our model and a log-linear classifier with hand-crafted features gives state-of-the-art results.", "Modern models of relation extraction for tasks like ACE are based on supervised learning of relations from small hand-labeled corpora. We investigate an alternative paradigm that does not require labeled corpora, avoiding the domain dependence of ACE-style algorithms, and allowing the use of corpora of any size. Our experiments use Freebase, a large semantic database of several thousand relations, to provide distant supervision. For each pair of entities that appears in some Freebase relation, we find all sentences containing those entities in a large unlabeled corpus and extract textual features to train a relation classifier. Our algorithm combines the advantages of supervised IE (combining 400,000 noisy pattern features in a probabilistic classifier) and unsupervised IE (extracting large numbers of relations from large corpora of any domain). Our model is able to extract 10,000 instances of 102 relations at a precision of 67.6 . We also analyze feature performance, showing that syntactic parse features are particularly helpful for relations that are ambiguous or lexically distant in their expression." ] }
1811.03925
2899996857
Most existing methods determine relation types only after all the entities have been recognized, thus the interaction between relation types and entity mentions is not fully modeled. This paper presents a novel paradigm to deal with relation extraction by regarding the related entities as the arguments of a relation. We apply a hierarchical reinforcement learning (HRL) framework in this paradigm to enhance the interaction between entity mentions and relation types. The whole extraction process is decomposed into a hierarchy of two-level RL policies for relation detection and entity extraction respectively, so that it is more feasible and natural to deal with overlapping relations. Our model was evaluated on public datasets collected via distant supervision, and results show that it gains better performance than existing methods and is more powerful for extracting overlapping relations.
To address this problem, a variety of joint learning methods have been proposed. kate2010joint proposed a card-pyramid graph structure for joint extraction, and hoffmann2011knowledge developed graph-based multi-instance learning algorithms. However, both methods applied a greedy search strategy that aggressively reduces the exploration space, which limits their performance. Other studies employed a structured learning approach @cite_1 @cite_8 . All these models depend on feature engineering, which requires much manual effort and domain expertise. On the other hand, bjorne2011extracting proposed to first extract relation triggers, which refer to phrases that express the occurrence of a relation in a sentence, and then determine their arguments to reduce the task complexity. Open IE systems @cite_20 identify relational phrases using lexical constraints, which also follows a "relation-first, argument-second" approach. However, in many cases no relation trigger appears in the sentence, so such relations cannot be captured by these methods.
{ "cite_N": [ "@cite_1", "@cite_20", "@cite_8" ], "mid": [ "2134033474", "2167187514", "2251091211" ], "abstract": [ "We present an incremental joint framework to simultaneously extract entity mentions and relations using structured perceptron with efficient beam-search. A segment-based decoder based on the idea of semi-Markov chain is adopted to the new framework as opposed to traditional token-based tagging. In addition, by virtue of the inexact search, we developed a number of new and effective global features as soft constraints to capture the interdependency among entity mentions and relations. Experiments on Automatic Content Extraction (ACE) 1 corpora demonstrate that our joint model significantly outperforms a strong pipelined baseline, which attains better performance than the best-reported end-to-end system.", "Open Information Extraction (IE) is the task of extracting assertions from massive corpora without requiring a pre-specified vocabulary. This paper shows that the output of state-of-the-art Open IE systems is rife with uninformative and incoherent extractions. To overcome these problems, we introduce two simple syntactic and lexical constraints on binary relations expressed by verbs. We implemented the constraints in the ReVerb Open IE system, which more than doubles the area under the precision-recall curve relative to previous extractors such as TextRunner and woepos. More than 30 of ReVerb's extractions are at precision 0.8 or higher---compared to virtually none for earlier systems. The paper concludes with a detailed analysis of ReVerb's errors, suggesting directions for future work.", "This paper proposes a history-based structured learning approach that jointly extracts entities and relations in a sentence. We introduce a novel simple and flexible table representation of entities and relations. We investigate several feature settings, search orders, and learning methods with inexact search on the table. 
The experimental results demonstrate that a joint learning approach significantly outperforms a pipeline approach by incorporating global features and by selecting appropriate learning methods and search orders." ] }
1811.03853
2900393186
Direct policy search is one of the most important algorithms in reinforcement learning. However, learning from scratch requires a large amount of experience data and is easily prone to poor local optima. In addition, a partially trained policy tends to perform actions that are dangerous to the agent and the environment. To overcome these challenges, this paper proposes a policy initialization algorithm called Policy Learning based on Completely Behavior Cloning (PLCBC). PLCBC first transforms a Model Predictive Control (MPC) controller into a piecewise affine (PWA) function using multi-parametric programming, and uses a neural network to express this function. In this way, PLCBC can completely clone the MPC controller without any performance loss, and is entirely training-free. The experiments show that this initialization strategy helps the agent learn in high-reward state regions, and converge faster and better.
Broadly speaking, deep reinforcement learning methods can be roughly classified into two categories: first, reward-based methods, including deep Q-learning @cite_23 and policy gradient algorithms @cite_31 ; second, imitation-based methods, including naive supervised learning, Dataset Aggregation (DAgger) @cite_10 , and Guided Policy Search (GPS) @cite_26 .
{ "cite_N": [ "@cite_31", "@cite_26", "@cite_10", "@cite_23" ], "mid": [ "2173248099", "2121103318", "2735089625", "1757796397" ], "abstract": [ "We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.", "We present a policy search method that uses iteratively refitted local linear models to optimize trajectory distributions for large, continuous problems. These trajectory distributions can be used within the framework of guided policy search to learn policies with an arbitrary parameterization. Our method fits time-varying linear dynamics models to speed up learning, but does not rely on learning a global model, which can be difficult when the dynamics are complex and discontinuous. We show that this hybrid approach requires many fewer samples than model-free methods, and can handle complex, nonsmooth dynamics that can pose a challenge for model-based techniques. We present experiments showing that our method can be used to learn complex neural network policies that successfully execute simulated robotic manipulation tasks in partially observed environments with numerous contact discontinuities and underactuation.", "Deep generative models have recently shown great promise in imitation learning for motor control. 
Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. In this paper, we show how to combine the favourable aspects of these two approaches. The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and then smoothly interpolated with a resulting smooth interpolation of reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely-supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment.", "We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them." ] }
1811.03796
2897634076
Abstract Relation extraction is the task of identifying predefined relationships between entities, and plays an essential role in information extraction, knowledge base construction, question answering and so on. Most existing relation extractors make predictions for each entity pair locally and individually, while ignoring implicit global clues available across different entity pairs and in the knowledge base, which often leads to conflicts among local predictions from different entity pairs. This paper proposes a joint inference framework that employs such global clues to resolve disagreements among local predictions. We exploit two kinds of clues to generate constraints that can capture the implicit type and cardinality requirements of a relation. Those constraints can be examined in either hard style or soft style, both of which can be effectively explored in an integer linear program formulation. Experimental results on both English and Chinese datasets show that our proposed framework can effectively utilize those two categories of global clues and resolve the disagreements among local predictions, thus improving various relation extractors when such clues are applicable to the datasets. Our experiments also indicate that the clues learnt automatically from existing knowledge bases perform comparably to or better than those refined by humans.
Since traditional supervised relation extraction methods @cite_11 @cite_15 @cite_21 require manual annotations and are often domain-specific, many recent efforts focus on open information extraction, which can extract hundreds of thousands of relations from large-scale web texts using semi-supervised or unsupervised methods @cite_28 @cite_35 @cite_31 @cite_12 @cite_23 @cite_27 . However, these relations are often not canonicalized, and are therefore difficult to map to an existing KB.
{ "cite_N": [ "@cite_35", "@cite_31", "@cite_28", "@cite_21", "@cite_27", "@cite_23", "@cite_15", "@cite_12", "@cite_11" ], "mid": [ "2167187514", "2161494021", "2127978399", "910440858", "1512387364", "2152380671", "2146191280", "2159750428", "1502749598" ], "abstract": [ "Open Information Extraction (IE) is the task of extracting assertions from massive corpora without requiring a pre-specified vocabulary. This paper shows that the output of state-of-the-art Open IE systems is rife with uninformative and incoherent extractions. To overcome these problems, we introduce two simple syntactic and lexical constraints on binary relations expressed by verbs. We implemented the constraints in the ReVerb Open IE system, which more than doubles the area under the precision-recall curve relative to previous extractors such as TextRunner and woepos. More than 30 of ReVerb's extractions are at precision 0.8 or higher---compared to virtually none for earlier systems. The paper concludes with a detailed analysis of ReVerb's errors, suggesting directions for future work.", "Information-extraction (IE) systems seek to distill semantic relations from natural-language text, but most systems use supervised learning of relation-specific examples and are thus limited by the availability of training data. Open IE systems such as TextRunner, on the other hand, aim to handle the unbounded number of relations found on the Web. But how well can these open systems perform? This paper presents WOE, an open IE system which improves dramatically on TextRunner's precision and recall. The key to WOE's performance is a novel form of self-supervised learning for open extractors -- using heuristic matches between Wikipedia infobox attribute values and corresponding sentences to construct training data. Like TextRunner, WOE's extractor eschews lexicalized features and handles an unbounded set of semantic relations. 
WOE can operate in two modes: when restricted to POS tag features, it runs as quickly as TextRunner, but when set to use dependency-parse features its precision and recall rise even higher.", "To implement open information extraction, a new extraction paradigm has been developed in which a system makes a single data-driven pass over a corpus of text, extracting a large set of relational tuples without requiring any human input. Using training data, a Self-Supervised Learner employs a parser and heuristics to determine criteria that will be used by an extraction classifier (or other ranking model) for evaluating the trustworthiness of candidate tuples that have been extracted from the corpus of text, by applying heuristics to the corpus of text. The classifier retains tuples with a sufficiently high probability of being trustworthy. A redundancy-based assessor assigns a probability to each retained tuple to indicate a likelihood that the retained tuple is an actual instance of a relationship between a plurality of objects comprising the retained tuple. The retained tuples comprise an extraction graph that can be queried for information.", "The goal of relation extraction is to detect relations between two entities in free text. In a sentence, a relation instance usually comprises a small number of words, which yields a sparse feature representation. To make better use of limited information in a relation instance, parsing trees and combined features are employed widely to capture the local dependencies of relation instances. However, the performance of parsing tree-based systems is often degraded by chunking or parsing errors. Combined features are used widely, but few studies have addressed how features can be combined to achieve optimal performance. Thus, in this study, we propose a feature assembly method for relation extraction. Six types of candidate features (head noun, POS tag, n-gram, omni-word, etc.) 
are employed as atomic features and six constraint conditions (singleton, position, syntax, etc.) are used to combine these features in different settings. Depending on the utilization of candidate features, different constraint conditions can be explored to achieve the optimal extraction performance. Our method is effective for capturing local dependencies and it reduces the errors caused by inaccurate parsing. We tested the proposed method using the ACE 2005 Chinese and English corpora, and it achieved state-of-the-art performance, where it was significantly superior to existing methods.", "We consider here the problem of building a never-ending language learner; that is, an intelligent computer agent that runs forever and that each day must (1) extract, or read, information from the web to populate a growing structured knowledge base, and (2) learn to perform this task better than on the previous day. In particular, we propose an approach and a set of design principles for such an agent, describe a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs with an estimated precision of 74 after running for 67 days, and discuss lessons learned from this preliminary attempt to build a never-ending learning agent.", "Traditional relation extraction seeks to identify pre-specified semantic relations within natural language text, while open Information Extraction (Open IE) takes a more general approach, and looks for a variety of relations without restriction to a fixed relation set. With this generalization comes the question, what is a relation? For example, should the more general task be restricted to relations mediated by verbs, nouns, or both? To help answer this question, we propose two levels of subtasks for Open IE. One task is to determine if a sentence potentially contains a relation between two entities? The other task looks to confirm explicit relation words for two entities. 
We propose multiple SVM models with dependency tree kernels for both tasks. For explicit relation extraction, our system can extract both noun and verb relations. Our results on three datasets show that our system is superior when compared to state-of-the-art systems like REVERB and OLLIE for both tasks. For example, in some experiments our system achieves 33 improvement on nominal relation extraction over OLLIE. In addition we propose an unsupervised rule-based approach which can serve as a strong baseline for Open IE systems.", "Entity relation detection is a form of information extraction that finds predefined relations between pairs of entities in text. This paper describes a relation detection approach that combines clues from different levels of syntactic processing using kernel methods. Information from three different levels of processing is considered: tokenization, sentence parsing and deep dependency analysis. Each source of information is represented by kernel functions. Then composite kernels are developed to integrate and extend individual kernels so that processing errors occurring at one level can be overcome by information from other levels. We present an evaluation of these methods on the 2004 ACE relation detection task, using Support Vector Machines, and show that each level of syntactic processing contributes useful information for this task. When evaluated on the official test data, our approach produced very competitive ACE value scores. We also compare the SVM with KNN on different kernels.", "Open Relation Extraction (ORE) overcomes the limitations of traditional IE techniques, which train individual extractors for every single relation type. Systems such as ReVerb, PATTY, OLLIE, and Exemplar have attracted much attention on English ORE. However, few studies have been reported on ORE for languages beyond English. 
This paper presents a syntax-based Chinese (Zh) ORE system, ZORE, for extracting relations and semantic patterns from Chinese text. ZORE identifies relation candidates from automatically parsed dependency trees, and then extracts relations with their semantic patterns iteratively through a novel double propagation algorithm. Empirical results on two data sets show the effectiveness of the proposed system.", "One of the central knowledge sources of an information extraction (IE) system IS a dictionary of linguistic patterns that can be used to identify references to relevant information in a text Automatic creation of conceptual dictionaries is important for portability and scalability of an IE system This paper describes CRYSTAL, a system which automatically induces a dictionary of \"concept-node definitions\" sufficient to identify relevant information from a training corpus Each of these concept-node definitions is generalized as far as possible without producing errors, so that a minimum number of dictionary entries cover the positive training instances Because it tests the accuracy of each proposed definition, CRYSTAL can often surpass human intuitions in creating reliable extraction rules." ] }
1811.03796
2897634076
Abstract Relation extraction is the task of identifying predefined relationships between entities, and plays an essential role in information extraction, knowledge base construction, question answering and so on. Most existing relation extractors make predictions for each entity pair locally and individually, while ignoring implicit global clues available across different entity pairs and in the knowledge base, which often leads to conflicts among local predictions from different entity pairs. This paper proposes a joint inference framework that employs such global clues to resolve disagreements among local predictions. We exploit two kinds of clues to generate constraints that can capture the implicit type and cardinality requirements of a relation. Those constraints can be examined in either hard style or soft style, both of which can be effectively explored in an integer linear program formulation. Experimental results on both English and Chinese datasets show that our proposed framework can effectively utilize those two categories of global clues and resolve the disagreements among local predictions, thus improving various relation extractors when such clues are applicable to the datasets. Our experiments also indicate that the clues learnt automatically from existing knowledge bases perform comparably to or better than those refined by humans.
Distant supervision (DS) is a semi-supervised relation extraction framework which automatically constructs training data by aligning the triples in a KB to sentences that contain their subjects and objects; this learning paradigm has attracted much attention in information extraction tasks @cite_8 @cite_16 @cite_10 @cite_29 @cite_20 @cite_2 @cite_25 @cite_5 . DS approaches can predict canonicalized relations (predefined in a KB) for large amounts of data and require little human involvement. Since the automatically generated training datasets in DS often contain noise, there are also research efforts focusing on reducing the noisy labels in the training data @cite_33 @cite_0 , or utilizing human annotated data to improve the performance @cite_19 @cite_13 . Most of the above works put their emphasis on resolving or reducing the noise in the DS training data, but mostly focus on the extraction models themselves, i.e., improving the local extractors, while ignoring the inconsistencies among many local predictions.
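The alignment heuristic at the heart of distant supervision can be sketched in a few lines. This is a minimal illustration only: the KB triples, the toy corpus, and the substring-based entity matching are all invented for the example, and real systems rely on NER, entity linking, and multi-instance learning to cope with the noisy labels the heuristic produces.

```python
# Distant-supervision labeling sketch: a sentence mentioning both the
# subject and object of a KB triple is heuristically labeled with that
# triple's relation.
kb_triples = [
    ("Barack Obama", "born_in", "Honolulu"),
    ("Honolulu", "located_in", "Hawaii"),
]

corpus = [
    "Barack Obama was born in Honolulu, Hawaii.",
    "Honolulu is the capital of Hawaii.",
    "Barack Obama served as the 44th president.",
]

def distant_label(sentences, triples):
    labeled = []
    for sent in sentences:
        for subj, rel, obj in triples:
            # Naive substring matching; this is exactly where the
            # wrong-label noise discussed above comes from.
            if subj in sent and obj in sent:
                labeled.append((sent, subj, obj, rel))
    return labeled

data = distant_label(corpus, kb_triples)
```

Note that the first sentence is labeled with both relations (it mentions all three entities), which illustrates how a single sentence bag can accumulate labels of varying reliability.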
{ "cite_N": [ "@cite_13", "@cite_33", "@cite_8", "@cite_29", "@cite_0", "@cite_19", "@cite_2", "@cite_5", "@cite_16", "@cite_10", "@cite_25", "@cite_20" ], "mid": [ "2251847161", "2149713870", "2150588363", "2401642934", "2250265269", "2251832915", "174427690", "2515462165", "2107598941", "2145453687", "2251135946", "2132679783" ], "abstract": [ "Broad-coverage relation extraction either requires expensive supervised training data, or suffers from drawbacks inherent to distant supervision. We present an approach for providing partial supervision to a distantly supervised relation extractor using a small number of carefully selected examples. We compare against established active learning criteria and propose a novel criterion to sample examples which are both uncertain and representative. In this way, we combine the benefits of fine-grained supervision for difficult examples with the coverage of a large distantly supervised corpus. Our approach gives a substantial increase of 3.9 endto-end F1 on the 2013 KBP Slot Filling evaluation, yielding a net F1 of 37.7 .", "In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a sentence and a knowledge base refer to the same entity pair, this approach heuristically labels the sentence with the corresponding relation in the knowledge base. However, this heuristic can fail with the result that some sentences are labeled wrongly. This noisy labeled data causes poor extraction performance. In this paper, we propose a method to reduce the number of wrong labels. We present a novel generative model that directly models the heuristic labeling process of distant supervision. The model predicts whether assigned labels are correct or wrong via its hidden variables. Our experimental results show that this model detected wrong labels with higher performance than baseline methods. 
In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.", "We present a new approach to relation extraction that requires only a handful of training examples. Given a few pairs of named entities known to exhibit or not exhibit a particular relation, bags of sentences containing the pairs are extracted from the web. We extend an existing relation extraction method to handle this weaker form of supervision, and present experimental results demonstrating that our approach can reliably extract relations from web documents.", "This paper describes the design and implementation of the slot filling system prepared by Stanford’s natural language processing group for the 2010 Knowledge Base Population (KBP) track at the Text Analysis Conference (TAC). Our system relies on a simple distant supervision approach using mainly resources furnished by the track organizers: we used slot examples from the provided knowledge base, which we mapped to documents from several corpora, i.e., those distributed by the organizers, Wikipedia, and web snippets. Our implementation attained the median rank among all participating systems.", "Distant supervision has attracted recent interest for training information extraction systems because it does not require any human annotation but rather employs existing knowledge bases to heuristically label a training corpus. However, previous work has failed to address the problem of false negative training examples mislabeled due to the incompleteness of knowledge bases. To tackle this problem, we propose a simple yet novel framework that combines a passage retrieval model using coarse features into a state-of-the-art relation extractor using multi-instance learning with fine features. We adapt the information retrieval technique of pseudorelevance feedback to expand knowledge bases, assuming entity pairs in top-ranked passages are more likely to express a relation. 
Our proposed technique significantly improves the quality of distantly supervised relation extraction, boosting recall from 47.7 to 61.2 with a consistently high level of precision of around 93 in the experiments.", "Distant supervision usually utilizes only unlabeled data and existing knowledge bases to learn relation extraction models. However, in some cases a small amount of human labeled data is available. In this paper, we demonstrate how a state-of-theart multi-instance multi-label model can be modified to make use of these reliable sentence-level labels in addition to the relation-level distant supervision from a database. Experiments show that our approach achieves a statistically significant increase of 13.5 in F-score and 37 in area under the precision recall curve.", "Distant supervision for relation extraction (RE) -- gathering training data by aligning a database of facts with text -- is an efficient approach to scale RE to thousands of different relations. However, this introduces a challenging learning scenario where the relation expressed by a pair of entities found in a sentence is unknown. For example, a sentence containing Balzac and France may express BornIn or Died, an unknown relation, or no relation at all. Because of this, traditional supervised learning, which assumes that each example is explicitly mapped to a label, is not appropriate. We propose a novel approach to multi-instance multi-label learning for RE, which jointly models all the instances of a pair of entities in text and all their labels using a graphical model with latent variables. Our model performs competitively on two difficult domains.", "", "Modern models of relation extraction for tasks like ACE are based on supervised learning of relations from small hand-labeled corpora. We investigate an alternative paradigm that does not require labeled corpora, avoiding the domain dependence of ACE-style algorithms, and allowing the use of corpora of any size. 
Our experiments use Freebase, a large semantic database of several thousand relations, to provide distant supervision. For each pair of entities that appears in some Freebase relation, we find all sentences containing those entities in a large unlabeled corpus and extract textual features to train a relation classifier. Our algorithm combines the advantages of supervised IE (combining 400,000 noisy pattern features in a probabilistic classifier) and unsupervised IE (extracting large numbers of relations from large corpora of any domain). Our model is able to extract 10,000 instances of 102 relations at a precision of 67.6 . We also analyze feature performance, showing that syntactic parse features are particularly helpful for relations that are ambiguous or lexically distant in their expression.", "We present a novel approach to relation extraction that integrates information across documents, performs global inference and requires no labelled text. In particular, we tackle relation extraction and entity identification jointly. We use distant supervision to train a factor graph model for relation extraction based on an existing knowledge base (Freebase, derived in parts from Wikipedia). For inference we run an efficient Gibbs sampler that leads to linear time joint inference. We evaluate our approach both for an indomain (Wikipedia) and a more realistic out-of-domain (New York Times Corpus) setting. For the in-domain setting, our joint model leads to 4 higher precision than an isolated local approach, but has no advantage over a pipeline. For the out-of-domain data, we benefit strongly from joint modelling, and observe improvements in precision of 13 over the pipeline, and 15 over the isolated baseline.", "Two problems arise when using distant supervision for relation extraction. First, in this method, an already existing knowledge base is heuristically aligned to texts, and the alignment results are treated as labeled data. 
However, the heuristic alignment can fail, resulting in wrong label problem. In addition, in previous approaches, statistical models have typically been applied to ad hoc features. The noise that originates from the feature extraction process can cause poor performance. In this paper, we propose a novel model dubbed the Piecewise Convolutional Neural Networks (PCNNs) with multi-instance learning to address these two problems. To solve the first problem, distant supervised relation extraction is treated as a multi-instance problem in which the uncertainty of instance labels is taken into account. To address the latter problem, we avoid feature engineering and instead adopt convolutional architecture with piecewise max pooling to automatically learn relevant features. Experiments show that our method is effective and outperforms several competitive baseline methods.", "Information extraction (IE) holds the promise of generating a large-scale knowledge base from the Web's natural language text. Knowledge-based weak supervision, using structured data to heuristically label a training corpus, works towards this goal by enabling the automated learning of a potentially unbounded number of relation extractors. Recently, researchers have developed multi-instance learning algorithms to combat the noisy training data that can come from heuristic labeling, but their models assume relations are disjoint --- for example they cannot extract the pair Founded(Jobs, Apple) and CEO-of(Jobs, Apple). This paper presents a novel approach for multi-instance learning with overlapping relations that combines a sentence-level extraction model with a simple, corpus-level component for aggregating the individual facts. We apply our model to learn extractors for NY Times text using weak supervision from Free-base. Experiments show that the approach runs quickly and yields surprising gains in accuracy, at both the aggregate and sentence level." ] }
1811.03796
2897634076
Abstract Relation extraction is the task of identifying predefined relationships between entities, and plays an essential role in information extraction, knowledge base construction, question answering and so on. Most existing relation extractors make predictions for each entity pair locally and individually, while ignoring implicit global clues available across different entity pairs and in the knowledge base, which often leads to conflicts among local predictions from different entity pairs. This paper proposes a joint inference framework that employs such global clues to resolve disagreements among local predictions. We exploit two kinds of clues to generate constraints that can capture the implicit type and cardinality requirements of a relation. Those constraints can be examined in either hard style or soft style, both of which can be effectively explored in an integer linear program formulation. Experimental results on both English and Chinese datasets show that our proposed framework can effectively utilize those two categories of global clues and resolve the disagreements among local predictions, thus improving various relation extractors when such clues are applicable to the datasets. Our experiments also indicate that the clues learnt automatically from existing knowledge bases perform comparably to or better than those refined by humans.
In recent years, neural network (NN) based models, such as PCNN @cite_25 , have been applied to the relation extraction task, and the attention mechanism has also been adopted to further reduce the noise within a sentence bag (that is, all the sentences containing an entity pair) @cite_5 . @cite_6 exploit class ties between relations within one entity tuple, and obtain promising results. However, those approaches still pay little attention to the dependencies between relations that hold globally across all entity pairs. In contrast, our framework learns implicit clues from existing KBs, and jointly optimizes local predictions among different entity tuples to capture both relation argument type clues and cardinality clues. Specifically, this framework can improve various existing extractors, including traditional extractors and NN extractors.
{ "cite_N": [ "@cite_5", "@cite_25", "@cite_6" ], "mid": [ "2515462165", "2251135946", "2585908559" ], "abstract": [ "", "Two problems arise when using distant supervision for relation extraction. First, in this method, an already existing knowledge base is heuristically aligned to texts, and the alignment results are treated as labeled data. However, the heuristic alignment can fail, resulting in wrong label problem. In addition, in previous approaches, statistical models have typically been applied to ad hoc features. The noise that originates from the feature extraction process can cause poor performance. In this paper, we propose a novel model dubbed the Piecewise Convolutional Neural Networks (PCNNs) with multi-instance learning to address these two problems. To solve the first problem, distant supervised relation extraction is treated as a multi-instance problem in which the uncertainty of instance labels is taken into account. To address the latter problem, we avoid feature engineering and instead adopt convolutional architecture with piecewise max pooling to automatically learn relevant features. Experiments show that our method is effective and outperforms several competitive baseline methods.", "Connections between relations in relation extraction, which we call class ties, are common. In distantly supervised scenario, one entity tuple may have multiple relation facts. Exploiting class ties between relations of one entity tuple will be promising for distantly supervised relation extraction. However, previous models are not effective or ignore to model this property. In this work, to effectively leverage class ties, we propose to make joint relation extraction with a unified model that integrates convolutional neural network (CNN) with a general pairwise ranking framework, in which three novel ranking loss functions are introduced. 
Additionally, an effective method is presented to relieve the severe class imbalance problem from NR (not relation) for model training. Experiments on a widely used dataset show that leveraging class ties will enhance extraction and demonstrate the effectiveness of our model to learn class ties. Our model outperforms the baselines significantly, achieving state-of-the-art performance." ] }
1811.03796
2897634076
Abstract Relation extraction is the task of identifying predefined relationship between entities, and plays an essential role in information extraction, knowledge base construction, question answering and so on. Most existing relation extractors make predictions for each entity pair locally and individually, while ignoring implicit global clues available across different entity pairs and in the knowledge base, which often leads to conflicts among local predictions from different entity pairs. This paper proposes a joint inference framework that employs such global clues to resolve disagreements among local predictions. We exploit two kinds of clues to generate constraints which can capture the implicit type and cardinality requirements of a relation. Those constraints can be examined in either hard style or soft style, both of which can be effectively explored in an integer linear program formulation. Experimental results on both English and Chinese datasets show that our proposed framework can effectively utilize those two categories of global clues and resolve the disagreements among local predictions, thus improve various relation extractors when such clues are applicable to the datasets. Our experiments also indicate that the clues learnt automatically from existing knowledge bases perform comparably to or better than those refined by human.
There are also works that first represent relations and entities as embeddings in a KB, and then utilize those embeddings to predict missing relations between any pair of entities in the KB @cite_37 @cite_24 . This task setup is different from ours: we focus on extracting relations between entity pairs from text, while they mainly make use of the structure information and descriptions of a KB to learn latent representations.
{ "cite_N": [ "@cite_24", "@cite_37" ], "mid": [ "2499696929", "2951045955" ], "abstract": [ "Representation learning (RL) of knowledge graphs aims to project both entities and relations into a continuous low-dimensional space. Most methods concentrate on learning representations with knowledge triples indicating relations between entities. In fact, in most knowledge graphs there are usually concise descriptions for entities, which cannot be well utilized by existing methods. In this paper, we propose a novel RL method for knowledge graphs taking advantages of entity descriptions. More specifically, we explore two encoders, including continuous bag-of-words and deep convolutional neural models to encode semantics of entity descriptions. We further learn knowledge representations with both triples and descriptions. We evaluate our method on two tasks, including knowledge graph completion and entity classification. Experimental results on real-world datasets show that, our method outperforms other baselines on the two tasks, especially under the zero-shot setting, which indicates that our method is capable of building representations for novel entities according to their descriptions. The source code of this paper can be obtained from https: github.com xrb92 DKRL.", "Large knowledge graphs increasingly add value to various applications that require machines to recognize and understand queries and their semantics, as in search or question answering systems. Latent variable models have increasingly gained attention for the statistical modeling of knowledge graphs, showing promising results in tasks related to knowledge graph completion and cleaning. Besides storing facts about the world, schema-based knowledge graphs are backed by rich semantic descriptions of entities and relation-types that allow machines to understand the notion of things and their semantic relationships. 
In this work, we study how type-constraints can generally support the statistical modeling with latent variable models. More precisely, we integrated prior knowledge in form of type-constraints in various state of the art latent variable approaches. Our experimental results show that prior knowledge on relation-types significantly improves these models up to 77 in link-prediction tasks. The achieved improvements are especially prominent when a low model complexity is enforced, a crucial requirement when these models are applied to very large datasets. Unfortunately, type-constraints are neither always available nor always complete e.g., they can become fuzzy when entities lack proper typing. We show that in these cases, it can be beneficial to apply a local closed-world assumption that approximates the semantics of relation-types based on observations made in the data." ] }
1811.03796
2897634076
Relation extraction is the task of identifying predefined relationships between entities, and plays an essential role in information extraction, knowledge base construction, question answering and so on. Most existing relation extractors make predictions for each entity pair locally and individually, while ignoring implicit global clues available across different entity pairs and in the knowledge base, which often leads to conflicts among local predictions from different entity pairs. This paper proposes a joint inference framework that employs such global clues to resolve disagreements among local predictions. We exploit two kinds of clues to generate constraints which can capture the implicit type and cardinality requirements of a relation. Those constraints can be examined in either hard style or soft style, both of which can be effectively explored in an integer linear program formulation. Experimental results on both English and Chinese datasets show that our proposed framework can effectively utilize those two categories of global clues and resolve the disagreements among local predictions, thus improving various relation extractors when such clues are applicable to the datasets. Our experiments also indicate that the clues learnt automatically from existing knowledge bases perform comparably to or better than those refined by humans.
The idea of global optimization over local predictions has been proven helpful in other information extraction tasks. @cite_9 and @cite_26 use co-occurrence statistics among relations or events to jointly improve information extraction performance in ACE tasks, whereas we mine existing knowledge bases to collect global clues that resolve local conflicts and find the optimal aggregation assignments with respect to existing knowledge facts. There are also works that encode general domain knowledge as first-order logic rules in a topic model @cite_32 . The main differences between their approach and our work are that our global clues can be collected from knowledge bases and our instantiated constraints are directly operated in an ILP model.
{ "cite_N": [ "@cite_9", "@cite_26", "@cite_32" ], "mid": [ "2042610832", "2165962657", "2975033672" ], "abstract": [ "Previous information extraction (IE) systems are typically organized as a pipeline architecture of separated stages which make independent local decisions. When the data grows beyond some certain size, the extracted facts become inter-dependent and thus we can take advantage of information redundancy to conduct reasoning across documents and improve the performance of IE. We describe a joint inference approach based on information network structure to conduct cross-fact reasoning with an integer linear programming framework. Without using any additional labeled data this new method obtained 13.7 -24.4 user browsing cost reduction over a state-of-the-art IE system which extracts various types of facts independently.", "Traditional approaches to the task of ACE event extraction usually rely on sequential pipelines with multiple stages, which suffer from error propagation since event triggers and arguments are predicted in isolation by independent local classifiers. By contrast, we propose a joint framework based on structured prediction which extracts triggers and arguments together so that the local predictions can be mutually improved. In addition, we propose to incorporate global features which explicitly capture the dependencies of multiple triggers and arguments. Experimental results show that our joint approach with local features outperforms the pipelined baseline, and adding global features further improves the performance significantly. Our approach advances state-ofthe-art sentence-level event extraction, and even outperforms previous argument labeling methods which use external knowledge from other sentences and documents.", "" ] }
1811.03933
2900026719
We study the vector Gaussian CEO problem under logarithmic loss distortion measure. Specifically, @math agents observe independent noisy versions of a remote vector Gaussian source, and communicate independently with a decoder over rate-constrained noise-free links. The CEO also has its own Gaussian noisy observation of the source and wants to reconstruct the remote source to within some prescribed distortion level where the incurred distortion is measured under the logarithmic loss penalty criterion. We find an explicit characterization of the rate-distortion region of this model. For the proof of this result, we first extend Courtade-Weissman's result on the rate-distortion region of the DM @math -encoder CEO problem to the case in which the CEO has access to a correlated side information stream which is such that the agents' observations are independent conditionally given the side information and remote source. Next, we obtain an outer bound on the region of the vector Gaussian CEO problem by evaluating the outer bound of the DM model by means of a technique that relies on the de Bruijn identity and the properties of Fisher information. The approach is similar to Ekrem-Ulukus outer bounding technique for the vector Gaussian CEO problem under quadratic distortion measure, for which it was there found generally non-tight; but it is shown here to yield a complete characterization of the region for the case of logarithmic loss measure. Also, we show that Gaussian test channels with time-sharing exhaust the Berger-Tung inner bound, which is optimal. Furthermore, application of our results allows us to find the complete solutions of three related problems: the vector Gaussian distributed hypothesis testing against conditional independence problem, a quadratic vector Gaussian CEO problem with determinant constraint, and the vector Gaussian distributed Information Bottleneck problem.
Logarithmic loss is also instrumental in problems of data compression under a mutual information constraint @cite_10 , and in problems of relaying with relay nodes that are constrained not to know the users' codebooks (sometimes termed "oblivious" or "nomadic" processing), which was studied in the single-user case first by Sanderovich in @cite_0 and then by Simeone in @cite_59 , and in the multiple-user, multiple-relay case by Aguerri in @cite_14 and @cite_2 . Other applications in which the logarithmic loss function can be used include secrecy and privacy @cite_42 @cite_8 , hypothesis testing against independence @cite_49 @cite_46 @cite_16 @cite_31 @cite_11 , and others.
{ "cite_N": [ "@cite_31", "@cite_14", "@cite_8", "@cite_42", "@cite_0", "@cite_59", "@cite_2", "@cite_49", "@cite_46", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2136288990", "2582181367", "", "1995189342", "2166266048", "2099828508", "2514483739", "2021547642", "1994846830", "1974086310", "2157295235", "" ], "abstract": [ "We investigate two closely related successive refinement (SR) coding problems: 1) In the hypothesis testing (HT) problem, bivariate hypothesis H0:PXY against H1: PXPY, i.e., test against independence is considered. One remote sensor collects data stream X and sends summary information, constrained by SR coding rates, to a decision center which observes data stream Y directly. 2) In the one-helper (OH) problem, X and Y are encoded separately and the receiver seeks to reconstruct Y losslessly. Multiple levels of coding rates are allowed at the two sensors, and the transmissions are performed in an SR manner. We show that the SR-HT rate-error-exponent region and the SR-OH rate region can be reduced to essentially the same entropy characterization form. Single-letter solutions are thus provided in a unified fashion, and the connection between them is discussed. These problems are also related to the information bottleneck (IB) problem, and through this connection we provide a straightforward operational meaning for the IB method. Connection to the pattern recognition problem, the notion of successive refinability, and two specific sources are also discussed. A strong converse for the SR-HT problem is proved by generalizing the image size characterization method, which shows the optimal type-two error exponents under constant type-one error constraints are independent of the exact values of those constants.", "We study the transmission over a network in which users send information to a remote destination through relay nodes that are connected to the destination via finite-capacity error-free links, i.e., a cloud radio access network. 
The relays are constrained to operate without knowledge of the users' codebooks, i.e., they perform oblivious processing. The destination, or central processor, however, is informed about the users' codebooks. We establish a single-letter characterization of the capacity region of this model for a class of discrete memoryless channels in which the outputs at the relay nodes are independent given the users' inputs. We show that both relaying a-la Cover-El Gamal, i.e., compress-and-forward with joint decompression and decoding, and \"noisy network coding\", are optimal. The proof of the converse part establishes, and utilizes, connections with the Chief Executive Officer (CEO) source coding problem under logarithmic loss distortion measure. Extensions to general discrete memoryless channels are also investigated. In this case, we establish inner and outer bounds on the capacity region. For memoryless Gaussian channels within the studied class of channels, we characterize the capacity region when the users are constrained to time-share among Gaussian codebooks. Furthermore, we also discuss the suboptimality of separate decompression-decoding and the role of time-sharing.", "", "We consider secure multi-terminal source coding problems in the presence of a public helper. Two main scenarios are studied: 1) source coding with a helper where the coded side information from the helper is eavesdropped by an external eavesdropper and 2) triangular source coding with a helper where the helper is considered as a public terminal. We are interested in how the helper can support the source transmission subject to a constraint on the amount of information leaked due to its public nature. 
We characterize the tradeoff among transmission rate, incurred distortion, and information leakage rate at the helper eavesdropper in the form of the rate-distortion-leakage region for various classes of problems.", "The problem of a nomadic terminal sending information to a remote destination via agents with lossless connections to the destination is investigated. Such a setting suits, e.g., access points of a wireless network where each access point is connected by a wire to a wireline-based network. The Gaussian codebook capacity for the case where the agents do not have any decoding ability is characterized for the Gaussian channel. This restriction is demonstrated to be severe, and allowing the nomadic transmitter to use other signaling improves the rate. For both general and degraded discrete memoryless channels, lower and upper bounds on the capacity are derived. An achievable rate with unrestricted agents, which are capable of decoding, is also given and then used to characterize the capacity for the deterministic channel.", "A standard assumption in network information theory is that all nodes are informed at all times of the operations carried out (e.g., of the codebooks used) by any other terminal in the network. In this paper, information theoretic limits are sought under the assumption that, instead, some nodes are not informed about the codebooks used by other terminals. Specifically, capacity results are derived for a relay channel in which the relay is oblivious to the codebook used by the source (oblivious relaying), and an interference relay channel with oblivious relaying and in which each destination is possibly unaware of the codebook used by the interfering source (interference-oblivious decoding). Extensions are also discussed for a related scenario with standard codebook-aware relaying but interference-oblivious decoding. 
The class of channels under study is limited to out-of-band (or “primitive”) relaying: Relay-to-destinations links use orthogonal resources with respect to the transmission from the source encoders. Conclusions are obtained under a rigorous definition of oblivious processing that is related to the idea of randomized encoding. The framework and results discussed in this paper suggest that imperfect codebook information can be included as a source of uncertainty in network design along with, e.g., imperfect channel and topology information.", "This paper investigates the compress-and-forward scheme for an uplink cloud radio access network (C-RAN) model, where multi-antenna base stations (BSs) are connected to a cloud-computing-based central processor (CP) via capacity-limited fronthaul links. The BSs compress the received signals with Wyner-Ziv coding and send the representation bits to the CP; the CP performs the decoding of all the users’ messages. Under this setup, this paper makes progress toward the optimal structure of the fronthaul compression and CP decoding strategies for the compress-and-forward scheme in the C-RAN. On the CP decoding strategy design, this paper shows that under a sum fronthaul capacity constraint, a generalized successive decoding strategy of the quantization and user message codewords that allows arbitrary interleaved order at the CP achieves the same rate region as the optimal joint decoding. Furthermore, it is shown that a practical strategy of successively decoding the quantization codewords first, then the user messages, achieves the same maximum sum rate as joint decoding under individual fronthaul constraints. On the joint optimization of user transmission and BS quantization strategies, this paper shows that if the input distributions are assumed to be Gaussian, then under joint decoding, the optimal quantization scheme for maximizing the achievable rate region is Gaussian. 
Moreover, Gaussian input and Gaussian quantization with joint decoding achieve to within a constant gap of the capacity region of the Gaussian multiple-input multiple-output (MIMO) uplink C-RAN model. Finally, this paper addresses the computational aspect of optimizing uplink MIMO C-RAN by showing that under fixed Gaussian input, the sum rate maximization problem over the Gaussian quantization noise covariance matrices can be formulated as convex optimization problems, thereby facilitating its efficient solution.", "A new class of statistical problems is introduced, involving the presence of communication constraints on remotely collected data. Bivariate hypothesis testing, H_ 0 : P_ XY against H_ 1 : P_ = XY , is considered when the statistician has direct access to Y data but can be informed about X data only at a preseribed finite rate R . For any fixed R the smallest achievable probability of an error of type 2 with the probability of an error of type 1 being at most is shown to go to zero with an exponential rate not depending on as the sample size goes to infinity. A single-letter formula for the exponent is given when P_ = XY = P_ X P_ Y (test against independence), and partial results are obtained for general P_ = XY . An application to a search problem of Chernoff is also given.", "The multiterminal hypothesis testing H: XY against H: XY is considered where X^ n (X^ n ) and Y^ n (Y^ n ) are separately encoded at rates R_ 1 and R_ 2 , respectively. The problem is to determine the minimum n of the second kind of error probability, under the condition that the first kind of error probability n for a prescribed 0 . A good lower bound L (R_ 1 , R_ 2 ) on the power exponent (R_ 1 , R_ 2 , )= n (-1 n n ) is given and several interesting properties are revealed. The lower bound is tighter than that of Ahlswede and Csiszar. Furthermore, in the special case of testing against independence, this bound turns out to coincide with that given by them. 
The main arguments are devoted to the special case with R_ 2 = corresponding to full side information for Y^ n (Y^ n ) . In particular, the compact solution is established to the complete data compression cases, which are useful in statistics from the practical point of view.", "We study a hypothesis testing problem in which data are compressed distributively and sent to a detector that seeks to decide between two possible distributions for the data. The aim is to characterize all achievable encoding rates and exponents of the type 2 error probability when the type 1 error probability is at most a fixed value. For related problems in distributed source coding, schemes based on random binning perform well and are often optimal. For distributed hypothesis testing, however, the use of binning is hindered by the fact that the overall error probability may be dominated by errors in the binning process. We show that despite this complication, binning is optimal for a class of problems in which the goal is to “test against conditional independence.” We then use this optimality result to give an outer bound for a more general class of instances of the problem.", "Let X , Y , Z be zero-mean, jointly Gaussian random vectors of dimensions nx, ny, and nz, respectively. Let P be the set of random variables W such that W harr Y harr (X, Z) is a Markov string. We consider the following optimization problem: WisinP min I(Y; Z) subject to one of the following two possible constraints: 1) I(X; W|Z) ges RI, and 2) the mean squared error between X and Xcirc = E(X|W, Z) is less than d . The problem under the first kind of constraint is motivated by multiple-input multiple-output (MIMO) relay channels with an oblivious transmitter and a relay connected to the receiver through a dedicated link, while for the second case, it is motivated by source coding with decoder side information where the sensor observation is noisy. In both cases, we show that jointly Gaussian solutions are optimal. 
Moreover, explicit water filling interpretations are given for both cases, which suggest transform coding approaches performed in different transform domains, and that the optimal solution for one problem is, in general, suboptimal for the other.", "" ] }
1811.03933
2900026719
We study the vector Gaussian CEO problem under logarithmic loss distortion measure. Specifically, @math agents observe independent noisy versions of a remote vector Gaussian source, and communicate independently with a decoder over rate-constrained noise-free links. The CEO also has its own Gaussian noisy observation of the source and wants to reconstruct the remote source to within some prescribed distortion level where the incurred distortion is measured under the logarithmic loss penalty criterion. We find an explicit characterization of the rate-distortion region of this model. For the proof of this result, we first extend Courtade-Weissman's result on the rate-distortion region of the DM @math -encoder CEO problem to the case in which the CEO has access to a correlated side information stream which is such that the agents' observations are independent conditionally given the side information and remote source. Next, we obtain an outer bound on the region of the vector Gaussian CEO problem by evaluating the outer bound of the DM model by means of a technique that relies on the de Bruijn identity and the properties of Fisher information. The approach is similar to Ekrem-Ulukus outer bounding technique for the vector Gaussian CEO problem under quadratic distortion measure, for which it was there found generally non-tight; but it is shown here to yield a complete characterization of the region for the case of logarithmic loss measure. Also, we show that Gaussian test channels with time-sharing exhaust the Berger-Tung inner bound, which is optimal. Furthermore, application of our results allows us to find the complete solutions of three related problems: the vector Gaussian distributed hypothesis testing against conditional independence problem, a quadratic vector Gaussian CEO problem with determinant constraint, and the vector Gaussian distributed Information Bottleneck problem.
Recently, the achievable rate-distortion region of the two-encoder multiterminal source coding problem was completely characterized by Courtade and Weissman [Theorem 6, CW14] for another important special case, that of a logarithmic loss distortion measure. More precisely, in @cite_20 Courtade and Weissman study the model of Figure without side information at the decoder, i.e., @math , and with the decoder restricted to generate 'soft' estimates of the source sequences. Namely, @math , @math , is a probability distribution on the alphabet @math of source @math and @math is the relative entropy (i.e., Kullback-Leibler divergence) between the empirical distribution of the event @math and the estimate @math . Using a particularly useful property of the logarithmic loss distortion measure, which states that the expected distortion is lower bounded by conditional entropy, the authors characterize the rate-distortion region of both the CEO problem [Theorem 3, CW14] and the two-encoder distributed source coding problem [Theorem 6, CW14]. Their results, which remain valid when there is side information at the decoder, i.e., @math , require no specific assumptions on the sources, other than that they have finite alphabets.
{ "cite_N": [ "@cite_20" ], "mid": [ "2173267959" ], "abstract": [ "We consider the classical two-encoder multiterminal source coding problem where distortion is measured under logarithmic loss. We provide a single-letter description of the achievable rate distortion region for all discrete memoryless sources with finite alphabets. By doing so, we also give the rate distortion region for the m-encoder CEO problem (also under logarithmic loss). Several applications and examples are given." ] }
1811.03933
2900026719
We study the vector Gaussian CEO problem under logarithmic loss distortion measure. Specifically, @math agents observe independent noisy versions of a remote vector Gaussian source, and communicate independently with a decoder over rate-constrained noise-free links. The CEO also has its own Gaussian noisy observation of the source and wants to reconstruct the remote source to within some prescribed distortion level where the incurred distortion is measured under the logarithmic loss penalty criterion. We find an explicit characterization of the rate-distortion region of this model. For the proof of this result, we first extend Courtade-Weissman's result on the rate-distortion region of the DM @math -encoder CEO problem to the case in which the CEO has access to a correlated side information stream which is such that the agents' observations are independent conditionally given the side information and remote source. Next, we obtain an outer bound on the region of the vector Gaussian CEO problem by evaluating the outer bound of the DM model by means of a technique that relies on the de Bruijn identity and the properties of Fisher information. The approach is similar to Ekrem-Ulukus outer bounding technique for the vector Gaussian CEO problem under quadratic distortion measure, for which it was there found generally non-tight; but it is shown here to yield a complete characterization of the region for the case of logarithmic loss measure. Also, we show that Gaussian test channels with time-sharing exhaust the Berger-Tung inner bound, which is optimal. Furthermore, application of our results allows us to find the complete solutions of three related problems: the vector Gaussian distributed hypothesis testing against conditional independence problem, a quadratic vector Gaussian CEO problem with determinant constraint, and the vector Gaussian distributed Information Bottleneck problem.
In this paper, we study the problem of two-encoder multiterminal source coding with side information of Figure in the case of a logarithmic loss distortion measure. That is, our model generalizes that of @cite_20 to the setting in which the decoder observes a side information sequence that is statistically dependent on the sources to be compressed. We develop a single-letter characterization of the rate-distortion region of this model in the discrete memoryless case. In doing so, we show that a slight generalization of the Gastpar inner bound [Theorem 2, G04] for general distortion measures, one that accounts for time-sharing, is optimal. The proof of the converse follows that of [Theorem 12, CW14] and extends it to the case of correlated side information at the decoder; it also involves an appropriate redefinition of the required auxiliary random variables. Furthermore, specializing the results to the case in which only Encoder 2 communicates with the decoder, i.e., @math , and the decoder is interested in reproducing 'soft' estimates of only the source @math , we characterize the trade-off between the complexity and accuracy of the system, i.e., a generalization of the so-called Information Bottleneck Method @cite_50 @cite_23 to the setting with side information at the decoder.
{ "cite_N": [ "@cite_50", "@cite_23", "@cite_20" ], "mid": [ "1686946872", "2070945723", "2173267959" ], "abstract": [ "We define the relevant information in a signal @math as being the information that this signal provides about another signal @math . Examples include the information that face images provide about the names of the people portrayed, or the information that speech sounds provide about the words spoken. Understanding the signal @math requires more than just predicting @math , it also requires specifying which features of @math play a role in the prediction. We formalize this problem as that of finding a short code for @math that preserves the maximum information about @math . That is, we squeeze the information that @math provides about @math through a bottleneck' formed by a limited set of codewords @math . This constrained optimization problem can be seen as a generalization of rate distortion theory in which the distortion measure @math emerges from the joint statistics of @math and @math . This approach yields an exact set of self consistent equations for the coding rules @math and @math . Solutions to these equations can be found by a convergent re-estimation method that generalizes the Blahut-Arimoto algorithm. Our variational principle provides a surprisingly rich framework for discussing a variety of problems in signal processing and learning, as will be described in detail elsewhere.", "It is well-known that the information bottleneck method and rate distortion theory are related. Here it is described how the information bottleneck can be considered as rate distortion theory for a family of probability measures where information divergence is used as distortion measure. It is shown that the information bottleneck method has some properties that are not shared with rate distortion theory based on any other divergence measure. 
In this sense the information bottleneck method is unique.", "We consider the classical two-encoder multiterminal source coding problem where distortion is measured under logarithmic loss. We provide a single-letter description of the achievable rate distortion region for all discrete memoryless sources with finite alphabets. By doing so, we also give the rate distortion region for the m-encoder CEO problem (also under logarithmic loss). Several applications and examples are given." ] }
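The "convergent re-estimation method" mentioned in the first abstract above can be sketched directly from the IB self-consistent equations: alternately update the encoder p(t|x), the bottleneck marginal p(t), and the decoder p(y|t). The joint distribution, bottleneck size, trade-off parameter β, and initialisation below are toy choices, not taken from any of the cited papers:

```python
import math

# Illustrative joint: X in {0,1,2,3}, Y in {0,1}; x < 2 mostly emits y=0, x >= 2 mostly y=1.
p_x = [0.25, 0.25, 0.25, 0.25]
p_y_given_x = [[0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]]
N_T = 2       # bottleneck cardinality |T|
BETA = 10.0   # trade-off parameter: larger beta preserves more relevant information

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Symmetry-breaking initialisation of the encoder p(t|x).
p_t_given_x = [[0.6, 0.4], [0.6, 0.4], [0.4, 0.6], [0.4, 0.6]]

for _ in range(50):
    # Marginal p(t) and decoder p(y|t) induced by the current encoder.
    p_t = [sum(p_x[x] * p_t_given_x[x][t] for x in range(4)) for t in range(N_T)]
    p_y_given_t = [
        [sum(p_x[x] * p_t_given_x[x][t] * p_y_given_x[x][y] for x in range(4)) / p_t[t]
         for y in range(2)]
        for t in range(N_T)
    ]
    # Self-consistent update: p(t|x) proportional to p(t) * exp(-beta * KL(p(y|x) || p(y|t))).
    for x in range(4):
        w = [p_t[t] * math.exp(-BETA * kl(p_y_given_x[x], p_y_given_t[t])) for t in range(N_T)]
        z = sum(w)
        p_t_given_x[x] = [wi / z for wi in w]

# With a high beta the encoder becomes nearly deterministic, grouping x by p(y|x).
print([[round(v, 3) for v in row] for row in p_t_given_x])
```

At this β the two x-values with identical conditional p(y|x) end up sharing a bottleneck cluster, which is exactly the "relevance through a bottleneck" behaviour the method is designed for.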
1811.03519
2899901795
End-to-end approaches have recently become popular as a means of simplifying the training and deployment of speech recognition systems. However, they often require large amounts of data to perform well on large vocabulary tasks. With the aim of making end-to-end approaches usable by a broader range of researchers, we explore the potential to use end-to-end methods in small vocabulary contexts where smaller datasets may be used. A significant drawback of small-vocabulary systems is the difficulty of expanding the vocabulary beyond the original training samples -- therefore we also study strategies to extend the vocabulary with only a few examples per new class (few-shot learning). Our results show that an attention-based encoder-decoder can be competitive against a strong baseline on a small vocabulary keyword classification task, reaching 97.5% accuracy on Tensorflow's Speech Commands dataset. It also shows promising results on the few-shot learning problem, where a simple strategy achieved 68.8% accuracy on new keywords with only 10 examples for each new class. This score goes up to 88.4% with a larger set of 100 examples.
E2E training has attracted much attention recently. One of the first breakthroughs came from the connectionist temporal classification (CTC) loss @cite_4 , which allows an acoustic neural model to be trained directly on unsegmented data. While the original technique is not E2E, it was later extended to train models that predict grapheme sequences @cite_14 , or combined with a language model (LM) based on recurrent neural networks (RNNs), an architecture referred to as the RNN-transducer @cite_15 . More recently, the attention-based encoder-decoder model has been applied to automatic speech recognition (ASR) (see e.g. @cite_19 @cite_0 ).
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_0", "@cite_19", "@cite_15" ], "mid": [ "2102113734", "2127141656", "", "1586532344", "1828163288" ], "abstract": [ "This paper presents a speech recognition system that directly transcribes audio data with text, without requiring an intermediate phonetic representation. The system is based on a combination of the deep bidirectional LSTM recurrent neural network architecture and the Connectionist Temporal Classification objective function. A modification to the objective function is introduced that trains the network to minimise the expectation of an arbitrary transcription loss function. This allows a direct optimisation of the word error rate, even in the absence of a lexicon or language model. The system achieves a word error rate of 27.3 on the Wall Street Journal corpus with no prior linguistic information, 21.9 with only a lexicon of allowed words, and 8.2 with a trigram language model. Combining the network with a baseline system further reduces the error rate to 6.7 .", "Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. 
An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.", "", "We replace the Hidden Markov Model (HMM) which is traditionally used in in continuous speech recognition with a bi-directional recurrent neural network encoder coupled to a recurrent neural network decoder that directly emits a stream of phonemes. The alignment between the input and output sequences is established using an attention mechanism: the decoder emits each symbol based on a context created with a subset of input symbols elected by the attention mechanism. We report initial results demonstrating that this new approach achieves phoneme error rates that are comparable to the state-of-the-art HMM-based decoders, on the TIMIT dataset.", "Many machine learning tasks can be expressed as the transformation---or ---of input sequences into output sequences: speech recognition, machine translation, protein secondary structure prediction and text-to-speech to name but a few. One of the key challenges in sequence transduction is learning to represent both the input and output sequences in a way that is invariant to sequential distortions such as shrinking, stretching and translating. Recurrent neural networks (RNNs) are a powerful sequence learning architecture that has proven capable of learning such representations. However RNNs traditionally require a pre-defined alignment between the input and output sequences to perform transduction. This is a severe limitation since the alignment is the most difficult aspect of many sequence transduction problems. Indeed, even determining the length of the output sequence is often challenging. This paper introduces an end-to-end, probabilistic sequence transduction system, based entirely on RNNs, that is in principle able to transform any input sequence into any finite, discrete output sequence. Experimental results for phoneme recognition are provided on the TIMIT speech corpus." ] }
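The many-to-one alignment underlying the CTC loss @cite_4 maps a per-frame labelling to an output sequence by first merging consecutive repeats and then deleting blanks; this is also the post-processing step of greedy CTC decoding. A minimal sketch of that collapsing function (the blank symbol's name is a local choice):

```python
BLANK = "_"  # the CTC blank symbol (the underscore is an illustrative choice)

def ctc_collapse(frame_labels):
    """Map a per-frame labelling to an output string: merge repeated labels,
    then delete blanks -- the many-to-one alignment at the heart of CTC."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev:          # merge consecutive repeats
            if lab != BLANK:     # drop blank emissions
                out.append(lab)
        prev = lab
    return "".join(out)

# A blank between the two l's keeps the doubled letter from being merged:
print(ctc_collapse(list("hh_e_ll_l_oo")))  # -> hello
```

Because many frame labellings collapse to the same output, training sums the probabilities of all of them, which is what lets CTC learn from unsegmented data.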
1811.03519
2899901795
End-to-end approaches have recently become popular as a means of simplifying the training and deployment of speech recognition systems. However, they often require large amounts of data to perform well on large vocabulary tasks. With the aim of making end-to-end approaches usable by a broader range of researchers, we explore the potential to use end-to-end methods in small vocabulary contexts where smaller datasets may be used. A significant drawback of small-vocabulary systems is the difficulty of expanding the vocabulary beyond the original training samples -- therefore we also study strategies to extend the vocabulary with only few examples per new class (few-shot learning). Our results show that an attention-based encoder-decoder can be competitive against a strong baseline on a small vocabulary keyword classification task, reaching 97.5% accuracy on TensorFlow's Speech Commands dataset. It also shows promising results on the few-shot learning problem where a simple strategy achieved 68.8% accuracy on new keywords with only 10 examples for each new class. This score goes up to 88.4% with a larger set of 100 examples.
While the simplicity of the training procedure of E2E systems is attractive, they generally show reduced performance compared with traditional HMM-based systems, especially when used without an external LM, a good example being @cite_11 . Using a much bigger dataset, @cite_16 managed to reach competitive results on a dictation task, but still performed worse on voice-search data. This does not mean, however, that E2E models will necessarily perform poorly in lower-resource conditions. For example, @cite_17 achieved competitive results on several languages, even though it failed to surpass a DNN-HMM baseline. To the best of our knowledge, E2E models have never been applied to small-vocabulary speech recognition tasks before. The work closest to ours is probably @cite_8 , where an attention-based E2E architecture is applied to keyword spotting; however, despite the vocabulary being reduced to one word, a very large dataset is used.
{ "cite_N": [ "@cite_8", "@cite_16", "@cite_17", "@cite_11" ], "mid": [ "2795183504", "2750499125", "2697044473", "2327501763" ], "abstract": [ "In this paper, we propose an attention-based end-to-end neural approach for small-footprint keyword spotting (KWS), which aims to simplify the pipelines of building a production-quality KWS system. Our model consists of an encoder and an attention mechanism. The encoder transforms the input signal into a high level representation using RNNs. Then the attention mechanism weights the encoder features and generates a fixed-length vector. Finally, by linear transformation and softmax function, the vector becomes a score used for keyword detection. We also evaluate the performance of different encoder architectures, including LSTM, GRU and CRNN. Experiments on real-world wake-up data show that our approach outperforms the recent Deep KWS approach by a large margin and the best performance is achieved by CRNN. To be more specific, with 84K parameters, our attention-based model achieves 1.02 false rejection rate (FRR) at 1.0 false alarm (FA) per hour.", "", "In recent years, so-called, “end-to-end” speech recognition systems have emerged as viable alternatives to traditional ASR frameworks. Keyword search, localizing an orthographic query in a speech corpus, is typically performed by using automatic speech recognition (ASR) to generate an index. Previous work has evaluated the use of end-to-end systems for ASR on well known corpora (WSJ, Switchboard, TIMIT, etc.) in high-resource languages like English and Mandarin. In this work, we investigate the use of Connectionist Temporal Classification (CTC) networks, recurrent encoder-decoders with attention, two end-to-end ASR systems for keyword search and speech recognition on low resource languages. We find end-to-end systems can generate high quality 1-best transcripts on low-resource languages, but, because they generate very sharp posteriors, their utility is limited for KWS. 
We explore a number of ways to address this limitation with modest success. Experimental results reported are based on the IARPA BABEL OP3 languages and evaluation framework. This paper represents the first results using “end-to-end” techniques for speech recognition and keyword search on low-resource languages.", "We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1 without a dictionary or an external language model and 10.3 with language model rescoring over the top 32 beams. In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0 on the same set." ] }
1811.03529
2900192228
A novel cognition-inspired, agnostic framework is proposed for building maps in mobile robotics that are efficient in terms of image matching retrieval for solving Visual Place Recognition (VPR) problem. A dataset, 'ESSEX3IN1', is also presented to demonstrate the significantly enhanced performance of state-of-the-art VPR techniques when combined with the proposed framework.
Traditionally, places have been described by camera frames, where a place frame is selected from multiple video frames based on either time-step, distance, or distinctiveness. Most of the VPR datasets @cite_19 @cite_25 @cite_3 @cite_2 @cite_20 @cite_1 @cite_10 @cite_11 are time-based, as frames are selected at a fixed FPS (frames per second) rate of a video camera. However, time-based place selection assumes a constant non-zero speed of the robotic platform and is thus not practical. To cater for variable speed, distance-based frame selection is used, where a frame is picked every few meters to represent a new place @cite_7 . However, both time- and distance-based approaches lead to huge database sizes and frequently sample visually identical frames as different places, leading to inaccuracies and impracticality for long-term autonomy.
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_3", "@cite_19", "@cite_2", "@cite_10", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "", "2115579991", "", "1599187539", "2110405746", "2744874208", "2109197213", "", "2789510920" ], "abstract": [ "", "We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.", "", "This paper presents the development of a low-cost sensor platform for use in ground-based visual pose estimation and scene mapping tasks. We seek to develop a technical solution using low-cost vision hardware that allows us to accurately estimate robot position for SLAM tasks. We present results from the application of a vision based pose estimation technique to simultaneously determine camera poses and scene structure. The results are generated from a dataset gathered traversing a local road at the St Lucia Campus of the University of Queensland. 
We show the accuracy of the pose estimation over a 1.6km trajectory in relation to GPS ground truth.", "Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter is a challenging task for state of the art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end - instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme; repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%.", "Recently, image representations derived from Convolutional Neural Networks (CNNs) have been demonstrated to achieve impressive performance on a wide variety of tasks, including place recognition. 
In this paper, we take a step deeper into the internal structure of CNNs and propose novel CNN-based image features for place recognition by identifying salient regions and creating their regional representations directly from the convolutional layer activations. A range of experiments is conducted on challenging datasets with varied conditions and viewpoints. These reveal superior precision-recall characteristics and robustness against both viewpoint and appearance variations for the proposed approach over the state of the art. By analyzing the feature encoding process of our approach, we provide insights into what makes an image presentation robust against external variations.", "Appearance-based mapping and localisation is especially challenging when separate processes of mapping and localisation occur at different times of day. The problem is exacerbated in the outdoors where continuous change in sun angle can drastically affect the appearance of a scene. We confront this challenge by fusing the probabilistic local feature based data association method of FAB-MAP with the pose cell filtering and experience mapping of RatSLAM. We evaluate the effectiveness of our amalgamation of methods using five datasets captured throughout the day from a single camera driven through a network of suburban streets. We show further results when the streets are re-visited three weeks later, and draw conclusions on the value of the system for lifelong mapping.", "", "Localization is an integral part of reliable robot navigation, and long-term autonomy requires robustness against perceptional changes in the environment during localization. In the context of vision-based localization, such changes can be caused by illumination variations, occlusion, structural development, different weather conditions, and seasons. In this paper, we present a novel approach for localizing a robot over longer periods of time using only monocular image data. 
We propose a novel data association approach for matching streams of incoming images to an image sequence stored in a database. Our method exploits network flows to leverage sequential information to improve the localization performance and to maintain several possible trajectories hypotheses in parallel. To compare images, we consider a semidense image description based on histogram of oriented gradients features as well as global descriptors from deep convolutional neural networks trained on ImageNet for robust localization. We perform extensive evaluations on a variety of datasets and show that our approach outperforms existing state-of-the-art approaches." ] }
1811.03529
2900192228
A novel cognition-inspired, agnostic framework is proposed for building maps in mobile robotics that are efficient in terms of image matching retrieval for solving Visual Place Recognition (VPR) problem. A dataset, 'ESSEX3IN1', is also presented to demonstrate the significantly enhanced performance of state-of-the-art VPR techniques when combined with the proposed framework.
Different research works have tried to overcome these intrinsic limitations of frame-cum-place sampling by introducing visual-distinctiveness-based selection. The authors in @cite_6 use a custom-designed algorithm that detects change-points for segmentation between different topological places in both indoor and outdoor scenes. Image-sequence partitioning for creating sparse topological maps is presented in @cite_13 , where sequences of images are partitioned into nodes (places) using four descriptors, namely GIST, Optical Flow, Local Feature Mapping, and Common-Important Words. In @cite_24 , a thematic approach is adopted to evaluate the novelty of an incoming image by correlating it with the redundancy of visual-feature topics. Bayesian surprise is adopted, agnostic to sensor type, for extracting landmarks to create a sparse topological map in @cite_8 . Online topic modelling with visual-surprise calculation is performed in @cite_12 for underwater robotic exploration. An incremental unsupervised place-discovery scheme is adopted in @cite_9 , which fuses information over time to find visually distinct places.
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_6", "@cite_24", "@cite_13", "@cite_12" ], "mid": [ "2153727510", "2038034534", "2087079509", "2151971674", "2029613090", "2084317025" ], "abstract": [ "Automatic detection of landmarks, usually special places in the environment such as gateways, for topological mapping has proven to be a difficult task. We present the use of Bayesian surprise, introduced in computer vision, for landmark detection. Further, we provide a novel hierarchical, graphical model for the appearance of a place and use this model to perform surprise-based landmark detection. Our scheme is agnostic to the sensor type, and we demonstrate this by implementing a simple laser model for computing surprise. We evaluate our landmark detector using appearance and laser measurements in the context of a topological mapping algorithm, thus demonstrating the practical applicability of the detector.", "This paper describes an online place discovery and recognition engine that fuses information over time to create topologically distinct places. A key motivation is the recognition that a single image may be a poor exemplar of what constitutes a place. Images are not ‘places’ nor are they ‘documents’. Instead, by treating image-sequences as a multimodal distribution over topics – and by discovering topics incrementally and online – it is possible to both reduce the memory footprint of place recognition systems, and to improve precision and recall. Distinctive key-places are represented by a cluster topics found from the covisibility graph of a relative simultaneous localization and mapping engine – key-places inherently span many images. A dynamic vocabulary of visual words and density based clustering is used to continually estimate a set of visual topics, changes in which drive the placerecognition process. The system is evaluated using an indoor robot sequence, a standard outdoor robot sequence and a longterm sequence from a static camera. 
Experiments demonstrate qualitatively distinct themes associated with discovered places – from common place types such as ‘hallway’, or ‘desk-area’, to temporal concepts such as ‘dusk’, ‘dawn’ or ‘mid-day’. Compared to traditional image-based place-recognition, this reduces the information that must be stored without reducing place-recognition performance.", "Topological navigation consists for a robot in navigating in a topological graph which nodes are topological places. Either for indoor or outdoor environments, segmentation into topological places is a challenging issue. In this paper, we propose a common approach for indoor and outdoor environment segmentation without elaborating a complete topological navigation system. The approach is novel in that environment sensing is performed using spherical images. Environment structure estimation is performed by a global structure descriptor specially adapted to the spherical representation. This descriptor is processed by a custom designed algorithm which detects change-points defining the segmentation between topological places.", "This paper is a demonstration of how a robot can, through introspection and then targeted data retrieval, improve its own performance. It is a step in the direction of lifelong learning and adaptation and is motivated by the desire to build robots that have plastic competencies which are not baked in. They should react to and benefit from use. We consider a particular instantiation of this problem in the context of place recognition. Based on a topic based probabilistic model of images, we use a measure of perplexity to evaluate how well a working set of background images explain the robot's online view of the world. Offline, the robot then searches an external resource to seek out additional background images that bolster its ability to localise in its environment when used next. 
In this way the robot adapts and improves performance through use.", "Most of the existing appearance based topological mapping algorithms produce dense topological maps in which each image stands as a node in the topological graph. Sparser maps can be built by representing groups of visually similar images as nodes of a topological graph. In this paper, we present a sparse topological mapping framework which uses Image Sequence Partitioning (ISP) techniques to group visually similar images as topological graph nodes. We present four different ISP techniques and evaluate their performance. In order to take advantage of the afore mentioned maps, we make use of Hierarchical Inverted Files (HIF) which enable efficient hierarchical loop closure. Outdoor experimental results demonstrating the sparsity, efficiency and accuracy achieved by the combination of ISP and HIF in performing loop closure are presented.", "Given an image stream, our on-line algorithm will select the semantically-important images that summarize the visual experience of a mobile robot. Our approach consists of data pre-clustering using coresets followed by a graph based incremental clustering procedure using a topic based image representation. A coreset for an image stream is a set of representative images that semantically compresses the data corpus, in the sense that every frame has a similar representative image in the coreset. We prove that our algorithm e ciently computes the smallest possible coreset under natural well-defined similarity metric and up to provably small approximation factor. The output visual summary is computed via a hierarchical tree of coresets for di↵erent parts of the image stream. This allows multi-resolution summarization (or a video summary of specified duration) in the batch setting and a memory-e cient incremental summary for the streaming case." ] }
1811.03555
2900127069
We present a novel modular architecture for StarCraft II AI. The architecture splits responsibilities between multiple modules that each control one aspect of the game, such as build-order selection or tactics. A centralized scheduler reviews macros suggested by all modules and decides their order of execution. An updater keeps track of environment changes and instantiates macros into series of executable actions. Modules in this framework can be optimized independently or jointly via human design, planning, or reinforcement learning. We apply deep reinforcement learning techniques to training two out of six modules of a modular agent with self-play, achieving 94% or 87% win rates against the "Harder" (level 5) built-in Blizzard bot in Zerg vs. Zerg matches, with or without fog-of-war.
Recently, vinyals_2017_starcraft released PySC2, a Python interface for StarCraft II AI, and evaluated state-of-the-art deep RL methods. Their end-to-end training approach, although it shows potential for integrating deep RL into RTS games, cannot beat the easiest built-in AI. Other efforts to apply deep learning or deep RL to StarCraft (I and II) include controlling multiple units in micromanagement scenarios @cite_6 @cite_2 @cite_17 @cite_14 and learning build orders from human replays @cite_5 . To our knowledge, no published deep RL approach has succeeded in playing the full game yet.
{ "cite_N": [ "@cite_14", "@cite_6", "@cite_2", "@cite_5", "@cite_17" ], "mid": [ "2963890729", "2756196406", "2617547828", "2734594771", "2518713116" ], "abstract": [ "Real-time strategy games have been an important field of game artificial intelligence in recent years. This paper presents a reinforcement learning and curriculum transfer learning method to control multiple units in StarCraft micromanagement. We define an efficient state representation, which breaks down the complexity caused by the large state space in the game environment. Then, a parameter sharing multi-agent gradient-descent Sarsa( @math ) algorithm is proposed to train the units. The learning policy is shared among our units to encourage cooperative behaviors. We use a neural network as a function approximator to estimate the action–value function, and propose a reward function to help units balance their move and attack. In addition, a transfer learning method is used to extend our model to more difficult scenarios, which accelerates the training process and improves the learning performance. In small-scale scenarios, our units successfully learn to combat and defeat the built-in AI with 100 win rates. In large-scale scenarios, the curriculum transfer learning method is used to progressively train a group of units, and it shows superior performance over some baseline methods in target scenarios. With reinforcement learning and curriculum transfer learning, our units are able to learn appropriate strategies in StarCraft micromanagement scenarios.", "Many artificial intelligence (AI) applications often require multiple intelligent agents to work in a collaborative effort. Efficient learning for intra-agent communication and coordination is an indispensable step towards general AI. In this paper, we take StarCraft combat game as a case study, where the task is to coordinate multiple agents as a team to defeat their enemies. 
To maintain a scalable yet effective communication protocol, we introduce a Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a vectorised extension of actor-critic formulation. We show that BiCNet can handle different types of combats with arbitrary numbers of AI agents for both sides. Our analysis demonstrates that without any supervisions such as human demonstrations or labelled data, BiCNet could learn various types of advanced coordination strategies that have been commonly used by experienced game players. In our experiments, we evaluate our approach against multiple baselines under different scenarios; it shows state-of-the-art performance, and possesses potential values for large-scale real-world applications.", "Cooperative multi-agent systems can be naturally used to model many real world problems, such as network packet routing and the coordination of autonomous vehicles. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents' policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent's action, while keeping the other agents' actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. 
COMA significantly improves average performance over other multi-agent actor-critic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.", "The real-time strategy game StarCraft has proven to be a challenging environment for artificial intelligence techniques, and as a result, current state-of-the-art solutions consist of numerous hand-crafted modules. In this paper, we show how macromanagement decisions in StarCraft can be learned directly from game replays using deep learning. Neural networks are trained on 789,571 state-action pairs extracted from 2,005 replays of highly skilled players, achieving top-1 and top-3 error rates of 54.6 and 22.9 in predicting the next build action. By integrating the trained network into UAlbertaBot, an open source StarCraft bot, the system can significantly outperform the game's built-in Terran bot, and play competitively against UAlbertaBot with a fixed rush strategy. To our knowledge, this is the first time macromanagement tasks are learned directly from replays in StarCraft. While the best hand-crafted strategies are still the state-of-the-art, the deep network approach is able to express a wide range of different strategies and thus improving the network's performance further with deep reinforcement learning is an immediately promising avenue for future research. Ultimately this approach could lead to strong StarCraft bots that are less reliant on hard-coded strategies.", "We consider scenarios from the real-time strategy game StarCraft as new benchmarks for reinforcement learning algorithms. We propose micromanagement tasks, which present the problem of the short-term, low-level control of army members during a battle. 
From a reinforcement learning point of view, these scenarios are challenging because the state-action space is very large, and because there is no obvious feature representation for the state-action evaluation function. We describe our approach to tackle the micromanagement scenarios with deep neural network controllers from raw state features given by the game engine. In addition, we present a heuristic reinforcement learning algorithm which combines direct exploration in the policy space and backpropagation. This algorithm allows for the collection of traces for learning using deterministic policies, which appears much more efficient than, for example, -greedy exploration. Experiments show that with this algorithm, we successfully learn non-trivial strategies for scenarios with armies of up to 15 agents, where both Q-learning and REINFORCE struggle." ] }
1811.03555
2900127069
We present a novel modular architecture for StarCraft II AI. The architecture splits responsibilities between multiple modules that each control one aspect of the game, such as build-order selection or tactics. A centralized scheduler reviews macros suggested by all modules and decides their order of execution. An updater keeps track of environment changes and instantiates macros into series of executable actions. Modules in this framework can be optimized independently or jointly via human design, planning, or reinforcement learning. We apply deep reinforcement learning techniques to training two out of six modules of a modular agent with self-play, achieving 94% or 87% win rates against the "Harder" (level 5) built-in Blizzard bot in Zerg vs. Zerg matches, with or without fog-of-war.
Self-play is a powerful technique to bootstrap from an initially random agent, without access to external data or existing agents. The combination of deep learning, planning, and self-play led to the well-known Go-playing agents AlphaGo @cite_9 and AlphaZero @cite_0 . More recently, bansal_2018_emergent extended self-play to asymmetric environments and learned complex behaviors of simulated robots.
{ "cite_N": [ "@cite_0", "@cite_9" ], "mid": [ "2766447205", "2257979135" ], "abstract": [ "Starting from zero knowledge and without human data, AlphaGo Zero was able to teach itself to play Go and to develop novel strategies that provide new insights into the oldest of games.", "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of stateof-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8 winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away." ] }
1811.03729
2900261076
To sustain engaging conversation, it is critical for chatbots to make good use of relevant knowledge. Equipped with a knowledge base, chatbots are able to extract conversation-related attributes and entities to facilitate context modeling and response generation. In this work, we distinguish the uses of attribute and entity and incorporate them into the encoder-decoder architecture in different manners. Based on the augmented architecture, our chatbot, namely Mike, is able to generate responses by referring to proper entities from the collected knowledge. To validate the proposed approach, we build a movie conversation corpus on which the proposed approach significantly outperforms other four knowledge-grounded models.
Due to the availability of massive data and the development of neural networks, researchers have tried to build chit-chat conversational systems using data-driven neural networks. Given a user utterance, a conversational system is expected to return a proper response using either retrieval or generation techniques. To date, generation-based approaches have shown their effectiveness. The pioneering work is @cite_7 , which first formulated the response generation problem as Statistical Machine Translation (SMT) and revealed the feasibility of using massive Twitter data to build a generation-based conversational model.
{ "cite_N": [ "@cite_7" ], "mid": [ "10957333" ], "abstract": [ "We present a data-driven approach to generating responses to Twitter status posts, based on phrase-based Statistical Machine Translation. We find that mapping conversational stimuli onto responses is more difficult than translating between languages, due to the wider range of possible responses, the larger fraction of unaligned words phrases, and the presence of large phrase pairs whose alignment cannot be further decomposed. After addressing these challenges, we compare approaches based on SMT and Information Retrieval in a human evaluation. We show that SMT outperforms IR on this task, and its output is preferred over actual human responses in 15 of cases. As far as we are aware, this is the first work to investigate the use of phrase-based SMT to directly translate a linguistic stimulus into an appropriate response." ] }
1811.03729
2900261076
To sustain engaging conversation, it is critical for chatbots to make good use of relevant knowledge. Equipped with a knowledge base, chatbots are able to extract conversation-related attributes and entities to facilitate context modeling and response generation. In this work, we distinguish the uses of attribute and entity and incorporate them into the encoder-decoder architecture in different manners. Based on the augmented architecture, our chatbot, namely Mike, is able to generate responses by referring to proper entities from the collected knowledge. To validate the proposed approach, we build a movie conversation corpus on which the proposed approach significantly outperforms other four knowledge-grounded models.
Since then, the majority of generation-based models have applied the encoder-decoder architecture @cite_15 , which allows flexible modeling of the user utterance and history utterances @cite_0 @cite_1 @cite_20 . Since history utterances often provide abundant information for conversation modeling, researchers have proposed extensive context-aware conversation models. The simplest way is to combine history utterances with the current one into a single input using concatenation @cite_2 @cite_0 @cite_16 @cite_3 , pooling @cite_0 , or weighted combination @cite_20 . A more sophisticated way is to adopt hierarchical encoders that treat conversations as two-level sequences @cite_1 , which were further extended with high-level latent variables to capture diversity in the conversation @cite_48 @cite_14 @cite_18 @cite_40 .
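The three context-combination strategies mentioned above (concatenation, pooling, and weighted combination) can be sketched on toy utterance embeddings; the 4-d vectors and the weights below are hypothetical, not taken from any cited model:

```python
import numpy as np

# Hypothetical embeddings for three history utterances.
utts = [np.array([1.0, 0.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0, 0.0]),
        np.array([0.0, 0.0, 1.0, 1.0])]

concat = np.concatenate(utts)      # concatenation: one long input vector
pooled = np.mean(utts, axis=0)     # pooling: average over turns
w = np.array([0.2, 0.3, 0.5])      # weighted combination (weights sum to 1)
weighted = sum(wi * u for wi, u in zip(w, utts))

print(concat.shape, pooled.shape, weighted.shape)  # (12,) (4,) (4,)
```

Concatenation preserves turn order but grows with the number of turns, while pooling and weighted combination keep a fixed dimensionality regardless of history length.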
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_48", "@cite_1", "@cite_3", "@cite_0", "@cite_40", "@cite_2", "@cite_15", "@cite_16", "@cite_20" ], "mid": [ "2611714756", "2418993857", "2399880602", "889023230", "2963544536", "2951580200", "2605246398", "836999996", "2949888546", "2339852062", "2741363662" ], "abstract": [ "Deep latent variable models have been shown to facilitate the response generation for open-domain dialog systems. However, these latent variables are highly randomized, leading to uncontrollable generated responses. In this paper, we propose a framework allowing conditional response generation based on specific attributes. These attributes can be either manually assigned or automatically detected. Moreover, the dialog states for both speakers are modeled separately in order to reflect personal features. We validate this framework on two different scenarios, where the attribute refers to genericness and sentiment states respectively. The experiment result testified the potential of our model, where meaningful responses can be generated in accordance with the specified attributes.", "We introduce the multiresolution recurrent neural network, which extends the sequence-to-sequence framework to model natural language generation as two parallel discrete stochastic processes: a sequence of high-level coarse tokens, and a sequence of natural language tokens. There are many ways to estimate or learn the high-level coarse tokens, but we argue that a simple extraction procedure is sufficient to capture a wealth of high-level discourse semantics. Such procedure allows training the multiresolution recurrent neural network by maximizing the exact joint log-likelihood over both sequences. In contrast to the standard log- likelihood objective w.r.t. natural language tokens (word perplexity), optimizing the joint log-likelihood biases the model towards modeling high-level abstractions. 
We apply the proposed model to the task of dialogue response generation in two challenging domains: the Ubuntu technical support domain, and Twitter conversations. On Ubuntu, the model outperforms competing approaches by a substantial margin, achieving state-of-the-art results according to both automatic evaluation metrics and a human evaluation study. On Twitter, the model appears to generate more relevant and on-topic responses according to automatic evaluation metrics. Finally, our experiments demonstrate that the proposed model is more adept at overcoming the sparsity of natural language and is better able to capture long-term structure.", "Sequential data often possesses a hierarchical structure with complex dependencies between subsequences, such as found between the utterances in a dialogue. In an effort to model this kind of generative process, we propose a neural network-based generative architecture, with latent stochastic variables that span a variable number of time steps. We apply the proposed model to the task of dialogue response generation and compare it with recent neural network architectures. We evaluate the model performance through automatic evaluation metrics and by carrying out a human evaluation. The experiments demonstrate that our model improves upon recently proposed models and that the latent variables facilitate the generation of long outputs and maintain the context.", "We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. 
We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.", "", "We present a novel response generation system that can be trained end to end on large quantities of unstructured Twitter conversations. A neural network architecture is used to address sparsity issues that arise when integrating contextual information into classic statistical models, allowing the system to take into account previous dialog utterances. Our dynamic-context generative models show consistent gains over both context-sensitive and non-context-sensitive Machine Translation and Information Retrieval baselines.", "While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.", "This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. 
This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. We also describe two neural learning architectures suitable for analyzing this dataset, and provide benchmark performance on the task of selecting the best next response.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. 
Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "To establish an automatic conversation system between humans and computers is regarded as one of the most hardcore problems in computer science, which involves interdisciplinary techniques in information retrieval, natural language processing, artificial intelligence, etc. The challenges lie in how to respond so as to maintain a relevant and continuous conversation with humans. Along with the prosperity of Web 2.0, we are now able to collect extremely massive conversational data, which are publicly available. It casts a great opportunity to launch automatic conversation systems. Owing to the diversity of Web resources, a retrieval-based conversation system will be able to find at least some responses from the massive repository for any user inputs. Given a human issued message, i.e., query, our system would provide a reply after adequate training and learning of how to respond. In this paper, we propose a retrieval-based conversation system with the deep learning-to-respond schema through a deep neural network framework driven by web data. The proposed model is general and unified for different conversation scenarios in open domain. We incorporate the impact of multiple data inputs, and formulate various features and factors with optimization into the deep learning framework. In the experiments, we investigate the effectiveness of the proposed deep neural network structures with better combinations of all different evidence. We demonstrate significant performance improvement against a series of standard and state-of-art baselines in terms of p@1, MAP, nDCG, and MRR for conversational purposes.", "" ] }
1811.03729
2900261076
To sustain engaging conversation, it is critical for chatbots to make good use of relevant knowledge. Equipped with a knowledge base, chatbots are able to extract conversation-related attributes and entities to facilitate context modeling and response generation. In this work, we distinguish the uses of attribute and entity and incorporate them into the encoder-decoder architecture in different manners. Based on the augmented architecture, our chatbot, namely Mike, is able to generate responses by referring to proper entities from the collected knowledge. To validate the proposed approach, we build a movie conversation corpus on which the proposed approach significantly outperforms other four knowledge-grounded models.
Given a user input utterance, there often exist several proper kinds of responses. This is called the "one-to-many" problem in dialog response generation, and has been discussed in @cite_47 @cite_40 . The diversity results from a variety of influential factors.
{ "cite_N": [ "@cite_40", "@cite_47" ], "mid": [ "2605246398", "1958706068" ], "abstract": [ "While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.", "Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., \"I don't know\") regardless of the input. We suggest that the traditional objective function, i.e., the likelihood of output (response) given input (message) is unsuited to response generation tasks. Instead we propose using Maximum Mutual Information (MMI) as the objective function in neural models. Experimental results demonstrate that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations." ] }
1811.03729
2900261076
To sustain engaging conversation, it is critical for chatbots to make good use of relevant knowledge. Equipped with a knowledge base, chatbots are able to extract conversation-related attributes and entities to facilitate context modeling and response generation. In this work, we distinguish the uses of attribute and entity and incorporate them into the encoder-decoder architecture in different manners. Based on the augmented architecture, our chatbot, namely Mike, is able to generate responses by referring to proper entities from the collected knowledge. To validate the proposed approach, we build a movie conversation corpus on which the proposed approach significantly outperforms other four knowledge-grounded models.
However, the symbolic nature of KGs impedes their applications. To tackle this issue, knowledge graph embedding models have been proposed to embed the relations and entities in a KG into low-dimensional continuous vector spaces. These KG embedding models can be roughly categorized into two groups: translation-based models and semantic matching models. Specifically, translation-based models learn the embeddings by calculating the plausibility of a fact as the distance between the two entities, usually after a translation carried out by the relation. Representative models are TransE @cite_5 , TransH @cite_35 , TransR @cite_45 . In TransE @cite_5 , the entity and relation embedding vectors are in the same space. In TransH @cite_35 , entity embedding vectors are projected into a relation-specific hyperplane. In TransR @cite_45 , entities are projected from the entity space to the relation space. kgsurvey summarizes other advanced knowledge embedding approaches. In this work, we embed our KG using the widely-adopted TransE model @cite_5 , and integrate the knowledge embeddings into conversation models in a novel way.
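The TransE scoring idea @cite_5 — a fact (h, r, t) is plausible when the tail embedding lies near the translated head h + r — can be sketched as follows; the toy 4-d embeddings are hypothetical, and a real model would learn them with a margin-based ranking loss:

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE plausibility: negative distance of the translation h + r from t."""
    return -np.linalg.norm(h + r - t, ord=norm)

# Hypothetical embeddings: relation r translates head h onto the true tail.
h = np.array([1.0, 0.0, 0.0, 0.0])
r = np.array([0.0, 1.0, 0.0, 0.0])
t_true = h + r                            # perfectly plausible tail
t_false = np.array([0.0, 0.0, 1.0, 1.0])  # corrupted tail

assert transe_score(h, r, t_true) > transe_score(h, r, t_false)
```

TransH and TransR refine this by projecting the entity vectors (onto a relation-specific hyperplane or into a relation space) before applying the same translation score.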
{ "cite_N": [ "@cite_5", "@cite_45", "@cite_35" ], "mid": [ "2127795553", "", "2283196293" ], "abstract": [ "We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.", "", "We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. 
Utilizing the one-to-many many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up." ] }
1906.09707
2950956440
Crowd counting has been widely studied by computer vision community in recent years. Due to the large scale variation, it remains to be a challenging task. Previous methods adopt either multi-column CNN or single-column CNN with multiple branches to deal with this problem. However, restricted by the number of columns or branches, these methods can only capture a few different scales and have limited capability. In this paper, we propose a simple but effective network called DSNet for crowd counting, which can be easily trained in an end-to-end fashion. The key component of our network is the dense dilated convolution block, in which each dilation layer is densely connected with the others to preserve information from continuously varied scales. The dilation rates in dilation layers are carefully selected to prevent the block from gridding artifacts. To further enlarge the range of scales covered by the network, we cascade three blocks and link them with dense residual connections. We also introduce a novel multi-scale density level consistency loss for performance improvement. To evaluate our method, we compare it with state-of-the-art algorithms on four crowd counting datasets (ShanghaiTech, UCF-QNRF, UCF_CC_50 and UCSD). Experimental results demonstrate that DSNet can achieve the best performance and make significant improvements on all the four datasets (30% on the UCF-QNRF and UCF_CC_50, and 20% on the others).
Most early traditional works focus on detection-based methods that use a body or part-based detector to locate people in the crowd image and count their number. However, severe occlusions in highly congested scenes limit the performance of these methods. To overcome this problem, regression-based methods are deployed to learn a mapping from extracted features directly to the number of objects. Following similar approaches, Idrees @cite_20 proposed a method that counts in local patches using features extracted via Fourier analysis and SIFT @cite_26 interest points. Because overlooked saliency causes inaccurate results in local regions, Lempitsky @cite_15 proposed a method that learns a linear mapping between local features and object density maps. Furthermore, considering the difficulty of learning an ideal linear mapping, Pham @cite_33 used random forest regression to learn a non-linear mapping instead of the linear one.
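The density-map formulation behind @cite_15 — place a normalized kernel at each dot annotation so that the map integrates to the object count — can be sketched as follows; the Gaussian width and the point coordinates are illustrative assumptions:

```python
import numpy as np

def density_map(points, shape, sigma=2.0):
    """Place a normalized Gaussian at each annotated head.

    Each kernel is divided by its own sum, so the whole map
    integrates exactly to the number of annotated objects.
    """
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    dmap = np.zeros(shape)
    for (y, x) in points:
        g = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()
    return dmap

pts = [(10, 10), (20, 30), (5, 40)]   # hypothetical dot annotations
d = density_map(pts, (50, 50))
print(round(d.sum()))  # the integral recovers the object count: 3
```

Summing the predicted map over any region then gives the count inside that region, which is what the regression models above are trained to reproduce.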
{ "cite_N": [ "@cite_15", "@cite_26", "@cite_33", "@cite_20" ], "mid": [ "2145983039", "2124386111", "2207893099", "2072232009" ], "abstract": [ "We propose a new supervised learning framework for visual object counting tasks, such as estimating the number of cells in a microscopic image or the number of humans in surveillance video frames. We focus on the practically-attractive case when the training images are annotated with dots (one dot per object). Our goal is to accurately estimate the count. However, we evade the hard task of learning to detect and localize individual object instances. Instead, we cast the problem as that of estimating an image density whose integral over any image region gives the count of objects within that region. Learning to infer such density can be formulated as a minimization of a regularized risk quadratic cost function. We introduce a new loss function, which is well-suited for such learning, and at the same time can be computed efficiently via a maximum subarray algorithm. The learning can then be posed as a convex quadratic program solvable with cutting-plane optimization. The proposed framework is very flexible as it can accept any domain-specific visual features. Once trained, our system provides accurate object counts and requires a very small time overhead over the feature extraction step, making it a good candidate for applications involving real-time processing or dealing with huge amount of visual data.", "An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. 
Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.", "This paper presents a patch-based approach for crowd density estimation in public scenes. We formulate the problem of estimating density in a structured learning framework applied to random decision forests. Our approach learns the mapping between patch features and relative locations of all objects inside each patch, which contribute to generate the patch density map through Gaussian kernel density estimation. We build the forest in a coarse-to-fine manner with two split node layers, and further propose a crowdedness prior and an effective forest reduction method to improve the estimation accuracy and speed. Moreover, we introduce a semi-automatic training method to learn the estimator for a specific scene. We achieved state-of-the-art results on the public Mall dataset and UCSD dataset, and also proposed two potential applications in traffic counts and scene understanding with promising results.", "We propose to leverage multiple sources of information to compute an estimate of the number of individuals present in an extremely dense crowd visible in a single image. Due to problems including perspective, occlusion, clutter, and few pixels per person, counting by human detection in such images is almost impossible. 
Instead, our approach relies on multiple sources such as low confidence head detections, repetition of texture elements (using SIFT), and frequency-domain analysis to estimate counts, along with confidence associated with observing individuals, in an image region. Secondly, we employ a global consistency constraint on counts using Markov Random Field. This caters for disparity in counts in local neighborhoods and across scales. We tested our approach on a new dataset of fifty crowd images containing 64K annotated humans, with the head counts ranging from 94 to 4543. This is in stark contrast to datasets used for existing methods which contain not more than tens of individuals. We experimentally demonstrate the efficacy and reliability of the proposed approach by quantifying the counting performance." ] }
1906.09707
2950956440
Crowd counting has been widely studied by computer vision community in recent years. Due to the large scale variation, it remains to be a challenging task. Previous methods adopt either multi-column CNN or single-column CNN with multiple branches to deal with this problem. However, restricted by the number of columns or branches, these methods can only capture a few different scales and have limited capability. In this paper, we propose a simple but effective network called DSNet for crowd counting, which can be easily trained in an end-to-end fashion. The key component of our network is the dense dilated convolution block, in which each dilation layer is densely connected with the others to preserve information from continuously varied scales. The dilation rates in dilation layers are carefully selected to prevent the block from gridding artifacts. To further enlarge the range of scales covered by the network, we cascade three blocks and link them with dense residual connections. We also introduce a novel multi-scale density level consistency loss for performance improvement. To evaluate our method, we compare it with state-of-the-art algorithms on four crowd counting datasets (ShanghaiTech, UCF-QNRF, UCF_CC_50 and UCSD). Experimental results demonstrate that DSNet can achieve the best performance and make significant improvements on all the four datasets (30% on the UCF-QNRF and UCF_CC_50, and 20% on the others).
Due to the success of CNN-based methods in classification and recognition tasks, CNN-based methods have also been employed for crowd counting and density estimation. Walach @cite_21 made use of layered boosting and selective sampling methods to reduce the estimation error. Instead of using patch-based training, Shang @cite_27 proposed an estimation method using CNNs that takes the whole image as input and directly outputs the final crowd count. Boominathan @cite_16 presented the first work purely using a convolutional network, with a dual-column architecture, to tackle the issue of scale variation when generating density maps. Zhang @cite_0 introduced a multi-column architecture to extract features at different scales. Similarly, Onoro @cite_30 proposed a scale-aware, multi-column counting model called Hydra CNN for object density estimation. Recently, inspired by MCNN @cite_0 , Sam @cite_11 proposed a Switching-CNN that adaptively selects the optimal regressor among several independent regressors for a particular patch. Sindagi @cite_23 explored a new architecture in which MCNN @cite_0 is enriched with two additional columns capturing global and local context.
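The multi-column intuition of MCNN @cite_0 — filters with different receptive fields respond to different head sizes — can be illustrated with a naive sketch; the box filters below stand in for learned convolutional columns and are purely illustrative:

```python
import numpy as np

def box_filter(img, k):
    """Naive 'column': average each pixel over a k-by-k neighborhood (zero padded)."""
    H, W = img.shape
    p = k // 2
    padded = np.pad(img, p)
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def multi_column_features(img, kernel_sizes=(3, 5, 7)):
    """Stack responses from columns with different receptive fields."""
    return np.stack([box_filter(img, k) for k in kernel_sizes])

img = np.random.rand(16, 16)
feats = multi_column_features(img)
print(feats.shape)  # one feature map per column: (3, 16, 16)
```

In MCNN the stacked column outputs are fused by a final 1x1 convolution into the density map; the restriction criticized in the abstract above is that only as many scales are covered as there are columns.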
{ "cite_N": [ "@cite_30", "@cite_21", "@cite_0", "@cite_27", "@cite_23", "@cite_16", "@cite_11" ], "mid": [ "2519281173", "2520826941", "2463631526", "2514654788", "", "2517615595", "2741077351" ], "abstract": [ "In this paper we address the problem of counting objects instances in images. Our models are able to precisely estimate the number of vehicles in a traffic congestion, or to count the humans in a very crowded scene. Our first contribution is the proposal of a novel convolutional neural network solution, named Counting CNN (CCNN). Essentially, the CCNN is formulated as a regression model where the network learns how to map the appearance of the image patches to their corresponding object density maps. Our second contribution consists in a scale-aware counting model, the Hydra CNN, able to estimate object densities in different very crowded scenarios where no geometric information of the scene can be provided. Hydra CNN learns a multiscale non-linear regression model which uses a pyramid of image patches extracted at multiple scales to perform the final density prediction. We report an extensive experimental evaluation, using up to three different object counting benchmarks, where we show how our solutions achieve a state-of-the-art performance.", "In this paper, we address the task of object counting in images. We follow modern learning approaches in which a density map is estimated directly from the input image. We employ CNNs and incorporate two significant improvements to the state of the art methods: layered boosting and selective sampling. As a result, we manage both to increase the counting accuracy and to reduce processing time. Moreover, we show that the proposed method is effective, even in the presence of labeling errors. Extensive experiments on five different datasets demonstrate the efficacy and robustness of our approach. Mean Absolute error was reduced by 20 to 35 . 
At the same time, the training time of each CNN has been reduced by 50 .", "This paper aims to develop a method than can accurately estimate the crowd count from an individual image with arbitrary crowd density and arbitrary perspective. To this end, we have proposed a simple but effective Multi-column Convolutional Neural Network (MCNN) architecture to map the image to its crowd density map. The proposed MCNN allows the input image to be of arbitrary size or resolution. By utilizing filters with receptive fields of different sizes, the features learned by each column CNN are adaptive to variations in people head size due to perspective effect or image resolution. Furthermore, the true density map is computed accurately based on geometry-adaptive kernels which do not need knowing the perspective map of the input image. Since exiting crowd counting datasets do not adequately cover all the challenging situations considered in our work, we have collected and labelled a large new dataset that includes 1198 images with about 330,000 heads annotated. On this challenging new dataset, as well as all existing datasets, we conduct extensive experiments to verify the effectiveness of the proposed model and method. In particular, with the proposed simple MCNN model, our method outperforms all existing methods. In addition, experiments show that our model, once trained on one dataset, can be readily transferred to a new dataset.", "Crowd counting is a very challenging task in crowded scenes due to heavy occlusions, appearance variations and perspective distortions. Current crowd counting methods typically operate on an image patch level with overlaps, then sum over the patches to get the final count. In this paper, we propose an end-to-end convolutional neural network (CNN) architecture that takes a whole image as its input and directly outputs the counting result. 
While making use of sharing computations over overlapping regions, our method takes advantages of contextual information when predicting both local and global count. In particular, we first feed the image to a pre-trained CNN to get a set of high level features. Then the features are mapped to local counting numbers using recurrent network layers with memory cells. We perform the experiments on several challenging crowd counting datasets, which achieve the state-of-the-art results and demonstrate the effectiveness of our method.", "", "Our work proposes a novel deep learning framework for estimating crowd density from static images of highly dense crowds. We use a combination of deep and shallow, fully convolutional networks to predict the density map for a given crowd image. Such a combination is used for effectively capturing both the high-level semantic information (face body detectors) and the low-level features (blob detectors), that are necessary for crowd counting under large scale variations. As most crowd datasets have limited training samples (", "We propose a novel crowd counting model that maps a given crowd scene to its density. Crowd analysis is compounded by myriad of factors like inter-occlusion between people due to extreme crowding, high similarity of appearance between people and background elements, and large variability of camera view-points. Current state-of-the art approaches tackle these factors by using multi-scale CNN architectures, recurrent networks and late fusion of features from multi-column CNN with different receptive fields. We propose switching convolutional neural network that leverages variation of crowd density within an image to improve the accuracy and localization of the predicted crowd count. Patches from a grid within a crowd scene are relayed to independent CNN regressors based on crowd count prediction quality of the CNN established during training. 
The independent CNN regressors are designed to have different receptive fields and a switch classifier is trained to relay the crowd scene patch to the best CNN regressor. We perform extensive experiments on all major crowd counting datasets and evidence better performance compared to current state-of-the-art methods. We provide interpretable representations of the multichotomy of space of crowd scene patches inferred from the switch. It is observed that the switch relays an image patch to a particular CNN column based on density of crowd." ] }
1906.09707
2950956440
Crowd counting has been widely studied by the computer vision community in recent years. Due to the large scale variation, it remains a challenging task. Previous methods adopt either multi-column CNN or single-column CNN with multiple branches to deal with this problem. However, restricted by the number of columns or branches, these methods can only capture a few different scales and have limited capability. In this paper, we propose a simple but effective network called DSNet for crowd counting, which can be easily trained in an end-to-end fashion. The key component of our network is the dense dilated convolution block, in which each dilation layer is densely connected with the others to preserve information from continuously varied scales. The dilation rates in dilation layers are carefully selected to prevent the block from gridding artifacts. To further enlarge the range of scales covered by the network, we cascade three blocks and link them with dense residual connections. We also introduce a novel multi-scale density level consistency loss for performance improvement. To evaluate our method, we compare it with state-of-the-art algorithms on four crowd counting datasets (ShanghaiTech, UCF-QNRF, UCF_CC_50 and UCSD). Experimental results demonstrate that DSNet can achieve the best performance and make significant improvements on all the four datasets (30% on the UCF-QNRF and UCF_CC_50, and 20% on the others).
Although these multi-column architectures have proven capable of estimating crowd counts, they also suffer from several disadvantages: they are hard to train because of the multi-column structure, they carry a large amount of redundant parameters, and they are slow because multiple CNNs need to be run. Taking all of these drawbacks into consideration, recent works have focused on multi-scale, single-column architectures. Zhang @cite_25 proposed a scale-adaptive CNN that combines adapted feature maps extracted from multiple layers to produce the final density map. Li @cite_18 proposed a network for congested scenes called CSRNet, which uses dilated kernels to deliver larger receptive fields and replace pooling operations. Cao @cite_3 presented a scale aggregation network that improves the multi-scale representation and generates high-resolution density maps. However, all these single-column works can only capture several kinds of receptive fields, which limits the network's ability to handle large variations in crowd images.
{ "cite_N": [ "@cite_18", "@cite_25", "@cite_3" ], "mid": [ "2964209782", "", "2895051362" ], "abstract": [ "We propose a network for Congested Scene Recognition called CSRNet to provide a data-driven and deep learning method that can understand highly congested scenes and perform accurate count estimation as well as present high-quality density maps. The proposed CSRNet is composed of two major components: a convolutional neural network (CNN) as the front-end for 2D feature extraction and a dilated CNN for the back-end, which uses dilated kernels to deliver larger reception fields and to replace pooling operations. CSRNet is an easy-trained model because of its pure convolutional structure. We demonstrate CSRNet on four datasets (ShanghaiTech dataset, the UCF_CC_50 dataset, the WorldEXPO'10 dataset, and the UCSD dataset) and we deliver the state-of-the-art performance. In the ShanghaiTech Part_B dataset, CSRNet achieves 47.3% lower Mean Absolute Error (MAE) than the previous state-of-the-art method. We extend the targeted applications for counting other objects, such as the vehicle in TRANCOS dataset. Results show that CSRNet significantly improves the output quality with 15.4% lower MAE than the previous state-of-the-art approach.", "", "In this paper, we propose a novel encoder-decoder network, called Scale Aggregation Network (SANet), for accurate and efficient crowd counting. The encoder extracts multi-scale features with scale aggregation modules and the decoder generates high-resolution density maps by using a set of transposed convolutions. Moreover, we find that most existing works use only Euclidean loss which assumes independence among each pixel but ignores the local correlation in density maps. Therefore, we propose a novel training loss, combining of Euclidean loss and local pattern consistency loss, which improves the performance of the model in our experiments.
In addition, we use normalization layers to ease the training process and apply a patch-based test scheme to reduce the impact of the statistic shift problem. To demonstrate the effectiveness of the proposed method, we conduct extensive experiments on four major crowd counting datasets and our method achieves superior performance to state-of-the-art methods with far fewer parameters." ] }
1906.09784
2949430686
Conservative Policy Iteration (CPI) is a founding algorithm of Approximate Dynamic Programming (ADP). Its core principle is to stabilize greediness through stochastic mixtures of consecutive policies. It comes with strong theoretical guarantees, and inspired approaches in deep Reinforcement Learning (RL). However, CPI itself has rarely been implemented, never with neural networks, and only experimented on toy problems. In this paper, we show how CPI can be practically combined with deep RL with discrete actions. We also introduce adaptive mixture rates inspired by the theory. We thoroughly experiment with the resulting algorithm on the simple Cartpole problem, and validate the proposed method on a representative subset of Atari games. Overall, this work suggests that revisiting classic ADP may lead to improved and more stable deep RL algorithms.
The proposed approach is related to actor-critics in general, being itself an actor-critic. It is notably related to TRPO @cite_23 , which introduced a KL penalty on the greedy step as an alternative to the stochastic mixture of CPI. This is indeed very useful for continuous actions, but probably unnecessary for discrete actions, the case considered here. Moreover, TRPO is an on-policy algorithm, while the proposed DCPI approach is off-policy. The principle of regularizing greediness in actor-critics is quite widespread, be it with a KL divergence constraint (TRPO), clipping of the policy ratio (PPO @cite_18 ), entropy regularization (SAC), or even following the policy gradient, for example. The common point of these approaches is that they focus on continuous action spaces. In the discrete case, considering a stochastic mixture is quite natural, acknowledging that its extension to the continuous case is not straightforward.
{ "cite_N": [ "@cite_18", "@cite_23" ], "mid": [ "2736601468", "1771410628" ], "abstract": [ "We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a \"surrogate\" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.", "In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters." ] }
1906.09607
2953298354
In recent years, neural architecture search (NAS) has dramatically advanced the development of neural network design. While most previous works are computationally intensive, differentiable NAS methods reduce the search cost by constructing a super network in a continuous space covering all possible architectures to search for. However, few of them can search for the network width (the number of filters/channels) because it is intractable to integrate architectures with different widths into one super network following the conventional differentiable NAS paradigm. In this paper, we propose a novel differentiable NAS method which can search for the width and the spatial resolution of each block simultaneously. We achieve this by constructing a densely connected search space and name our method as DenseNAS. Blocks with different width and spatial resolution combinations are densely connected to each other. The best path in the super network is selected by optimizing the transition probabilities between blocks. As a result, the overall depth distribution of the network is optimized globally in a graceful manner. In the experiments, DenseNAS obtains an architecture with 75.9% top-1 accuracy on ImageNet and the latency is as low as 24.3ms on a single TITAN-XP. The total search time is merely 23 hours on 4 GPUs.
NASNet @cite_11 is the first work that proposes the cell structure to construct the search space. They search for the operation types and the topological connection in the cell and repeat the cell to form the whole architecture. The depth of the architecture (i.e., the number of repetitions of the cell), the widths and the occurrences of down-sampling operations are all set by hand. Afterwards, many works @cite_12 @cite_15 @cite_25 @cite_38 adopt a similar cell-based search space. MnasNet @cite_29 uses a block-wise search space. ProxylessNAS @cite_7 , FBNet @cite_1 and ChamNet @cite_31 simplify the search space by searching mostly for the expansion ratios and kernel sizes of the mobile inverted bottleneck convolution (i.e. MBConv) @cite_16 layers. Auto-DeepLab @cite_5 creatively designs a two-level hierarchical search space for a segmentation network. The search space is also based on the cell structure and contains complicated operations on the spatial resolution. Our work is also fundamentally different from DenseNet @cite_36 . Even though the blocks in our super net are densely connected, only one path will be selected to derive the final architecture which contains no densely connected blocks, as shown in Fig. .
{ "cite_N": [ "@cite_38", "@cite_31", "@cite_7", "@cite_36", "@cite_29", "@cite_1", "@cite_5", "@cite_15", "@cite_16", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "", "2905692112", "2964259004", "2963446712", "2963918968", "2785366763", "2910554758", "", "2783000019", "", "2963821229", "2964081807" ], "abstract": [ "", "This paper proposes an efficient neural network (NN) architecture design methodology called Chameleon that honors given resource constraints. Instead of developing new building blocks or using computationally-intensive reinforcement learning algorithms, our approach leverages existing efficient network building blocks and focuses on exploiting hardware traits and adapting computation resources to fit target latency and/or energy constraints. We formulate platform-aware NN architecture search in an optimization framework and propose a novel algorithm to search for optimal architectures aided by efficient accuracy and resource (latency and/or energy) predictors. At the core of our algorithm lies an accuracy predictor built atop Gaussian Process with Bayesian optimization for iterative sampling. With a one-time building cost for the predictors, our algorithm produces state-of-the-art model architectures on different platforms under given constraints in just minutes. Our results show that adapting computation resources to building blocks is critical to model performance. Without the addition of any bells and whistles, our models achieve significant accuracy improvements against state-of-the-art hand-crafted and automatically designed architectures. We achieve 73.8% and 75.3% top-1 accuracy on ImageNet at 20ms latency on a mobile CPU and DSP.
At reduced latency, our models achieve up to 8.5% (4.8%) and 6.6% (9.3%) absolute top-1 accuracy improvements compared to MobileNetV2 and MnasNet, respectively, on a mobile CPU (DSP), and 2.7% (4.6%) and 5.6% (2.6%) accuracy gains over ResNet-101 and ResNet-152, respectively, on an Nvidia GPU (Intel CPU).", "", "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.", "", "We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph.
The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. Thanks to parameter sharing between child models, ENAS is fast: it delivers strong empirical performances using much fewer GPU-hours than all existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state-of-the-art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS designs novel architectures that achieve a test error of 2.89%, which is on par with NASNet (2018), whose test error is 2.65%.", "Recently, Neural Architecture Search (NAS) has successfully identified neural network architectures that exceed human designed ones on large-scale image classification. In this paper, we study NAS for semantic image segmentation. Existing works often focus on searching the repeatable cell structure, while hand-designing the outer network structure that controls the spatial resolution changes. This choice simplifies the search space, but becomes increasingly problematic for dense image prediction which exhibits a lot more network level architectural variations. Therefore, we propose to search the network level structure in addition to the cell level structure, which forms a hierarchical architecture search space. We present a network level search space that includes many popular designs, and develop a formulation that allows efficient gradient-based architecture search (3 P100 GPU days on Cityscapes images). We demonstrate the effectiveness of the proposed method on the challenging Cityscapes, PASCAL VOC 2012, and ADE20K datasets.
Auto-DeepLab, our architecture searched specifically for semantic image segmentation, attains state-of-the-art performance without any ImageNet pretraining.", "", "In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers, opposite to traditional residual models which use expanded representations in the input. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters", "", "We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms.
Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet.", "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4% error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet.
Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0%, achieving 43.1 mAP on the COCO dataset." ] }
1906.09794
2949310572
In the index coding problem a sender holds a message @math and wishes to broadcast information to @math receivers in a way that enables the @math th receiver to retrieve the @math th bit @math . Every receiver has prior side information comprising a subset of the bits of @math , and the goal is to minimize the length of the information sent via the broadcast channel. Porter and Wootters have recently introduced the model of embedded index coding, where the receivers also play the role of the sender and the goal is to minimize the total length of their broadcast information. An embedded index code is said to be task-based if every receiver retrieves its bit based only on the information provided by one of the receivers. This short paper studies the effect of the task-based restriction on linear embedded index coding. It is shown that for certain side information maps there exists a linear embedded index code of length quadratically smaller than that of any task-based embedded index code. The result attains, up to a multiplicative constant, the largest possible gap between the two quantities. The proof is by an explicit construction and the analysis involves spectral techniques.
The index coding problem, introduced in @cite_0 and further developed in @cite_7 , has been studied in various variations and extensions. This research is motivated by applications such as distributed storage @cite_2 , wireless communication @cite_16 , and the more general problem of network coding @cite_8 . The variant called embedded index coding, introduced in @cite_17 , can be viewed as a special case of the multi-sender index coding model studied in @cite_4 which allows multiple senders and multiple receivers but as disjoint sets of vertices (see also @cite_14 ). The framework of index coding studied in @cite_17 is more general than the one considered in the current work and allows the receivers to request multiple messages.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_8", "@cite_0", "@cite_2", "@cite_16", "@cite_17" ], "mid": [ "2897837143", "1512160561", "2072475112", "2105831729", "2105406958", "2962985178", "1966060873", "2934713066" ], "abstract": [ "We consider a Multi-Sender Unicast Index-Coding (MSUIC) problem, where in a broadcast network, multiple senders collaboratively send distinct messages to multiple receivers, each having some subset of the messages a priori . The aim is to find the shortest index code that minimizes the total number of coded bits sent by the senders. In this paper, built on the classic single-sender minrank concept, we develop a new rank-minimization framework for MSUIC that explicitly takes into account the sender message constraints and minimizes the sum of the ranks of encoding matrices subject to the receiver decoding requirements. This framework provides a systematic way to construct multi-sender linear index codes and to study their capability in achieving the shortest index codelength per message length (i.e., the optimal broadcast rate). In particular, we establish the optimal broadcast rate for all critical MSUIC instances with up to four receivers and show that a binary linear index code is optimal for all, except 15 instances with four receivers. We also propose a heuristic algorithm (in lieu of exhaustive search) to solve the rank-minimization problem. The effectiveness of the algorithm is validated by numerical studies of MSUIC instances with four or more receivers.", "Index coding studies multiterminal source-coding problems where a set of receivers are required to decode multiple (possibly different) messages from a common broadcast, and they each know some messages a priori . In this paper, at the receiver end, we consider a special setting where each receiver knows only one message a priori , and each message is known to only one receiver. 
At the broadcasting end, we consider a generalized setting where there could be multiple senders, and each sender knows a subset of the messages. The senders collaborate to transmit an index code. This paper looks at minimizing the number of total coded bits the senders are required to transmit. When there is only one sender, we propose a pruning algorithm to find a lower bound on the optimal (i.e., the shortest) index codelength, and show that it is achievable by linear index codes. When there are two or more senders, we propose an appending technique to be used in conjunction with the pruning technique to give a lower bound on the optimal index codelength; we also derive an upper bound based on cyclic codes. While the two bounds do not match in general, for the special case where no two distinct senders know any message in common, the bounds match, giving the optimal index codelength. The results are expressed in terms of strongly connected components in directed graphs that represent the index-coding problems.", "Motivated by a problem of transmitting data over broadcast channels (Birk and Kol, INFOCOM 1998), we study the following coding problem: a sender communicates with n receivers R_1, . . . , R_n. He holds an input x 0, 1 ^n and wishes to broadcast a single message so that each receiver R_i can recover the bit x_i. Each R_i has prior side information about x, induced by a directed graph G on n nodes; R_i knows the bits of x in the positions j | (i, j) is an edge of G . We call encoding schemes that achieve this goal INDEX codes for 0, 1 ^n with side information graph G. In this paper we identify a measure on graphs, the minrank, which we conjecture to exactly characterize the minimum length of INDEX codes. We resolve the conjecture for certain natural classes of graphs. For arbitrary graphs, we show that the minrank bound is tight for both linear codes and certain classes of non-linear codes. 
For the general problem, we obtain a (weaker) lower bound that the length of an INDEX code for any graph G is at least the size of the maximum acyclic induced subgraph of G.", "We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. We study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the max-flow min-cut theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a \"fluid\" which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems.", "The Informed-Source Coding On Demand (ISCOD) approach for efficiently supplying nonidentical data from a central server to multiple caching clients over a broadcast channel is presented. The key idea underlying ISCOD is the joint exploitation of the data blocks already cached by each client, the server's full knowledge of client-cache contents and client requests, and the fact that each client only needs to be able to derive the blocks requested by it rather than all the blocks ever transmitted or even the union of the blocks requested by the different clients. 
We present two-phase ISCOD algorithms: the server first creates ad-hoc error-correction sets based on its knowledge of client states; next, it uses erasure-correction codes to construct the data for transmission. Each client uses its cached data and the received supplemental data to derive its requested blocks. The result is up to a several-fold reduction in the amount of transmitted supplemental data. Also, we define k-partial cliques in a directed graph and cast ISCOD in terms of partial-clique covers.", "", "This paper studies linear interference networks, both wired and wireless, with no channel state information at the transmitters except a coarse knowledge of the end-to-end one-hop topology of the network that only allows a distinction between weak (zero) and significant (nonzero) channels and no further knowledge of the channel coefficients' realizations. The network capacity (wired) and degrees of freedom (DoF) (wireless) are found to be bounded above by the capacity of an index coding problem for which the antidote graph is the complement of the given interference graph. The problems are shown to be equivalent under linear solutions. An interference alignment perspective is then used to translate the existing index coding solutions into the wired network capacity and wireless network DoF solutions, as well as to find new and unified solutions to different classes of all three problems.", "Motivated by applications in distributed storage and distributed computation, we introduce embedded index coding (EIC). EIC is a type of distributed index coding in which nodes in a distributed system act as both senders and receivers of information. We show how embedded index coding is related to index coding in general, and give characterizations and bounds on the communication costs of optimal embedded index codes. We also define task-based EIC, in which each sending node encodes and sends data blocks independently of the other nodes. 
Task-based EIC is more computationally tractable and has advantages in applications such as distributed storage, in which senders may complete their broadcasts at different times. Finally, we give heuristic algorithms for approximating optimal embedded index codes, and demonstrate empirically that these algorithms perform well." ] }
1906.09783
2953292887
Given a weighted graph @math , a partition of @math is @math -bounded if the diameter of each cluster is bounded by @math . A distribution over @math -bounded partitions is a @math -padded decomposition if every ball of radius @math is contained in a single cluster with probability at least @math . The weak diameter of a cluster @math is measured w.r.t. distances in @math , while the strong diameter is measured w.r.t. distances in the induced graph @math . The decomposition is weak/strong according to the diameter guarantee. Formerly, it was proven that @math -free graphs admit weak decompositions with padding parameter @math , while for strong decompositions only a @math padding parameter was known. Furthermore, for the case of a graph @math , for which the induced shortest path metric @math has doubling dimension @math , a weak @math -padded decomposition was constructed, which is also known to be tight. For the case of strong diameter, nothing was known. We construct strong @math -padded decompositions for @math -free graphs, matching the state of the art for weak decompositions. Similarly, for graphs with doubling dimension @math we construct a strong @math -padded decomposition, which is also tight. We use this decomposition to construct a @math -sparse cover scheme for such graphs. Our new decompositions and cover have implications for approximating unique games, the construction of light and sparse spanners, and path reporting distance oracles.
Other than padded decompositions, separating decompositions have also been studied. Here, instead of analyzing the probability that a ball is cut, one analyzes the probability that an edge is cut @cite_37 @cite_15 @cite_38 @cite_10 . Separating decompositions have been used to minimize the number of inter-cluster edges in a partition. In particular, the strong-diameter version of such partitions was used for SDD solvers @cite_7 .
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_7", "@cite_15", "@cite_10" ], "mid": [ "2038964811", "2092534058", "1991838331", "1998544343", "2768461089" ], "abstract": [ "In the 0-extension problem, we are given a weighted graph with some nodes marked as terminals and a semimetric on the set of terminals. Our goal is to assign the rest of the nodes to terminals so as to minimize the sum, over all edges, of the product of the edge's weight and the distance between the terminals to which its endpoints are assigned. This problem generalizes the multiway cut problem of [SIAM J. Comput. , 23 (1994), pp. 864--894] and is closely related to the metric labeling problem introduced by Kleinberg and Tardos [Proceedings of the 40th IEEE Annual Symposium on Foundations of Computer Science, New York, 1999, pp. 14--23]. We present approximation algorithms for 0-Extension . In arbitrary graphs, we present a O(log k)-approximation algorithm, k being the number of terminals. We also give O(1)-approximation guarantees for weighted planar graphs. Our results are based on a natural metric relaxation of the problem previously considered by Karzanov [European J. Combin., 19 (1998), pp. 71--101]. It is similar in flavor to the linear programming relaxation of Garg, Vazirani, and Yannakakis [SIAM J. Comput. , 25 (1996), pp. 235--251] for the multicut problem, and similar to relaxations for other graph partitioning problems. We prove that the integrality ratio of the metric relaxation is at least @math for a positive c for infinitely many k. Our results improve some of the results of Kleinberg and Tardos, and they further our understanding on how to use metric relaxations.", "The problem of simulating a synchronous network by an asynchronous network is investigated. A new simulation technique, referred to as a synchronizer, which is a new, simple methodology for designing efficient distributed algorithms in asynchronous networks, is proposed. 
The synchronizer exhibits a trade-off between its communication and time complexities, which is proved to be within a constant factor of the lower bound.", "We present the design and analysis of a nearly-linear work parallel algorithm for solving symmetric diagonally dominant (SDD) linear systems. On input an SDD n-by-n matrix A with m nonzero entries and a vector b, our algorithm computes a vector @math such that @math in @math work and @math depth for any ϵ > 0, where A^+ denotes the Moore-Penrose pseudoinverse of A. The algorithm relies on a parallel algorithm for generating low-stretch spanning trees or spanning subgraphs. To this end, we first develop a parallel decomposition algorithm that, in O(m log^{O(1)} n) work and polylogarithmic depth, partitions a graph with n nodes and m edges into components with polylogarithmic diameter such that only a small fraction of the original edges are between the components. This can be used to generate low-stretch spanning trees with average stretch O(n^ϵ) in O(m log^{O(1)} n) work and O(n^ϵ) depth for any ϵ > 0. Alternatively, it can be used to generate spanning subgraphs with polylogarithmic average stretch in O(m log^{O(1)} n) work and polylogarithmic depth. We apply this subgraph construction to derive a parallel linear solver. By using this solver in known applications, our results imply improved parallel randomized algorithms for several problems, including single-source shortest paths, maximum flow, minimum-cost flow, and approximate maximum flow.", "A decomposition of a graph G=(V,E) is a partition of the vertex set into subsets (called blocks). The diameter of a decomposition is the least d such that any two vertices belonging to the same connected component of a block are at distance ≤ d. In this paper we prove (nearly best possible) statements of the form: Any n-vertex graph has a decomposition into a small number of blocks each having small diameter.
Such decompositions provide a tool for efficiently decentralizing distributed computations. In [4] it was shown that every graph has a decomposition into at most s(n) blocks of diameter at most s(n) for (s(n) = n^ O( n n) ). Using a technique of Awerbuch [3] and Awerbuch and Peleg [5], we improve this result by showing that every graph has a decomposition of diameter O(log n) into O(log n) blocks. In addition, we give a randomized distributed algorithm that produces such a decomposition and runs in time O(log^2 n). The construction can be parameterized to provide decompositions that trade off between the number of blocks and the diameter. We show that this trade-off is nearly best possible, for two families of graphs: the first consists of skeletons of certain triangulations of a simplex and the second consists of grid graphs with added diagonals. The proofs in both cases rely on basic results in combinatorial topology, Sperner's lemma for the first class and Tucker's lemma for the second.", "In this paper, we show that any n-point metric space can be embedded into a distribution over dominating tree metrics such that the expected stretch of any edge is O(log n). This improves upon the result of Bartal, who gave a bound of O(log n log log n). Moreover, our result is existentially tight; there exist metric spaces where any tree embedding must have distortion Ω(log n). This problem lies at the heart of numerous approximation and online algorithms including ones for group Steiner tree, metric labeling, buy-at-bulk network design and metrical task system. Our result improves the performance guarantees for all of these problems." ] }
1906.09783
2953292887
Given a weighted graph @math , a partition of @math is @math -bounded if the diameter of each cluster is bounded by @math . A distribution over @math -bounded partitions is a @math -padded decomposition if every ball of radius @math is contained in a single cluster with probability at least @math . The weak diameter of a cluster @math is measured w.r.t. distances in @math , while the strong diameter is measured w.r.t. distances in the induced graph @math . The decomposition is weak/strong according to the diameter guarantee. Formerly, it was proven that @math free graphs admit weak decompositions with padding parameter @math , while for strong decompositions only a @math padding parameter was known. Furthermore, for the case of a graph @math , for which the induced shortest path metric @math has doubling dimension @math , a weak @math -padded decomposition was constructed, which is also known to be tight. For the case of strong diameter, nothing was known. We construct strong @math -padded decompositions for @math free graphs, matching the state of the art for weak decompositions. Similarly, for graphs with doubling dimension @math we construct a strong @math -padded decomposition, which is also tight. We use this decomposition to construct a @math -sparse cover scheme for such graphs. Our new decompositions and cover have implications for approximating unique games, the construction of light and sparse spanners, and for path-reporting distance oracles.
@cite_13 constructed strong diameter partitions for general graphs, which they later used to construct spanners and hop-sets in parallel and distributed regimes (see also @cite_16 ). Hierarchical partitions with strong diameter have been studied and used for constructing distributions over spanning trees with small expected distortion @cite_20 @cite_32 , Ramsey spanning trees @cite_36 , and universal Steiner trees @cite_27 . Another type of partition that has been studied requires only weak diameter, but additionally that each cluster be connected @cite_33 @cite_29 .
{ "cite_N": [ "@cite_33", "@cite_36", "@cite_29", "@cite_32", "@cite_27", "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "2725976336", "2964083604", "", "2924819758", "2104116126", "2963530558", "2044112344", "2569942321" ], "abstract": [ "Given a capacitated graph @math and a set of terminals @math , how should we produce a graph @math only on the terminals @math so that every (multicommodity) flow between the terminals in @math could be supported in @math with low congestion, and vice versa? (Such a graph @math is called a flow sparsifier for @math .) What if we want @math to be a “simple” graph? What if we allow @math to be a convex combination of simple graphs? Improving on results of Moitra [Proceedings of the 50th IEEE Symposium on Foundations of Computer Science, IEEE Computer Society, Los Alamitos, CA, 2009, pp. 3--12] and Leighton and Moitra [Proceedings of the 42nd ACM Symposium on Theory of Computing, ACM, New York, 2010, pp. 47--56], we give efficient algorithms for constructing (a) a flow sparsifier @math that maintains congestion up to a factor of @math , where @math ; (b) a convex combination of trees over the terminals @math that maintains congestion up to a factor of @math ; (c) for a planar graph...", "The metric Ramsey problem asks for the largest subset S of a metric space that can be embedded into an ultrametric (more generally into a Hilbert space) with a given distortion. Study of this problem was motivated as a non-linear version of Dvoretzky theorem. Mendel and Naor [MN07] devised the so called Ramsey Partitions to address this problem, and showed the algorithmic applications of their techniques to approximate distance oracles and ranking problems. In this paper we study the natural extension of the metric Ramsey problem to graphs, and introduce the notion of Ramsey Spanning Trees. We ask for the largest subset S ⊆ V of a given graph G = (V, E), such that there exists a spanning tree of G that has small stretch for S. 
Applied iteratively, this provides a small collection of spanning trees, such that each vertex has a tree providing low stretch paths to all other vertices. The union of these trees serves as a special type of spanner, a tree-padding spanner. We use this spanner to devise the first compact stateless routing scheme with O(1) routing decision time, and labels which are much shorter than in all currently existing schemes. We first revisit the metric Ramsey problem, and provide a new deterministic construction. We prove that for every k, any n-point metric space has a subset S of size at least n^{1-1/k} which embeds into an ultrametric with distortion 8k. We use this result to obtain the state-of-the-art deterministic construction of a distance oracle. Building on this result, we prove that for every k, any n-vertex graph G = (V, E) has a subset S of size at least n^{1-1/k}, and a spanning tree of G, that has stretch O(k log log n) between any point in S and any point in V.
One basic building block of our algorithms is a hierarchy of graph partitions, each of which guarantees small strong diameter for each cluster and bounded neighbourhood intersections for each node. We show close connections between the problems of constructing USTs and building such graph partitions. Our construction of partition hierarchies for general graphs is based on an iterative cluster merging procedure, while the one for minor-free graphs is based on a separator theorem for such graphs and the solution to a cluster aggregation problem that may be of independent interest even for general graphs. To our knowledge, this is the first sub-polynomial-stretch ( @math for any @math ) UST construction for general graphs, and the first polylogarithmic-stretch UST construction for minor-free graphs.", "[43] devised a distributed algorithm in the CONGEST model that, given a parameter k = 1, 2, ..., constructs an O(k)-spanner of an input unweighted n-vertex graph with O(n^{1+1/k}) expected edges in O(k) rounds of communication. In this paper we improve the result of [43] by showing a k-round distributed algorithm in the same model that constructs a (2k − 1)-spanner with O(n^{1+1/k}/ϵ) edges, with probability 1 − ϵ, for any ϵ > 0. Moreover, when k = ω(log n), our algorithm produces (still in k rounds) ultra-sparse spanners, i.e., spanners of size n(1 + o(1)), with probability 1 − o(1). To our knowledge, this is the first distributed algorithm in the CONGEST or in the PRAM models that constructs spanners or skeletons (i.e., connected spanning subgraphs) that are that sparse. Our algorithm can also be implemented in linear time in the standard centralized model, and for large k, it provides spanners that are sparser than any other spanner given by a known (near-)linear time algorithm. We also devise improved bounds (and algorithms realizing these bounds) for (1 + ϵ, β)-spanners and emulators.
In particular, we show that for any unweighted n-vertex graph and any ϵ > 0, there exists a [EQUATION]-emulator with O(n) edges. All previous constructions of (1 + ϵ, β)-spanners and emulators employ a superlinear number of edges, for all choices of parameters. Finally, we provide some applications of our results to approximate shortest paths' computation in unweighted graphs.", "We use exponential start time clustering to design faster parallel graph algorithms involving distances. Previous algorithms usually rely on graph decomposition routines with strict restrictions on the diameters of the decomposed pieces. We weaken these bounds in favor of stronger local probabilistic guarantees. This allows more direct analyses of the overall process, giving: Linear work parallel algorithms that construct spanners with O(k) stretch and size O(n1+1 k) in unweighted graphs, and size O(n1+1 k log k) in weighted graphs. Hopsets that lead to the first parallel algorithm for approximating shortest paths in undirected graphs with O(m poly log n) work.", "We show that every weighted connected graph @math contains as a subgraph a spanning tree into which the edges of @math can be embedded with average stretch @math . Moreover, we show that this tree can be constructed in time @math in general, and in time @math if the input graph is unweighted. The main ingredient in our construction is a novel graph decomposition technique. Our new algorithm can be immediately used to improve the running time of the recent solver for symmetric diagonally dominant linear systems of Spielman and Teng from @math to @math , and to @math when the system is planar. Our result can also be used to improve several earlier approximation algorithms that use low-stretch spanning trees." ] }
1906.09783
2953292887
Given a weighted graph @math , a partition of @math is @math -bounded if the diameter of each cluster is bounded by @math . A distribution over @math -bounded partitions is a @math -padded decomposition if every ball of radius @math is contained in a single cluster with probability at least @math . The weak diameter of a cluster @math is measured w.r.t. distances in @math , while the strong diameter is measured w.r.t. distances in the induced graph @math . The decomposition is weak/strong according to the diameter guarantee. Formerly, it was proven that @math free graphs admit weak decompositions with padding parameter @math , while for strong decompositions only a @math padding parameter was known. Furthermore, for the case of a graph @math , for which the induced shortest path metric @math has doubling dimension @math , a weak @math -padded decomposition was constructed, which is also known to be tight. For the case of strong diameter, nothing was known. We construct strong @math -padded decompositions for @math free graphs, matching the state of the art for weak decompositions. Similarly, for graphs with doubling dimension @math we construct a strong @math -padded decomposition, which is also tight. We use this decomposition to construct a @math -sparse cover scheme for such graphs. Our new decompositions and cover have implications for approximating unique games, the construction of light and sparse spanners, and for path-reporting distance oracles.
Padded decompositions were studied for additional graph families. Kamma and Krauthgamer @cite_39 showed that treewidth @math graphs are weakly @math -decomposable. @cite_25 showed that treewidth @math graphs are strongly @math -decomposable and strongly @math -decomposable. @cite_25 also showed that pathwidth @math graphs are strongly @math -decomposable. Finally, @cite_25 proved that genus @math graphs are strongly @math -decomposable, improving a previous weak-diameter version of Lee and Sidiropoulos @cite_40 .
{ "cite_N": [ "@cite_40", "@cite_25", "@cite_39" ], "mid": [ "1968429800", "2098322943", "2259676875" ], "abstract": [ "We study the quantitative geometry of graphs in terms of their genus, using the structure of certain \"cut graphs,\" i.e. subgraphs whose removal leaves a planar graph. In particular, we give optimal bounds for random partitioning schemes, as well as various types of embeddings. Using these geometric primitives, we present exponentially improved dependence on genus for a number of problems like approximate max-flow min-cut theorems, approximations for uniform and nonuniform Sparsest Cut, treewidth approximation, Laplacian eigenvalue bounds, and Lipschitz extension theorems and related metric labeling problems. We list here a sample of these improvements. All the following statements refer to graphs of genus g, unless otherwise noted. • We show that such graphs admit an O(log g)-approximate multi-commodity max-flow min-cut theorem for the case of uniform demands. This bound is optimal, and improves over the previous bound of O(g) [KPR93, FT03]. For general demands, we show that the worst possible gap is O(log g + CP), where CP is the gap for planar graphs. This dependence is optimal, and already yields a bound of O(log g + √log n), improving over the previous bound of O(√g log n) [KLMN04]. • We give an O(√log g)-approximation for the uniform Sparsest Cut, balanced vertex separator, and treewidth problems, improving over the previous bound of O(g) [FHL05]. • If a graph G has genus g and maximum degree D, we show that the kth Laplacian eigenvalue of G is (log g)^2 · O(kgD/n), improving over the previous bound of g^2 · O(kgD/n) [KLPT09]. There is a lower bound of Ω(kgD/n), making this result almost tight. • We show that if (X, d) is the shortest-path metric on a graph of genus g and S ⊆ X, then every L-Lipschitz map f: S → Z into a Banach space Z admits an O(L log g)-Lipschitz extension f: X → Z.
This improves over the previous bound of O(Lg) [LN05], and compares to a lower bound of Ω(L√log g). In a related way, we show that there is an O(log g)-approximation for the 0-extension problem on such graphs, improving over the previous O(g) bound. • We show that every n-vertex shortest-path metric on a graph of genus g embeds into L2 with distortion O(log g + √log n), improving over the previous bound of O(√g log n). Our result is asymptotically optimal for every dependence g = g(n).", "We prove that any graph excluding Kr as a minor can be partitioned into clusters of diameter at most Δ while removing at most an O(r/Δ) fraction of the edges. This improves over the results of Fakcharoenphol and Talwar, who building on the work of Klein, Plotkin and Rao gave a partitioning that required removing an O(r^2/Δ) fraction of the edges. Our result is obtained by a new approach that relates the topological properties (excluding a minor) of a graph to its geometric properties (the induced shortest path metric). Specifically, we show that techniques used by Andreae in his investigation of the cops and robbers game on graphs excluding a fixed minor can be used to construct padded decompositions of the metrics induced by such graphs. In particular, we get probabilistic partitions with padding parameter O(r) and strong-diameter partitions with padding parameter O(r^2) for Kr-free graphs, O(k) for treewidth-k graphs, and O(log g) for graphs with genus g.", "A prominent tool in many problems involving metric spaces is a notion of randomized low-diameter decomposition. Loosely speaking, ( )-decomposition refers to a probability distribution over partitions of the metric into sets of low diameter, such that nearby points (parameterized by ( >0 )) are likely to be “clustered” together.
Applying this notion to the shortest-path metric in edge-weighted graphs, it is known that n-vertex graphs admit an (O(log n))-padded decomposition (Bartal, 37th annual symposium on foundations of computer science. IEEE, pp 184–193, 1996), and that excluded-minor graphs admit O(1)-padded decomposition (, 25th annual ACM symposium on theory of computing, pp 682–690, 1993; Fakcharoenphol and Talwar, J Comput Syst Sci 69(3), 485–497, 2004; , Proceedings of the 46th annual ACM symposium on theory of computing. STOC ’14, pp 79–88. ACM, New York, NY, USA, 2014). We design decompositions for the family of p-path-separable graphs, which was defined by Abraham and Gavoille (Proceedings of the twenty-fifth annual ACM symposium on principles of distributed computing, PODC ’06, pp 188–197, 2006) and refers to graphs that admit vertex-separators consisting of at most p shortest paths in the graph. Our main result is that every p-path-separable n-vertex graph admits an (O(log (p log n)))-decomposition, which refines the (O(log n)) bound for general graphs, and provides new bounds for families like bounded-treewidth graphs. Technically, our clustering process differs from previous ones by working in (the shortest-path metric of) carefully chosen subgraphs." ] }
1906.09880
2951916138
Efficient and truthful mechanisms to price resources on remote servers/machines have been the subject of much work in recent years due to the importance of the cloud market. This paper considers revenue maximization in the online stochastic setting with non-preemptive jobs and a unit capacity server. One agent/job arrives at every time step, with parameters drawn from an underlying unknown distribution. We design a posted-price mechanism which can be efficiently computed, and is revenue-optimal in expectation and in retrospect, up to additive error. The prices are posted prior to learning the agent's type, and the computed pricing scheme is deterministic, depending only on the length of the allotted time interval and on the earliest time the server is available. If the distribution of the agent's type is only learned from observing the jobs that are executed, we prove that a polynomial number of samples is sufficient to obtain a near-optimal truthful pricing strategy.
Much recent work has focused on designing efficient mechanisms for pricing cloud resources. @cite_14 recently studied ``time-of-use'' pricing mechanisms to match demand to supply with deadlines and online arrivals. Their result assumes large-capacity servers, and seeks to maximize welfare. @cite_21 provides a mechanism for preemptive scheduling with deadlines, maximizing the total value of completed jobs. Another possible objective for the design of incentive-compatible scheduling mechanisms is the total value of completed jobs, which have release times and deadlines. @cite_15 solves this problem in an online setting, @cite_3 in the offline setting for parallel machines, and @cite_1 in the online competitive setting with uncertain supply. @cite_5 focuses on social welfare maximization for non-preemptive scheduling on multiple servers, and obtains a constant competitive ratio as the number of servers increases. Our work differs from these by considering stochastic job types and revenue maximization. @cite_12 addresses computing a price menu for revenue maximization with different machines. Finally, @cite_22 proposes a system architecture for scheduling and pricing in cloud computing.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_21", "@cite_1", "@cite_3", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "2583876468", "2594881807", "2092461546", "136334786", "", "2057239927", "2153817930", "2339462164" ], "abstract": [ "We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm.", "Cloud computing has reached significant maturity from a systems perspective, but currently deployed solutions rely on rather basic economics mechanisms that yield suboptimal allocation of the costly hardware resources. In this paper we present Economic Resource Allocation (ERA), a complete framework for scheduling and pricing cloud resources, aimed at increasing the efficiency of cloud resources usage by allocating resources according to economic principles. 
The ERA architecture carefully abstracts the underlying cloud infrastructure, enabling the development of scheduling and pricing algorithms independently of the concrete lower-level cloud infrastructure and independently of its concerns. Specifically, ERA is designed as a flexible layer that can sit on top of any cloud system and interfaces with both the cloud resource manager and with the users who reserve resources to run their jobs. The jobs are scheduled based on prices that are dynamically calculated according to the predicted demand. Additionally, ERA provides a key internal API to pluggable algorithmic modules that include scheduling, pricing and demand prediction. We provide a proof-of-concept software and demonstrate the effectiveness of the architecture by testing ERA over both public and private cloud systems -- Azure Batch of Microsoft and Hadoop YARN. A broader intent of our work is to foster collaborations between economics and system communities. To that end, we have developed a simulation platform via which economics and system experts can test their algorithmic implementations.", "We study online mechanisms for preemptive scheduling with deadlines, with the goal of maximizing the total value of completed jobs. This problem is fundamental to deadline-aware cloud scheduling, but there are strong lower bounds even for the algorithmic problem without incentive constraints. However, these lower bounds can be circumvented under the natural assumption of deadline slackness, i.e., that there is a guaranteed lower bound s > 1 on the ratio between a job's size and the time window in which it can be executed. In this paper, we construct a truthful scheduling mechanism with a constant competitive ratio, given slackness s > 1. 
Furthermore, we show that if s is large enough then we can construct a mechanism that also satisfies a commitment property: it can be determined whether or not a job will finish, and the requisite payment if so, well in advance of each job's deadline. This is notable because, in practice, users with strict deadlines may find it unacceptable to discover only very close to their deadline that their job has been rejected.", "We design new algorithms for the problem of allocating uncertain flexible, and multi-unit demand online given uncertain supply, in order to maximise social welfare. The algorithms can be seen as extensions of the expectation and consensus algorithms from the domain of online scheduling. The problem is especially relevant to the future smart grid, where uncertain output from renewable generators and conventional supply need to be integrated and matched to flexible, non-preemptive demand. To deal with uncertain supply and demand, the algorithms generate multiple scenarios which can then be solved offline. Furthermore, we use a novel method of reweighting the scenarios based on their likelihood whenever new information about supply becomes available. An additional improvement allows the selection of multiple non-preemptive jobs at the same time. Finally, our main contribution is a novel online mechanism based on these extensions, where it is in the agents' best interest to truthfully reveal their preferences. The experimental evaluation of the extended algorithms and different variants of the mechanism show that both achieve more than 85 of the offline optimal economic efficiency. Importantly, the mechanism yields comparable efficiency, while, in contrast to the algorithms, it allows for strategic agents.", "", "We introduce a novel pricing and resource allocation approach for batch jobs on cloud systems. In our economic model, users submit jobs with a value function that specifies willingness to pay as a function of job due dates. 
The cloud provider in response allocates a subset of these jobs, taking advantage of the flexibility of allocating resources to jobs in the cloud environment. Focusing on social welfare as the system objective (especially relevant for private or in-house clouds), we construct a resource allocation algorithm which provides a small approximation factor that approaches 2 as the number of servers increases. An appealing property of our scheme is that jobs are allocated non-preemptively, i.e., jobs run in one shot without interruption. This property has practical significance, as it avoids significant network and storage resources for checkpointing.", "For the problem of online real-time scheduling of jobs on a single processor, previous work presents matching upper and lower bounds on the competitive ratio that can be achieved by a deterministic algorithm. However, these results only apply to the non-strategic setting in which the jobs are released directly to the algorithm. Motivated by emerging areas such as grid computing, we instead consider this problem in an economic setting, in which each job is released to a separate, self-interested agent. The agent can then delay releasing the job to the algorithm, inflate its length, and declare an arbitrary value and deadline for the job, while the center determines not only the schedule, but the payment of each agent. For the resulting mechanism design problem (in which we also slightly strengthen an assumption from the non-strategic setting), we present a mechanism that addresses each incentive issue, while only increasing the competitive ratio by one.
We then show a matching lower bound for deterministic mechanisms that never pay the agents.", "The public cloud \"infrastructure as a service\" market possesses unique features that make it difficult to predict long-run economic behavior. On the one hand, major providers buy their hardware from the same manufacturers, operate in similar locations and offer a similar menu of products. On the other hand, the competitors use different proprietary \"fabric\" to manage virtualization, resource allocation and data transfer. The menus offered by each provider involve a discrete number of choices (virtual machine sizes) and allow providers to locate in different parts of the price-quality space. We document this differentiation empirically by running benchmarking tests. This allows us to calibrate a model of firm technology. Firm technology is an input into our theoretical model of price-quality competition. The monopoly case highlights the importance of competition in blocking \"bad equilibrium\" where performance is intentionally slowed down or options are unduly limited. In duopoly, price competition is fierce, but prices do not converge to the same level because of price-quality differentiation. The model helps explain market trends, such as the healthy operating profit margin recently reported by Amazon Web Services. Our empirically calibrated model helps not only explain price cutting behavior but also how providers can manage a profit despite predictions that the market \"should be\" totally commoditized." ] }
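The deadline-scheduling abstracts above repeatedly rely on deciding whether a set of non-preemptive jobs can all meet their deadlines on one machine. With a common release time, running jobs in Earliest-Deadline-First (EDF) order is feasible whenever any order is, which gives a one-pass check. A minimal sketch (the function name and the (length, deadline) job encoding are illustrative, not taken from the cited papers):

```python
def edf_feasible(jobs):
    """Single-machine feasibility check: `jobs` is a list of
    (length, deadline) pairs, all released at time 0. Running them in
    Earliest-Deadline-First order finishes every job on time iff any
    non-preemptive schedule does (classic exchange argument)."""
    t = 0
    for length, deadline in sorted(jobs, key=lambda j: j[1]):
        t += length          # each job runs to completion, no preemption
        if t > deadline:     # a missed deadline makes the set infeasible
            return False
    return True
```

An online mechanism can run such a check before committing to a newly admitted job, in the spirit of the commitment property discussed in the first abstract above.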
1906.09880
2951916138
Efficient and truthful mechanisms to price resources on remote server machines have been the subject of much work in recent years due to the importance of the cloud market. This paper considers revenue maximization in the online stochastic setting with non-preemptive jobs and a unit capacity server. One agent's job arrives at every time step, with parameters drawn from an underlying unknown distribution. We design a posted-price mechanism which can be efficiently computed, and is revenue-optimal in expectation and in retrospect, up to additive error. The prices are posted prior to learning the agent's type, and the computed pricing scheme is deterministic, depending only on the length of the allotted time interval and on the earliest time the server is available. If the distribution of the agent's type is only learned from observing the jobs that are executed, we prove that a polynomial number of samples is sufficient to obtain a near-optimal truthful pricing strategy.
Posted price mechanisms (PPMs) have been introduced by @cite_9 and have gained attention due to their simplicity, robustness to collusion, and their ease of implementation in practice. One of the first theoretical results concerning PPMs is an asymptotic comparison to classical single-parameter mechanisms @cite_10 . They were later studied by @cite_18 for the objective of revenue maximization, and further strengthened by @cite_0 and @cite_19 . @cite_17 shows that sequential PPMs can @math -approximate social welfare for XOS valuation functions, if the price for an item is equal to the expected contribution of the item to the social welfare.
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_0", "@cite_19", "@cite_10", "@cite_17" ], "mid": [ "2077124610", "2168656534", "2012672170", "1492207119", "", "1846856248" ], "abstract": [ "We study the classic mathematical economics problem of Bayesian optimal mechanism design where a principal aims to optimize expected revenue when allocating resources to self-interested agents with preferences drawn from a known distribution. In single parameter settings (i.e., where each agent's preference is given by a single private value for being served and zero for not being served) this problem is solved [20]. Unfortunately, these single parameter optimal mechanisms are impractical and rarely employed [1], and furthermore the underlying economic theory fails to generalize to the important, relevant, and unsolved multi-dimensional setting (i.e., where each agent's preference is given by multiple values for each of the multiple services available) [25]. In contrast to the theory of optimal mechanisms we develop a theory of sequential posted price mechanisms, where agents in sequence are offered take-it-or-leave-it prices. We prove that these mechanisms are approximately optimal in single-dimensional settings. These posted-price mechanisms avoid many of the properties of optimal mechanisms that make the latter impractical. Furthermore, these mechanisms generalize naturally to multi-dimensional settings where they give the first known approximations to the elusive optimal multi-dimensional mechanism design problem. In particular, we solve multi-dimensional multi-unit auction problems and generalizations to matroid feasibility constraints. The constant approximations we obtain range from 1.5 to 8. For all but one case, our posted price sequences can be computed in polynomial time. 
This work can be viewed as an extension and improvement of the single-agent algorithmic pricing work of [9] to the setting of multiple agents where the designer has combinatorial feasibility constraints on which agents can simultaneously obtain each service.", "We introduce take-it-or-leave-it auctions (TLAs) as an allocation mechanism that allows buyers to retain much of their private valuation information, yet generates close-to-optimal expected utility for the seller. We show that if each buyer receives at most one offer, each buyer's dominant strategy is to act truthfully. In more general TLAs, the buyers' optimal strategies are more intricate, and we derive the perfect Bayesian equilibrium for the game. We develop algorithms for finding the equilibrium and also for optimizing the offers so as to maximize the seller's expected utility. In several example settings we show that the seller's expected utility already is close to optimal for a small number of offers. As the number of buyers increases, the seller's expected utility increases, and becomes increasingly (but not monotonically) more competitive with Myerson's expected utility maximizing auction.", "Consider a gambler who observes a sequence of independent, non-negative random numbers and is allowed to stop the sequence at any time, claiming a reward equal to the most recent observation. The famous prophet inequality of Krengel, Sucheston, and Garling asserts that a gambler who knows the distribution of each random variable can achieve at least half as much reward, in expectation, as a \"prophet\" who knows the sampled values of each random variable and can choose the largest one. We generalize this result to the setting in which the gambler and the prophet are allowed to make more than one selection, subject to a matroid constraint. 
We show that the gambler can still achieve at least half as much reward as the prophet; this result is the best possible, since it is known that the ratio cannot be improved even in the original prophet inequality, which corresponds to the special case of rank-one matroids. Generalizing the result still further, we show that under an intersection of @math matroid constraints, the prophet's reward exceeds the gambler's by a factor of at most @math , and this factor is also tight. Beyond their interest as theorems about pure online algorithms or optimal stopping rules, these results also have applications to mechanism design. Our results imply improved bounds on the ability of sequential posted-price mechanisms to approximate optimal mechanisms in both single-parameter and multi-parameter Bayesian settings. In particular, our results imply the first efficiently computable constant-factor approximations to the Bayesian optimal revenue in certain multi-parameter settings.", "Prophet inequalities bound the reward of an online algorithm—or gambler—relative to the optimum offline algorithm—the prophet—in settings that involve making selections from a sequence of elements whose order is chosen adversarially but whose weights are random. The goal is to maximize total weight.", "", "We study anonymous posted price mechanisms for combinatorial auctions in a Bayesian framework. In a posted price mechanism, item prices are posted, then the consumers approach the seller sequentially in an arbitrary order, each purchasing her favorite bundle from among the unsold items at the posted prices. These mechanisms are simple, transparent and trivially dominant strategy incentive compatible (DSIC). We show that when agent preferences are fractionally subadditive (which includes all submodular functions), there always exist prices that, in expectation, obtain at least half of the optimal welfare. 
Our result is constructive: given black-box access to a combinatorial auction algorithm A, sample access to the prior distribution, and appropriate query access to the sampled valuations, one can compute, in polytime, prices that guarantee at least half of the expected welfare of A. As a corollary, we obtain the first polytime (in n and m) constant-factor DSIC mechanism for Bayesian submodular combinatorial auctions, given access to demand query oracles. Our results also extend to valuations with complements, where the approximation factor degrades linearly with the level of complementarity." ] }
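The factor-2 prophet-inequality guarantee quoted in these abstracts is achieved by a single-threshold stopping rule. Below is a small, self-contained simulation of that rule for i.i.d. uniform values, with the threshold set to half the expected maximum; the function names and the uniform value distribution are illustrative assumptions, not details of the cited papers:

```python
import random

def threshold_play(values, threshold):
    """Gambler's rule: stop at the first value >= threshold, else get 0."""
    for v in values:
        if v >= threshold:
            return v
    return 0.0

def simulate(n_vars=5, trials=20000, seed=0):
    """Compare the gambler's single-threshold reward to the prophet's
    reward (the realized maximum) for i.i.d. uniform [0, 1) values."""
    rng = random.Random(seed)
    draws = [[rng.random() for _ in range(n_vars)] for _ in range(trials)]
    e_max = sum(max(d) for d in draws) / trials
    t = e_max / 2.0  # classic threshold: half the expected maximum
    gambler = sum(threshold_play(d, t) for d in draws) / trials
    prophet = sum(max(d) for d in draws) / trials
    return gambler, prophet
```

For uniform values the empirical ratio comfortably exceeds the worst-case bound of one half.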
1906.09880
2951916138
Efficient and truthful mechanisms to price resources on remote server machines have been the subject of much work in recent years due to the importance of the cloud market. This paper considers revenue maximization in the online stochastic setting with non-preemptive jobs and a unit capacity server. One agent's job arrives at every time step, with parameters drawn from an underlying unknown distribution. We design a posted-price mechanism which can be efficiently computed, and is revenue-optimal in expectation and in retrospect, up to additive error. The prices are posted prior to learning the agent's type, and the computed pricing scheme is deterministic, depending only on the length of the allotted time interval and on the earliest time the server is available. If the distribution of the agent's type is only learned from observing the jobs that are executed, we prove that a polynomial number of samples is sufficient to obtain a near-optimal truthful pricing strategy.
Sample complexity for revenue maximization has recently been studied in @cite_2 , showing that a polynomial number of samples is sufficient to obtain near-optimal Bayesian auction mechanisms. An approach based on statistical learning that makes it possible to learn mechanisms with expected revenue arbitrarily close to optimal from a polynomial number of samples has been proposed in @cite_8 . The problem of learning simple auctions from samples has been studied in @cite_4 .
{ "cite_N": [ "@cite_8", "@cite_4", "@cite_2" ], "mid": [ "611768045", "2964344768", "2164143329" ], "abstract": [ "This paper develops a general approach, rooted in statistical learning theory, to learning an approximately revenue-maximizing auction from data. We introduce t-level auctions to interpolate between simple auctions, such as welfare maximization with reserve prices, and optimal auctions, thereby balancing the competing demands of expressivity and simplicity. We prove that such auctions have small representation error, in the sense that for every product distribution F over bidders' valuations, there exists a t-level auction with small t and expected revenue close to optimal. We show that the set of t-level auctions has modest pseudo-dimension (for polynomial t) and therefore leads to small learning error. One consequence of our results is that, in arbitrary single-parameter settings, one can learn a mechanism with expected revenue arbitrarily close to optimal from a polynomial number of samples.", "We present a general framework for proving polynomial sample complexity bounds for the problem of learning from samples the best auction in a class of “simple” auctions. Our framework captures the most prominent examples of “simple” auctions, including anonymous and non-anonymous item and bundle pricings, with either a single or multiple buyers. The first step of the framework is to show that the set of auction allocation rules has a low-dimensional representation. The second step shows that, across the subset of auctions that share the same allocations on a given set of samples, the auction revenue varies in a low-dimensional way. 
Our results imply that in typical scenarios where it is possible to compute a near-optimal simple auction with a known prior, it is also possible to compute such an auction with an unknown prior, given a polynomial number of samples.", "In the design and analysis of revenue-maximizing auctions, auction performance is typically measured with respect to a prior distribution over inputs. The most obvious source for such a distribution is past data. The goal of this paper is to understand how much data is necessary and sufficient to guarantee near-optimal expected revenue. Our basic model is a single-item auction in which bidders' valuations are drawn independently from unknown and nonidentical distributions. The seller is given m samples from each of these distributions \"for free\" and chooses an auction to run on a fresh sample. How large does m need to be, as a function of the number k of bidders and e > 0, so that a (1 - e)-approximation of the optimal revenue is achievable? We prove that, under standard tail conditions on the underlying distributions, m = poly(k, 1/e) samples are necessary and sufficient. Our lower bound stands in contrast to many recent results on simple and prior-independent auctions and fundamentally involves the interplay between bidder competition, non-identical distributions, and a very close (but still constant) approximation of the optimal revenue. It effectively shows that the only way to achieve a sufficiently good constant approximation of the optimal revenue is through a detailed understanding of bidders' valuation distributions. Our upper bound is constructive and applies in particular to a variant of the empirical Myerson auction, the natural auction that runs the revenue-maximizing auction with respect to the empirical distributions of the samples. 
To capture how our sample complexity upper bound depends on the set of allowable distributions, we introduce α-strongly regular distributions, which interpolate between the well-studied classes of regular (α = 0) and MHR (α = 1) distributions. We give evidence that this definition is of independent interest." ] }
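At the core of these sample-complexity results is fitting a pricing rule to observed samples. The simplest instance, choosing one posted price that maximizes empirical revenue, can be solved exactly by trying only the sampled values, since some sample is always an empirical optimum. A sketch (the function name and lowest-price tie-breaking are my own choices):

```python
def best_empirical_price(samples):
    """Return (price, empirical revenue) maximizing p * Pr_hat[value >= p]
    over the empirical distribution given by `samples`. Some sampled value
    is always optimal, so only those candidates need to be tried."""
    n = len(samples)
    best_p, best_rev = 0.0, 0.0
    for p in sorted(set(samples)):
        rev = p * sum(1 for v in samples if v >= p) / n
        if rev > best_rev:  # strict '>' keeps the lowest price on ties
            best_p, best_rev = p, rev
    return best_p, best_rev
```

Generalization bounds of the kind proved in the cited papers then control how far this empirical optimum can be from the truly revenue-optimal price.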
1906.09683
2950281736
Compression has been an important research topic for many decades, with a significant impact on data transmission and storage. Recent advances have shown great potential for learned image and video compression. Inspired by related work, in this paper, we present an image compression architecture using a convolutional autoencoder, and then generalize image compression to video compression, by adding an interpolation loop into both encoder and decoder sides. Our basic idea is to realize spatial-temporal energy compaction in learning image and video compression. To this end, we propose to add a spatial energy compaction-based penalty into the loss function, to achieve higher image compression performance. Furthermore, based on temporal energy distribution, we propose to select the number of frames in one interpolation loop, adapting to the motion characteristics of video contents. Experimental results demonstrate that our proposed image compression outperforms the latest image compression standard with MS-SSIM quality metric, and provides higher performance compared with state-of-the-art learning compression methods at high bit rates, which benefits from our spatial energy compaction approach. Meanwhile, our proposed video compression approach with temporal energy compaction can significantly outperform MPEG-4 and is competitive with commonly used H.264. Both our image and video compression can produce more visually pleasing results than traditional standards.
Recently, end-to-end image compression has attracted great attention. Some approaches use recurrent neural networks (RNNs) to encode the residual information between the raw image and the reconstructed images in several iterations, such as the works @cite_30 @cite_31 optimized by mean-squared error (MSE) or the work @cite_25 optimized by MS-SSIM @cite_20 . Some generative adversarial network (GAN)-based techniques are proposed in @cite_5 @cite_4 @cite_7 for high subjective reconstruction quality at extremely low bit rates. Other notable approaches include differentiable approximations of round-based quantization @cite_1 @cite_8 @cite_0 for end-to-end training, content-aware importance map @cite_26 , hyperprior entropy model @cite_2 and conditional probability models @cite_35 for entropy estimation.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_26", "@cite_4", "@cite_7", "@cite_8", "@cite_1", "@cite_0", "@cite_2", "@cite_5", "@cite_31", "@cite_25", "@cite_20" ], "mid": [ "2276024283", "2962891349", "2604392022", "", "", "", "", "", "", "2963449488", "", "", "1580389772" ], "abstract": [ "A large fraction of Internet traffic is now driven by requests from mobile devices with relatively small screens and often stringent bandwidth requirements. Due to these factors, it has become the norm for modern graphics-heavy websites to transmit low-resolution, low-bytecount image previews (thumbnails) as part of the initial page load process to improve apparent page responsiveness. Increasing thumbnail compression beyond the capabilities of existing codecs is therefore a current research focus, as any byte savings will significantly enhance the experience of mobile device users. Toward this end, we propose a general framework for variable-rate image compression and a novel architecture based on convolutional and deconvolutional LSTM recurrent networks. Our models address the main issues that have prevented autoencoder neural networks from competing with existing image compression algorithms: (1) our networks only need to be trained once (not per-image), regardless of input image dimensions and the desired compression rate; (2) our networks are progressive, meaning that the more bits are sent, the more accurate the image reconstruction; and (3) the proposed architecture is at least as efficient as a standard purpose-trained autoencoder for a given number of bits. On a large-scale benchmark of 32 @math 32 thumbnails, our LSTM-based approaches provide better visual quality than (headerless) JPEG, JPEG2000 and WebP, with a storage size that is reduced by 10 or more.", "Deep Neural Networks trained as image auto-encoders have recently emerged as a promising direction for advancing the state-of-the-art in image compression. 
The key challenge in learning such networks is twofold: To deal with quantization, and to control the trade-off between reconstruction error (distortion) and entropy (rate) of the latent image representation. In this paper, we focus on the latter challenge and propose a new technique to navigate the rate-distortion trade-off for an image compression auto-encoder. The main idea is to directly model the entropy of the latent representation by using a context model: A 3D-CNN which learns a conditional probability model of the latent distribution of the auto-encoder. During training, the auto-encoder makes use of the context model to estimate the entropy of its representation, and the context model is concurrently updated to learn the dependencies between the symbols in the latent representation. Our experiments show that this approach, when measured in MS-SSIM, yields a state-of-the-art image compression system based on a simple convolutional auto-encoder.", "Lossy image compression is generally formulated as a joint rate-distortion optimization problem to learn encoder, quantizer, and decoder. Due to the non-differentiable quantizer and discrete entropy estimation, it is very challenging to develop a convolutional network (CNN)-based image compression system. In this paper, motivated by that the local information content is spatially variant in an image, we suggest that: (i) the bit rate of the different parts of the image is adapted to local content, and (ii) the content-aware bit rate is allocated under the guidance of a content-weighted importance map. The sum of the importance map can thus serve as a continuous alternative of discrete entropy estimation to control compression rate. The binarizer is adopted to quantize the output of encoder and a proxy function is introduced for approximating binary operation in backward propagation to make it differentiable. The encoder, decoder, binarizer and importance map can be jointly optimized in an end-to-end manner. 
And a convolutional entropy encoder is further presented for lossless compression of the importance map and binary codes. In low bit rate image compression, experiments show that our system significantly outperforms JPEG and JPEG 2000 by the structural similarity (SSIM) index, and can produce much better visual results with sharp edges, rich textures, and fewer artifacts.", "", "", "", "", "", "", "We present a machine learning-based approach to lossy image compression which outperforms all existing codecs, while running in real-time. Our algorithm typically produces files 2.5 times smaller than JPEG and JPEG 2000, 2 times smaller than WebP, and 1.7 times smaller than BPG on datasets of generic images across all quality levels. At the same time, our codec is designed to be lightweight and deployable: for example, it can encode or decode the Kodak dataset in around 10ms per image on GPU. Our architecture is an autoencoder featuring pyramidal analysis, an adaptive coding module, and regularization of the expected codelength. We also supplement our approach with adversarial training specialized towards use in a compression setting: this enables us to produce visually pleasing reconstructions for very low bitrates.", "", "", "The structural similarity image quality paradigm is based on the assumption that the human visual system is highly adapted for extracting structural information from the scene, and therefore a measure of structural similarity can provide a good approximation to perceived image quality. This paper proposes a multiscale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions. We develop an image synthesis method to calibrate the parameters that define the relative importance of different scales. Experimental comparisons demonstrate the effectiveness of the proposed method." ] }
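The "differentiable approximations of round-based quantization" mentioned in the related-work paragraph are often implemented by substituting additive uniform noise for hard rounding during training. A minimal NumPy sketch of that idea (a generic formulation, not the exact scheme of any single cited work):

```python
import numpy as np

def quantize(latents, training, rng=None):
    """Quantization proxy for end-to-end compression training: additive
    uniform noise in [-0.5, 0.5) stands in for rounding at training time
    (it keeps the mapping differentiable w.r.t. the latents), while hard
    rounding is applied at test time."""
    if training:
        rng = np.random.default_rng(0) if rng is None else rng
        return latents + rng.uniform(-0.5, 0.5, size=latents.shape)
    return np.round(latents)
```

The noisy branch keeps gradients flowing to the latents during training, while the rounded branch matches what an actual entropy coder would see at test time.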
1906.09785
2952495822
Trajectory replanning for quadrotors is essential to enable fully autonomous flight in unknown environments. Hierarchical motion planning frameworks, which combine path planning with path parameterization, are popular due to their time efficiency. However, the path planning cannot properly deal with non-static initial states of the quadrotor, which may result in non-smooth or even dynamically infeasible trajectories. In this paper, we present an efficient kinodynamic replanning framework by exploiting the advantageous properties of the B-spline, which facilitates dealing with the non-static state and guarantees safety and dynamical feasibility. Our framework starts with an efficient B-spline-based kinodynamic (EBK) search algorithm which finds a feasible trajectory with minimum control effort and time. To compensate for the discretization induced by the EBK search, an elastic optimization (EO) approach is proposed to refine the control point placement to the optimal location. Systematic comparisons against the state-of-the-art are conducted to validate the performance. Comprehensive onboard experiments using two different vision-based quadrotors are carried out showing the general applicability of the framework.
There is extensive literature on motion planning techniques for quadrotors from various perspectives, such as control-based methods @cite_42 @cite_6 @cite_23 , search-based methods @cite_19 @cite_8 @cite_37 @cite_20 , sampling-based methods @cite_12 @cite_33 @cite_34 @cite_28 @cite_30 and optimization-based methods @cite_54 @cite_0 @cite_17 @cite_41 . It is difficult to give a full literature review of all these techniques, so in this section, we choose the most relevant works and organize them into two categories, namely, hierarchical motion planning techniques and kinodynamic motion planning techniques.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_33", "@cite_8", "@cite_28", "@cite_41", "@cite_54", "@cite_42", "@cite_6", "@cite_0", "@cite_19", "@cite_23", "@cite_34", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "", "2113029345", "", "", "", "", "2414314951", "2024802078", "", "", "2963497136", "", "", "1777783943", "", "" ], "abstract": [ "", "In this paper, we present an algorithm for generating complex dynamically feasible maneuvers for autonomous vehicles traveling at high speeds over large distances. Our approach is based on performing anytime incremental search on a multi-resolution, dynamically feasible lattice state space. The resulting planner provides real-time performance and guarantees on and control of the suboptimality of its solution. We provide theoretical properties and experimental results from an implementation on an autonomous passenger vehicle that competed in, and won, the Urban Challenge competition.", "", "", "", "", "We present an online method for generating collision-free trajectories for autonomous quadrotor flight through cluttered environments. We consider the real-world scenario that the quadrotor aerial robot is equipped with limited sensing and operates in initially unknown environments. During flight, an octree-based environment representation is incrementally built using onboard sensors. Utilizing efficient operations in the octree data structure, we are able to generate free-space flight corridors consisting of large overlapping 3-D grids in an online fashion. A novel optimization-based method then generates smooth trajectories that both are bounded entirely within the safe flight corridor and satisfy higher order dynamical constraints. Our method computes valid trajectories within fractions of a second on a moderately fast computer, thus permitting online re-generation of trajectories for reaction to new obstacles. 
We build a complete quadrotor testbed with onboard sensing, state estimation, mapping, and control, and integrate the proposed method to show online navigation through complex unknown environments.", "This paper presents LQG-Obstacles, a new concept that combines linear-quadratic feedback control of mobile robots with guaranteed avoidance of collisions with obstacles. Our approach generalizes the concept of Velocity Obstacles [3] to any robotic system with a linear Gaussian dynamics model. We integrate a Kalman filter for state estimation and an LQR feedback controller into a closed-loop dynamics model of which a higher-level control objective is the “control input”. We then define the LQG-Obstacle as the set of control objectives that result in a collision with high probability. Selecting a control objective outside the LQG-Obstacle then produces collision-free motion. We demonstrate the potential of LQG-Obstacles by safely and smoothly navigating a simulated quadrotor helicopter with complex non-linear dynamics and motion and sensing uncertainty through three-dimensional environments with obstacles and narrow passages.", "", "", "In this work, we propose a search-based planning method to compute dynamically feasible trajectories for a quadrotor flying in an obstacle-cluttered environment. Our approach searches for smooth, minimum-time trajectories by exploring the map using a set of short-duration motion primitives. The primitives are generated by solving an optimal control problem and induce a finite lattice discretization on the state space which can be explored using a graph-search algorithm. The proposed approach is able to generate resolution-complete (i.e., optimal in the discretized space), safe, dynamically feasible trajectories efficiently by exploiting the explicit solution of a Linear Quadratic Minimum Time problem. It does not assume a hovering initial condition and, hence, is suitable for fast online re-planning while the robot is moving. 
Quadrotor navigation with online re-planning is demonstrated using the proposed approach in simulation and physical experiments and comparisons with trajectory generation based on state-of-the-art quadratic programming are presented.", "", "", "During the last decade, incremental sampling-based motion planning algorithms, such as the Rapidly-exploring Random Trees (RRTs), have been shown to work well in practice and to possess theoretical guarantees such as probabilistic completeness. However, no theoretical bounds on the quality of the solution obtained by these algorithms, e.g., in terms of a given cost function, have been established so far. The purpose of this paper is to fill this gap, by designing efficient incremental sampling-based algorithms with provable optimality properties. The first contribution of this paper is a negative result: it is proven that, under mild technical conditions, the cost of the best path returned by RRT converges almost surely to a non-optimal value, as the number of samples increases. Second, a new algorithm is considered, called the Rapidly-exploring Random Graph (RRG), and it is shown that the cost of the best path returned by RRG converges to the optimum almost surely. Third, a tree version of RRG is introduced, called RRT∗, which preserves the asymptotic optimality of RRG while maintaining a tree structure like RRT. The analysis of the new algorithms hinges on novel connections between sampling-based motion planning algorithms and the theory of random geometric graphs. In terms of computational complexity, it is shown that the number of simple operations required by both the RRG and RRT∗ algorithms is asymptotically within a constant factor of that required by RRT.", "", "" ] }
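The B-spline properties exploited by the EBK framework above, notably the convex-hull property that lets bounds on control points bound the whole trajectory, are easy to verify numerically for one uniform cubic segment. A sketch (the matrix is the standard uniform cubic B-spline basis; the function and variable names are mine):

```python
import numpy as np

# Standard basis matrix of one uniform cubic B-spline segment.
M = np.array([[ 1.0,  4.0,  1.0, 0.0],
              [-3.0,  0.0,  3.0, 0.0],
              [ 3.0, -6.0,  3.0, 0.0],
              [-1.0,  3.0, -3.0, 1.0]]) / 6.0

def bspline_point(ctrl, u):
    """Evaluate one uniform cubic B-spline segment at u in [0, 1] from its
    4 control points (1-D here; apply per axis for 3-D trajectories)."""
    return np.array([1.0, u, u * u, u ** 3]) @ M @ np.asarray(ctrl, float)

def inside_hull(ctrl, n=50):
    """Convex-hull property in 1-D: every sampled curve point lies between
    the minimum and maximum of the segment's control points."""
    pts = [bspline_point(ctrl, u) for u in np.linspace(0.0, 1.0, n)]
    return min(ctrl) <= min(pts) and max(pts) <= max(ctrl)
```

Applying the same containment argument to successive differences of control points (which govern velocity and acceleration) is what makes dynamical-feasibility checks cheap in B-spline-based planners.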
1906.09785
2952495822
Trajectory replanning for quadrotors is essential to enable fully autonomous flight in unknown environments. Hierarchical motion planning frameworks, which combine path planning with path parameterization, are popular due to their time efficiency. However, the path planning cannot properly deal with non-static initial states of the quadrotor, which may result in non-smooth or even dynamically infeasible trajectories. In this paper, we present an efficient kinodynamic replanning framework by exploiting the advantageous properties of the B-spline, which facilitates dealing with the non-static state and guarantees safety and dynamical feasibility. Our framework starts with an efficient B-spline-based kinodynamic (EBK) search algorithm which finds a feasible trajectory with minimum control effort and time. To compensate for the discretization induced by the EBK search, an elastic optimization (EO) approach is proposed to refine the control point placement to the optimal location. Systematic comparisons against the state-of-the-art are conducted to validate the performance. Comprehensive onboard experiments using two different vision-based quadrotors are carried out showing the general applicability of the framework.
Two pioneering works @cite_24 @cite_22 extract waypoints from the geometric path and formulate the trajectory generation problem as quadratic programming (QP) on polynomial coefficients. These methods are based on the differential flatness of the quadrotor @cite_24 . Due to the deviation of the polynomial trajectory from the straight-line collision-free path, an iterative waypoint insertion scheme is adopted @cite_27 . However, how many additional waypoints are needed is not quantified. Chen @cite_54 propose a corridor-based geometric planner based on the octree-based map structure @cite_58 . The control effort can be reduced by generating the trajectory in a series of connected cubes. Apart from that, they propose an iterative process of adding constraints on polynomial extrema to cope with the deviation from the corridor, and prove that a finite number of iterations is needed to guarantee safety. Liu @cite_44 further generalize the corridor representation to a series of connected convex polygons.
{ "cite_N": [ "@cite_22", "@cite_54", "@cite_24", "@cite_44", "@cite_27", "@cite_58" ], "mid": [ "", "2414314951", "2162991084", "2587415290", "2482392012", "2726894975" ], "abstract": [ "", "We present an online method for generating collision-free trajectories for autonomous quadrotor flight through cluttered environments. We consider the real-world scenario that the quadrotor aerial robot is equipped with limited sensing and operates in initially unknown environments. During flight, an octree-based environment representation is incrementally built using onboard sensors. Utilizing efficient operations in the octree data structure, we are able to generate free-space flight corridors consisting of large overlapping 3-D grids in an online fashion. A novel optimization-based method then generates smooth trajectories that both are bounded entirely within the safe flight corridor and satisfy higher order dynamical constraints. Our method computes valid trajectories within fractions of a second on a moderately fast computer, thus permitting online re-generation of trajectories for reaction to new obstacles. We build a complete quadrotor testbed with onboard sensing, state estimation, mapping, and control, and integrate the proposed method to show online navigation through complex unknown environments.", "We address the controller design and the trajectory generation for a quadrotor maneuvering in three dimensions in a tightly constrained setting typical of indoor environments. In such settings, it is necessary to allow for significant excursions of the attitude from the hover state and small angle approximations cannot be justified for the roll and pitch. We develop an algorithm that enables the real-time generation of optimal trajectories through a sequence of 3-D positions and yaw angles, while ensuring safe passage through specified corridors and satisfying constraints on velocities, accelerations and inputs. 
A nonlinear controller ensures the faithful tracking of these trajectories. Experimental results illustrate the application of the method to fast motion (5–10 body lengths/second) in three-dimensional slalom courses.", "There is extensive literature on using convex optimization to derive piece-wise polynomial trajectories for controlling differentially flat systems with applications to three-dimensional flight for Micro Aerial Vehicles. In this work, we propose a method to formulate trajectory generation as a quadratic program (QP) using the concept of a Safe Flight Corridor (SFC). The SFC is a collection of convex overlapping polyhedra that models free space and provides a connected path from the robot to the goal position. We derive an efficient convex decomposition method that builds the SFC from a piece-wise linear skeleton obtained using a fast graph search technique. The SFC provides a set of linear inequality constraints in the QP allowing real-time motion planning. Because the range and field of view of the robot's sensors are limited, we develop a framework of Receding Horizon Planning, which plans trajectories within a finite footprint in the local map, continuously updating the trajectory through a re-planning process. The re-planning process takes between 50 and 300 ms for a large and cluttered map. We show the feasibility of our approach, its completeness and performance, with applications to high-speed flight in both simulated and physical experiments using quadrotors.
We also present a technique for automatically selecting the amount of time allocated to each segment, and hence the quadrotor speeds along the path, as a function of a single parameter determining aggressiveness, subject to actuator constraints. The use of polynomial trajectories, coupled with the differentially flat representation of the quadrotor, eliminates the need for computationally intensive sampling and simulation in the high dimensional state space of the vehicle during motion planning. Our approach generates high-quality trajectories much faster than purely sampling-based optimal kinodynamic planning methods, but sacrifices the guarantee of asymptotic convergence to the global optimum that those methods provide. We demonstrate the performance of our algorithm by efficiently generating trajectories through challenging indoor spaces and successfully traversing them at speeds up to 8 m/s. A demonstration of our algorithm and flight performance is available at: http://groups.csail.mit.edu/rrg/quad_polynomial_trajectory_planning.", "In this paper, we present an improved octree-based mapping framework for autonomous navigation of mobile robots. Octree is best known for its memory efficiency for representing large-scale environments. However, existing implementations, including the state-of-the-art OctoMap [1], are computationally too expensive for online applications that require frequent map updates and inquiries. Utilizing the sparse nature of the environment, we propose a ray tracing method with early termination for efficient probabilistic map update. We also propose a divide-and-conquer volume occupancy inquiry method which serves as the core operation for generation of free-space configurations for optimization-based trajectory generation. We experimentally demonstrate that our method maintains the same storage advantage of the original OctoMap, while being computationally more efficient for map update and occupancy inquiry.
Finally, by integrating the proposed map structure in a complete navigation pipeline, we show autonomous quadrotor flight through complex environments." ] }
1906.09785
2952495822
Trajectory replanning for quadrotors is essential to enable fully autonomous flight in unknown environments. Hierarchical motion planning frameworks, which combine path planning with path parameterization, are popular due to their time efficiency. However, the path planning cannot properly deal with non-static initial states of the quadrotor, which may result in non-smooth or even dynamically infeasible trajectories. In this paper, we present an efficient kinodynamic replanning framework by exploiting the advantageous properties of the B-spline, which facilitates dealing with the non-static state and guarantees safety and dynamical feasibility. Our framework starts with an efficient B-spline-based kinodynamic (EBK) search algorithm which finds a feasible trajectory with minimum control effort and time. To compensate for the discretization induced by the EBK search, an elastic optimization (EO) approach is proposed to refine the control point placement to the optimal location. Systematic comparisons against the state-of-the-art are conducted to validate the performance. Comprehensive onboard experiments using two different vision-based quadrotors are carried out showing the general applicability of the framework.
Despite the fact that the efficiency of kinodynamic planning techniques keeps improving @cite_43 @cite_5, it is still prohibitively expensive for replanning. Allen et al. @cite_46 work toward a real-time kinodynamic planning framework by combining FMT* @cite_1 with a support vector machine (SVM) for the classification of the reachable set. This framework @cite_46 reduces the number of calls to the BVP solver to gain efficiency. However, the solution quality largely depends on the number of states pre-sampled. On the other hand, Liu et al. @cite_19 explore the search-based kinodynamic planning counterpart and develop efficient heuristics by solving a linear quadratic minimum time problem. Their solution is resolution-complete with respect to the discretization of the control input, and achieves near real-time performance. Note that both @cite_19 and @cite_46 use a simplified system model, i.e., a double or triple integrator, to reduce the computational complexity. However, the resultant trajectory only has limited continuity. To improve the smoothness, both @cite_19 and @cite_46 adopt trajectory reparameterization using the unconstrained QP formulation @cite_27, which may break the dynamical feasibility and safety.
{ "cite_N": [ "@cite_1", "@cite_43", "@cite_19", "@cite_27", "@cite_5", "@cite_46" ], "mid": [ "1864112212", "1509540129", "2963497136", "2482392012", "", "2317831939" ], "abstract": [ "In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm FMT*. The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a 'lazy' dynamic programming recursion on a predetermined number of probabilistically drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms chiefly RRT and multiple-query algorithms chiefly PRM, and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d + ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. 
Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*, especially in high-dimensional configuration spaces and in scenarios where collision-checking is expensive.", "We present an approach for asymptotically optimal motion planning for kinodynamic systems with arbitrary nonlinear dynamics amid obstacles. Optimal sampling-based planners like RRT*, FMT*, and BIT* when applied to kinodynamic systems require solving a two-point boundary value problem (BVP) to perform exact connections between nodes in the tree. Two-point BVPs are non-trivial to solve, hence the prevalence of alternative approaches that focus on specific instances of kinodynamic systems, use approximate solutions to the two-point BVP, or use random propagation of controls. In this work, we explore the feasibility of exploiting recent advances in numerical optimal control and optimization to solve these two-point BVPs for arbitrary kinodynamic systems and how they can be integrated with existing optimal planning algorithms. We combine BIT* with a two-point BVP solver that uses sequential quadratic programming (SQP). We consider the problem of computing minimum-time trajectories. Since the duration of trajectories is not known a-priori, we include the time-step as part of the optimization to allow SQP to optimize over the duration of the trajectory while keeping the number of discrete steps fixed for every connection attempted. Our experiments indicate that using a two-point BVP solver in the inner-loop of BIT* is competitive with the state-of-the-art in sampling-based optimal planning that explicitly avoids the use of two-point BVP solvers.", "In this work, we propose a search-based planning method to compute dynamically feasible trajectories for a quadrotor flying in an obstacle-cluttered environment. 
Our approach searches for smooth, minimum-time trajectories by exploring the map using a set of short-duration motion primitives. The primitives are generated by solving an optimal control problem and induce a finite lattice discretization on the state space which can be explored using a graph-search algorithm. The proposed approach is able to generate resolution-complete (i.e., optimal in the discretized space), safe, dynamically feasible trajectories efficiently by exploiting the explicit solution of a Linear Quadratic Minimum Time problem. It does not assume a hovering initial condition and, hence, is suitable for fast online re-planning while the robot is moving. Quadrotor navigation with online re-planning is demonstrated using the proposed approach in simulation and physical experiments and comparisons with trajectory generation based on state-of-the-art quadratic programming are presented.", "We explore the challenges of planning trajectories for quadrotors through cluttered indoor environments. We extend the existing work on polynomial trajectory generation by presenting a method of jointly optimizing polynomial path segments in an unconstrained quadratic program that is numerically stable for high-order polynomials and large numbers of segments, and is easily formulated for efficient sparse computation. We also present a technique for automatically selecting the amount of time allocated to each segment, and hence the quadrotor speeds along the path, as a function of a single parameter determining aggressiveness, subject to actuator constraints. The use of polynomial trajectories, coupled with the differentially flat representation of the quadrotor, eliminates the need for computationally intensive sampling and simulation in the high dimensional state space of the vehicle during motion planning. 
Our approach generates high-quality trajectories much faster than purely sampling-based optimal kinodynamic planning methods, but sacrifices the guarantee of asymptotic convergence to the global optimum that those methods provide. We demonstrate the performance of our algorithm by efficiently generating trajectories through challenging indoor spaces and successfully traversing them at speeds up to 8 m/s. A demonstration of our algorithm and flight performance is available at: http://groups.csail.mit.edu/rrg/quad_polynomial_trajectory_planning.", "", "The objective of this paper is to present a full-stack, real-time kinodynamic planning framework and demonstrate it on a quadrotor for collision avoidance. Specifically, the proposed framework utilizes an offline-online computation paradigm, neighborhood classification through machine learning, sampling-based motion planning with an optimal control distance metric, and trajectory smoothing to achieve real-time planning for aerial vehicles. The approach is demonstrated on a quadrotor navigating obstacles in an indoor space and stands as, arguably, one of the first demonstrations of full-online kinodynamic motion planning; exhibiting execution times under 1/3 of a second. For the quadrotor, a simplified dynamics model is used during the planning phase to accelerate online computation. A trajectory smoothing phase, which leverages the differentially flat nature of quadrotor dynamics, is then implemented to guarantee a dynamically feasible trajectory." ] }
1906.09613
2950268582
We examine a reductions approach to fair optimization and learning where a black-box optimizer is used to learn a fair model for classification or regression [, 2018, , 2018] and explore the creation of such fair models that adhere to data privacy guarantees (specifically differential privacy). For this approach, we consider two suites of use cases: the first is for optimizing convex performance measures of the confusion matrix (such as @math -mean and @math -mean); the second is for satisfying statistical definitions of algorithmic fairness (such as equalized odds, demographic parity, and the gini index of inequality). The reductions approach to fair optimization can be abstracted as the constrained group-objective optimization problem where we aim to optimize an objective that is a function of losses of individual groups, subject to some constraints. We present two differentially private algorithms: an @math exponential sampling algorithm and an @math algorithm that uses a linear optimizer to incrementally move toward the best decision. We analyze the privacy and utility guarantees of these empirical risk minimization algorithms. Compared to a previous method for ensuring differential privacy subject to a relaxed form of the equalized odds fairness constraint, the @math differentially private algorithm we present provides asymptotically better sample complexity guarantees. The technique of using an approximate linear optimizer oracle to achieve privacy might be applicable to other problems not considered in this paper. Finally, we show an algorithm-agnostic lower bound on the accuracy of any solution to the problem of @math or @math private constrained group-objective optimization.
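The "exponential sampling algorithm" named in this abstract is in the spirit of the standard exponential mechanism: output a candidate with probability proportional to exp(eps * utility / (2 * sensitivity)). A generic numpy sketch of that mechanism (my own illustration of the general idea, not the paper's specific algorithm):

```python
import numpy as np

def exponential_mechanism(scores, eps, sensitivity, rng):
    """Sample index i with probability proportional to exp(eps * scores[i] / (2 * sensitivity))."""
    logits = eps * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()          # shift for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs), probs

rng = np.random.default_rng(0)
scores = [0.1, 0.9, 0.5]            # higher utility = better candidate
_, probs = exponential_mechanism(scores, eps=2.0, sensitivity=1.0, rng=rng)
assert probs.argmax() == 1          # the best-scoring candidate is the most likely
```

Larger eps concentrates the distribution on high-utility candidates (better accuracy, weaker privacy); eps = 0 gives the uniform distribution.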
: @cite_1 initiates the study of differentially private fair learning but only considers the equalized odds definition in the reductions approach to fair learning. @cite_11 discusses an agenda for subproblems that should be considered when trying to achieve data privacy for fair learning. Finally, @cite_2 study how to learn models that are "fair" by encrypting sensitive attributes and using secure multiparty computation.
{ "cite_N": [ "@cite_2", "@cite_1", "@cite_11" ], "mid": [ "2530395818", "2902208421", "2785487418" ], "abstract": [ "We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. We enourage readers to consult the more complete manuscript on the arXiv.", "Motivated by settings in which predictive models may be required to be non-discriminatory with respect to certain attributes (such as race), but even collecting the sensitive attribute may be forbidden or restricted, we initiate the study of fair learning under the constraint of differential privacy. We design two learning algorithms that simultaneously promise differential privacy and equalized odds, a 'fairness' condition that corresponds to equalizing false positive and negative rates across protected groups. Our first algorithm is a private implementation of the equalized odds post-processing approach of [, 2016]. This algorithm is appealingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as a form of 'disparate treatment'. Our second algorithm is a differentially private version of the oracle-efficient in-processing approach of [, 2018] that can be used to find the optimal fair classifier, given access to a subroutine that can solve the original (not necessarily fair) learning problem. This algorithm is more complex but need not have access to protected group membership at test time. 
We identify new tradeoffs between fairness, accuracy, and privacy that emerge only when requiring all three properties, and show that these tradeoffs can be milder if group membership may be used at test time. We conclude with a brief experimental evaluation.", "" ] }
1906.09613
2950268582
We examine a reductions approach to fair optimization and learning where a black-box optimizer is used to learn a fair model for classification or regression [, 2018, , 2018] and explore the creation of such fair models that adhere to data privacy guarantees (specifically differential privacy). For this approach, we consider two suites of use cases: the first is for optimizing convex performance measures of the confusion matrix (such as @math -mean and @math -mean); the second is for satisfying statistical definitions of algorithmic fairness (such as equalized odds, demographic parity, and the gini index of inequality). The reductions approach to fair optimization can be abstracted as the constrained group-objective optimization problem where we aim to optimize an objective that is a function of losses of individual groups, subject to some constraints. We present two differentially private algorithms: an @math exponential sampling algorithm and an @math algorithm that uses a linear optimizer to incrementally move toward the best decision. We analyze the privacy and utility guarantees of these empirical risk minimization algorithms. Compared to a previous method for ensuring differential privacy subject to a relaxed form of the equalized odds fairness constraint, the @math differentially private algorithm we present provides asymptotically better sample complexity guarantees. The technique of using an approximate linear optimizer oracle to achieve privacy might be applicable to other problems not considered in this paper. Finally, we show an algorithm-agnostic lower bound on the accuracy of any solution to the problem of @math or @math private constrained group-objective optimization.
: In this paper, we focus on developing @math differentially private algorithms. Certain relaxations of statistical differential privacy exist. For example, in a recent work, @cite_9 show new privacy amplification theorems using Renyi differential privacy as the definition. The @math differentially private algorithms presented in this paper can all be modified to be stated in terms of Renyi differential privacy and/or concentrated differential privacy.
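To make the Renyi-DP remark concrete: for the Gaussian mechanism with L2-sensitivity 1 and noise scale sigma, the order-alpha RDP guarantee is alpha / (2 * sigma^2), and any (alpha, eps_rdp) RDP guarantee converts to (eps, delta)-DP via eps = eps_rdp + log(1/delta) / (alpha - 1). A small sketch of this standard conversion (illustrative only; not tied to the algorithms of this paper):

```python
import math

def gaussian_rdp_epsilon(alpha, sigma):
    """RDP of order alpha for the Gaussian mechanism with L2-sensitivity 1."""
    return alpha / (2.0 * sigma ** 2)

def rdp_to_dp(alpha, rdp_eps, delta):
    """Standard conversion from an (alpha, rdp_eps)-RDP guarantee to (eps, delta)-DP."""
    return rdp_eps + math.log(1.0 / delta) / (alpha - 1.0)

# Pick the RDP order that yields the tightest (eps, delta) guarantee.
sigma, delta = 4.0, 1e-5
eps = min(rdp_to_dp(a, gaussian_rdp_epsilon(a, sigma), delta)
          for a in range(2, 128))
assert eps > 0.0
```

Optimizing over the order alpha is what makes RDP accounting tighter than naive (eps, delta) composition across many mechanism invocations.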
{ "cite_N": [ "@cite_9" ], "mid": [ "2888126159" ], "abstract": [ "Many commonly used learning algorithms work by iteratively updating an intermediate solution using one or a few data points in each iteration. Analysis of differential privacy for such algorithms often involves ensuring privacy of each step and then reasoning about the cumulative privacy cost of the algorithm. This is enabled by composition theorems for differential privacy that allow releasing of all the intermediate results. In this work, we demonstrate that for contractive iterations, not releasing the intermediate results strongly amplifies the privacy guarantees. We describe several applications of this new analysis technique to solving convex optimization problems via noisy stochastic gradient descent. For example, we demonstrate that a relatively small number of non-private data points from the same distribution can be used to close the gap between private and non-private convex optimization. In addition, we demonstrate that we can achieve guarantees similar to those obtainable using the privacy-amplification-by-sampling technique in several natural settings where that technique cannot be applied." ] }
1906.09765
2952661572
Advanced driver assistance systems (ADASs) were developed to reduce the number of car accidents by issuing driver alerts or controlling the vehicle. In this paper, we tested the robustness of Mobileye, a popular external ADAS. We injected spoofed traffic signs into Mobileye to assess the influence of environmental changes (e.g., changes in color, shape, projection speed, diameter and ambient light) on the outcome of an attack. To conduct this experiment in a realistic scenario, we used a drone to carry a portable projector which projected the spoofed traffic sign on a driving car. Our experiments show that it is possible to fool Mobileye so that it interprets the drone-carried spoofed traffic sign as a real traffic sign.
In this section, we describe related work on attacks against ADASs and provide an overview of adversarial attacks. The computer vision classifier is an integral ADAS component which is used to detect traffic signs from a video stream. Many of these classifiers are trained using deep learning techniques. Several studies created adversarial instances to trick such deep learning classifiers and showed that this type of classifier is vulnerable to spoofing attacks. @cite_6 demonstrated how perturbations that are often too small to be perceptible to humans can fool deep learning models. @cite_12 showed that they could embed two traffic signs in one traffic sign with a dedicated array of lenses that causes a different traffic sign to appear depending on the angle of view. @cite_3 and @cite_8 showed that physical artifacts (e.g., stickers, graffiti) misled computer vision classifiers. In the abovementioned studies, the researchers trained dedicated models themselves and crafted instances that exploit those models using white-box techniques. Furthermore, the researchers did not show the effectiveness of the attack against an off-the-shelf ADAS. In contrast, we demonstrate our attack against the Mobileye system and mislead it so that it recognizes spoofed traffic signs using black-box techniques.
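For reference, the white-box perturbations discussed above are typically found by stepping along the sign of the input gradient of the loss (the FGSM family). A toy numpy version on a linear scorer, where the gradient is available in closed form (my own illustration; the Mobileye attack in this paper is black-box and physical, not gradient-based):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step against a linear scorer f(x) = w.x + b, true label y in {-1, +1}.

    Maximizes the loss L(x) = -y * f(x) under an L_inf budget eps:
    x_adv = x + eps * sign(dL/dx) = x + eps * sign(-y * w).
    """
    grad = -y * w                       # closed-form input gradient of L
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.5])               # f(x) = 1.5 -> predicted class +1
x_adv = fgsm_perturb(x, w, b, y=+1, eps=1.0)
assert np.sign(w @ x + b) == 1.0 and np.sign(w @ x_adv + b) == -1.0  # prediction flipped
```

For deep networks the only change is that the input gradient comes from backpropagation instead of a closed form, which is why these attacks require white-box access.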
{ "cite_N": [ "@cite_3", "@cite_12", "@cite_6", "@cite_8" ], "mid": [ "2759471388", "2788820894", "2962700793", "2764216487" ], "abstract": [ "Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier.", "Sign recognition is an integral part of autonomous cars. Any misclassification of traffic signs can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to a large-scale interruption of transportation services relying on autonomous cars. 
In this paper, we propose and examine realistic security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS). Leveraging the concept of adversarial examples, we modify innocuous signs/advertisements in the environment in such a way that they seem normal to human observers but are interpreted as the adversary's desired traffic sign by autonomous cars. Further, we pursue a fundamentally different perspective to attacking autonomous cars, motivated by the observation that the driver and vehicle-mounted camera see the environment from different angles (the camera commonly sees the road with a higher angle, e.g., from top of the car). We propose a novel attack against vehicular sign recognition systems: we create signs that change as they are viewed from different angles, and thus, can be interpreted differently by the driver and sign recognition. We extensively evaluate the proposed attacks under various conditions: different distances, lighting conditions, and camera angles. We first examine our attacks virtually, i.e., we check if the digital images of toxic signs can deceive the sign recognition system. Further, we investigate the effectiveness of attacks in real-world settings: we print toxic signs, install them in the environment, capture videos using a vehicle-mounted camera, and process them using our sign recognition pipeline.", "Deep learning is at the heart of the current rise of artificial intelligence. In the field of computer vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security. Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. 
For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks pose a serious threat to the success of deep learning in practice. This fact has recently led to a large influx of contributions in this direction. This paper presents the first comprehensive survey on adversarial attacks on deep learning in computer vision. We review the works that design adversarial attacks, analyze the existence of such attacks and propose defenses against them. To emphasize that adversarial attacks are possible in practical conditions, we separately review the contributions that evaluate adversarial attacks in the real-world scenarios. Finally, drawing on the reviewed literature, we provide a broader outlook of this research direction.", "An adversarial example is an example that has been adjusted to produce the wrong label when presented to a system at test time. If adversarial examples existed that could fool a detector, they could be used to (for example) wreak havoc on roads populated with smart vehicles. Recently, we described our difficulties creating physical adversarial stop signs that fool a detector. More recently, produced a physical adversarial stop sign that fools a proxy model of a detector. In this paper, we show that these physical adversarial stop signs do not fool two standard detectors (YOLO and Faster RCNN) in standard configuration. 's construction relies on a crop of the image to the stop sign; this crop is then resized and presented to a classifier. We argue that the cropping and resizing procedure largely eliminates the effects of rescaling and of view angle. Whether an adversarial attack is robust under rescaling and change of view direction remains moot. 
We argue that attacking a classifier is very different from attacking a detector, and that the structure of detectors - which must search for their own bounding box, and which cannot estimate that box very accurately - likely makes it difficult to make adversarial patterns. Finally, an adversarial pattern on a physical object that could fool a detector would have to be adversarial in the face of a wide family of parametric distortions (scale; view angle; box shift inside the detector; illumination; and so on). Such a pattern would be of great theoretical and practical interest. There is currently no evidence that such patterns exist." ] }
1906.09551
2953172087
In classification applications, we often want probabilistic predictions to reflect confidence or uncertainty. Dropout, a commonly used training technique, has recently been linked to Bayesian inference, yielding an efficient way to quantify uncertainty in neural network models. However, as previously demonstrated, confidence estimates computed with a naive implementation of dropout can be poorly calibrated, particularly when using convolutional networks. In this paper, through the lens of ensemble learning, we associate calibration error with the correlation between the models sampled with dropout. Motivated by this, we explore the use of structured dropout to promote model diversity and improve confidence calibration. We use the SVHN, CIFAR-10 and CIFAR-100 datasets to empirically compare model diversity and confidence errors obtained using various dropout techniques. We also show the merit of structured dropout in a Bayesian active learning application.
Dropout was first introduced as a stochastic regularization technique for NNs @cite_42 . Inspired by its success, numerous variants have since been proposed @cite_9 @cite_47 @cite_35 @cite_27 . Unlike regular dropout, most of these methods drop parts of the NN in a structured manner. For instance, DropBlock @cite_41 applies dropout to small patches of the feature map in convolutional networks, SpatialDropout @cite_30 drops entire channels, and Stochastic Depth @cite_21 drops entire ResNet blocks. These methods were proposed to boost test-time accuracy. In this paper, we show that these structured dropout techniques can also be successfully applied to obtain better confidence estimates.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_41", "@cite_9", "@cite_42", "@cite_21", "@cite_27", "@cite_47" ], "mid": [ "1936750108", "2963975324", "2890166761", "4919037", "2095705004", "2331143823", "", "" ], "abstract": [ "Recent state-of-the-art performance on human-body pose estimation has been achieved with Deep Convolutional Networks (ConvNets). Traditional ConvNet architectures include pooling and sub-sampling layers which reduce computational requirements, introduce invariance and prevent over-training. These benefits of pooling come at the cost of reduced localization accuracy. We introduce a novel architecture which includes an efficient ‘position refinement’ model that is trained to estimate the joint offset location within a small region of the image. This refinement model is jointly trained in cascade with a state-of-the-art ConvNet model [21] to achieve improved accuracy in human joint location estimation. We show that the variance of our detector approaches the variance of human annotations on the FLIC [20] dataset and outperforms all existing approaches on the MPII-human-pose dataset [1].", "We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. 
We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.", "Deep neural networks often work well when they are over-parameterized and trained with a massive amount of noise and regularization, such as weight decay and dropout. Although dropout is widely used as a regularization technique for fully connected layers, it is often less effective for convolutional layers. This lack of success of dropout for convolutional layers is perhaps due to the fact that neurons in a contiguous region in convolutional layers are strongly correlated so information can still flow through convolutional networks despite dropout. Thus a structured form of dropout is needed to regularize convolutional networks. In this paper, we introduce DropBlock, a form of structured dropout, where neurons in a contiguous region of a feature map are dropped together. Extensive experiments show that DropBlock works much better than dropout in regularizing convolutional networks. On ImageNet, DropBlock with ResNet-50 architecture achieves 77.65 accuracy, which is more than 1 improvement on the previous result of this architecture.", "We introduce DropConnect, a generalization of Dropout (, 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. 
We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models.", "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. 
We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91 on CIFAR-10).", "", "" ] }
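The structured dropout variants surveyed in the record above (DropBlock, SpatialDropout, stochastic depth) share one mechanic: a contiguous unit is dropped together rather than independent elements. As an illustrative aside, not any of the cited implementations, a DropBlock-style keep-mask can be sketched in pure Python; the block size and drop rate `gamma` here are arbitrary toy values:

```python
import random

def dropblock_mask(h, w, block_size, gamma, rng):
    """Return an h x w keep-mask in which block_size x block_size
    squares around sampled centres are dropped together (DropBlock-style)."""
    mask = [[1.0] * w for _ in range(h)]
    half = block_size // 2
    for i in range(h):
        for j in range(w):
            if rng.random() < gamma:  # sample a block centre with rate gamma
                for di in range(-half, half + 1):
                    for dj in range(-half, half + 1):
                        if 0 <= i + di < h and 0 <= j + dj < w:
                            mask[i + di][j + dj] = 0.0
    kept = sum(v for row in mask for v in row)
    # Rescale kept activations, as in inverted dropout, so the expected sum is preserved.
    scale = (h * w) / kept if kept else 0.0
    return [[v * scale for v in row] for row in mask]

# Toy usage: an 8x8 "feature map" mask with 3x3 blocks.
mask = dropblock_mask(8, 8, block_size=3, gamma=0.05, rng=random.Random(0))
```

SpatialDropout and stochastic depth are the same idea at coarser granularity: the dropped "block" is an entire channel or an entire residual block, respectively.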
1906.09551
2953172087
In classification applications, we often want probabilistic predictions to reflect confidence or uncertainty. Dropout, a commonly used training technique, has recently been linked to Bayesian inference, yielding an efficient way to quantify uncertainty in neural network models. However, as previously demonstrated, confidence estimates computed with a naive implementation of dropout can be poorly calibrated, particularly when using convolutional networks. In this paper, through the lens of ensemble learning, we associate calibration error with the correlation between the models sampled with dropout. Motivated by this, we explore the use of structured dropout to promote model diversity and improve confidence calibration. We use the SVHN, CIFAR-10 and CIFAR-100 datasets to empirically compare model diversity and confidence errors obtained using various dropout techniques. We also show the merit of structured dropout in a Bayesian active learning application.
As we discuss below, dropout can be thought of as performing approximate Bayesian inference @cite_48 @cite_46 @cite_22 @cite_32 @cite_7 and offers estimates of uncertainty. Many other approximate Bayesian inference techniques have also been proposed for NNs @cite_45 @cite_39 @cite_3 @cite_31 @cite_4 . However, these methods can demand a sophisticated implementation, are often harder to scale, and can suffer from sub-optimal performance @cite_14 . Another popular alternative for approximating the intractable posterior is Markov chain Monte Carlo (MCMC) @cite_44 . More recently, stochastic gradient versions of MCMC were proposed to allow scalability @cite_33 @cite_29 @cite_11 @cite_23 . Nevertheless, these methods are often computationally expensive and sensitive to the choice of hyper-parameters. Lastly, there have been efforts to approximate the posterior with the Laplace approximation @cite_2 @cite_28 . A related approach, SWA-Gaussian @cite_34 , is another technique for Gaussian posterior approximation using the Stochastic Weight Averaging (SWA) algorithm @cite_24 .
{ "cite_N": [ "@cite_22", "@cite_29", "@cite_3", "@cite_44", "@cite_2", "@cite_4", "@cite_48", "@cite_39", "@cite_23", "@cite_46", "@cite_7", "@cite_28", "@cite_32", "@cite_34", "@cite_14", "@cite_33", "@cite_24", "@cite_45", "@cite_31", "@cite_11" ], "mid": [ "", "", "", "2907020378", "2111051539", "", "2964059111", "", "", "2963266340", "", "", "", "2912168444", "2804017338", "2144193737", "2963173418", "2164411961", "", "" ], "abstract": [ "", "", "", "", "A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian \"evidence\" automatically embodies \"Occam's razor,\" penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.", "", "Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. 
In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs - extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and nonlinearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.", "", "", "Recurrent neural networks (RNNs) stand at the forefront of many recent developments in deep learning. Yet a major difficulty with these models is their tendency to overfit, with dropout shown to fail when applied to recurrent layers. Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout. This grounding of dropout in approximate Bayesian inference suggests an extension of the theoretical results, offering insights into the use of dropout with RNN models. We apply this new variational inference based dropout technique in LSTM and GRU models, assessing it on language modelling and sentiment analysis tasks. The new approach outperforms existing techniques, and to the best of our knowledge improves on the single model state-of-the-art in language modelling with the Penn Treebank (73.4 test perplexity). 
This extends our arsenal of variational tools in deep learning.", "", "", "", "We propose SWA-Gaussian (SWAG), a simple, scalable, and general purpose approach for uncertainty representation and calibration in deep learning. Stochastic Weight Averaging (SWA), which computes the first moment of stochastic gradient descent (SGD) iterates with a modified learning rate schedule, has recently been shown to improve generalization in deep learning. With SWAG, we fit a Gaussian using the SWA solution as the first moment and a low rank plus diagonal covariance also derived from the SGD iterates, forming an approximate posterior distribution over neural network weights; we then sample from this Gaussian distribution to perform Bayesian model averaging. We empirically find that SWAG approximates the shape of the true posterior, in accordance with results describing the stationary distribution of SGD iterates. Moreover, we demonstrate that SWAG performs well on a wide variety of computer vision tasks, including out of sample detection, calibration, and transfer learning, in comparison to many popular alternatives including MC dropout, KFAC Laplace, and temperature scaling.", "Deep learning models often have more parameters than observations, and still perform well. This is sometimes described as a paradox. In this work, we show experimentally that despite their huge number of parameters, deep neural networks can compress the data losslessly . Such a compression viewpoint originally motivated the use of in neural networks Hinton,Schmidhuber1997 . However, we show that these variational methods provide surprisingly poor compression bounds, despite being explicitly built to minimize such bounds. This might explain the relatively poor practical performance of variational methods in deep learning. 
Better encoding methods, imported from the Minimum Description Length (MDL) toolbox, yield much better compression values on deep networks.", "Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for defining distant proposals with high acceptance probabilities in a Metropolis-Hastings framework, enabling more efficient exploration of the state space than standard random-walk proposals. The popularity of such methods has grown significantly in recent years. However, a limitation of HMC methods is the required gradient computation for simulation of the Hamiltonian dynamical system--such computation is infeasible in problems involving a large sample size or streaming data. Instead, we must rely on a noisy gradient estimate computed from a subset of the data. In this paper, we explore the properties of such a stochastic gradient HMC approach. Surprisingly, the natural implementation of the stochastic approximation can be arbitrarily bad. To address this problem we introduce a variant that uses second-order Langevin dynamics with a friction term that counteracts the effects of the noisy gradient, maintaining the desired target distribution as the invariant distribution. Results on simulated data validate our theory. We also provide an application of our methods to a classification task using neural networks and to online Bayesian matrix factorization.", "Deep neural networks are typically trained by optimizing a loss function with an SGD variant, in conjunction with a decaying learning rate, until convergence. We show that simple averaging of multiple points along the trajectory of SGD, with a cyclical or constant learning rate, leads to better generalization than conventional training, with essentially no computational overhead. We interpret this result by analyzing the geometry of SGD trajectories over the loss surfaces of deep neural networks. 
Moreover, we show that this Stochastic Weight Averaging (SWA) procedure finds much broader optima than SGD, and approximates the recent Fast Geometric Ensembling (FGE) approach with a single model. Using SWA we achieve notable improvement in test accuracy over conventional SGD training on a range of state-of-the-art residual networks, PyramidNets, DenseNets, and ShakeShake networks on CIFAR-10, CIFAR-100, and ImageNet. In short, SWA is extremely easy to implement, improves generalization in deep learning, and has almost no computational overhead.", "We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.", "", "" ] }
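The MC-dropout procedure discussed in the record above amounts to keeping dropout active at test time, running several stochastic forward passes, and reading the sample mean as the prediction and the sample variance as uncertainty. A minimal library-free sketch, with a hypothetical two-layer toy network standing in for a real model:

```python
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def forward(x, W, drop_p, rng):
    """One stochastic pass: ReLU hidden layer with dropout kept ON
    (kept units are rescaled by 1/keep, i.e. inverted dropout)."""
    keep = 1.0 - drop_p
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in W[0]]
    h = [v / keep if rng.random() > drop_p else 0.0 for v in h]
    logits = [sum(wi * hi for wi, hi in zip(row, h)) for row in W[1]]
    return softmax(logits)

def mc_dropout_predict(x, W, drop_p=0.5, T=100, seed=0):
    """Average T stochastic passes; per-class variance is the uncertainty estimate."""
    rng = random.Random(seed)
    samples = [forward(x, W, drop_p, rng) for _ in range(T)]
    n = len(samples[0])
    mean = [sum(s[c] for s in samples) / T for c in range(n)]
    var = [sum((s[c] - mean[c]) ** 2 for s in samples) / T for c in range(n)]
    return mean, var
```

The calibration issue raised in the abstract is visible in this framing: if the T sampled sub-models are highly correlated, the spread of `samples` understates the true uncertainty, which is what structured dropout is meant to counteract.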
1906.09551
2953172087
In classification applications, we often want probabilistic predictions to reflect confidence or uncertainty. Dropout, a commonly used training technique, has recently been linked to Bayesian inference, yielding an efficient way to quantify uncertainty in neural network models. However, as previously demonstrated, confidence estimates computed with a naive implementation of dropout can be poorly calibrated, particularly when using convolutional networks. In this paper, through the lens of ensemble learning, we associate calibration error with the correlation between the models sampled with dropout. Motivated by this, we explore the use of structured dropout to promote model diversity and improve confidence calibration. We use the SVHN, CIFAR-10 and CIFAR-100 datasets to empirically compare model diversity and confidence errors obtained using various dropout techniques. We also show the merit of structured dropout in a Bayesian active learning application.
There are also non-Bayesian techniques for obtaining calibrated confidence estimates. For example, temperature scaling @cite_38 has been empirically demonstrated to be quite effective at calibrating the predictions of a model. A related line of work uses an ensemble of several randomly initialized NNs @cite_6 . This method, called Deep Ensembles, requires training and saving multiple NN models. It has also been demonstrated that an ensemble of snapshots of the trained model at different iterations can help obtain better uncertainty estimates @cite_13 . Compared to an explicit ensemble, this approach requires training only one model. Nevertheless, models from different iterations must all be saved in order to deploy the algorithm, which can be computationally demanding with very large models.
{ "cite_N": [ "@cite_38", "@cite_13", "@cite_6" ], "mid": [ "2964212410", "2893463703", "2963238274" ], "abstract": [ "Confidence calibration - the problem of predicting probability estimates representative of the true correctness likelihood - is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling - a single-parameter variant of Platt Scaling - is surprisingly effective at calibrating predictions.", "", "Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. 
To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet." ] }
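Temperature scaling, mentioned in the record above, divides the logits by a single scalar T fitted on a held-out set before the softmax; T > 1 softens overconfident predictions without changing the argmax. A sketch using a simple grid search over T to minimize held-out negative log-likelihood (the grid range is an arbitrary choice, not from the cited paper):

```python
import math

def softmax_T(logits, T):
    """Softmax of logits divided by temperature T."""
    z = [l / T for l in logits]
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def nll(logit_sets, labels, T):
    """Mean negative log-likelihood of true labels at temperature T."""
    return -sum(math.log(softmax_T(ls, T)[y])
                for ls, y in zip(logit_sets, labels)) / len(labels)

def fit_temperature(logit_sets, labels, grid=None):
    """Pick the temperature on a coarse grid that minimizes held-out NLL."""
    grid = grid or [0.5 + 0.25 * k for k in range(39)]  # 0.5 .. 10.0
    return min(grid, key=lambda T: nll(logit_sets, labels, T))
```

On overconfident held-out logits where some confident predictions are wrong, the fitted T comes out above 1, which is the usual calibration correction for modern networks.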
1906.09587
2950240113
Pathologists find it tedious to examine the status of the sentinel lymph node on a large number of pathological scans. The examination process of such lymph nodes, which may encompass metastasized cancer cells, is histopathologically organized. However, the task of finding metastatic tissue is gradual and often challenging. In this work, we present our deep convolutional neural network based model, validated on the PatchCamelyon (PCam) benchmark dataset for fundamental machine learning research in histopathology diagnosis. We find that our proposed model, trained with a semi-supervised learning approach using pseudo labels at the PCam level, leads to significantly better performance than a strong CNN baseline on the AUC metric.
@cite_20 proposed rotation equivariant CNNs, showing that rotation equivariance improved tumor detection on a challenging lymph node metastases dataset. The authors suggested a fully-convolutional patch-classification model that is equivariant to 90° rotations and reflection. The model showed a notable improvement on the Camelyon16 benchmark dataset @cite_13 .
{ "cite_N": [ "@cite_13", "@cite_20" ], "mid": [ "2772723798", "2806857275" ], "abstract": [ "Importance Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Objective Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin–stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists’ diagnoses in a diagnostic setting. Design, Setting, and Participants Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining were provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Exposures Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. Main Outcomes and Measures The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. Results The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. 
The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4 [95 CI, 64.3 -80.4 ]) at a mean of 0.0125 false-positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95 CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P Conclusions and Relevance In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.", "We propose a new model for digital pathology segmentation, based on the observation that histopathology images are inherently symmetric under rotation and reflection. Utilizing recent findings on rotation equivariant CNNs, the proposed model leverages these symmetries in a principled manner. We present a visual analysis showing improved stability on predictions, and demonstrate that exploiting rotation equivariance significantly improves tumor detection performance on a challenging lymph node metastases dataset. We further present a novel derived dataset to enable principled comparison of machine learning models, in combination with an initial benchmark. Through this dataset, the task of histopathology diagnosis becomes accessible as a challenging benchmark for fundamental machine learning research." ] }
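The semi-supervised approach described in the abstract above is pseudo-labeling: train on the labeled patches, predict on unlabeled ones, keep only high-confidence predictions as labels, and retrain. The selection step can be sketched as follows; the 0.95 threshold is an assumed value for illustration, not one reported by the paper:

```python
def select_pseudo_labels(probs, threshold=0.95):
    """probs: list of per-class probability vectors for unlabeled patches.
    Returns (index, argmax_label) pairs whose top confidence meets the threshold;
    everything below the threshold is left unlabeled for the next round."""
    out = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            out.append((i, p.index(conf)))
    return out

# Toy usage: only the first patch is confident enough to be pseudo-labeled.
selected = select_pseudo_labels([[0.97, 0.03], [0.60, 0.40]])
```

The threshold trades label noise against coverage: lowering it adds more pseudo-labeled patches per round but risks reinforcing the baseline model's mistakes.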
1906.09587
2950240113
Pathologists find it tedious to examine the status of the sentinel lymph node on a large number of pathological scans. The examination process of such lymph nodes, which may encompass metastasized cancer cells, is histopathologically organized. However, the task of finding metastatic tissue is gradual and often challenging. In this work, we present our deep convolutional neural network based model, validated on the PatchCamelyon (PCam) benchmark dataset for fundamental machine learning research in histopathology diagnosis. We find that our proposed model, trained with a semi-supervised learning approach using pseudo labels at the PCam level, leads to significantly better performance than a strong CNN baseline on the AUC metric.
@cite_13 assessed the performance of automated deep learning algorithms at identifying metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer and compared it with pathologists' diagnoses in a diagnostic setting. The experimental results revealed that some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation study designed to mimic routine pathology workflow; algorithm performance was comparable with that of an expert pathologist interpreting whole-slide images without time constraints.
{ "cite_N": [ "@cite_13" ], "mid": [ "2772723798" ], "abstract": [ "Importance Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Objective Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin–stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists’ diagnoses in a diagnostic setting. Design, Setting, and Participants Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining were provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Exposures Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. Main Outcomes and Measures The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. Results The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. 
The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4 [95 CI, 64.3 -80.4 ]) at a mean of 0.0125 false-positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95 CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P Conclusions and Relevance In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting." ] }
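Both the Camelyon16 evaluation above and the PCam abstract report AUC. As a self-contained aside, the area under the ROC curve equals the normalized Mann-Whitney U statistic, i.e. the probability that a random positive is scored above a random negative, which gives a short exact implementation:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.
    scores: predicted probabilities; labels: 0/1 ground truth.
    Ties between a positive and a negative score count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("AUC needs at least one positive and one negative example")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(|pos|·|neg|) pairwise form is fine for patch-level evaluation at modest scale; a sort-based O(n log n) variant is the usual choice for very large test sets.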
1906.09587
2950240113
Pathologists find it tedious to examine the status of the sentinel lymph node on a large number of pathological scans. The examination process of such lymph nodes, which may encompass metastasized cancer cells, is histopathologically organized. However, the task of finding metastatic tissue is gradual and often challenging. In this work, we present our deep convolutional neural network based model, validated on the PatchCamelyon (PCam) benchmark dataset for fundamental machine learning research in histopathology diagnosis. We find that our proposed model, trained with a semi-supervised learning approach using pseudo labels at the PCam level, leads to significantly better performance than a strong CNN baseline on the AUC metric.
@cite_12 implemented a two-stage AdaBoost-based classification for automatic prostate cancer detection and grading on hematoxylin and eosin-stained tissue images. The first stage, tissue component classification, includes automatic tessellation of an image into superpixels using a graph-cut based approach; extraction of superpixel appearance, morphometric, and geometric features; and classification of superpixels into nine tissue component types based on the extracted features using Modest AdaBoost. In the second stage, the authors classified cancer versus non-cancer and low-grade versus high-grade cancer using the tissue component labeling. The approach produced a 60-fold reduction in data size, thus increasing processing efficiency; the results showed accuracies of 90% and 85% for the cancer versus non-cancer and high-grade versus low-grade classification tasks, respectively.
{ "cite_N": [ "@cite_12" ], "mid": [ "1993760967" ], "abstract": [ "Radical prostatectomy is performed on approximately 40% of men with organ-confined prostate cancer. Pathologic information obtained from the prostatectomy specimen provides important prognostic information and guides recommendations for adjuvant treatment. The current pathology protocol in most centers involves primarily qualitative assessment. In this paper, we describe and evaluate our system for automatic prostate cancer detection and grading on hematoxylin & eosin-stained tissue images. Our approach is intended to address the dual challenges of large data size and the need for high-level tissue information about the locations and grades of tumors. Our system uses two stages of AdaBoost-based classification. The first provides high-level tissue component labeling of a superpixel image partitioning. The second uses the tissue component labeling to provide a classification of cancer versus noncancer, and low-grade versus high-grade cancer. We evaluated our system using 991 sub-images extracted from digital pathology images of 50 whole-mount tissue sections from 15 prostatectomy patients. We measured accuracies of 90% and 85% for the cancer versus noncancer and high-grade versus low-grade classification tasks, respectively. This system represents a first step toward automated cancer quantification on prostate digital histopathology imaging, which could pave the way for more accurately informed postprostatectomy patient care." ] }
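The two-stage pipeline described above can be sketched with a toy decision-stump AdaBoost: stage one labels superpixels as tissue components, and stage two classifies the whole image from the component labeling. Everything here (the 1-D features, the single "gland" component, the 0.3 fraction cutoff) is illustrative, not taken from the paper:

```python
import numpy as np

def fit_stump(X, y, w):
    """Best weighted threshold stump: (error, feature, threshold, sign)."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] < thr, sign, -sign)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost(X, y, rounds=10):
    """Plain discrete AdaBoost with stump weak learners (labels in {-1, +1})."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        err, j, thr, sign = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)
        pred = np.where(X[:, j] < thr, sign, -sign)
        w *= np.exp(-alpha * y * pred)      # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for alpha, j, thr, sign in ensemble:
        score += alpha * np.where(X[:, j] < thr, sign, -sign)
    return np.sign(score)

# Stage 1: label each superpixel as a tissue component (here, +1 = "gland").
# Stage 2: classify the image from the fraction of "gland" superpixels.
def classify_image(stage1, superpixel_feats, gland_fraction_threshold=0.3):
    labels = predict(stage1, superpixel_feats)
    return 1 if (labels == 1).mean() > gland_fraction_threshold else -1
```

The real system uses multiclass tissue labeling and richer superpixel features; this sketch only captures the cascade structure.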
1906.09587
2950240113
Pathologists find it tedious to examine the status of the sentinel lymph node on a large number of pathological scans. The examination process of such lymph nodes, which encompass metastasized cancer cells, is histopathologically organized. However, the task of finding metastatic tissues is gradual and often challenging. In this work, we present our deep convolutional neural network based model, validated on the PatchCamelyon (PCam) benchmark dataset for fundamental machine learning research in histopathology diagnosis. We find that training our proposed model with a semi-supervised learning approach using pseudo labels on PCam leads to significantly better performance than a strong CNN baseline on the AUC metric.
@cite_6 implemented deep learning algorithms for lung cancer diagnosis on the Lung Image Database Consortium (LIDC) database. The authors implemented a convolutional neural network (CNN), a deep-belief network (DBN), and a stacked denoising autoencoder (SDAE). The CNN architecture comprises eight hidden layers, with the odd-numbered layers performing convolution and the even-numbered layers performing pooling and sub-sampling. The convolutional layers employed 12, 8, and 6 feature maps, respectively, and were connected to pooling layers with 5 x 5 kernels. The DBN architecture was obtained by training and stacking four layers, with each layer holding 100 restricted Boltzmann machines (RBMs). The SDAE architecture incorporates three stacked autoencoders with 2000, 1000, and 400 hidden neurons, respectively, and a corruption level of 0.5. The highest accuracy, 0.8119, was obtained using the DBN.
{ "cite_N": [ "@cite_6" ], "mid": [ "2311857205" ], "abstract": [ "Deep learning is considered as a popular and powerful method in pattern recognition and classification. However, there are not many deep structured applications used in medical imaging diagnosis area, because large dataset is not always available for medical images. In this study we tested the feasibility of using deep learning algorithms for lung cancer diagnosis with the cases from Lung Image Database Consortium (LIDC) database. The nodules on each computed tomography (CT) slice were segmented according to marks provided by the radiologists. After down sampling and rotating we acquired 174412 samples with 52 by 52 pixel each and the corresponding truth files. Three deep learning algorithms were designed and implemented, including Convolutional Neural Network (CNN), Deep Belief Networks (DBNs), Stacked Denoising Autoencoder (SDAE). To compare the performance of deep learning algorithms with traditional computer aided diagnosis (CADx) system, we designed a scheme with 28 image features and support vector machine. The accuracies of CNN, DBNs, and SDAE are 0.7976, 0.8119, and 0.7929, respectively; the accuracy of our designed traditional CADx is 0.7940, which is slightly lower than CNN and DBNs. We also noticed that the mislabeled nodules using DBNs are 4% larger than using traditional CADx; this might result from the down sampling process losing some size information of the nodules." ] }
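As a sanity check on the architecture described above, the spatial size of the 52 x 52 nodule patches can be traced through the alternating conv/pool layers. The 5 x 5 kernel size comes from the description; the 2 x 2 pooling window and the stride-1 "valid" convolutions are assumptions made for this sketch:

```python
def conv_out(n, k):
    return n - k + 1          # 'valid' convolution, stride 1 (assumed)

def pool_out(n, p):
    return n // p             # non-overlapping pooling (assumed 2x2 below)

def stack_sizes(n, layers):
    """Trace the spatial size of a square feature map through a layer stack."""
    sizes = [n]
    for kind, k in layers:
        n = conv_out(n, k) if kind == "conv" else pool_out(n, k)
        sizes.append(n)
    return sizes

# conv(5)/pool(2) pairs, one pair per described conv layer (12, 8, 6 maps)
layers = [("conv", 5), ("pool", 2)] * 3
print(stack_sizes(52, layers))   # [52, 48, 24, 20, 10, 6, 3]
```

Under these assumptions the 52 x 52 patch ends as a 3 x 3 map, which is consistent with a small fully connected head on top.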
1906.09587
2950240113
Pathologists find it tedious to examine the status of the sentinel lymph node on a large number of pathological scans. The examination process of such lymph nodes, which encompass metastasized cancer cells, is histopathologically organized. However, the task of finding metastatic tissues is gradual and often challenging. In this work, we present our deep convolutional neural network based model, validated on the PatchCamelyon (PCam) benchmark dataset for fundamental machine learning research in histopathology diagnosis. We find that training our proposed model with a semi-supervised learning approach using pseudo labels on PCam leads to significantly better performance than a strong CNN baseline on the AUC metric.
@cite_14 first used unsupervised clustering, and then used deep neural network models guided by the clustering information to classify the breast cancer images of @cite_18 into benign and malignant classes.
{ "cite_N": [ "@cite_18", "@cite_14" ], "mid": [ "2344480160", "2791915981" ], "abstract": [ "Today, medical image analysis papers require solid experiments to prove the usefulness of proposed methods. However, experiments are often performed on data selected by the researchers, which may come from different institutions, scanners, and populations. Different evaluation measures may be used, making it difficult to compare the methods. In this paper, we introduce a dataset of 7909 breast cancer histopathology images acquired on 82 patients, which is now publicly available from http: web.inf.ufpr.br vri breast-cancer-database . The dataset includes both benign and malignant images. The task associated with this dataset is the automated classification of these images in two classes, which would be a valuable computer-aided diagnosis tool for the clinician. In order to assess the difficulty of this task, we show some preliminary results obtained with state-of-the-art image classification systems. The accuracy ranges from 80% to 85%, showing that room for improvement is left. By providing this dataset and a standardized evaluation protocol to the scientific community, we hope to gather researchers in both the medical and the machine learning field to advance toward this clinical application.", "Breast Cancer is a serious threat and one of the largest causes of death of women throughout the world. The identification of cancer largely depends on digital biomedical photography analysis such as histopathological images by doctors and physicians. Analyzing histopathological images is a nontrivial task, and decisions from investigation of these kinds of images always require specialised knowledge. However, Computer Aided Diagnosis (CAD) techniques can help the doctor make more reliable decisions. The state-of-the-art Deep Neural Network (DNN) has been recently introduced for biomedical image analysis. Normally each image contains structural and statistical information.
This paper classifies a set of biomedical breast cancer images (BreakHis dataset) using novel DNN techniques guided by structural and statistical information derived from the images. Specifically a Convolutional Neural Network (CNN), a Long-Short-Term-Memory (LSTM), and a combination of CNN and LSTM are proposed for breast cancer image classification. Softmax and Support Vector Machine (SVM) layers have been used for the decision-making stage after extracting features utilising the proposed novel DNN models. In this experiment the best Accuracy value of 91.00% is achieved on the 200X dataset, the best Precision value 96.00% is achieved on the 40X dataset, and the best F-Measure value is achieved on both the 40X and 100X datasets." ] }
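The unsupervised clustering step that guides the networks above is not detailed here; a minimal Lloyd's k-means sketch in NumPy shows how cluster assignments could be produced for such guidance (the choice of k and of the feature space is illustrative, not from the paper):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means: returns per-sample cluster labels and centers."""
    rng = np.random.default_rng(seed)
    # initialize centers from k distinct samples
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each sample to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned samples
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

The resulting labels could then serve as auxiliary (pseudo-)targets or grouping information when training the downstream classifier.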
1906.09587
2950240113
Pathologists find it tedious to examine the status of the sentinel lymph node on a large number of pathological scans. The examination process of such lymph nodes, which encompass metastasized cancer cells, is histopathologically organized. However, the task of finding metastatic tissues is gradual and often challenging. In this work, we present our deep convolutional neural network based model, validated on the PatchCamelyon (PCam) benchmark dataset for fundamental machine learning research in histopathology diagnosis. We find that training our proposed model with a semi-supervised learning approach using pseudo labels on PCam leads to significantly better performance than a strong CNN baseline on the AUC metric.
@cite_29 conducted a study utilizing results from deep learning algorithms for the detection of breast cancer metastasis in lymph nodes. The study involved six pathologists reviewing 70 slides in two modes, assisted and unassisted, wherein in the assisted mode the deep learning algorithm was used to outline regions of interest. The study found that algorithm-assisted pathologists demonstrated higher accuracy than either the algorithm or the pathologists alone. @cite_7 proposed multiple magnification feature embedding (MMFE) as an image tile prediction encoder and slide feature extractor. The method takes input image tiles at three resolutions, 256, 1024, and 4096, and scales each to 256. The authors reported 78.1%.
{ "cite_N": [ "@cite_29", "@cite_7" ], "mid": [ "2897434820", "2938313339" ], "abstract": [ "Advances in the quality of whole-slide images have set the stage for the clinical use of digital images in anatomic pathology. Along with advances in computer image analysis, this raises the possibility for computer-assisted diagnostics in pathology to improve histopathologic interpretation and clin", "In recent years, the use of convolutional neural networks has made great success in the analysis of digital pathological images. However, due to the slower running speed of the model and the large amount of data in the single image, the model based on full sampling runs very slowly. It is of great significance to optimize the speed of the model. This paper proposes a method to complete the breast cancer detection by incomplete sampling of the features of the transfer learning output without network training. This method verified on Camelyon16 dataset. The experimental results show that while ensuring the accuracy of the model, it can greatly reduce the time for model construction and use." ] }
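The multi-magnification input construction behind MMFE can be sketched shape-wise: center crops at several tile sizes, each rescaled to a common resolution and stacked. Block-average pooling stands in for the rescaling, and the function names and `out` parameter are hypothetical, not from the paper:

```python
import numpy as np

def downscale(tile, out):
    # block-average a square tile down to (out, out); the tile side must be
    # a multiple of out (true for the power-of-two sizes used here)
    f = tile.shape[0] // out
    return tile[:out * f, :out * f].reshape(out, f, out, f).mean(axis=(1, 3))

def mmfe_input(slide, cx, cy, sizes=(256, 1024, 4096), out=256):
    # center crops at several magnification-like tile sizes, all rescaled
    # to a common (out, out) resolution and stacked as one multi-view input
    views = []
    for s in sizes:
        half = s // 2
        tile = slide[cy - half:cy + half, cx - half:cx + half]
        views.append(downscale(tile, out))
    return np.stack(views)
```

Each view sees the same location at a different context scale, which is the intuition behind feeding 256/1024/4096 tiles rescaled to 256.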
1811.03073
2900004973
This paper introduces Probabilistic Chekov (p-Chekov), a chance-constrained motion planning system that can be applied to high degree-of-freedom (DOF) robots under motion uncertainty and imperfect state information. Given process and observation noise models, it can find feasible trajectories which satisfy a user-specified bound over the probability of collision. Leveraging our previous work in deterministic motion planning which integrated trajectory optimization into a sparse roadmap framework, p-Chekov shows superiority in its planning speed for high-dimensional tasks. P-Chekov incorporates a linear-quadratic Gaussian motion planning approach into the estimation of the robot state probability distribution, applies quadrature theories to waypoint collision risk estimation, and adapts risk allocation approaches to assign allowable probabilities of failure among waypoints. Unlike other existing risk-aware planners, p-Chekov can be applied to high-DOF robotic planning tasks without the convexification of the environment. The experiment results in this paper show that this p-Chekov system can effectively reduce collision risk and satisfy user-specified chance constraints in typical real-world planning scenarios for high-DOF robots.
Risk-aware extensions of sampling-based planners are also popular in the motion planning field @cite_9 @cite_15 @cite_16 . However, their applications are often limited to car-like robots in simplified environments due to their disadvantages in planning speed @cite_12 and in collision probability estimation for high-DOF robots in complex real-world environments. When the robot has high dimensionality, collision checking happens in the 3D workspace whereas planning happens in the high-dimensional configuration space. Mapping the free workspace into the configuration space is nontrivial, which hence becomes another barrier for high-dimensional risk-aware motion planning. The methods introduced in @cite_11 and @cite_22 take advantage of elliptical level sets of Gaussian distributions and the transformation of the environment to estimate waypoint collision probabilities under Gaussian noise. Nevertheless, these methods cannot be trivially extended to high-DOF applications due to the difficulty of mapping between the workspace and the configuration space.
{ "cite_N": [ "@cite_22", "@cite_9", "@cite_15", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "2087939536", "2097148013", "", "", "2899557249", "2002201291" ], "abstract": [ "We present a fast, analytical method for estimating the probability of collision of a motion plan for a mobile robot operating under the assumptions of Gaussian motion and sensing uncertainty. Estimating the probability of collision is an integral step in many algorithms for motion planning under uncertainty and is crucial for characterizing the safety of motion plans. Our method is computationally fast, enabling its use in online motion planning, and provides conservative estimates to promote safety. To improve accuracy, we use a novel method to truncate estimated a priori state distributions to account for the fact that the probability of collision at each stage along a plan is conditioned on the previous stages being collision free. Our method can be directly applied within a variety of existing motion planners to improve their performance and the quality of computed plans. We apply our method to a car-like mobile robot with second order dynamics and to a steerable medical needle in 3D and demonstrate that our method for estimating the probability of collision is orders of magnitude faster than naive Monte Carlo sampling methods and reduces estimation error by more than 25 compared to prior methods.", "Robotic systems need to be able to plan control actions that are robust to the inherent uncertainty in the real world. This uncertainty arises due to uncertain state estimation, disturbances, and modeling errors, as well as stochastic mode transitions such as component failures. Chance-constrained control takes into account uncertainty to ensure that the probability of failure, due to collision with obstacles, for example, is below a given threshold. In this paper, we present a novel method for chance-constrained predictive stochastic control of dynamic systems. 
The method approximates the distribution of the system state using a finite number of particles. By expressing these particles in terms of the control variables, we are able to approximate the original stochastic control problem as a deterministic one; furthermore, the approximation becomes exact as the number of particles tends to infinity. This method applies to arbitrary noise distributions, and for systems with linear or jump Markov linear dynamics, we show that the approximate problem can be solved using efficient mixed-integer linear-programming techniques. We also introduce an important weighting extension that enables the method to deal with low-probability mode transitions such as failures. We demonstrate in simulation that the new method is able to control an aircraft in turbulence and can control a ground vehicle while being robust to brake failures.", "", "", "We present an evaluation of several representative sampling-based and optimization-based motion planners, and then introduce an integrated motion planning system which incorporates recent advances in trajectory optimization into a sparse roadmap framework. Through experiments in 4 common application scenarios with 5000 test cases each, we show that optimization-based or sampling-based planners alone are not effective for realistic problems where fast planning times are required. To the best of our knowledge, this is the first work that presents such a systematic and comprehensive evaluation of state-of-the-art motion planners, which are based on a significant amount of experiments. We then combine different stand-alone planners with trajectory optimization. The results show that the combination of our sparse roadmap and trajectory optimization provides superior performance over other standard sampling-based planners combinations. 
By using a multi-query roadmap instead of generating completely new trajectories for each planning problem, our approach allows for extensions such as persistent control policy information associated with a trajectory across planning problems. Also, the sub-optimality resulting from the sparsity of roadmap, as well as the unexpected disturbances from the environment, can both be overcome by the real-time trajectory optimization process.", "In this paper we present LQG-MP (linear-quadratic Gaussian motion planning), a new approach to robot motion planning that takes into account the sensors and the controller that will be used during the execution of the robot’s path. LQG-MP is based on the linear-quadratic controller with Gaussian models of uncertainty, and explicitly characterizes in advance (i.e. before execution) the a priori probability distributions of the state of the robot along its path. These distributions can be used to assess the quality of the path, for instance by computing the probability of avoiding collisions. Many methods can be used to generate the required ensemble of candidate paths from which the best path is selected; in this paper we report results using rapidly exploring random trees (RRT). We study the performance of LQG-MP with simulation experiments in three scenarios: (A) a kinodynamic car-like robot, (B) multi-robot planning with differential-drive robots, and (C) a 6-DOF serial manipulator. We also present a method that applies Kalman smoothing to make paths Ck-continuous and apply LQG-MP to precomputed roadmaps using a variant of Dijkstra’s algorithm to efficiently find high-quality paths." ] }
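For linear dynamics with additive Gaussian noise, the LQG-MP idea of characterizing a-priori state distributions along a path reduces to the covariance recursion Sigma_{t+1} = A Sigma_t A^T + Q. A minimal sketch, with the feedback-controller and Kalman-filter terms of full LQG-MP deliberately omitted:

```python
import numpy as np

def propagate_covariances(A, Q, Sigma0, steps):
    """A-priori state covariances along a nominal trajectory (open-loop;
    the LQG feedback and estimator terms of full LQG-MP are omitted)."""
    Sigmas = [np.asarray(Sigma0, dtype=float)]
    for _ in range(steps):
        # covariance update for x' = A x + w, w ~ N(0, Q)
        Sigmas.append(A @ Sigmas[-1] @ A.T + Q)
    return Sigmas
```

The resulting per-waypoint covariances are what a risk-aware planner would feed into a collision-probability estimate for each waypoint.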
1811.03073
2900004973
This paper introduces Probabilistic Chekov (p-Chekov), a chance-constrained motion planning system that can be applied to high degree-of-freedom (DOF) robots under motion uncertainty and imperfect state information. Given process and observation noise models, it can find feasible trajectories which satisfy a user-specified bound over the probability of collision. Leveraging our previous work in deterministic motion planning which integrated trajectory optimization into a sparse roadmap framework, p-Chekov shows superiority in its planning speed for high-dimensional tasks. P-Chekov incorporates a linear-quadratic Gaussian motion planning approach into the estimation of the robot state probability distribution, applies quadrature theories to waypoint collision risk estimation, and adapts risk allocation approaches to assign allowable probabilities of failure among waypoints. Unlike other existing risk-aware planners, p-Chekov can be applied to high-DOF robotic planning tasks without the convexification of the environment. The experiment results in this paper show that this p-Chekov system can effectively reduce collision risk and satisfy user-specified chance constraints in typical real-world planning scenarios for high-DOF robots.
In order to address these difficulties in high-DOF robot motion planning tasks, the p-Chekov introduced in this paper combines ideas from the Chekov "roadmap + TrajOpt" planner @cite_12 , Linear-Quadratic Gaussian motion planning (LQG-MP) @cite_11 , quadrature theories @cite_19 , and risk allocation @cite_6 @cite_4 . P-Chekov improves upon the isolated Chekov by extracting conflicts from the planning failures in order to guide TrajOpt @cite_8 to find better solutions. It applies the LQG-MP approach to estimate the state probability distributions, but differs from LQG-MP in that it aims at generating feasible trajectories in real-time that satisfy a specified risk bound and meet a local optimality criterion, instead of choosing the minimum-risk trajectory among candidate trajectories. P-Chekov relies on a quadrature-based sampling method to estimate the collision probability for individual waypoints, which mitigates the inaccuracy of random sampling and avoids the difficulty of mapping between configuration space and workspace.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_6", "@cite_19", "@cite_12", "@cite_11" ], "mid": [ "", "2142224528", "2138693458", "2171369888", "2899557249", "2002201291" ], "abstract": [ "", "We present a new optimization-based approach for robotic motion planning among obstacles. Like CHOMP (Covariant Hamiltonian Optimization for Motion Planning), our algorithm can be used to find collision-free trajectories from naïve, straight-line initializations that might be in collision. At the core of our approach are (a) a sequential convex optimization procedure, which penalizes collisions with a hinge loss and increases the penalty coefficients in an outer loop as necessary, and (b) an efficient formulation of the no-collisions constraint that directly considers continuous-time safety. Our algorithm is implemented in a software package called TrajOpt. We report results from a series of experiments comparing TrajOpt with CHOMP and randomized planners from OMPL, with regard to planning time and path quality. We consider motion planning for 7 DOF robot arms, 18 DOF full-body robots, statically stable walking motion for the 34 DOF Atlas humanoid robot, and physical experiments with the 18 DOF PR2. We also apply TrajOpt to plan curvature-constrained steerable needle trajectories in the SE(3) configuration space and multiple non-intersecting curved channels within 3D-printed implants for intracavitary brachytherapy. Details, videos, and source code are freely available at: http: rll.berkeley.edu trajopt ijrr.", "When controlling dynamic systems, such as mobile robots in uncertain environments, there is a trade off between risk and reward. For example, a race car can turn a corner faster by taking a more challenging path. This paper proposes a new approach to planning a control sequence with a guaranteed risk bound. Given a stochastic dynamic model, the problem is to find a control sequence that optimizes a performance metric, while satisfying chance constraints i.e.
constraints on the upper bound of the probability of failure. We propose a two-stage optimization approach, with the upper stage optimizing the risk allocation and the lower stage calculating the optimal control sequence that maximizes reward. In general, the upper-stage is a non-convex optimization problem, which is hard to solve. We develop a new iterative algorithm for this stage that efficiently computes the risk allocation with a small penalty to optimality. The algorithm is implemented and tested on the autonomous underwater vehicle (AUV) depth planning problem, and demonstrates a substantial improvement in computation cost and suboptimality, compared to the prior arts.", "", "We present an evaluation of several representative sampling-based and optimization-based motion planners, and then introduce an integrated motion planning system which incorporates recent advances in trajectory optimization into a sparse roadmap framework. Through experiments in 4 common application scenarios with 5000 test cases each, we show that optimization-based or sampling-based planners alone are not effective for realistic problems where fast planning times are required. To the best of our knowledge, this is the first work that presents such a systematic and comprehensive evaluation of state-of-the-art motion planners, which are based on a significant amount of experiments. We then combine different stand-alone planners with trajectory optimization. The results show that the combination of our sparse roadmap and trajectory optimization provides superior performance over other standard sampling-based planners combinations. By using a multi-query roadmap instead of generating completely new trajectories for each planning problem, our approach allows for extensions such as persistent control policy information associated with a trajectory across planning problems. 
Also, the sub-optimality resulting from the sparsity of roadmap, as well as the unexpected disturbances from the environment, can both be overcome by the real-time trajectory optimization process.", "In this paper we present LQG-MP (linear-quadratic Gaussian motion planning), a new approach to robot motion planning that takes into account the sensors and the controller that will be used during the execution of the robot’s path. LQG-MP is based on the linear-quadratic controller with Gaussian models of uncertainty, and explicitly characterizes in advance (i.e. before execution) the a priori probability distributions of the state of the robot along its path. These distributions can be used to assess the quality of the path, for instance by computing the probability of avoiding collisions. Many methods can be used to generate the required ensemble of candidate paths from which the best path is selected; in this paper we report results using rapidly exploring random trees (RRT). We study the performance of LQG-MP with simulation experiments in three scenarios: (A) a kinodynamic car-like robot, (B) multi-robot planning with differential-drive robots, and (C) a 6-DOF serial manipulator. We also present a method that applies Kalman smoothing to make paths Ck-continuous and apply LQG-MP to precomputed roadmaps using a variant of Dijkstra’s algorithm to efficiently find high-quality paths." ] }
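The quadrature-based collision-risk estimation can be illustrated in one dimension: the probability that a Gaussian-distributed waypoint coordinate falls inside an obstacle interval, estimated with Gauss-Hermite quadrature and checked against the closed form. p-Chekov's actual scheme operates on high-DOF robot configurations and geometry, so this 1-D version is only a sketch under simplified assumptions:

```python
import math
import numpy as np

def collision_prob_quadrature(mu, sigma, lo, hi, order=100):
    """P(lo < x < hi) for x ~ N(mu, sigma^2), estimated with Gauss-Hermite
    quadrature over the indicator of the 'obstacle' interval (lo, hi)."""
    t, w = np.polynomial.hermite.hermgauss(order)
    x = mu + math.sqrt(2.0) * sigma * t        # change of variables
    inside = ((x > lo) & (x < hi)).astype(float)
    return float(np.sum(w * inside) / math.sqrt(math.pi))

def collision_prob_exact(mu, sigma, lo, hi):
    """Closed form via the Gaussian CDF, for comparison."""
    cdf = lambda v: 0.5 * (1.0 + math.erf((v - mu) / (sigma * math.sqrt(2.0))))
    return cdf(hi) - cdf(lo)
```

Because the integrand is a discontinuous indicator, the quadrature estimate is only approximate at moderate orders; in a planner this per-waypoint probability would then be compared against the waypoint's allocated share of the overall risk bound.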
1811.03120
2899763852
This paper tackles the challenge of colorizing grayscale images. We take a deep convolutional neural network approach, and choose to take the angle of classification, working on a finite set of possible colors. Similarly to a recent paper, we implement a loss and a prediction function that favor realistic, colorful images rather than "true" ones. We show that a rather lightweight architecture inspired by the U-Net, and trained on a reasonable number of pictures of landscapes, achieves satisfactory results on this specific subset of pictures. We show that data augmentation significantly improves the performance and robustness of the model, and provide visual analysis of the prediction confidence. We show an application of our model, extending the task to video colorization. We suggest a way to smooth color predictions across frames, without the need to train a recurrent network designed for sequential inputs.
Authors of @cite_2 have shown that connections between hidden layers of a bottleneck neural network can greatly enhance performance by injecting locational information into the upsampling process and improving the gradient flow. We hope that applying this method will allow us to train a colorizing model more quickly and efficiently, with fewer parameters and on a smaller dataset.
{ "cite_N": [ "@cite_2" ], "mid": [ "2952232639" ], "abstract": [ "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL ." ] }
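The skip connections of @cite_2 can be sketched shape-wise in NumPy: decoder features are upsampled and concatenated with the same-resolution encoder features along the channel axis. Nearest-neighbour upsampling stands in here for the learned up-convolution of the actual architecture:

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbour upsampling: (C, H, W) -> (C, 2H, 2W)
    return x.repeat(2, axis=1).repeat(2, axis=2)

def skip_connect(decoder_feat, encoder_feat):
    # U-Net style skip: upsample the decoder features, then concatenate the
    # same-resolution encoder features along the channel axis
    up = upsample2x(decoder_feat)
    return np.concatenate([up, encoder_feat], axis=0)
```

The concatenated tensor carries both the coarse semantic features and the fine localization cues, which is exactly the "locational information" injected into the upsampling path.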
1811.03120
2899763852
This paper tackles the challenge of colorizing grayscale images. We take a deep convolutional neural network approach, and choose to take the angle of classification, working on a finite set of possible colors. Similarly to a recent paper, we implement a loss and a prediction function that favor realistic, colorful images rather than "true" ones. We show that a rather lightweight architecture inspired by the U-Net, and trained on a reasonable number of pictures of landscapes, achieves satisfactory results on this specific subset of pictures. We show that data augmentation significantly improves the performance and robustness of the model, and provide visual analysis of the prediction confidence. We show an application of our model, extending the task to video colorization. We suggest a way to smooth color predictions across frames, without the need to train a recurrent network designed for sequential inputs.
Some of the interesting challenges not yet tackled in the literature involve videos. General information-propagation frameworks for video involving bilateral networks, as discussed in @cite_10 , could be a good starting point for implementing consistent colorization of picture sequences, if we manage to embed the colorizing information and then propagate it like any other information. The work in @cite_4 is also interesting since it tackles video stylization by grouping the frames, choosing a representative frame for each group, and using the output of the network on that frame as a guideline, which greatly enhances temporal consistency. However, adapting such an algorithm to the much more complex task of image colorization is far beyond the scope of this project.
{ "cite_N": [ "@cite_10", "@cite_4" ], "mid": [ "2562457735", "2136154655" ], "abstract": [ "We propose a technique that propagates information forward through video data. The method is conceptually simple and can be applied to tasks that require the propagation of structured information, such as semantic labels, based on video content. We propose a Video Propagation Network that processes video frames in an adaptive manner. The model is applied online: it propagates information forward without the need to access future frames. In particular we combine two components, a temporal bilateral network for dense and video adaptive filtering, followed by a spatial network to refine features and increased flexibility. We present experiments on video object segmentation and semantic video segmentation and show increased performance comparing to the best previous task-specific methods, while having favorable runtime. Additionally we demonstrate our approach on an example regression task of color propagation in a grayscale video.", "Colorization is a computer-assisted process of adding color to a monochrome image or movie. The process typically involves segmenting images into regions and tracking these regions across image sequences. Neither of these tasks can be performed reliably in practice; consequently, colorization requires considerable user intervention and remains a tedious, time-consuming, and expensive task.In this paper we present a simple colorization method that requires neither precise image segmentation, nor accurate region tracking. Our method is based on a simple premise; neighboring pixels in space-time that have similar intensities should have similar colors. We formalize this premise using a quadratic cost function and obtain an optimization problem that can be solved efficiently using standard techniques. In our approach an artist only needs to annotate the image with a few color scribbles, and the indicated colors are automatically propagated in both space and time to produce a fully colorized image or sequence. We demonstrate that high quality colorizations of stills and movie clips may be obtained from a relatively modest amount of user input." ] }
1811.03120
2899763852
This paper tackles the challenge of colorizing grayscale images. We take a deep convolutional neural network approach, and choose to take the angle of classification, working on a finite set of possible colors. Similarly to a recent paper, we implement a loss and a prediction function that favor realistic, colorful images rather than "true" ones. We show that a rather lightweight architecture inspired by the U-Net, and trained on a reasonable amount of pictures of landscapes, achieves satisfactory results on this specific subset of pictures. We show that data augmentation significantly improves the performance and robustness of the model, and provide visual analysis of the prediction confidence. We show an application of our model, extending the task to video colorization. We suggest a way to smooth color predictions across frames, without the need to train a recurrent network designed for sequential inputs.
One promising way to perform image colorization is to learn meaningful color-related representations of the images (which often requires a very deep, heavy, or pretrained architecture, as in @cite_8 ) and then to ensure their temporal consistency.
{ "cite_N": [ "@cite_8" ], "mid": [ "2950064337" ], "abstract": [ "We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning." ] }
1811.03316
2963482940
Compressed sensing has been employed to reduce the pilot overhead for channel estimation in wireless communication systems. Particularly, structured turbo compressed sensing (STCS) provides a generic framework for structured sparse signal recovery with reduced computational complexity and storage requirement. In this paper, we consider the problem of massive multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) channel estimation in a frequency division duplexing (FDD) downlink system. By exploiting the structured sparsity in the angle-frequency domain (AFD) and angle-delay domain (ADD) of the massive MIMO-OFDM channel, we represent the channel by using AFD and ADD probability models and design message-passing-based channel estimators under the STCS framework. Several STCS-based algorithms are proposed for massive MIMO-OFDM channel estimation by exploiting the structured sparsity. We show that, compared with other existing algorithms, the proposed algorithms have a much faster convergence speed and achieve competitive error performance under a wide range of simulation settings.
The recent work @cite_3 explored, from a Bayesian perspective, the dynamic compressive sensing (CS) problem of recovering sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear measurements. Compared to @cite_3 , the novelty of our work is as follows. First, we extend STCS to the channel estimation of massive MIMO-OFDM by exploiting structured channel sparsity in the angle-delay domain, whereas @cite_3 solves the dynamic CS problem of recovering sparse, correlated, time-varying signals. Second, the probabilistic model in this paper captures the clustering property of the channel coefficient support in the angle-frequency and angle-delay domains, whereas the Markov process in @cite_3 models the time-varying coefficient support and the time-varying coefficient amplitudes. Third, we use the Turbo-CS algorithm, whereas @cite_3 recovers the sparse signal with approximate message passing (AMP). The Turbo-CS algorithm has lower complexity and converges faster than the AMP algorithm @cite_35 .
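The flavor of these message-passing recovery iterations can be made concrete with a minimal sketch. The following is an illustrative iterative soft-thresholding (ISTA) loop on synthetic data, a simpler relative of the AMP and Turbo-CS iterations discussed above, not an implementation from either paper; all dimensions, the step size, and the threshold `lam` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse-recovery instance: k-sparse x, Gaussian sensing matrix A.
n, m, k = 100, 40, 5
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

def ista(y, A, lam=0.05, iters=500):
    """Iterative soft-thresholding for min_z 0.5*||y - A z||^2 + lam*||z||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        g = z + A.T @ (y - A @ z) / L      # gradient step toward the data
        z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return z

xhat = ista(y, A)
rel_err = np.linalg.norm(xhat - x) / np.linalg.norm(x)
print(f"relative reconstruction error: {rel_err:.3f}")
```

AMP and Turbo-CS replace the plain gradient/threshold pair with carefully debiased (extrinsic) updates, which is what buys them their faster convergence.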
{ "cite_N": [ "@cite_35", "@cite_3" ], "mid": [ "2604470139", "1965384704" ], "abstract": [ "Turbo compressed sensing (Turbo-CS) is an efficient iterative algorithm for sparse signal recovery with partial orthogonal sensing matrices. In this paper, we extend the Turbo-CS algorithm to solve compressed sensing problems involving a more general signal structure, including compressive image recovery and low-rank matrix recovery. A main difficulty for such an extension is that the original Turbo-CS algorithm requires a prior knowledge of the signal distribution that is usually unavailable in practice. To overcome this difficulty, we propose to redesign the Turbo-CS algorithm by employing a generic denoiser that does not depend on the prior distribution, and hence the name denoising-based Turbo-CS (D-Turbo-CS). We then derive the extrinsic information for a generic denoiser by following the Turbo-CS principle. Based on that, we optimize the parametric extrinsic denoisers to minimize the output mean-square error (MSE). Explicit expressions are derived for the extrinsic SURE-LET denoiser used in image denoising and also for the singular value thresholding denoiser used in low-rank matrix denoising. We find that the dynamics of D-Turbo-CS can be well described by a scaler recursion called MSE evolution, similar to the case for Turbo-CS. Numerical results demonstrate that D-Turbo-CS considerably outperforms the counterpart algorithms in both reconstruction quality and running time.", "In this work the dynamic compressive sensing (CS) problem of recovering sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear measurements is explored from a Bayesian perspective. While there has been a handful of previously proposed Bayesian dynamic CS algorithms in the literature, the ability to perform inference on high-dimensional problems in a computationally efficient manner remains elusive. In response, we propose a probabilistic dynamic CS signal model that captures both amplitude and support correlation structure, and describe an approximate message passing algorithm that performs soft signal estimation and support detection with a computational complexity that is linear in all problem dimensions. The algorithm, DCS-AMP, can perform either causal filtering or non-causal smoothing, and is capable of learning model parameters adaptively from the data through an expectation-maximization learning procedure. We provide numerical evidence that DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety of operating conditions. We further describe the result of applying DCS-AMP to two real dynamic CS datasets, as well as a frequency estimation task, to bolster our claim that DCS-AMP is capable of offering state-of-the-art performance and speed on real-world high-dimensional problems." ] }
1811.03316
2963482940
Compressed sensing has been employed to reduce the pilot overhead for channel estimation in wireless communication systems. Particularly, structured turbo compressed sensing (STCS) provides a generic framework for structured sparse signal recovery with reduced computational complexity and storage requirement. In this paper, we consider the problem of massive multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) channel estimation in a frequency division duplexing (FDD) downlink system. By exploiting the structured sparsity in the angle-frequency domain (AFD) and angle-delay domain (ADD) of the massive MIMO-OFDM channel, we represent the channel by using AFD and ADD probability models and design message-passing-based channel estimators under the STCS framework. Several STCS-based algorithms are proposed for massive MIMO-OFDM channel estimation by exploiting the structured sparsity. We show that, compared with other existing algorithms, the proposed algorithms have a much faster convergence speed and achieve competitive error performance under a wide range of simulation settings.
Compared with the recent work @cite_29 , the novelty of our work consists of the following aspects. First, we employ a Markov model to efficiently exploit the clustered sparsity of the massive MIMO channel in the angle-frequency and angle-delay domains, whereas @cite_29 exploits the sparsity structure with the nearest neighbor sparsity pattern learning (NNSPL) algorithm first proposed in @cite_12 . Note that @cite_19 uses the NNSPL algorithm to exploit the angle-domain sparsity of the channel, @cite_8 uses it to exploit the delay-domain sparsity, and @cite_29 presents a comprehensive version of the NNSPL algorithm that jointly handles the angle-delay-domain sparsity. Second, STCS-FS achieves a considerably lower mean square error (MSE) than the NNSPL algorithm with frequency support, while STCS-DS performs slightly better than the NNSPL algorithm with delay support. Third, the computational complexity of STCS is much lower than that of NNSPL: the per-iteration complexity of STCS is lower, and STCS converges much faster.
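To illustrate how a Markov model induces clustered supports, the following sketch samples a 0/1 support pattern from a two-state Markov chain; the chain length and transition probabilities are illustrative assumptions, not values from this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def markov_support(n, p_on=0.05, p_off=0.3):
    """Sample a 0/1 support pattern from a two-state Markov chain.

    p_on is the 0 -> 1 transition probability and p_off the 1 -> 0
    probability; small values of both yield long runs of identical
    states, i.e. clustered supports.
    """
    s = np.zeros(n, dtype=int)
    state = 0
    for i in range(n):
        u = rng.random()
        if state == 0 and u < p_on:
            state = 1
        elif state == 1 and u < p_off:
            state = 0
        s[i] = state
    return s

support = markov_support(200)
ones = int(support.sum())
# a cluster starts wherever a 1 follows a 0 (or at index 0)
starts = int(np.sum((support[1:] == 1) & (support[:-1] == 0)) + support[0])
print(f"{ones} active entries grouped into {starts} clusters")
```

An i.i.d. Bernoulli support with the same density would scatter the active entries; the Markov chain instead groups them into few contiguous runs, which is the structure the message-passing estimator exploits.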
{ "cite_N": [ "@cite_19", "@cite_29", "@cite_12", "@cite_8" ], "mid": [ "2582192222", "2795172492", "2230684453", "2508523245" ], "abstract": [ "In this letter, we reveal that in the massive multiple-input multiple-output system with large bandwidth, sub-channels of orthogonal frequency division multiplexing share approximately sparse common support due to the frequency difference of subcarriers. We use the approximate message passing with nearest neighbor sparsity pattern learning (AMP-NNSPL) algorithm to adaptively learn the underlying structure for improving the accuracy of channel estimation, where the learning strategy is newly derived by solving an optimization problem. In addition, the performance of the AMP-NNSPL is well predicted by the state evolution. Simulation results demonstrate the superiority of the algorithm in systems with large bandwidth.", "In millimeter wave (mm-wave) massive multiple-input multiple-output (MIMO) systems, acquiring accurate channel state information is essential for efficient beamforming (BF) and multiuser interference cancellation, which is a challenging task since a low signal-to-noise ratio is encountered before BF in large antenna arrays. The mm-wave channel exhibits a 3-D clustered structure in the virtual angle of arrival (AOA), angle of departure (AOD), and delay domain that is imposed by the effect of power leakage, angular spread, and cluster duration. We extend the approximate message passing (AMP) with a nearest neighbor pattern learning algorithm for improving the attainable channel estimation performance, which adaptively learns and exploits the clustered structure in the 3-D virtual AOA-AOD-delay domain. The proposed method is capable of approaching the performance bound described by the state evolution based on vector AMP framework, and our simulation results verify its superiority in mm-wave systems associated with a broad bandwidth.", "We consider the problem of recovering clustered sparse signals with no prior knowledge of the sparsity pattern. Beyond simple sparsity, signals of interest often exhibits an underlying sparsity pattern which, if leveraged, can improve the reconstruction performance. However, the sparsity pattern is usually unknown a priori. Inspired by the idea of k-nearest neighbor (k-NN) algorithm, we propose an efficient algorithm termed approximate message passing with nearest neighbor sparsity pattern learning (AMP-NNSPL), which learns the sparsity pattern adaptively. AMP-NNSPL specifies a flexible spike and slab prior on the unknown signal and, after each AMP iteration, sets the sparse ratios as the average of the nearest neighbor estimates via expectation maximization (EM). Experimental results on both synthetic and real data demonstrate the superiority of our proposed algorithm both in terms of reconstruction performance and computational complexity.", "To address the challenging problem of downlink channel estimation with low pilot overhead in massive multiple-input multiple-output (MIMO) systems, an empirical Bayesian block expectation propagation (EP) algorithm is proposed. Specifically, a block Bernoulli–Gaussian prior channel model is proposed to fit the underlying block sparsity, and a block EP algorithm is derived to estimate the channels more accurately by clustering all the channel taps that pertain to the same delay, while the model parameters are learned by minimizing the Bethe free energy. Simulation results show that the proposed algorithm achieves considerable reduction of pilot overhead in a massive MIMO system with tens of antennas, while maintaining superior channel estimation performance." ] }
1811.03204
2900395099
The log-concave maximum likelihood estimator (MLE) problem answers: for a set of points @math , which log-concave density maximizes their likelihood? We present a characterization of the log-concave MLE that leads to an algorithm with runtime @math to compute a log-concave distribution whose log-likelihood is at most @math less than that of the MLE, where @math is a parameter of the problem bounded by the @math norm of the vector of log-likelihoods of the MLE evaluated at @math .
Later work characterized the finite-sample complexity of log-concave density estimation. It was first shown that no method can get closer than squared Hellinger distance @math (and, indirectly, total variation distance) with @math samples, where @math hides logarithmic factors in @math @cite_5 . Later work gave methods for learning log-concave distributions in total variation and squared Hellinger distance with bounded sample complexity. A first method obtains sample complexity @math with respect to total variation distance @cite_4 ; this work did not use the log-concave MLE. It was later shown that the log-concave MLE is also effective for learning in squared Hellinger distance @cite_0 : the log-concave MLE converges to squared Hellinger distance @math with @math samples with high probability, showing that the log-concave MLE is nearly optimal in this metric. Both of these works are nonconstructive and do not provide efficient algorithms.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_4" ], "mid": [ "2788831527", "2962732336", "2397544591" ], "abstract": [ "We study the problem of learning multivariate log-concave densities with respect to a global loss function. We obtain the first upper bound on the sample complexity of the maximum likelihood estimator (MLE) for a log-concave density on @math , for all @math . Prior to this work, no finite sample upper bound was known for this estimator in more than @math dimensions. In more detail, we prove that for any @math and @math , given @math samples drawn from an unknown log-concave density @math on @math , the MLE outputs a hypothesis @math that with high probability is @math -close to @math , in squared Hellinger loss. A sample complexity lower bound of @math was previously known for any learning algorithm that achieves this guarantee. We thus establish that the sample complexity of the log-concave MLE is near-optimal, up to an @math factor.", "The research of Richard J. Samworth was supported by an EPSRC Early Career Fellowship and a grant from the Leverhulme Trust.", "We study the problem of estimating multivariate log-concave probability density functions. We prove the first sample complexity upper bound for learning log-concave densities on @math , for all @math . Prior to our work, no upper bound on the sample complexity of this learning problem was known for the case of @math . In more detail, we give an estimator that, for any @math and @math , draws @math samples from an unknown target log-concave density on @math , and outputs a hypothesis that (with high probability) is @math -close to the target, in total variation distance. Our upper bound on the sample complexity comes close to the known lower bound of @math for this problem." ] }
1811.03301
2808353256
Dynamic security analysis is an important problem in power systems for ensuring safe operation and stable power supply even when certain faults occur. Whether such faults are caused by vulnerabilities of system components, physical attacks, or cyber-attacks that are more related to cyber-security, they eventually affect the physical stability of a power system. Examples of the loss of physical stability include the Northeast Blackout of 2003 in North America and the 2015 system-wide blackout in Ukraine. The nonlinear hybrid nature, that is, nonlinear continuous dynamics integrated with discrete switching, and the high degree of freedom of power system dynamics make it challenging to conduct dynamic security analysis. In this article, we use the hybrid automaton model to describe the dynamics of a power system and mainly deal with the index-1 differential-algebraic equation models of the continuous dynamics in different discrete states. The analysis problem is formulated as a reachability problem of the associated hybrid model. A sampling-based algorithm is then proposed that integrates modeling and randomized simulation of the hybrid dynamics to search for a feasible execution connecting an initial state of the post-fault system and a target set in the desired operation mode. The proposed method enables the use of existing power system simulators for the synthesis of discrete switching and control strategies through randomized simulation. The effectiveness and performance of the proposed approach are demonstrated with an application to the dynamic security analysis of the New England 39-bus benchmark power system, which exhibits hybrid dynamics. In addition to evaluating the dynamic security, the proposed method searches for a feasible strategy to ensure the dynamic security of the system in the face of disruptions.
The threat model of power systems has been studied in the context of CPS security under a unified framework @cite_58 , which comprises threats, vulnerabilities, attacks, and controls from the security perspective, and cyber, physical, and cyber-physical components from the CPS-components perspective. Cyber-security of power systems @cite_66 plays a significant role in managing power grid operations, due to the integration of information and communication technologies into power systems. The mechanisms of typical cyber-attacks, such as false data injection attacks, data integrity attacks, and DoS attacks, have been studied using different power system models, together with detection and prevention strategies. In addition, testbeds @cite_42 have been established to evaluate vulnerabilities of smart grids.
{ "cite_N": [ "@cite_58", "@cite_42", "@cite_66" ], "mid": [ "2579603034", "2551233363", "2618656593" ], "abstract": [ "With the exponential growth of cyber-physical systems (CPSs), new security challenges have emerged. Various vulnerabilities, threats, attacks, and controls have been introduced for the new generation of CPS. However, there lacks a systematic review of the CPS security literature. In particular, the heterogeneity of CPS components and the diversity of CPS systems have made it difficult to study the problem with one generalized model. In this paper, we study and systematize existing research on CPS security under a unified framework. The framework consists of three orthogonal coordinates: 1) from the security perspective, we follow the well-known taxonomy of threats, vulnerabilities, attacks and controls; 2) from the CPS components perspective, we focus on cyber, physical, and cyber-physical components; and 3) from the CPS systems perspective, we explore general CPS features as well as representative systems (e.g., smart grids, medical CPS, and smart cars). The model can be both abstract to show general interactions of components in a CPS application, and specific to capture any details when needed. By doing so, we aim to build a model that is abstract enough to be applicable to various heterogeneous CPS applications; and to gain a modular view of the tightly coupled CPS components. Such abstract decoupling makes it possible to gain a systematic understanding of CPS security, and to highlight the potential sources of attacks and ways of protection. With this intensive literature review, we attempt to summarize the state-of-the-art on CPS security, provide researchers with a comprehensive list of references, and also encourage the audience to further explore this emerging field.", "An increasing interest is emerging on the development of smart grid cyber-physical system testbeds. As new communication and information technologies emerge, innovative cyber-physical system testbeds need to leverage realistic and scalable platforms. Indeed, the interdisciplinary structure of the smart grid concept compels heterogeneous testbeds with different capabilities. There is a significant need to evaluate new concepts and vulnerabilities as opposed to counting on solely simulation studies especially using hardware-in-the-loop test platforms. In this paper, we present a comprehensive survey on cyber-physical smart grid testbeds aiming to provide a taxonomy and insightful guidelines for the development as well as to identify the key features and design decisions while developing future smart grid testbeds. First, this survey provides a four step taxonomy based on smart grid domains, research goals, test platforms, and communication infrastructure. Then, we introduce an overview with a detailed discussion and an evaluation on existing testbeds from the literature. Finally, we conclude this paper with a look on future trends and developments in cyber-physical smart grid testbed research.", "This paper presents the application of cybersecurity to the operation and control of distributed electric power systems. In particular, the paper emphasizes the role of cybersecurity in the operation of microgrids and analyzes the dependencies of microgrid control and operation on information and communication technologies for cybersecurity. The paper discusses common cyber vulnerabilities in distributed electric power systems and presents the implications of cyber incidents on physical processes in microgrids. The paper examines the impacts of potential risks attributed to cyberattacks on microgrids and presents the affordable technologies for mitigating such risks. In addition, the paper presents a minimax-regret approach for minimizing the impending risks in managing microgrids. The paper also presents the opportunities provided by software-defined networking technologies to enhance the security of microgrid operations. It is concluded that cybersecurity could play a significant role in managing microgrid operations as microgrids strive for a higher degree of resilience as they supply power services to customers." ] }
1811.03301
2808353256
Dynamic security analysis is an important problem in power systems for ensuring safe operation and stable power supply even when certain faults occur. Whether such faults are caused by vulnerabilities of system components, physical attacks, or cyber-attacks that are more related to cyber-security, they eventually affect the physical stability of a power system. Examples of the loss of physical stability include the Northeast Blackout of 2003 in North America and the 2015 system-wide blackout in Ukraine. The nonlinear hybrid nature, that is, nonlinear continuous dynamics integrated with discrete switching, and the high degree of freedom of power system dynamics make it challenging to conduct dynamic security analysis. In this article, we use the hybrid automaton model to describe the dynamics of a power system and mainly deal with the index-1 differential-algebraic equation models of the continuous dynamics in different discrete states. The analysis problem is formulated as a reachability problem of the associated hybrid model. A sampling-based algorithm is then proposed that integrates modeling and randomized simulation of the hybrid dynamics to search for a feasible execution connecting an initial state of the post-fault system and a target set in the desired operation mode. The proposed method enables the use of existing power system simulators for the synthesis of discrete switching and control strategies through randomized simulation. The effectiveness and performance of the proposed approach are demonstrated with an application to the dynamic security analysis of the New England 39-bus benchmark power system, which exhibits hybrid dynamics. In addition to evaluating the dynamic security, the proposed method searches for a feasible strategy to ensure the dynamic security of the system in the face of disruptions.
Generally, state-of-the-art DSA tools rely heavily on deterministic computation methods @cite_4 , such as the aforementioned EEAC, BCU, and SBS. Security limits are found by exhaustively examining many predefined contingencies using a rigorous approach. The advantage of such methods is accurate results, obtained at the expense of computation time, since precise models of the system dynamics are used as the basis. With the development of data science, probabilistic methods, including machine learning and artificial intelligence @cite_43 @cite_51 , have recently been proposed again for DSA, exploiting accumulated knowledge and data. Probabilistic models of the causal relations can be trained from simulation data, and the resulting models are then employed for on-line DSA. The advantage of such methods is rapid results, but the reliability of the learned models is still under study, and simulation still plays a fundamental role in supplying the data. In addition, state-of-the-art tools and methods do not provide a rigorous, concrete strategy for maintaining system security.
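The simulate-then-learn workflow described above can be sketched in a few lines: label synthetic operating points with a stand-in "simulator", train a simple classifier offline, and query it for fast on-line assessment. Everything here (the feature model, the secure/insecure rule, the k-nearest-neighbour classifier) is a hypothetical toy for illustration, not the method of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_label(X):
    """Stand-in for an expensive time-domain simulation: an operating
    point is labeled secure (1) iff its total loading is below a threshold."""
    return (X.sum(axis=1) < 2.0).astype(int)

# Offline phase: build a labeled database of synthetic operating points.
X_train = rng.random((500, 4))            # e.g. normalized bus loadings
y_train = simulated_label(X_train)

def knn_predict(x, X, y, k=5):
    """Majority vote among the k nearest training points."""
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return int(y[idx].sum() * 2 > k)

# On-line phase: fast assessment of new operating points without simulation.
X_test = rng.random((200, 4))
y_test = simulated_label(X_test)
acc = np.mean([knn_predict(x, X_train, y_train) == t
               for x, t in zip(X_test, y_test)])
print(f"learned-model DSA accuracy: {acc:.2f}")
```

The trade-off noted in the text is visible even in this toy: the on-line prediction is nearly instantaneous, but its reliability rests entirely on the coverage of the simulated training database.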
{ "cite_N": [ "@cite_51", "@cite_43", "@cite_4" ], "mid": [ "1502028139", "2549549046", "2017694201" ], "abstract": [ "Real-time transient stability prediction is an essential and challenging step of response-based transient stability emergency controls. Machine learning methods including decision trees and artificial neural networks have the potential to be applied to the problem. To counter the inefficiency of common machine learning methods in learning new information, an incremental learning algorithm is employed to train an artificial neural network for real-time transient stability prediction. The resulted learning framework can readily be integrated into on-line dynamic security assessment. The effectiveness of such prediction model is demonstrated by the simulation results of a practical power system.", "In recent years, complex operating conditions have greatly reduced the predictability of electric grid operations and hence, there is an urgent need to improve grid security more than ever before. The best approach would be to improve grid intelligence rather than simply hardening the grid. An implementation of dynamic security assessment (DSA) would require carrying out time-domain simulations that are computationally too involved to be performed in real-time. This paper presents an approach using machine learning (ML) techniques that would enable the grid to assess its current dynamic state instantaneously. A database consisting of steady-state operating points of the power system and outputs of time-domain simulations is generated in order to train and test the algorithm. A few operating points termed as “landmarks” are identified through a ranking methodology proposed in this paper. Finally, it is shown that prediction accuracy improves through use of such landmarks.", "Security refers to the degree of risk in a power system's ability to survive imminent disturbances (contingencies) without interruption to customer service. It relates to robustness of the system to imminent disturbances and, hence, depends on the system operating condition as well as the contingent probability of disturbances. DSA refers to the analysis required to determine whether or not a power system can meet specified reliability and security criteria in both transient and steady-state time frames for all credible contingencies. Ensuring security in the new environment requires the use of advanced power system analysis tools capable of comprehensive security assessment with due consideration to practical operating criteria. These tools must be able to model the system appropriately, compute security limits in a fast and accurate manner, and provide meaningful displays to system operators. Online dynamics security assessment can provide the first line of defense against widespread system disturbances by quickly scanning the system for potential problems and providing operators with actionable results. With the development of emerging technologies, such as wide-area PMs and ISs, online DSA is expected to become a dominant weapon against system blackouts." ] }
1811.03301
2808353256
Dynamic security analysis is an important problem in power systems for ensuring safe operation and stable power supply even when certain faults occur. Whether such faults are caused by vulnerabilities of system components, physical attacks, or cyber-attacks that are more related to cyber-security, they eventually affect the physical stability of a power system. Examples of the loss of physical stability include the Northeast Blackout of 2003 in North America and the 2015 system-wide blackout in Ukraine. The nonlinear hybrid nature, that is, nonlinear continuous dynamics integrated with discrete switching, and the high degree of freedom of power system dynamics make it challenging to conduct dynamic security analysis. In this article, we use the hybrid automaton model to describe the dynamics of a power system and mainly deal with the index-1 differential-algebraic equation models of the continuous dynamics in different discrete states. The analysis problem is formulated as a reachability problem of the associated hybrid model. A sampling-based algorithm is then proposed that integrates modeling and randomized simulation of the hybrid dynamics to search for a feasible execution connecting an initial state of the post-fault system and a target set in the desired operation mode. The proposed method enables the use of existing power system simulators for the synthesis of discrete switching and control strategies through randomized simulation. The effectiveness and performance of the proposed approach are demonstrated with an application to the dynamic security analysis of the New England 39-bus benchmark power system, which exhibits hybrid dynamics. In addition to evaluating the dynamic security, the proposed method searches for a feasible strategy to ensure the dynamic security of the system in the face of disruptions.
In the field of robotic path and motion planning, there have been many variants of RRT, such as RRT-connect @cite_59 , RRT @math @cite_48 , and linear-quadratic-regulation-based RRT @math (LQR-RRT @math ) @cite_25 , as well as implemented libraries of sampling-based algorithms, such as the Open Motion Planning Library (OMPL) @cite_26 . RRT-connect improves the efficiency of RRT by incrementally building search trees rooted at both the start and the goal configurations. RRT @math generates an asymptotically optimal trajectory by rewiring the tree as it discovers new lower-cost paths to nodes already in the tree. LQR-RRT @math finds optimal plans in domains with complex or underactuated dynamics by locally linearizing the dynamics and applying linear quadratic regulation. However, none of them is directly applicable to the power system application, due to the large scale and nonlinearity of power system dynamics.
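A minimal RRT sketch makes the basic extend loop concrete; the variants above all build on this skeleton. This is a toy 2-D, obstacle-free version with illustrative step-size and tolerance values, not an implementation from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

def rrt(start, goal, step=0.1, goal_tol=0.15, iters=5000):
    """Grow a tree from `start` by steering the nearest node toward random
    samples in the unit square; stop once a node lands near `goal`."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    nodes, parent = [start], [0]
    for _ in range(iters):
        sample = rng.random(2)                       # uniform random sample
        dists = [np.linalg.norm(n - sample) for n in nodes]
        i = int(np.argmin(dists))                    # nearest tree node
        d = sample - nodes[i]
        new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-12)
        nodes.append(new)                            # extend toward the sample
        parent.append(i)
        if np.linalg.norm(new - goal) < goal_tol:    # goal reached: backtrack
            path, j = [new], len(nodes) - 1
            while j != 0:
                j = parent[j]
                path.append(nodes[j])
            return path[::-1]
    return None

path = rrt([0.05, 0.05], [0.9, 0.9])
print(f"found a path with {len(path)} nodes")
```

RRT-connect would grow a second tree from the goal, and RRT @math would add a rewiring pass over nearby nodes after each extension; a collision check before appending `new` is where obstacle or dynamic constraints would enter.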
{ "cite_N": [ "@cite_48", "@cite_26", "@cite_25", "@cite_59" ], "mid": [ "1777783943", "", "2166077797", "2141664020" ], "abstract": [ "During the last decade, incremental sampling-based motion planning algorithms, such as the Rapidly-exploring Random Trees (RRTs), have been shown to work well in practice and to possess theoretical guarantees such as probabilistic completeness. However, no theoretical bounds on the quality of the solution obtained by these algorithms, e.g., in terms of a given cost function, have been established so far. The purpose of this paper is to fill this gap, by designing efficient incremental samplingbased algorithms with provable optimality properties. The first contribution of this paper is a negative result: it is proven that, under mild technical conditions, the cost of the best path returned by RRT converges almost surely to a non-optimal value, as the number of samples increases. Second, a new algorithm is considered, called the Rapidly-exploring Random Graph (RRG), and it is shown that the cost of the best path returned by RRG converges to the optimum almost surely. Third, a tree version of RRG is introduced, called RRT∗, which preserves the asymptotic optimality of RRG while maintaining a tree structure like RRT. The analysis of the new algorithms hinges on novel connections between sampling-based motion planning algorithms and the theory of random geometric graphs. In terms of computational complexity, it is shown that the number of simple operations required by both the RRG and RRT∗ algorithms is asymptotically within a constant factor of that required by RRT.", "", "The RRT* algorithm has recently been proposed as an optimal extension to the standard RRT algorithm [1]. However, like RRT, RRT* is difficult to apply in problems with complicated or underactuated dynamics because it requires the design of a two domain-specific extension heuristics: a distance metric and node extension method. 
We propose automatically deriving these two heuristics for RRT* by locally linearizing the domain dynamics and applying linear quadratic regulation (LQR). The resulting algorithm, LQR-RRT*, finds optimal plans in domains with complex or underactuated dynamics without requiring domain-specific design choices. We demonstrate its application in domains that are successively torque-limited, underactuated, and in belief space.", "A simple and efficient randomized algorithm is presented for solving single-query path planning problems in high-dimensional configuration spaces. The method works by incrementally building two rapidly-exploring random trees (RRTs) rooted at the start and the goal configurations. The trees each explore space around them and also advance towards each other through, the use of a simple greedy heuristic. Although originally designed to plan motions for a human arm (modeled as a 7-DOF kinematic chain) for the automatic graphic animation of collision-free grasping and manipulation tasks, the algorithm has been successfully applied to a variety of path planning problems. Computed examples include generating collision-free motions for rigid objects in 2D and 3D, and collision-free manipulation motions for a 6-DOF PUMA arm in a 3D workspace. Some basic theoretical analysis is also presented." ] }
1811.03130
2900289476
Over the past few years, extensive anecdotal evidence has emerged suggesting the involvement of state-sponsored actors (or "trolls") in online political campaigns with the goal of manipulating public opinion and sowing discord. Recently, Twitter and Reddit released ground truth data about Russian and Iranian state-sponsored actors that were active on their platforms. In this paper, we analyze these ground truth datasets across several axes to understand how these actors operate, how they evolve over time, who their targets are, how their strategies changed over time, and what their influence on the Web's information ecosystem is. Among other things, we find: a) campaigns of these actors were influenced by real-world events; b) these actors employed different tactics and had different targets over time, thus their automated detection is not straightforward; and c) Russian trolls were clearly pro-Trump, whereas Iranian trolls were anti-Trump. Finally, using Hawkes Processes, we quantified the influence that these actors had on four Web communities: Reddit, Twitter, 4chan's Politically Incorrect board ( pol ), and Gab, finding that Russian trolls were more influential than Iranians, with the exception of pol .
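The Hawkes Processes used above to quantify cross-community influence are self-exciting point processes: each event transiently raises the probability of further events. A minimal sketch of the univariate conditional intensity, lambda(t) = mu + alpha * sum over past events of exp(-beta * (t - t_i)); the parameter and event values below are illustrative, not those fitted in the paper:

```python
import math

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity of a univariate Hawkes process: a base rate
    `mu` plus an exponentially decaying bump for every past event, so
    each event (e.g. a troll post) transiently raises the rate of
    follow-on events (e.g. reposts in another community)."""
    return mu + alpha * sum(math.exp(-beta * (t - ti))
                            for ti in events if ti < t)

events = [1.0, 1.5, 4.0]                 # illustrative event times
base = hawkes_intensity(0.5, events)     # before any event: just mu
burst = hawkes_intensity(1.6, events)    # right after two events: elevated
```

In the multivariate version used for influence estimation, each community has its own intensity, and the fitted cross-excitation coefficients measure how much events in one community raise the event rate in another.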
We now review previous work on opinion manipulation as well as politically motivated disinformation on the Web. Opinion manipulation. The practice of swaying opinion in Web communities has become a hot-button issue as malicious actors intensify their efforts to push their subversive agenda. @cite_36 study how users create multiple accounts, called sockpuppets, that actively participate in some communities with the goal of manipulating users' opinions. @cite_34 show that trolls can indeed manipulate users' opinions in online forums. In follow-up work, Mihaylov and Nakov @cite_25 highlight two types of trolls: those paid to operate and those called out as such by other users. Then, Volkova and Bell @cite_15 aim to predict the deletion of Twitter accounts on the grounds that they are trolls, focusing on accounts that shared content related to the Russia-Ukraine crisis.
{ "cite_N": [ "@cite_36", "@cite_15", "@cite_34", "@cite_25" ], "mid": [ "2604283646", "2512157846", "2252009349", "2513959045" ], "abstract": [ "In online discussion communities, users can interact and share information and opinions on a wide variety of topics. However, some users may create multiple identities, or sockpuppets, and engage in undesired behavior by deceiving others or manipulating discussions. In this work, we study sockpuppetry across nine discussion communities, and show that sockpuppets differ from ordinary users in terms of their posting behavior, linguistic traits, as well as social network structure. Sockpuppets tend to start fewer discussions, write shorter posts, use more personal pronouns such as I'', and have more clustered ego-networks. Further, pairs of sockpuppets controlled by the same individual are more likely to interact on the same discussion at the same time than pairs of ordinary users. Our analysis suggests a taxonomy of deceptive behavior in discussion communities. Pairs of sockpuppets can vary in their deceptiveness, i.e., whether they pretend to be different users, or their supportiveness, i.e., if they support arguments of other sockpuppets controlled by the same user. We apply these findings to a series of prediction tasks, notably, to identify whether a pair of accounts belongs to the same underlying user or not. Altogether, this work presents a data-driven view of deception in online discussion communities and paves the way towards the automatic detection of sockpuppets.", "", "The emergence of user forums in electronic news media has given rise to the proliferation of opinion manipulation trolls. Finding such trolls automatically is a hard task, as there is no easy way to recognize or even to define what they are; this also makes it hard to get training and testing data. We solve this issue pragmatically: we assume that a user who is called a troll by several people is likely to be one. 
We experiment with different variations of this definition, and in each case we show that we can train a classifier to distinguish a likely troll from a non-troll with very high accuracy, 82-95%, thanks to our rich feature set.", "There are different definitions of what a troll is. Certainly, a troll can be somebody who teases people to make them angry, or somebody who offends people, or somebody who wants to dominate any single discussion, or somebody who tries to manipulate people’s opinion (sometimes for money), etc. The last definition is the one that dominates the public discourse in Bulgaria and Eastern Europe, and this is our focus in this paper. In our work, we examine two types of opinion manipulation trolls: paid trolls that have been revealed from leaked “reputation management contracts” and “mentioned trolls” that have been called such by several different people. We show that these definitions are sensible: we build two classifiers that can distinguish a post by such a paid troll from one by a non-troll with 81-82% accuracy; the same classifier achieves 81-82% accuracy on so-called mentioned troll vs. non-troll posts." ] }
1811.03130
2900289476
Over the past few years, extensive anecdotal evidence has emerged suggesting the involvement of state-sponsored actors (or "trolls") in online political campaigns with the goal of manipulating public opinion and sowing discord. Recently, Twitter and Reddit released ground truth data about Russian and Iranian state-sponsored actors that were active on their platforms. In this paper, we analyze these ground truth datasets across several axes to understand how these actors operate, how they evolve over time, who their targets are, how their strategies changed over time, and what their influence on the Web's information ecosystem is. Among other things, we find: a) campaigns of these actors were influenced by real-world events; b) these actors employed different tactics and had different targets over time, thus their automated detection is not straightforward; and c) Russian trolls were clearly pro-Trump, whereas Iranian trolls were anti-Trump. Finally, using Hawkes Processes, we quantified the influence that these actors had on four Web communities: Reddit, Twitter, 4chan's Politically Incorrect board ( pol ), and Gab, finding that Russian trolls were more influential than Iranians, with the exception of pol .
Howard and Kollanyi @cite_42 study the role of bots in Twitter conversations during the 2016 Brexit referendum. They find that most tweets are in favor of Brexit, that there are bots with various levels of automation, and that less than 1% of sampled accounts generate almost a third of all messages. Hegelich and Janetzko @cite_4 investigate whether bots on Twitter are used as political actors. By exposing and analyzing 1.7K bots on Twitter during the Russia-Ukraine conflict, they uncover their political agenda and show that bots exhibit various behaviors, e.g., trying to hide their identity, promoting topics through the use of hashtags, and retweeting messages with particularly interesting content. @cite_18 aim to predict users that are likely to spread information from state-sponsored actors, while @cite_14 focus on the Facebook platform and analyze ads shared by Russian trolls in order to find the cues that make them effective. Finally, a large body of work focuses on social bots @cite_6 @cite_46 @cite_1 @cite_20 @cite_28 and their role in spreading political disinformation, highlighting that they can manipulate the public's opinion at a large scale, thus potentially affecting the outcome of political elections.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_28", "@cite_42", "@cite_1", "@cite_6", "@cite_46", "@cite_20" ], "mid": [ "2950434393", "2894309501", "2521978756", "2595521492", "", "1837843568", "2550819555", "2263846226", "2724523750" ], "abstract": [ "Social media, once hailed as a vehicle for democratization and the promotion of positive social change across the globe, are under attack for becoming a tool of political manipulation and spread of disinformation. A case in point is the alleged use of trolls by Russia to spread malicious content in Western elections. This paper examines the Russian interference campaign in the 2016 US presidential election on Twitter. Our aim is twofold: first, we test whether predicting users who spread trolls' content is feasible in order to gain insight on how to contain their influence in the future; second, we identify features that are most predictive of users who either intentionally or unintentionally play a vital role in spreading this malicious content. We collected a dataset with over 43 million elections-related posts shared on Twitter between September 16 and November 9, 2016, by about 5.7 million users. This dataset includes accounts associated with the Russian trolls identified by the US Congress. Proposed models are able to very accurately identify users who spread the trolls' content (average AUC score of 96 , using 10-fold validation). We show that political ideology, bot likelihood scores, and some activity-related account meta data are the most predictive features of whether a user spreads trolls' content or not.", "One of the key aspects of the United States democracy is free and fair elections that allow for a peaceful transfer of power from one President to the next. The 2016 US presidential election stands out due to suspected foreign influence before, during, and after the election. A significant portion of that suspected influence was carried out via social media. 
In this paper, we look specifically at 3,500 Facebook ads allegedly purchased by the Russian government. These ads were released on May 10, 2018 by the US Congress House Intelligence Committee. We analyzed the ads using natural language processing techniques to determine textual and semantic features associated with the most effective ones. We clustered the ads over time into the various campaigns and the labeled parties associated with them. We also studied the effectiveness of Ads on an individual, campaign and party basis. The most effective ads tend to have less positive sentiment, focus on past events and are more specific and personalized in nature. The more effective campaigns also show such similar characteristics. The campaigns' duration and promotion of the Ads suggest a desire to sow division rather than sway the election.", "A considerable amount of data in social networks like Twitter is not generated by humans but by automatic programs (bots). Some of these bots are mimicking humans (socialbots) and can hardly be identified. In this article, we analyze a social botnet involved in the Ukrainian Russian conflict. Based on text mining and unsupervised learning, we can identify three different behaviors: mimicry, window dressing, and reverberation.", "Increasing evidence suggests that a growing amount of social media content is generated by autonomous entities known as social bots. In this work we present a framework to detect such entities on Twitter. We leverage more than a thousand features extracted from public data and meta-data about users: friends, tweet content and sentiment, network patterns, and activity time series. We benchmark the classification framework by using a publicly available dataset of Twitter bots. This training data is enriched by a manually annotated collection of active Twitter users that include both humans and bots of varying sophistication. 
Our models yield high accuracy and agreement with each other and can detect bots of different nature. Our estimates suggest that between 9% and 15% of active Twitter accounts are bots. Characterizing ties among accounts, we observe that simple bots tend to interact with bots that exhibit more human-like behaviors. Analysis of content flows reveals retweet and mention strategies adopted by bots to interact with different target groups. Using clustering analysis, we characterize several subclasses of accounts, including spammers, self promoters, and accounts that post content from connected applications.", "", "Today's social bots are sophisticated and sometimes menacing. Indeed, their presence can endanger online ecosystems as well as our society.", "Social media have been extensively praised for increasing democratic discussion on social issues related to policy and politics. However, what happens when these powerful communication tools are exploited to manipulate online discussion, to change the public perception of political entities, or even to try affecting the outcome of political elections? In this study we investigated how the presence of social media bots, algorithmically driven entities that on the surface appear as legitimate users, affect political discussion around the 2016 U.S. Presidential election. By leveraging state-of-the-art social bot detection algorithms, we uncovered a large fraction of user population that may not be human, accounting for a significant portion of generated content (about one-fifth of the entire conversation). We inferred political partisanships from hashtag adoption, for both humans and bots, and studied spatio-temporal communication, political support dynamics, and influence mechanisms by discovering the level of network embeddedness of the bots.
Our findings suggest that the presence of social media bots can indeed negatively affect democratic political discussion rather than improving it, which in turn can potentially alter public opinion and endanger the integrity of the Presidential election.", "While most online social media accounts are controlled by humans, these platforms also host automated agents called social bots or sybil accounts. Recent literature reported on cases of social bots imitating humans to manipulate discussions, alter the popularity of users, pollute content and spread misinformation, and even perform terrorist propaganda and recruitment actions. Here we present BotOrNot, a publicly-available service that leverages more than one thousand features to evaluate the extent to which a Twitter account exhibits similarity to the known characteristics of social bots. Since its release in May 2014, BotOrNot has served over one million requests via our website and APIs.", "Recent accounts from researchers, journalists, as well as federal investigators, reached a unanimous conclusion: social media are systematically exploited to manipulate and alter public opinion. Some disinformation campaigns have been coordinated by means of bots, social media accounts controlled by computer scripts that try to disguise themselves as legitimate human users. In this study, we describe one such operation that occurred in the run up to the 2017 French presidential election. We collected a massive Twitter dataset of nearly 17 million posts, posted between 27 April and 7 May 2017 (Election Day). We then set to study the MacronLeaks disinformation campaign: By leveraging a mix of machine learning and cognitive behavioral modeling techniques, we separated humans from bots, and then studied the activities of the two groups independently, as well as their interplay. We provide a characterization of both the bots and the users who engaged with them, and oppose it to those users who didn’t. 
Prior interests of disinformation adopters pinpoint to the reasons of scarce success of this campaign: the users who engaged with MacronLeaks are mostly foreigners with pre-existing interest in alt-right topics and alternative news media, rather than French users with diverse political views. Concluding, anomalous account usage patterns suggest the possible existence of a black market for reusable political disinformation bots." ] }
1811.03291
2900085483
Text classification is a fundamental task in NLP applications. Recent research in this field has largely been divided into two major sub-fields: learning representations on one side, and learning deeper models, both sequential and convolutional, which in turn connect back to the representation, on the other. We posit that the stronger the representation, the simpler the classifier models needed to achieve high performance. In this paper we propose a completely novel direction for text classification research, wherein we convert text to a representation very similar to an image, such that any deep network able to handle images is equally able to handle text. We take a deeper look at the representation of documents as images and subsequently utilize very simple convolution-based models taken as-is from the computer vision domain. This image can be cropped, re-scaled, re-sampled, and augmented just like any other image to work with most of the state-of-the-art large convolution-based models designed to handle large image datasets. We show impressive results on some of the latest benchmarks in the related fields. We perform transfer learning experiments, both from the text domain to the text domain and from the image domain to the text domain. We believe this is a paradigm shift from the way document understanding and text classification have traditionally been done, and that it will drive numerous novel research ideas in the community.
Another work that closely follows the ideas we discuss here is the neural attention-based sentence summarization work of @cite_23 . They propose an attention framework to identify an abstractive summary for a given text block. The cost function is an NLM model, with the input pair encoding done by a simple attention mechanism. Again, the simple attention mechanism comes close to our work, but the NLM model makes the representation highly task-specific.
{ "cite_N": [ "@cite_23" ], "mid": [ "1843891098" ], "abstract": [ "Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines." ] }
1811.03255
2900064592
Recent studies have shown that frame-level deep speaker features can be derived from a deep neural network whose training target is to discriminate speakers from a short speech segment. By pooling the frame-level features, utterance-level representations, called d-vectors, can be derived and used in the automatic speaker verification (ASV) task. This simple average pooling, however, is inherently sensitive to the phonetic content of the utterance. An interesting idea borrowed from machine translation is the attention-based mechanism, where the contribution of an input word to the translation at a particular time is weighted by an attention score. This score reflects the relevance of the input word to the present translation. We can use the same idea to align utterances with different phonetic content. This paper proposes a phonetic-attention scoring approach for d-vector systems. In this approach, an attention score is computed for each frame pair. This score reflects the similarity of the two frames in phonetic content and is used to weight the contribution of that frame pair in the utterance-based scoring. This new scoring approach emphasizes frame pairs with similar phonetic content, which essentially provides a soft alignment for utterances with arbitrary phonetic content. Experimental results show that, compared with naive average pooling, this phonetic-attention scoring approach delivers consistent performance improvements in both text-dependent and text-independent ASV tasks.
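A minimal sketch of the frame-pair attention scoring idea described above, with cosine similarity standing in for both the phonetic-similarity logits and the speaker-feature comparison (the temperature, toy feature vectors, and exact weighting scheme are illustrative assumptions, not the paper's configuration):

```python
import math

def cosine(u, v):
    nu = math.sqrt(sum(x * x for x in u)) or 1.0
    nv = math.sqrt(sum(x * x for x in v)) or 1.0
    return sum(x * y for x, y in zip(u, v)) / (nu * nv)

def attention_score(spk_e, spk_p, phn_e, phn_p, temp=5.0):
    """Utterance-level score as an attention-weighted sum over frame pairs.
    spk_e/spk_p: per-frame speaker features of the enrollment/probe
    utterances; phn_e/phn_p: per-frame phonetic features. Frame pairs with
    similar phonetic content get larger softmax weights, softly aligning
    utterances with different phonetic content."""
    # attention logits: phonetic similarity of each frame pair
    logits = [[temp * cosine(pe, pp) for pp in phn_p] for pe in phn_e]
    # softmax over all frame pairs (shift by the max for stability)
    m = max(max(row) for row in logits)
    exps = [[math.exp(l - m) for l in row] for row in logits]
    z = sum(sum(row) for row in exps)
    # attention-weighted sum of speaker-feature similarities
    return sum(exps[i][j] / z * cosine(spk_e[i], spk_p[j])
               for i in range(len(spk_e)) for j in range(len(spk_p)))

spk_e = [[1.0, 0.0], [0.0, 1.0]]
spk_p = [[1.0, 0.1], [0.1, 1.0]]
phn = [[1.0, 0.0], [0.0, 1.0]]   # same phonetic content in both utterances
score = attention_score(spk_e, spk_p, phn, phn)
```

With matching phonetic content, the attention weights concentrate on the aligned frame pairs, so the score exceeds what naive average pooling over all pairs would give.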
Another work relevant to ours is the segmental dynamic time warping (SDTW) approach proposed by @cite_14 . This work shares our idea of aligning frame-level speaker features; however, their alignment is based on local temporal continuity, while ours is based on global phonetic content.
{ "cite_N": [ "@cite_14" ], "mid": [ "2953269701" ], "abstract": [ "In this paper we present a new method for text-independent speaker verification that combines segmental dynamic time warping (SDTW) and the d-vector approach. The d-vectors, generated from a feed forward deep neural network trained to distinguish between speakers, are used as features to perform alignment and hence calculate the overall distance between the enrolment and test utterances.We present results on the NIST 2008 data set for speaker verification where the proposed method outperforms the conventional i-vector baseline with PLDA scores and outperforms d-vector approach with local distances based on cosine and PLDA scores. Also score combination with the i-vector PLDA baseline leads to significant gains over both methods." ] }
1811.03064
2899622384
Author(s): Yeh, Michael Chin-Chia | Advisor(s): Keogh, Eamonn | Abstract: The last decade has seen a flurry of research on all-pairs-similarity-search (or, self-join) for text, DNA, and a handful of other datatypes, and these systems have been applied to many diverse data mining problems. Surprisingly, however, little progress has been made on addressing this problem for time series subsequences. In this thesis, we have introduced a near-universal time series data mining tool called the matrix profile, which solves the all-pairs-similarity-search problem and caches the output in an easy-to-access fashion. The proposed algorithm is not only parameter-free, exact, and scalable, but also applicable to both single- and multi-dimensional time series. By building time series data mining methods on top of the matrix profile, many time series data mining tasks (e.g., motif discovery, discord discovery, shapelet discovery, semantic segmentation, and clustering) can be efficiently solved. Because the same matrix profile can be shared by a diverse set of time series data mining methods, the matrix profile is a versatile, computed-once-use-many-times data structure. We demonstrate the utility of the matrix profile for many time series data mining problems, including motif discovery, discord discovery, weakly labeled time series classification, and representation learning, on domains as diverse as seismology, entomology, music processing, bioinformatics, human activity monitoring, electrical power-demand monitoring, and medicine. We hope the matrix profile is not the end but the beginning of many more time series data mining projects.
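The matrix profile can be stated compactly: for each subsequence of a time series, the z-normalized Euclidean distance to its nearest non-overlapping neighbor. A brute-force sketch for intuition (exact but O(n^2 m), unlike the scalable algorithms the thesis develops; the toy series and exclusion-zone width are illustrative choices):

```python
import math

def znorm(x):
    """Z-normalize a subsequence (guard against constant subsequences)."""
    mu = sum(x) / len(x)
    sd = math.sqrt(sum((v - mu) ** 2 for v in x) / len(x)) or 1.0
    return [(v - mu) / sd for v in x]

def matrix_profile(ts, m):
    """Brute-force matrix profile: for each length-m subsequence, the
    z-normalized Euclidean distance to its nearest neighbor, excluding
    trivial matches (subsequences overlapping within m // 2)."""
    subs = [znorm(ts[i:i + m]) for i in range(len(ts) - m + 1)]
    n = len(subs)
    profile = []
    for i in range(n):
        best = math.inf
        for j in range(n):
            if abs(i - j) <= m // 2:   # exclusion zone around i
                continue
            d = math.sqrt(sum((a - b) ** 2
                              for a, b in zip(subs[i], subs[j])))
            best = min(best, d)
        profile.append(best)
    return profile

# a repeated pattern (motif) yields near-zero profile values at both sites
ts = [0, 0, 1, 2, 1, 0, 0, 0, 5, 3, 0, 1, 2, 1, 0, 0]
mp = matrix_profile(ts, 5)
```

Low values in `mp` mark motifs (a subsequence with a close match elsewhere), while the highest value marks the discord (the most anomalous subsequence); this is why one profile serves so many downstream tasks.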
A handful of efforts have considered joins on time series, achieving speedup by (in addition to the use of MapReduce) converting the data to lower-dimensional representations such as PAA @cite_37 or SAX @cite_168 and exploiting lower bounds and/or Locality Sensitive Hashing (LSH) to prune some calculations. However, the methods are very complex, with many (10-plus) parameters to adjust. As @cite_37 acknowledge with admirable candor, ". In contrast, our proposed algorithm has zero parameters to set.
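PAA, the dimensionality-reduction step these join methods rely on, simply averages the series over equal-width frames (a minimal sketch; practical systems typically z-normalize first and pair PAA with its lower bound on Euclidean distance to prune computations):

```python
def paa(series, segments):
    """Piecewise Aggregate Approximation: reduce `series` to `segments`
    values, each the mean of one (roughly) equal-width frame."""
    n = len(series)
    out = []
    for k in range(segments):
        lo = k * n // segments          # frame boundaries via integer
        hi = (k + 1) * n // segments    # division handle n % segments != 0
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out
```

SAX then discretizes each PAA value into a symbol by comparing it against breakpoints of a standard normal distribution, which is what enables hashing-based pruning.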
{ "cite_N": [ "@cite_168", "@cite_37" ], "mid": [ "1950295237", "1991516927" ], "abstract": [ "In this paper, we focus on high-dimensional similarity join HDSJ using MapReduce paradigm. As the volume of the data and the number of the dimensions increase, the computation cost of HDSJ will increase exponentially. There is no existing effective approach that can process HDSJ efficiently, so we propose a novel method called symbolic aggregate approximation SAX-based HDSJ to deal with the problem. SAX is the abbreviation of symbolic aggregate approximation that is a dimensionality reduction technique and widely used in time series processing, we use SAX to represent the high-dimensional vectors in this paper and reorganize these vectors into groups based on their SAX representations. For the very high-dimensional vectors, we also propose an improved SAX-based HDSJ approach. Finally, we implement SAX-based HDSJ and improved SAX-based HDSJ on Hadoop-0.20.2 and perform comprehensive experiments to test the performance, we also compare SAX-based HDSJ and improved SAX-based HDSJ with the existing method. The experiment results show that our proposed approaches have much better performance than that of the existing method. Copyright © 2015 John Wiley & Sons, Ltd.", "High-dimensional similarity join (HDSJ) is critical for many novel applications in the domain of mobile data management. Nowadays, performing HDSJs efficiently faces two challenges. First, the scale of datasets is increasing rapidly, making parallel computing on a scalable platform a must. Second, the dimensionality of the data can be up to hundreds or even thousands, which brings about the issue of dimensionality curse. In this paper, we address these challenges and study how to perform parallel HDSJs efficiently in the MapReduce paradigm. 
Particularly, we propose a cost model to demonstrate that it is important to take both communication and computation costs into account as dimensionality and data volume increases. To this end, we propose DAA (Dimension Aggregation Approximation), an efficient compression approach that can help significantly reduce both these costs when performing parallel HDSJs. Moreover, we design DAA-based parallel HDSJ algorithms which can scale up to massive data sizes and very high dimensionality. We perform extensive experiments using both synthetic and real datasets to evaluate the speedup and the scale up of our algorithms." ] }
1811.03064
2899622384
Author(s): Yeh, Michael Chin-Chia | Advisor(s): Keogh, Eamonn | Abstract: The last decade has seen a flurry of research on all-pairs-similarity-search (or, self-join) for text, DNA, and a handful of other datatypes, and these systems have been applied to many diverse data mining problems. Surprisingly, however, little progress has been made on addressing this problem for time series subsequences. In this thesis, we have introduced a near-universal time series data mining tool called the matrix profile, which solves the all-pairs-similarity-search problem and caches the output in an easy-to-access fashion. The proposed algorithm is not only parameter-free, exact, and scalable, but also applicable to both single- and multi-dimensional time series. By building time series data mining methods on top of the matrix profile, many time series data mining tasks (e.g., motif discovery, discord discovery, shapelet discovery, semantic segmentation, and clustering) can be efficiently solved. Because the same matrix profile can be shared by a diverse set of time series data mining methods, the matrix profile is a versatile, computed-once-use-many-times data structure. We demonstrate the utility of the matrix profile for many time series data mining problems, including motif discovery, discord discovery, weakly labeled time series classification, and representation learning, on domains as diverse as seismology, entomology, music processing, bioinformatics, human activity monitoring, electrical power-demand monitoring, and medicine. We hope the matrix profile is not the end but the beginning of many more time series data mining projects.
The work of @cite_176 is the closest in spirit to our work. Their work was the first to note the detrimental impact of irrelevant dimensions on multidimensional motif search, and they introduced a method that is shown to be somewhat robust to a small number of irrelevant dimensions, or to one noisy irrelevant dimension. However, the algorithm introduced is . Even in an ideal case, with just six dimensions, they report ". The idea is attractive for its simplicity, but it requires all (or at least most) of the dimensions to be relevant, as the algorithm is brittle to even a handful of irrelevant dimensions. Moreover, both the speed and accuracy of Tanaka's algorithm depend on careful tuning of five parameters.
{ "cite_N": [ "@cite_176" ], "mid": [ "2120905266" ], "abstract": [ "Discovering recurring patterns in time series data is a fundamental problem for temporal data mining. This paper addresses the problem of locating subdimensional motifs in real-valued, multivariate time series, which requires the simultaneous discovery of sets of recurring patterns along with the corresponding relevant dimensions. While many approaches to motif discovery have been developed, most are restricted to categorical data, univariate time series, or multivariate data in which the temporal patterns span all of the dimensions. In this paper, we present an expected linear-time algorithm that addresses a generalization of multivariate pattern discovery in which each motif may span only a subset of the dimensions. To validate our algorithm, we discuss its theoretical properties and empirically evaluate it using several data sets including synthetic data and motion capture data collected by an on-body iner- tial sensor." ] }
1811.03064
2899622384
Author(s): Yeh, Michael Chin-Chia | Advisor(s): Keogh, Eamonn | Abstract: The last decade has seen a flurry of research on all-pairs-similarity-search (or self-join) for text, DNA, and a handful of other datatypes, and these systems have been applied to many diverse data mining problems. Surprisingly, however, little progress has been made on addressing this problem for time series subsequences. In this thesis, we have introduced a near-universal time series data mining tool called the matrix profile, which solves the all-pairs-similarity-search problem and caches the output in an easy-to-access fashion. The proposed algorithm is not only parameter-free, exact, and scalable, but also applicable to both single- and multidimensional time series. By building time series data mining methods on top of the matrix profile, many time series data mining tasks (e.g., motif discovery, discord discovery, shapelet discovery, semantic segmentation, and clustering) can be efficiently solved. Because the same matrix profile can be shared by a diverse set of time series data mining methods, the matrix profile is a versatile, compute-once-use-many-times data structure. We demonstrate the utility of the matrix profile for many time series data mining problems, including motif discovery, discord discovery, weakly labeled time series classification, and representation learning, on domains as diverse as seismology, entomology, music processing, bioinformatics, human activity monitoring, electrical power-demand monitoring, and medicine. We hope the matrix profile is not the end but the beginning of many more time series data mining projects.
In a series of papers, Vahdatpour and colleagues introduce an MTS motif discovery tool and apply it to a variety of medical monitoring applications @cite_123 . Their approach is based on computing time series motifs for each individual dimension and using clustering to "stitch" together the various dimensions. However, even when the motifs are quite obvious, the problems are small and simple, and at most three irrelevant dimensions are considered, they never achieved greater than 85% accuracy. To be sure, this is much better than the 17% …; however, given that seven parameters need to be tuned to achieve this result, accuracy is likely to be further compromised in more challenging data sets.
{ "cite_N": [ "@cite_123" ], "mid": [ "2113265222" ], "abstract": [ "This paper addresses the problem of activity and event discovery in multi dimensional time series data by proposing a novel method for locating multi dimensional motifs in time series. While recent work has been done in finding single dimensional and multi dimensional motifs in time series, we address motifs in general case, where the elements of multi dimensional motifs have temporal, length, and frequency variations. The proposed method is validated by synthetic data, and empirical evaluation has been done on several wearable systems that are used by real subjects." ] }
1811.03270
2900413721
We derive upper bounds on the generalization error of learning algorithms based on their algorithmic transport cost: the expected Wasserstein distance between the output hypothesis and the output hypothesis conditioned on an input example. The bounds provide a novel approach to studying the generalization of learning algorithms from an optimal transport view and impose fewer constraints on the loss function, such as requiring it to be sub-Gaussian or bounded. We further provide several upper bounds on the algorithmic transport cost in terms of total variation distance, relative entropy (or KL-divergence), and VC dimension, thus further bridging optimal transport theory and information theory with statistical learning theory. Moreover, we also study different conditions on loss functions under which the generalization error of a learning algorithm can be upper bounded by different probability metrics between distributions relating to the output hypothesis and/or the input data. Finally, under our established framework, we analyze generalization in deep learning and conclude that the generalization error in deep neural networks (DNNs) decreases exponentially to zero as the number of layers increases. Our analyses of generalization error in deep learning mainly exploit the hierarchical structure in DNNs and the contraction property of @math -divergence, which may be of independent interest in analyzing other learning models with hierarchical structure.
It is worth mentioning that some works show that these approaches are tightly connected. For example, @cite_36 proves that higher algorithmic stability implies a smaller hypothesis complexity, and @cite_25 analyzes PAC-Bayesian bounds for stable learning algorithms. Nevertheless, these approaches are insufficient to explain the generalization of learning models with large hypothesis spaces, such as deep neural networks ( @cite_40 ). Therefore, it is necessary to find a valid approach that can explain why deep learning is attractive in terms of its generalization properties.
{ "cite_N": [ "@cite_36", "@cite_40", "@cite_25" ], "mid": [ "2592206372", "2950220847", "2808225711" ], "abstract": [ "We introduce a notion of algorithmic stability of learning algorithms---that we term argument stability---that captures stability of the hypothesis output by the learning algorithm in the normed space of functions from which hypotheses are selected. The main result of the paper bounds the generalization error of any learning algorithm in terms of its argument stability. The bounds are based on martingale inequalities in the Banach space to which the hypotheses belong. We apply the general bounds to bound the performance of some learning algorithms based on empirical risk minimization and stochastic gradient descent.", "Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.", "PAC-Bayes bounds have been proposed to get risk estimates based on a training sample. In this paper the PAC-Bayes approach is combined with stability of the hypothesis learned by a Hilbert space valued algorithm. The PAC-Bayes setting is used with a Gaussian prior centered at the expected output. Thus a novelty of our paper is using priors defined in terms of the data-generating distribution. Our main result estimates the risk of the randomized algorithm in terms of the hypothesis stability coefficients. We also provide a new bound for the SVM classifier, which is compared to other known bounds experimentally. Ours appears to be the first stability-based bound that evaluates to non-trivial values." ] }
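The Wasserstein-based bound sketched in the abstract above takes, in generic form, the following shape. The Lipschitz assumption and all symbols here are illustrative notation for this kind of bound, not the paper's exact statement or constants:

```latex
% Sketch (assumed notation): for an L-Lipschitz loss, the expected
% generalization gap of algorithm A on sample S = (Z_1, ..., Z_n) is
% controlled by the algorithmic transport cost, i.e. the average
% Wasserstein distance between the law of the output hypothesis W
% and its law conditioned on a single input example:
\[
  \bigl|\mathbb{E}\bigl[R(W) - \widehat{R}_S(W)\bigr]\bigr|
  \;\le\; \frac{L}{n} \sum_{i=1}^{n}
  \mathbb{E}\Bigl[\, \mathcal{W}_1\!\bigl(P_{W},\, P_{W \mid Z_i}\bigr) \Bigr].
\]
```

Intuitively, if conditioning on any single example barely moves the distribution of the learned hypothesis (small transport cost), the algorithm cannot overfit that example, which is the optimal-transport analogue of the stability arguments cited in the surrounding paragraph.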