1612.04732
2575172791
We present a family of neural-network--inspired models for computing continuous word representations, specifically designed to exploit both monolingual and multilingual text. This framework allows us to perform unsupervised training of embeddings that exhibit higher accuracy on syntactic and semantic compositionality, as well as multilingual semantic similarity, compared to previous models trained in an unsupervised fashion. We also show that such multilingual embeddings, optimized for semantic similarity, can improve the performance of statistical machine translation with respect to how it handles words not present in the parallel data.
Distributed word representations have recently been used for tackling various tasks such as language modeling @cite_32 @cite_8, paraphrase detection @cite_20, sentiment analysis @cite_1, syntactic parsing @cite_5 @cite_39 @cite_21 @cite_11, and a multitude of cross-lingual tasks @cite_47.
{ "cite_N": [ "@cite_8", "@cite_21", "@cite_1", "@cite_32", "@cite_39", "@cite_5", "@cite_47", "@cite_20", "@cite_11" ], "mid": [ "1999965501", "", "71795751", "2091812280", "342285082", "", "", "2103305545", "2250414191" ], "abstract": [ "Recurrent neural network language models (RNNLMs) have recently demonstrated state-of-the-art performance across a variety of tasks. In this paper, we improve their performance by providing a contextual real-valued input vector in association with each word. This vector is used to convey contextual information about the sentence being modeled. By performing Latent Dirichlet Allocation using a block of preceding text, we achieve a topic-conditioned RNNLM. This approach has the key advantage of avoiding the data fragmentation associated with building multiple topic models on different data subsets. We report perplexity results on the Penn Treebank data, where we achieve a new state-of-the-art. We further apply the model to the Wall Street Journal speech recognition task, where we observe improvements in word-error-rate.", "", "We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model's ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. 
Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.", "The supremacy of n-gram models in statistical language modelling has recently been challenged by parametric models that use distributed representations to counteract the difficulties caused by data sparsity. We propose three new probabilistic language models that define the distribution of the next word in a sequence given several preceding words by using distributed representations of those words. We show how real-valued distributed representations for words can be learned at the same time as learning a large set of stochastic binary hidden features that are used to predict the distributed representation of the next word from previous distributed representations. Adding connections from the previous states of the binary hidden features improves performance as does adding direct connections between the real-valued distributed representations. One of our models significantly outperforms the very best n-gram models.", "The distributional hypothesis of Harris (1954), according to which the meaning of words is evidenced by the contexts they occur in, has motivated several effective techniques for obtaining vector space semantic representations of words using unannotated text corpora. This paper argues that lexico-semantic content should additionally be invariant across languages and proposes a simple technique based on canonical correlation analysis (CCA) for incorporating multilingual evidence into vectors generated monolingually. We evaluate the resulting word representations on standard lexical semantic evaluation tasks and show that our method produces substantially better semantic representations than monolingual techniques.", "", "", "Paraphrase detection is the task of examining two sentences and determining whether they have the same meaning. 
In order to obtain high accuracy on this task, thorough syntactic and semantic analysis of the two statements is needed. We introduce a method for paraphrase detection based on recursive autoencoders (RAE). Our unsupervised RAEs are based on a novel unfolding objective and learn feature vectors for phrases in syntactic trees. These features are used to measure the word- and phrase-wise similarity between two sentences. Since sentences may be of arbitrary length, the resulting matrix of similarity measures is of variable size. We introduce a novel dynamic pooling layer which computes a fixed-sized representation from the variable-sized matrices. The pooled representation is then used as input to a classifier. Our method outperforms other state-of-the-art approaches on the challenging MSRP paraphrase corpus.", "This work focuses on the task of finding latent vector representations of the words in a corpus. In particular, we address the issue of what to do when there are multiple languages in the corpus. Prior work has, among other techniques, used canonical correlation analysis to project pre-trained vectors in two languages into a common space. We propose a simple and scalable method that is inspired by the notion that the learned vector representations should be invariant to translation between languages. We show empirically that our method outperforms prior work on multilingual tasks, matches the performance of prior work on monolingual tasks, and scales linearly with the size of the input data (and thus the number of languages being embedded)." ] }
The introduction of the CBOW and SkipGram embedding models @cite_4 has boosted this research direction. These models are simple and easy to implement, and can be trained orders of magnitude faster than previous models. Subsequent research has proposed models that take advantage of additional textual information, such as syntactic dependencies @cite_38 @cite_15, global statistics @cite_44, or parallel data @cite_30 @cite_36 @cite_0 @cite_28. Prior knowledge can also be incorporated to achieve improved lexical embeddings by modifying the objective function while allowing for the exploitation of existing resources such as WordNet @cite_19, or by modifying the model architecture while targeting specific tasks @cite_12.
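As a rough illustration of what SkipGram-style models consume, training examples are (target, context) pairs drawn from a sliding window over the text. A minimal sketch (not the original implementation; the symmetric window and toy sentence are our illustrative choices):

```python
# Minimal sketch of SkipGram-style context extraction: every word within
# distance n of the target (on either side) is emitted as a context word.
def skipgram_pairs(tokens, n=2):
    pairs = []
    for i, target in enumerate(tokens):
        for j in range(max(0, i - n), min(len(tokens), i + n + 1)):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

sent = ["the", "cat", "sat", "on", "the", "mat"]
pairs = skipgram_pairs(sent, n=1)
# with n=1 each interior word contributes its two neighbours
```

The actual models then learn embeddings by predicting the context word from the target (SkipGram) or vice versa (CBOW) over such pairs.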
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_4", "@cite_36", "@cite_28", "@cite_44", "@cite_0", "@cite_19", "@cite_15", "@cite_12" ], "mid": [ "2126725946", "2251771443", "1614298861", "2952037945", "", "2250539671", "2251765408", "2250930514", "2251874715", "2296194829" ], "abstract": [ "Dictionaries and phrase tables are the basis of modern statistical machine translation systems. This paper develops a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data. It uses distributed representation of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90 precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs.", "While continuous word embeddings are gaining popularity, current models are based solely on linear contexts. In this work, we generalize the skip-gram model with negative sampling introduced by to include arbitrary contexts. In particular, we perform experiments with dependency-based contexts, and show that they produce markedly different embeddings. The dependencybased embeddings are less topical and exhibit more functional similarity than the original skip-gram embeddings.", "", "We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple and computationally-efficient model for learning bilingual distributed representations of words which can scale to large monolingual datasets and does not require word-aligned parallel training data. Instead it trains directly on monolingual data and extracts a bilingual signal from a smaller set of raw-text sentence-aligned data. 
This is achieved using a novel sampled bag-of-words cross-lingual objective, which is used to regularize two noise-contrastive language models for efficient cross-lingual feature learning. We show that bilingual embeddings learned using the proposed model outperform state-of-the-art methods on a cross-lingual document classification task as well as a lexical translation task on WMT11 data.", "", "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.", "Recent work in learning bilingual representations tend to tailor towards achieving good performance on bilingual tasks, most often the crosslingual document classification (CLDC) evaluation, but to the detriment of preserving clustering structures of word representations monolingually. In this work, we propose a joint model to learn word representations from scratch that utilizes both the context coocurrence information through the monolingual component and the meaning equivalent signals from the bilingual constraint. 
Specifically, we extend the recently popular skipgram model to learn high quality bilingual representations efficiently. Our learned embeddings achieve a new state-of-the-art accuracy of 80.3 for the German to English CLDC task and a highly competitive performance of 90.7 for the other classification direction. At the same time, our models outperform best embeddings from past bilingual representation work by a large margin in the monolingual word similarity evaluation. 1", "Word embeddings learned on unlabeled data are a popular tool in semantics, but may not capture the desired semantics. We propose a new learning objective that incorporates both a neural language model objective (, 2013) and prior knowledge from semantic resources to learn improved lexical semantic embeddings. We demonstrate that our embeddings improve over those learned solely on raw text in three settings: language modeling, measuring semantic similarity, and predicting human judgements.", "Word representations have proven useful for many NLP tasks, e.g., Brown clusters as features in dependency parsing (, 2008). In this paper, we investigate the use of continuous word representations as features for dependency parsing. We compare several popular embeddings to Brown clusters, via multiple types of features, in both news and web domains. We find that all embeddings yield significant parsing gains, including some recent ones that can be trained in a fraction of the time of others. Explicitly tailoring the representations for the task leads to further improvements. Moreover, an ensemble of all representations achieves the best results, suggesting their complementarity.", "We present two simple modifications to the models in the popular Word2Vec tool, in order to generate embeddings more suited to tasks involving syntax. The main issue with the original models is the fact that they are insensitive to word order. 
While order independence is useful for inducing semantic representations, this leads to suboptimal results when they are used to solve syntax-based problems. We show improvements in part-ofspeech tagging and dependency parsing using our proposed models." ] }
Our paper describes a mechanism that unifies the way context signals from the training data are exploited. It exploits the information available in parallel data in an on-line training fashion (bi-/multi-lingual training), as opposed to the off-line matrix-transformation proposal of . The BilBOWA model @cite_36 uses a parallel bag-of-words representation for the parallel data, while the BiSkip model @cite_0 achieves bilingual training by exploiting word alignments. Some of these proposals can be formulated as particular instances of our multigraph framework. For instance, the context window (with window size @math ) @math of the SkipGram model is equivalent to model @math in the multigraph formulation. The dependency-based embedding model of Levy and Goldberg is equivalent to @math when the word and context vocabularies are the same (we also ignore the collapsing of preposition-based dependencies, which places their model between @math and @math ). The BiSkip model @cite_0 with a window of size @math is equivalent to model @math in our multigraph formulation.
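The multigraph view can be sketched informally: each context-extraction scheme induces one edge set ("graph") over the same tokens, and a window of size @math simply links every pair of words within that distance. A toy illustration (function and variable names are ours, not the paper's):

```python
# One "graph" per context scheme over the same token positions.
# A window of size n is the edge set linking positions at distance <= n,
# so widening the window can only add edges, never remove them.
def window_edges(tokens, n):
    return {(i, j) for i in range(len(tokens))
            for j in range(len(tokens))
            if i != j and abs(i - j) <= n}

sent = ["she", "reads", "books"]
g1 = window_edges(sent, 1)   # narrow-window graph
g2 = window_edges(sent, 2)   # wider-window graph
assert g1 <= g2              # monotone in the window size
```

A dependency-based scheme would contribute a different edge set over the same positions, which is what makes the shared multigraph formulation possible.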
{ "cite_N": [ "@cite_36", "@cite_0" ], "mid": [ "2952037945", "2251765408" ], "abstract": [ "We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple and computationally-efficient model for learning bilingual distributed representations of words which can scale to large monolingual datasets and does not require word-aligned parallel training data. Instead it trains directly on monolingual data and extracts a bilingual signal from a smaller set of raw-text sentence-aligned data. This is achieved using a novel sampled bag-of-words cross-lingual objective, which is used to regularize two noise-contrastive language models for efficient cross-lingual feature learning. We show that bilingual embeddings learned using the proposed model outperform state-of-the-art methods on a cross-lingual document classification task as well as a lexical translation task on WMT11 data.", "Recent work in learning bilingual representations tend to tailor towards achieving good performance on bilingual tasks, most often the crosslingual document classification (CLDC) evaluation, but to the detriment of preserving clustering structures of word representations monolingually. In this work, we propose a joint model to learn word representations from scratch that utilizes both the context coocurrence information through the monolingual component and the meaning equivalent signals from the bilingual constraint. Specifically, we extend the recently popular skipgram model to learn high quality bilingual representations efficiently. Our learned embeddings achieve a new state-of-the-art accuracy of 80.3 for the German to English CLDC task and a highly competitive performance of 90.7 for the other classification direction. At the same time, our models outperform best embeddings from past bilingual representation work by a large margin in the monolingual word similarity evaluation. 1" ] }
A related approach for inducing multilingual embeddings is based on neural networks for automatic translation, either in conjunction with a phrase table @cite_6 @cite_31 or in a fully neural fashion @cite_24 @cite_10 @cite_14 @cite_17. These approaches use signals similar to ours when exploiting parallel training data, but the resulting embeddings are optimized for translation accuracy (according to the loss functions of these models, usually a maximum-likelihood objective @cite_24 @cite_10 or a reinforcement-learning-inspired objective @cite_14 ). In addition, they do not directly allow for the simultaneous exploitation of parallel and monolingual data at training time, nor for the exploitation of additional sources of linguistic information (such as syntactic dependencies).
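The maximum-likelihood objective mentioned above reduces, per sentence, to the sum of negative log-probabilities of the target tokens given the source and the target history. A toy sketch with a stand-in probability table (not a real decoder; the tokens and probabilities are invented for illustration):

```python
import math

# Toy sketch of the sentence-level maximum-likelihood (NLL) objective:
# loss = sum over target tokens of -log p(y_t | y_<t, x).
# token_probs is a stand-in for the decoder's conditional distribution.
def nll(target_tokens, token_probs):
    return sum(-math.log(token_probs[tok]) for tok in target_tokens)

probs = {"le": 0.5, "chat": 0.25}
loss = nll(["le", "chat"], probs)   # -ln(0.5) - ln(0.25) = ln 8
```

Embeddings trained under such a loss are shaped by translation accuracy rather than by semantic-similarity objectives, which is the distinction drawn above.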
{ "cite_N": [ "@cite_14", "@cite_6", "@cite_24", "@cite_31", "@cite_10", "@cite_17" ], "mid": [ "2525778437", "2251682575", "2949888546", "2108967571", "", "2963991316" ], "abstract": [ "Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference. Also, most NMT systems have difficulty with rare words. These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Google's Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers using attention and residual connections. To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To accelerate the final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units (\"wordpieces\") for both input and output. This method provides a good balance between the flexibility of \"character\"-delimited models and the efficiency of \"word\"-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and uses a coverage penalty, which encourages generation of an output sentence that is most likely to cover all the words in the source sentence. On the WMT'14 English-to-French and English-to-German benchmarks, GNMT achieves competitive results to state-of-the-art. 
Using a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60 compared to Google's phrase-based production system.", "Recent work has shown success in using neural network language models (NNLMs) as features in MT systems. Here, we present a novel formulation for a neural network joint model (NNJM), which augments the NNLM with a source context window. Our model is purely lexicalized and can be integrated into any MT decoder. We also present several variations of the NNJM which provide significant additive improvements.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. 
Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "We present a three-pronged approach to improving Statistical Machine Translation (SMT), building on recent success in the application of neural networks to SMT. First, we propose new features based on neural networks to model various non-local translation phenomena. Second, we augment the architecture of the neural network with tensor layers that capture important higher-order interaction among the network units. Third, we apply multitask learning to estimate the neural network parameters jointly. Each of our proposed methods results in significant improvements that are complementary. The overall improvement is +2.7 and +1.8 BLEU points for Arabic-English and Chinese-English translation over a state-of-the-art system that already includes neural network features.", "", "Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. 
This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task." ] }
Because our approach enables us to exploit both monolingual and parallel data simultaneously, the resulting distributed representation can be successfully used to learn translations for terms that appear only in the monolingual data. This represents the neural word-embedding equivalent of a long line of research based on word-surface patterns, starting with earlier attempts @cite_42, continuing with @cite_23 @cite_35, and complemented by approaches based on probabilistic models @cite_26. Our approach has the advantages of achieving this effect in a completely unsupervised fashion, without exploiting surface patterns, while benefiting from the smoothness properties of continuous word representations.
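As a toy sketch of how such translations can be read off a shared embedding space (the vectors and vocabularies are invented for illustration, not learned embeddings): once both languages live in one space, a word seen only in monolingual data can be translated by nearest-neighbour search under cosine similarity.

```python
import math

# Translate by nearest neighbour in a (hypothetical) shared bilingual space.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def translate(word, src_emb, tgt_emb):
    v = src_emb[word]
    # pick the target word whose vector is closest in direction
    return max(tgt_emb, key=lambda w: cosine(v, tgt_emb[w]))

src = {"dog": (1.0, 0.1)}
tgt = {"perro": (0.9, 0.2), "gato": (0.1, 1.0)}
```

The smoothness property mentioned above is what makes this lookup meaningful: nearby vectors correspond to semantically close words even when the query word never occurred in the parallel data.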
{ "cite_N": [ "@cite_35", "@cite_42", "@cite_23", "@cite_26" ], "mid": [ "2084277454", "2139812240", "", "2140406733" ], "abstract": [ "This paper presents novel improvements to the induction of translation lexicons from monolingual corpora using multilingual dependency parses. We introduce a dependency-based context model that incorporates long-range dependencies, variable context sizes, and reordering. It provides a 16 relative improvement over the baseline approach that uses a fixed context window of adjacent words. Its Top 10 accuracy for noun translation is higher than that of a statistical translation model trained on a Spanish-English parallel corpus containing 100,000 sentence pairs. We generalize the evaluation to other word-types, and show that the performance can be increased to 18 relative by preserving part-of-speech equivalencies during translation.", "Common algorithms for sentence and word-alignment allow the automatic identification of word translations from parallel texts. This study suggests that the identification of word translations should also be possible with non-parallel and even unrelated texts. The method proposed is based on the assumption that there is a correlation between the patterns of word co-occurrences in texts of different languages.", "", "We present a method for learning bilingual translation lexicons from monolingual corpora. Word types in each language are characterized by purely monolingual features, such as context counts and orthographic substrings. Translations are induced using a generative model based on canonical correlation analysis, which explains the monolingual lexicons in terms of latent matchings. We show that high-precision lexicons can be learned in a variety of language pairs and from a range of corpus types." ] }
1612.04683
2584402503
In this paper, we report on domain clustering in the context of an adaptive MT architecture. A standard bottom-up hierarchical clustering algorithm has been instantiated with five different distances, which have been compared, on an MT benchmark built on 40 commercial domains, in terms of dendrograms, intrinsic and extrinsic evaluations. The main outcome is that the most expensive distance is also the only one that allows the MT engine to maintain good performance even with few, highly populated clusters of domains.
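The bottom-up (agglomerative) clustering referred to above can be sketched minimally as follows; the 2-d "domain" vectors and the single-linkage Euclidean distance are our illustrative choices, not the paper's actual data or its five compared distances.

```python
import math

# Toy bottom-up clustering: repeatedly merge the two closest clusters
# (single linkage, Euclidean distance) until k clusters remain.
def agglomerate(points, k):
    clusters = [[p] for p in points]
    while len(clusters) > k:
        i, j = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda ab: min(math.dist(p, q)
                                      for p in clusters[ab[0]]
                                      for q in clusters[ab[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

# four hypothetical domains as feature vectors: two near the origin, two far
domains = [(0, 0), (0.1, 0), (5, 5), (5.2, 5.1)]
groups = agglomerate(domains, 2)
```

The choice of distance function is the knob the paper varies; swapping `math.dist` for another metric changes which domains end up merged.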
The specialisation is obtained by training models on text from the specific domain. Specialised texts can be gathered either by exploiting supervision or through automatic selection from general texts. Deciding whether a sentence belongs to a given domain can be done by checking how well it is predicted by a domain-specific language model (through its perplexity), by the tf-idf method commonly employed in information retrieval, or by the cross-entropy difference method presented in @cite_4 @cite_8.
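The cross-entropy difference criterion can be sketched with unigram language models standing in for the n-gram models used in practice (the vocabularies and probabilities here are invented for illustration): a sentence scores well when it is much better predicted by the in-domain model than by the general-domain one.

```python
import math

# Per-word cross-entropy of a sentence under a (unigram) language model,
# with a small floor probability for unseen words.
def cross_entropy(sentence, lm, floor=1e-6):
    return -sum(math.log2(lm.get(w, floor)) for w in sentence) / len(sentence)

# Cross-entropy difference: H_in(s) - H_out(s).
# Lower (more negative) scores indicate more in-domain sentences.
def ce_difference(sentence, lm_in, lm_out):
    return cross_entropy(sentence, lm_in) - cross_entropy(sentence, lm_out)

lm_med = {"patient": 0.2, "dose": 0.2, "the": 0.1}   # toy in-domain LM
lm_gen = {"the": 0.2, "patient": 0.01, "dose": 0.01} # toy general LM
s = ["the", "patient", "dose"]
```

In selection pipelines, sentences of the general corpus are ranked by this score and only the lowest-scoring portion is kept for training.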
{ "cite_N": [ "@cite_4", "@cite_8" ], "mid": [ "2117278770", "1905522558" ], "abstract": [ "We address the problem of selecting non-domain-specific language model training data to build auxiliary language models for use in tasks such as machine translation. Our approach is based on comparing the cross-entropy, according to domain-specific and non-domain-specifc language models, for each sentence of the text source used to produce the latter language model. We show that this produces better language models, trained on less data, than both random data selection and two other previously proposed methods.", "We explore efficient domain adaptation for the task of statistical machine translation based on extracting sentences from a large general-domain parallel corpus that are most relevant to the target domain. These sentences may be selected with simple cross-entropy based methods, of which we present three. As these sentences are not themselves identical to the in-domain data, we call them pseudo in-domain subcorpora. These subcorpora -- 1 the size of the original -- can then used to train small domain-adapted Statistical Machine Translation (SMT) systems which outperform systems trained on the entire corpus. Performance is further improved when we use these domain-adapted models in combination with a true in-domain model. The results show that more training data is not always better, and that best results are attained via proper domain-relevant data selection, as well as combining in- and general-domain systems during decoding." ] }
Another way to exploit multiple domain-specific models is the mixture-model approach, which combines the various models by properly weighting each of them @cite_1. Typically, the combination is realised as a linear or log-linear model and can involve language, translation and even alignment models. The interpolation weights can be estimated off-line on a development set or on the source side of the whole test set. On-line estimation on the current sentence (or batch of sentences) to be translated is also feasible @cite_7 ; in the context of computer-assisted translation (CAT) and interactive MT, the availability of user corrections allows prompt and highly effective adaptation of the weights @cite_5.
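A linear mixture of this kind can be sketched as follows; the domain models and the weight values are illustrative (in practice the weights would be estimated on a development set or adapted on-line as described above).

```python
# Linear mixture of domain-specific (unigram) models:
# p_mix(w) = sum_d  weight_d * p_d(w),  with weights summing to 1.
def mixture_prob(word, models, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * m.get(word, 0.0) for m, w in zip(models, weights))

lm_legal = {"contract": 0.3, "the": 0.1}  # toy legal-domain model
lm_tech  = {"server": 0.3, "the": 0.1}    # toy technical-domain model

# weighting the legal model more heavily, e.g. after seeing legal input
p = mixture_prob("contract", [lm_legal, lm_tech], [0.7, 0.3])
```

A log-linear combination would instead multiply weighted model scores (sum weighted log-probabilities), which is the other variant mentioned above.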
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_7" ], "mid": [ "2113549939", "2132001515", "2054399842" ], "abstract": [ "We present a novel online learning approach for statistical machine translation tailored to the computer assisted translation scenario. With the introduction of a simple online feature, we are able to adapt the translation model on the fly to the corrections made by the translators. Additionally, we do online adaption of the feature weights with a large margin algorithm. Our results show that our online adaptation technique outperforms the static phrase based statistical machine translation system by 6 BLEU points absolute, and a standard incremental adaptation approach by 2 BLEU points absolute.", "We describe a mixture-model approach to adapting a Statistical Machine Translation System for new domains, using weights that depend on text distances to mixture components. We investigate a number of variants on this approach, including cross-domain versus dynamic adaptation; linear versus loglinear mixtures; language and translation model adaptation; different methods of assigning weights; and granularity of the source unit being adapted to. The best methods achieve gains of approximately one BLEU percentage point over a state-of-the art non-adapted baseline system.", "This paper presents a technique for class-dependent decoding for statistical machine translation (SMT). The approach differs from previous methods of class-dependent translation in that the class-dependent forms of all models are integrated directly into the decoding process. We employ probabilistic mixture weights between models that can change dynamically on a segment-by-segment basis depending on the characteristics of the source segment. The effectiveness of this approach is demonstrated by evaluating its performance on travel conversation data. We used the approach to tackle the translation of questions and declarative sentences using class-dependent models. To achieve this, our system integrated two sets of models specifically built to deal with sentences that fall into one of two classes of dialog sentence: questions and declarations, with a third set of models built to handle the general class. The technique was thoroughly evaluated on data from 17 language pairs using 6 machine translation evaluation metrics. We found the results were corpus-dependent, but in most cases our system was able to improve translation performance, and for some languages the improvements were substantial." ] }
1612.04683
2584402503
In this paper, we report on domain clustering in the ambit of an adaptive MT architecture. A standard bottom-up hierarchical clustering algorithm has been instantiated with five different distances, which have been compared, on an MT benchmark built on 40 commercial domains, in terms of dendrograms, intrinsic and extrinsic evaluations. The main outcome is that the most expensive distance is also the only one able to allow the MT engine to guarantee good performance even with few, but highly populated clusters of domains.
If proper meta-information is available, a supervised partitioning of the training data into domains is possible. Unfortunately, this is rather rare in practice. More commonly, unsupervised clustering is needed, as in @cite_3, where a Self-Organizing Map is used to create auxiliary language models, the most appropriate of which is selected on the fly for each document to translate.
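A minimal sketch of per-document model selection (the cluster models and function names are hypothetical, and a smoothed unigram model stands in for the real auxiliary language models):

```python
import math

def cross_entropy(doc, counts, vocab=10000, alpha=0.1):
    """Per-token cross-entropy of a document under an add-alpha
    smoothed unigram model; a lower value means a better domain match."""
    total = sum(counts.values())
    prob = lambda w: (counts.get(w, 0) + alpha) / (total + alpha * vocab)
    return -sum(math.log2(prob(w)) for w in doc) / len(doc)

def pick_auxiliary_lm(doc, cluster_lms):
    """Select on the fly the auxiliary LM (one per unsupervised
    cluster) that best matches the incoming document."""
    return min(cluster_lms, key=lambda name: cross_entropy(doc, cluster_lms[name]))

clusters = {"legal": {"court": 5, "law": 5}, "medical": {"dose": 5, "drug": 5}}
best = pick_auxiliary_lm(["court", "law", "law"], clusters)  # -> "legal"
```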
{ "cite_N": [ "@cite_3" ], "mid": [ "2272542438" ], "abstract": [ "Domain Adaptation in Machine Translation means to take a machine translation system that is restricted to work in a specific context and to enable the system to translate text from a different domain. The paper presents a two-step domain adaptation strategy, by first making use of unlabeled training material through an unsupervised algorithm, the Self-Organizing Map, to create auxiliary language models, and then to include these models dynamically in a machine translation pipeline." ] }
1612.04884
2580164915
Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.
Local descriptors, such as SIFT and HOG, are extracted in the feature extraction phase. Next, several encoding schemes can be considered. The work of @cite_38 investigated soft assignment of local features to visual words. @cite_0 introduced super-vector coding, which performs a non-linear mapping of each local feature descriptor to construct a high-dimensional sparse vector. The Improved Fisher Vectors, introduced by @cite_19, encode local descriptors as gradients with respect to a generative model of image formation (usually a Gaussian mixture model (GMM) over local descriptors, which serves as a visual vocabulary for Fisher vector coding).
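The soft-assignment idea can be sketched as follows (toy random data; the Gaussian kernel width `sigma` is an assumed parameter):

```python
import numpy as np

def soft_assign_histogram(descriptors, codebook, sigma=1.0):
    """Soft assignment of local descriptors to visual words: each
    descriptor contributes to every word with a Gaussian kernel
    weight, instead of voting only for its nearest word."""
    # Squared distances between descriptors (N, D) and words (K, D) -> (N, K)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)   # normalise per descriptor
    hist = w.sum(axis=0)                # pool over all descriptors
    return hist / hist.sum()            # image-level histogram

rng = np.random.default_rng(0)
descs = rng.normal(size=(50, 8))        # toy local descriptors (e.g. SIFT)
words = rng.normal(size=(4, 8))         # toy 4-word visual vocabulary
hist = soft_assign_histogram(descs, words)
```

Fisher vector coding replaces the codebook with a GMM and encodes gradients of the descriptor log-likelihood instead of a word histogram.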
{ "cite_N": [ "@cite_0", "@cite_38", "@cite_19" ], "mid": [ "1551774182", "2155490028", "1606858007" ], "abstract": [ "This paper introduces a new framework for image classification using local visual descriptors. The pipeline first performs a non-linear feature transformation on descriptors, then aggregates the results together to form image-level representations, and finally applies a classification model. For all the three steps we suggest novel solutions which make our approach appealing in theory, more scalable in computation, and transparent in classification. Our experiments demonstrate that the proposed classification method achieves state-of-the-art accuracy on the well-known PASCAL benchmarks.", "This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the nature of continuous features, the approach has been successfully applied for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method for five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007 2008. We demonstrate that large codebook vocabulary sizes completely deteriorate the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method profits in high-dimensional feature spaces and reaps higher benefits when increasing the number of image categories.", "The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9 to 58.3 . Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets." ] }
1612.04884
2580164915
Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.
It has been shown that the intermediate, hidden activations of fully connected layers in a trained deep network are general-purpose features applicable to visual recognition tasks. Several recent methods have shown superior performance using convolutional layer activations instead of fully connected ones. These convolutional layers are discriminative, semantically meaningful, and remove the need for a fixed input image size. @cite_20 proposed a multi-scale orderless pooling (MOP) approach that constructs descriptors from the fully connected (FC) layer of the network. The descriptors are extracted from densely sampled square image windows and then pooled using the VLAD encoding scheme to obtain the final image representation.
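A simplified version of the VLAD pooling step used by MOP (the window descriptors and codebook here are random toys; real MOP extracts FC-layer activations from densely sampled windows at each scale level):

```python
import numpy as np

def vlad(descriptors, centers):
    """Simplified VLAD: sum of residuals to the nearest codebook
    center, flattened and L2-normalised."""
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    v = np.zeros_like(centers)
    for k in range(len(centers)):
        assigned = descriptors[nearest == k]
        if len(assigned):
            v[k] = (assigned - centers[k]).sum(axis=0)
    v = v.ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

rng = np.random.default_rng(1)
# Window descriptors from three scale levels (1, 9 and 25 windows).
levels = [rng.normal(size=(n, 16)) for n in (1, 9, 25)]
centers = rng.normal(size=(4, 16))     # toy VLAD codebook
# MOP-style: encode each scale level separately, then concatenate.
rep = np.concatenate([vlad(d, centers) for d in levels])
```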
{ "cite_N": [ "@cite_20" ], "mid": [ "1524680991" ], "abstract": [ "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012 2013 classification and INRIA Holidays retrieval datasets." ] }
1612.04884
2580164915
Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.
In contrast to MOP, @cite_18 showed how deep convolutional features (i.e., dense local features extracted at multiple scales from the convolutional layers of CNNs) can be exploited within a BOW pipeline. In their approach, a Fisher Vector encoding scheme is used to obtain the final image representation. We will refer to this type of image representation as a bag of deep features, and in this work we apply various scale coding approaches to it. Though FV-CNN employs multi-scale convolutional features, the descriptors are pooled into a single Fisher Vector representation. This implies that the final image representation is scale-invariant, since all the scales are pooled into a single feature vector. We argue that such a representation is sub-optimal for the problem of human attribute and action recognition and propose to explicitly incorporate multi-scale information in the final image representation.
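The contrast between the two pooling strategies can be sketched as follows (mean pooling stands in for the Fisher Vector encoder, and the grouping into three scale ranges is an illustrative choice, not the paper's exact construction):

```python
import numpy as np

def scale_invariant_pool(descs_by_scale):
    """FV-CNN-style baseline: descriptors from all scales are pooled
    into one vector, discarding which scale each came from."""
    return np.vstack(descs_by_scale).mean(axis=0)

def scale_coded_pool(descs_by_scale, groups):
    """Scale coding sketch: pool coarse scale groups separately and
    concatenate, so the representation keeps explicit scale cues."""
    pooled = [np.vstack([descs_by_scale[i] for i in g]).mean(axis=0)
              for g in groups]
    return np.concatenate(pooled)

rng = np.random.default_rng(2)
descs = [rng.normal(size=(20, 32)) for _ in range(6)]      # 6 extraction scales
flat = scale_invariant_pool(descs)                         # shape (32,)
coded = scale_coded_pool(descs, [[0, 1], [2, 3], [4, 5]])  # shape (96,)
```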
{ "cite_N": [ "@cite_18" ], "mid": [ "1909952827" ], "abstract": [ "Research in texture recognition often concentrates on the problem of material recognition in uncluttered conditions, an assumption rarely met by applications. In this work we conduct a first study of material and describable texture attributes recognition in clutter, using a new dataset derived from the OpenSurface texture repository. Motivated by the challenge posed by this problem, we propose a new texture descriptor, FV-CNN, obtained by Fisher Vector pooling of a Convolutional Neural Network (CNN) filter bank. FV-CNN substantially improves the state-of-the-art in texture, material and scene recognition. Our approach achieves 79.8 accuracy on Flickr material dataset and 81 accuracy on MIT indoor scenes, providing absolute gains of more than 10 over existing approaches. FV-CNN easily transfers across domains without requiring feature adaptation as for methods that build on the fully-connected layers of CNNs. Furthermore, FV-CNN can seamlessly incorporate multi-scale information and describe regions of arbitrary shapes and sizes. Our approach is particularly suited at localizing “stuff” categories and obtains state-of-the-art results on MSRC segmentation dataset, as well as promising results on recognizing materials and surface attributes in clutter on the OpenSurfaces dataset." ] }
1612.04884
2580164915
Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.
Action recognition in still images. Recognizing actions in still images is a difficult problem that has recently gained a lot of attention. In action recognition, bounding box information for each person instance is provided both at train and test time. The task is to associate an action category label with each person bounding box. Several approaches have addressed the problem of action recognition by finding human-object interactions in an image. A poselet-based approach was proposed in @cite_17, where poselet activation vectors capture the pose of a person. @cite_31 proposed a human-centric approach that localizes humans and objects associated with an action. @cite_24 proposed learning a set of sparse attribute and part bases for action recognition in still images. Recently, a comprehensive survey of action recognition methods exploiting semantic information was carried out by @cite_27, showing that such methods yield superior performance compared to their non-semantic counterparts in many scenarios. Human action recognition in still images has also been discussed in the context of fuzzy approaches in a recent survey.
{ "cite_N": [ "@cite_24", "@cite_27", "@cite_31", "@cite_17" ], "mid": [ "2038765747", "2038746778", "2129947832", "2169852119" ], "abstract": [ "In this work, we propose to use attributes and parts for recognizing human actions in still images. We define action attributes as the verbs that describe the properties of human actions, while the parts of actions are objects and poselets that are closely related to the actions. We jointly model the attributes and parts by learning a set of sparse bases that are shown to carry much semantic meaning. Then, the attributes and parts of an action image can be reconstructed from sparse coefficients with respect to the learned bases. This dual sparsity provides theoretical guarantee of our bases learning and feature reconstruction approach. On the PASCAL action dataset and a new “Stanford 40 Actions” dataset, we show that our method extracts meaningful high-order interactions between attributes and parts in human actions while achieving state-of-the-art classification performance.", "Abstract This paper presents an overview of state-of-the-art methods in activity recognition using semantic features. Unlike low-level features, semantic features describe inherent characteristics of activities. Therefore, semantics make the recognition task more reliable especially when the same actions look visually different due to the variety of action executions. We define a semantic space including the most popular semantic features of an action namely the human body (pose and poselet), attributes, related objects, and scene context. We present methods exploiting these semantic features to recognize activities from still images and video data as well as four groups of activities: atomic actions, people interactions, human–object interactions, and group activities. Furthermore, we provide potential applications of semantic approaches along with directions for future research.", "We introduce a weakly supervised approach for learning human actions modeled as interactions between humans and objects. Our approach is human-centric: We first localize a human in the image and then determine the object relevant for the action and its spatial relation with the human. The model is learned automatically from a set of still images annotated only with the action label. Our approach relies on a human detector to initialize the model learning. For robustness to various degrees of visibility, we build a detector that learns to combine a set of existing part detectors. Starting from humans detected in a set of images depicting the action, our approach determines the action object and its spatial relation to the human. Its final output is a probabilistic model of the human-object interaction, i.e., the spatial relation between the human and the object. We present an extensive experimental evaluation on the sports action data set from [1], the PASCAL Action 2010 data set [2], and a new human-object interaction data set.", "We present a distributed representation of pose and appearance of people called the “poselet activation vector”. First we show that this representation can be used to estimate the pose of people defined by the 3D orientations of the head and torso in the challenging PASCAL VOC 2010 person detection dataset. Our method is robust to clutter, aspect and viewpoint variation and works even when body parts like faces and limbs are occluded or hard to localize. We combine this representation with other sources of information like interaction with objects and other people in the image and use it for action recognition. We report competitive results on the PASCAL VOC 2010 static image action classification challenge." ] }
1612.04884
2580164915
Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.
Other approaches to action recognition employ BOW-based image representations. @cite_2 proposed the use of discriminative spatial saliency for action recognition, employing a max-margin classifier. A comprehensive evaluation of color descriptors and color-shape fusion approaches for action recognition was performed by @cite_6. @cite_21 proposed pose-normalized semantic pyramids employing pre-trained body part detectors. A comprehensive survey was performed by @cite_33, in which existing action recognition methods are categorized based on high-level cues and low-level features.
{ "cite_N": [ "@cite_21", "@cite_33", "@cite_6", "@cite_2" ], "mid": [ "2044291182", "1993991024", "2139132812", "2129933262" ], "abstract": [ "Person description is a challenging problem in computer vision. We investigated two major aspects of person description: 1) gender and 2) action recognition in still images. Most state-of-the-art approaches for gender and action recognition rely on the description of a single body part, such as face or full-body. However, relying on a single body part is suboptimal due to significant variations in scale, viewpoint, and pose in real-world images. This paper proposes a semantic pyramid approach for pose normalization. Our approach is fully automatic and based on combining information from full-body, upper-body, and face regions for gender and action recognition in still images. The proposed approach does not require any annotations for upper-body and face of a person. Instead, we rely on pretrained state-of-the-art upper-body and face detectors to automatically extract semantic information of a person. Given multiple bounding boxes from each body part detector, we then propose a simple method to select the best candidate bounding box, which is used for feature extraction. Finally, the extracted features from the full-body, upper-body, and face regions are combined into a single representation for classification. To validate the proposed approach for gender recognition, experiments are performed on three large data sets namely: 1) human attribute; 2) head-shoulder; and 3) proxemics. For action recognition, we perform experiments on four data sets most used for benchmarking action recognition in still images: 1) Sports; 2) Willow; 3) PASCAL VOC 2010; and 4) Stanford-40. Our experiments clearly demonstrate that the proposed approach, despite its simplicity, outperforms state-of-the-art methods for gender and action recognition.", "Abstract Recently still image-based human action recognition has become an active research topic in computer vision and pattern recognition. It focuses on identifying a person's action or behavior from a single image. Unlike the traditional action recognition approaches where videos or image sequences are used, a still image contains no temporal information for action characterization. Thus the prevailing spatiotemporal features for video-based action analysis are not appropriate for still image-based action recognition. It is more challenging to perform still image-based action recognition than the video-based one, given the limited source of information as well as the cluttered background for images collected from the Internet. On the other hand, a large number of still images exist over the Internet. Therefore it is demanding to develop robust and efficient methods for still image-based action recognition to understand the web images better for image retrieval or search. Based on the emerging research in recent years, it is time to review the existing approaches to still image-based action recognition and inspire more efforts to advance the field of research. We present a detailed overview of the state-of-the-art methods for still image-based action recognition, and categorize and describe various high-level cues and low-level features for action analysis in still images. All related databases are introduced with details. Finally, we give our views and thoughts for future research.", "In this article we investigate the problem of human action recognition in static images. By action recognition we intend a class of problems which includes both action classification and action detection (i.e. simultaneous localization and classification). Bag-of-words image representations yield promising results for action classification, and deformable part models perform very well object detection. The representations for action recognition typically use only shape cues and ignore color information. Inspired by the recent success of color in image classification and object detection, we investigate the potential of color for action classification and detection in static images. We perform a comprehensive evaluation of color descriptors and fusion approaches for action recognition. Experiments were conducted on the three datasets most used for benchmarking action recognition in still images: Willow, PASCAL VOC 2010 and Stanford-40. Our experiments demonstrate that incorporating color information considerably improves recognition performance, and that a descriptor based on color names outperforms pure color descriptors. Our experiments demonstrate that late fusion of color and shape information outperforms other approaches on action recognition. Finally, we show that the different color-shape fusion approaches result in complementary information and combining them yields state-of-the-art performance for action classification.", "In many visual classification tasks the spatial distribution of discriminative information is (i) non uniform e.g. person ‘reading’ can be distinguished from ‘taking a photo’ based on the area around the arms i.e. ignoring the legs and (ii) has intra class variations e.g. different readers may hold the books differently. Motivated by these observations, we propose to learn the discriminative spatial saliency of images while simultaneously learning a max margin classifier for a given visual classification task. Using the saliency maps to weight the corresponding visual features improves the discriminative power of the image representation. We treat the saliency maps as latent variables and allow them to adapt to the image content to maximize the classification score, while regularizing the change in the saliency maps. Our experimental results on three challenging datasets, for (i) human action classification, (ii) fine grained classification and (iii) scene classification, demonstrate the effectiveness and wide applicability of the method." ] }
1612.04884
2580164915
Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.
Recently, image representations based on deep features have achieved superior performance for action recognition. @cite_28 proposed mid-level image representations using pre-trained CNNs for image classification and action recognition. The work of @cite_12 proposed learning deep features jointly for action classification and detection. @cite_39 proposed regularized max pooling, extracting features at multiple deformable sub-windows. The aforementioned approaches employ deep features extracted from activations of the fully connected layers of deep CNNs. In contrast, we use dense local features from the convolutional layers of networks for image description.
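Treating convolutional activations as dense local features is straightforward to sketch (the random feature map stands in for a real network's conv-layer output):

```python
import numpy as np

def conv_map_to_descriptors(feature_map):
    """Turn a convolutional activation map of shape (C, H, W) into
    H*W local descriptors of dimension C, one per spatial position,
    ready for Fisher Vector or other BOW-style encoding."""
    C, H, W = feature_map.shape
    return feature_map.reshape(C, H * W).T   # (H*W, C)

fmap = np.random.default_rng(3).normal(size=(512, 7, 7))
descs = conv_map_to_descriptors(fmap)        # 49 descriptors of dim 512
```

Repeating this over several rescaled versions of the input yields the multi-scale descriptor sets that scale coding then pools by scale.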
{ "cite_N": [ "@cite_28", "@cite_12", "@cite_39" ], "mid": [ "2161381512", "259216465", "2167146493" ], "abstract": [ "Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The suc- cess of CNNs is attributed to their ability to learn rich mid- level image representations as opposed to hand-designed low-level features used in other image classification meth- ods. Learning CNNs, however, amounts to estimating mil- lions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be effi- ciently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred rep- resentation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.", "We present convolutional neural networks for the tasks of keypoint (pose) prediction and action classification of people in unconstrained images. Our approach involves training an R-CNN detector with loss functions depending on the task being tackled. We evaluate our method on the challenging PASCAL VOC dataset and compare it to previous leading approaches. Our method gives state-of-the-art results for keypoint and action prediction. 
Additionally, we introduce a new dataset for action detection, the task of simultaneously localizing people and classifying their actions, and present results using our approach.", "We propose Regularized Max Pooling (RMP) for image classification. RMP classifies an image (or an image region) by extracting feature vectors at multiple subwindows at multiple locations and scales. Unlike Spatial Pyramid Matching where the subwindows are defined purely based on geometric correspondence, RMP accounts for the deformation of discriminative parts. The amount of deformation and the discriminative ability for multiple parts are jointly learned during training. RMP outperforms the state-of-the-art performance by a wide margin on the challenging PASCAL VOC2012 dataset for human action recognition on still images." ] }
1612.04884
2580164915
Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.
The incorporation of scale information has been investigated in the context of action recognition in videos . The work of @cite_30 proposes to construct multiple dictionaries at different resolutions in a final video representation. The work of @cite_29 proposes multi-scale spatio-temporal concatenation of local features, resulting in a set of natural action structures. Neither of these methods considers relative scale coding. In addition, our approach builds on recent advancements in deep convolutional neural networks (CNNs) and the Fisher vector encoding scheme. We revisit the problem of incorporating scale information for popular CNN-based deep features. To the best of our knowledge, we are the first to investigate and propose scale-coded bag-of-deep-features representations applicable to both human attribute and action recognition in still images.
{ "cite_N": [ "@cite_30", "@cite_29" ], "mid": [ "2048931426", "2072477074" ], "abstract": [ "Human action recognition in video is important in many computer vision applications such as automated surveillance. Human actions can be compactly encoded using a sparse set of local spatio-temporal salient features at different scales. The existing bottom-up methods construct a single dictionary of action primitives from the joint features of all scales and hence, a single action representation. This representation cannot fully exploit the complementary characteristics of the motions across different scales. To address this problem, we introduce the concept of learning multiple dictionaries of action primitives at different resolutions and consequently, multiple scale-specific representations for a given video sample. Using a decoupled fusion of multiple representations, we improved the human classification accuracy of realistic benchmark databases by about 5 , compared with the state-of-the art methods.", "Human and many other animals can detect, recognize, and classify natural actions in a very short time. How this is achieved by the visual system and how to make machines understand natural actions have been the focus of neurobiological studies and computational modeling in the last several decades. A key issue is what spatial-temporal features should be encoded and what the characteristics of their occurrences are in natural actions. Current global encoding schemes depend heavily on segmenting while local encoding schemes lack descriptive power. Here, we propose natural action structures, i.e., multi-size, multi-scale, spatial-temporal concatenations of local features, as the basic features for representing natural actions. In this concept, any action is a spatial-temporal concatenation of a set of natural action structures, which convey a full range of information about natural actions. We took several steps to extract these structures. 
First, we sampled a large number of sequences of patches at multiple spatial-temporal scales. Second, we performed independent component analysis on the patch sequences and classified the independent components into clusters. Finally, we compiled a large set of natural action structures, with each corresponding to a unique combination of the clusters at the selected spatial-temporal scales. To classify human actions, we used a set of informative natural action structures as inputs to two widely used models. We found that the natural action structures obtained here achieved a significantly better recognition performance than low-level features and that the performance was better than or comparable to the best current models. We also found that the classification performance with natural action structures as features was slightly affected by changes of scale and artificially added noise. We concluded that the natural action structures proposed here can be used as the basic encoding units of actions and may hold the key to natural action understanding." ] }
1612.04884
2580164915
Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.
Human attribute recognition. Recognizing human attributes such as age, gender and clothing style is an active research problem with many real-world applications. State-of-the-art approaches employ part-based representations to counter the problem of pose normalization. @cite_3 proposed semantic part detection using poselets to construct pose-normalized representations. Their approach employs HOGs for part description. Later, @cite_7 extended the approach of @cite_3 by replacing the HOG features with CNNs. @cite_21 proposed pre-trained body part detectors to automatically construct pose-normalized semantic pyramid representations.
{ "cite_N": [ "@cite_21", "@cite_7", "@cite_3" ], "mid": [ "2044291182", "2147414309", "2128560777" ], "abstract": [ "Person description is a challenging problem in computer vision. We investigated two major aspects of person description: 1) gender and 2) action recognition in still images. Most state-of-the-art approaches for gender and action recognition rely on the description of a single body part, such as face or full-body. However, relying on a single body part is suboptimal due to significant variations in scale, viewpoint, and pose in real-world images. This paper proposes a semantic pyramid approach for pose normalization. Our approach is fully automatic and based on combining information from full-body, upper-body, and face regions for gender and action recognition in still images. The proposed approach does not require any annotations for upper-body and face of a person. Instead, we rely on pretrained state-of-the-art upper-body and face detectors to automatically extract semantic information of a person. Given multiple bounding boxes from each body part detector, we then propose a simple method to select the best candidate bounding box, which is used for feature extraction. Finally, the extracted features from the full-body, upper-body, and face regions are combined into a single representation for classification. To validate the proposed approach for gender recognition, experiments are performed on three large data sets namely: 1) human attribute; 2) head-shoulder; and 3) proxemics. For action recognition, we perform experiments on four data sets most used for benchmarking action recognition in still images: 1) Sports; 2) Willow; 3) PASCAL VOC 2010; and 4) Stanford-40. 
Our experiments clearly demonstrate that the proposed approach, despite its simplicity, outperforms state-of-the-art methods for gender and action recognition.", "We propose a method for inferring human attributes (such as gender, hair style, clothes style, expression, action) from images of people under large variation of viewpoint, pose, appearance, articulation and occlusion. Convolutional Neural Nets (CNN) have been shown to perform very well on large scale object recognition problems. In the context of attribute classification, however, the signal is often subtle and it may cover only a small part of the image, while the image is dominated by the effects of pose and viewpoint. Discounting for pose variation would require training on very large labeled datasets which are not presently available. Part-based models, such as poselets [4] and DPM [12] have been shown to perform well for this problem but they are limited by shallow low-level features. We propose a new method which combines part-based models and deep learning by training pose-normalized CNNs. We show substantial improvement vs. state-of-the-art methods on challenging attribute classification tasks in unconstrained settings. Experiments confirm that our method outperforms both the best part-based methods on this problem and conventional CNNs trained on the full bounding box of the person.", "We propose a method for recognizing attributes, such as the gender, hair style and types of clothes of people under large variation in viewpoint, pose, articulation and occlusion typical of personal photo album images. Robust attribute classifiers under such conditions must be invariant to pose, but inferring the pose in itself is a challenging problem. We use a part-based approach based on poselets. Our parts implicitly decompose the aspect (the pose and viewpoint). We train attribute classifiers for each such aspect and we combine them together in a discriminative model. 
We propose a new dataset of 8000 people with annotated attributes. Our method performs very well on this dataset, significantly outperforming a baseline built on the spatial pyramid match kernel method. On gender recognition we outperform a commercial face recognition system." ] }
1612.04897
2585022212
A dynamic Boltzmann machine (DyBM) has been proposed as a model of a spiking neural network, and its learning rule of maximizing the log-likelihood of given time-series has been shown to exhibit key properties of spike-timing dependent plasticity (STDP), which had been postulated and experimentally confirmed in the field of neuroscience as a learning rule that refines the Hebbian rule. Here, we relax some of the constraints in the DyBM in a way that it becomes more suitable for computation and learning. We show that learning the DyBM can be considered as logistic regression for binary-valued time-series. We also show how the DyBM can learn real-valued data in the form of a Gaussian DyBM and discuss its relation to the vector autoregressive (VAR) model. The Gaussian DyBM extends the VAR by using additional explanatory variables, which correspond to the eligibility traces of the DyBM and capture long term dependency of the time-series. Numerical experiments show that the Gaussian DyBM significantly improves the predictive accuracy over VAR.
There has been a significant amount of prior work towards understanding STDP from the perspective of machine learning @cite_0 @cite_3 @cite_9 . For example, the work of @cite_0 shows that STDP can be understood as approximating the expectation maximization (EM) algorithm, studying a particularly structured (winner-take-all) network and its learning rule for maximizing the log likelihood of given static patterns. On the other hand, the DyBM and the Gaussian DyBM do not assume particular structures in the network, and the learning rule having the properties of STDP applies to any synapse in the network. Also, the learning rule of the DyBM and the Gaussian DyBM maximizes the log likelihood of given time-series and does not involve approximations beyond what is assumed in stochastic gradient methods.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_3" ], "mid": [ "2003400208", "2527798464", "2308200533" ], "abstract": [ "The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore it suggests networks of Bayesian computation modules as a new model for distributed information processing in the cortex.", "We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). 
Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point, or stationary distribution) towards a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged towards their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal 'back-propagated' during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. 
We also show experimentally that multi-layer recurrently connected networks with 1, 2 and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task.", "We introduce a weight update formula that is expressed only in terms of firing rates and their derivatives and that results in changes consistent with those associated with spike-timing dependent plasticity (STDP) rules and biological observations, even though the explicit timing of spikes is not needed. The new rule changes a synaptic weight in proportion to the product of the presynaptic firing rate and the temporal rate of change of activity on the postsynaptic side. These quantities are interesting for studying theoretical explanation for synaptic changes from a machine learning perspective. In particular, if neural dynamics moved neural activity towards reducing some objective function, then this STDP rule would correspond to stochastic gradient descent on that objective function." ] }
1612.04944
2579252985
Maintaining an up-to-date global network view is of crucial importance for intelligent SDN applications that need to act autonomously. In this paper, we focus on two key factors that can affect the controllers' global network view and subsequently impact the application performance. Particularly we examine: (1) network state collection, and (2) network state distribution. First, we compare the impact of active and passive OpenFlow network state collection methods on an SDN load-balancing application running at the controller using key performance indicators that we define. We do this comparison through: (i) a simulation of a mathematical model we derive for the SDN load-balancer, and (ii) an evaluation of a load-balancing application running on top of single and distributed controllers. Further, we investigate the impact of network state collection on a state-distribution-aware load-balancing application. Finally, we study the impact of network scale on applications requiring an up-to-date global network view in the presence of the aforementioned key factors. Our results show that both the network state collection and network state distribution can have an impact on the SDN application performance. The more the information at the controllers becomes outdated, the higher the impact would be. Even with those applications that were designed to mitigate the issues of controller state distribution, their performance was affected by the network state collection. Lastly, the results suggest that the impact of network state collection on application performance becomes more apparent as the number of distributed controllers increases.
The authors of the FlowVisor @cite_19 project brought the concept of hypervisor-like network virtualization into SDN. FlowVisor is an OpenFlow proxy that sits between the control and data planes, acting as a network virtualization layer between the network devices and control applications. It allows multiple OpenFlow controllers to operate at the same time (one controller per slice) while ensuring that slices are isolated from one another.
{ "cite_N": [ "@cite_19" ], "mid": [ "2186961980" ], "abstract": [ "Network virtualization has long been a goal of of the network research community. With it, multiple isolated logical networks each with potentially different addressing and forwarding mechanisms can share the same physical infrastructure. Typically this is achieved by taking advantage of the flexibility of software (e.g. [20, 23]) or by duplicating components in (often specialized) hardware[19]. In this paper we present a new approach to switch virtualization in which the same hardware forwarding plane can be shared among multiple logical networks, each with distinct forwarding logic. We use this switch-level virtualization to build a research platform which allows multiple network experiments to run side-by-side with production traffic while still providing isolation and hardware forwarding speeds. We also show that this approach is compatible with commodity switching chipsets and does not require the use of programmable hardware such as FPGAs or network processors. We build and deploy this virtualization platform on our own production network and demonstrate its use in practice by running five experiments simultaneously within a campus network. Further, we quantify the overhead of our approach and evaluate the completeness of the isolation between virtual slices." ] }
1612.04944
2579252985
Maintaining an up-to-date global network view is of crucial importance for intelligent SDN applications that need to act autonomously. In this paper, we focus on two key factors that can affect the controllers' global network view and subsequently impact the application performance. Particularly we examine: (1) network state collection, and (2) network state distribution. First, we compare the impact of active and passive OpenFlow network state collection methods on an SDN load-balancing application running at the controller using key performance indicators that we define. We do this comparison through: (i) a simulation of a mathematical model we derive for the SDN load-balancer, and (ii) an evaluation of a load-balancing application running on top of single and distributed controllers. Further, we investigate the impact of network state collection on a state-distribution-aware load-balancing application. Finally, we study the impact of network scale on applications requiring an up-to-date global network view in the presence of the aforementioned key factors. Our results show that both the network state collection and network state distribution can have an impact on the SDN application performance. The more the information at the controllers becomes outdated, the higher the impact would be. Even with those applications that were designed to mitigate the issues of controller state distribution, their performance was affected by the network state collection. Lastly, the results suggest that the impact of network state collection on application performance becomes more apparent as the number of distributed controllers increases.
Onix @cite_16 provides an API for implementing applications that run on top of distributed SDN controllers. The authors of Onix realized that applications often have different requirements for the consistency of the network state they manage. Hence, Onix gives applications the flexibility to make their own trade-offs between consistency and scalability. This flexibility is achieved by providing control applications with two storage subsystems that employ different consistency models: (1) a strongly consistent transactional database (DB), and (2) an eventually consistent, memory-based, single-hop Distributed Hash Table (DHT).
{ "cite_N": [ "@cite_16" ], "mid": [ "2798915702" ], "abstract": [ "Computer networks lack a general control paradigm, as traditional networks do not provide any network-wide management abstractions. As a result, each new function (such as routing) must provide its own state distribution, element discovery, and failure recovery mechanisms. We believe this lack of a common control platform has significantly hindered the development of flexible, reliable and feature-rich network control planes. To address this, we present Onix, a platform on top of which a network control plane can be implemented as a distributed system. Control planes written within Onix operate on a global view of the network, and use basic state distribution primitives provided by the platform. Thus Onix provides a general API for control plane implementations, while allowing them to make their own trade-offs among consistency, durability, and scalability." ] }
1612.04557
2615529094
We consider polling models in the sense of Takagi (MIT Press, 1986). In our case, the feature of the server is that it may be forced to wait idly for new messages at an empty queue instead of switching to the next station. We propose four different wait-and-see strategies that govern these waiting periods. We assume Poisson arrivals for new messages and allow general service and switchover time distributions. The results are formulas for the mean average queueing delay and characterisations of the cases where the wait-and-see strategies yield a lower delay compared to the exhaustive strategy.
The authors of @cite_16 focus on a two-queue polling model with a timer at station @math , as in Strategy IV, where the timer may be random. They examine different configurations: either both stations are served exhaustively, or one station is controlled by the @math -limited protocol whereas the other station is served in an exhaustive fashion. The main results are the probability generating function of the queue lengths, expressions for pseudo-conservation laws, and the Laplace transform of the stationary waiting times.
{ "cite_N": [ "@cite_16" ], "mid": [ "2099066336" ], "abstract": [ "We consider two-queue polling models with the special feature that a timer mechanism is employed at Q1: whenever the server polls Q1 and finds it empty, it activates a timer and remains dormant, waiting for the first arrival. If such an arrival occurs before the timer expires, a busy period starts in accordance with Q1's service discipline. However, if the timer is shorter than the interarrival time to Q1, the server does not wait any more and switches back to Q2. We consider three configurations: (i) Q1 is controlled by the 1-limited protocol while Q2 is served exhaustively, (ii) Q1 employs the exhaustive regime while Q2 follows the 1-limited procedure, and (iii) both queues are served exhaustively. In all cases, we assume Poisson arrivals and allow general service and switchover time distributions. Our main results include the queue length distributions at polling instants, the waiting time distributions and the distribution of the total workload in the system." ] }
1612.04557
2615529094
We consider polling models in the sense of Takagi (MIT Press, 1986). In our case, the feature of the server is that it may be forced to wait idly for new messages at an empty queue instead of switching to the next station. We propose four different wait-and-see strategies that govern these waiting periods. We assume Poisson arrivals for new messages and allow general service and switchover time distributions. The results are formulas for the mean average queueing delay and characterisations of the cases where the wait-and-see strategies yield a lower delay compared to the exhaustive strategy.
Besides the main references @cite_14 and @cite_16 , further papers deal with service strategies which have in common that the server does not necessarily switch to the next station when the current queue is empty. Polling models with deterministic sojourn times and preemptive service are considered in @cite_15 , and with exponentially distributed sojourn times in @cite_0 . Similar to Strategy IV, in the setting of @cite_10 the server waits exactly for the first arriving message at an empty station. In @cite_1 @cite_2 @cite_5 , forced idle times are examined, where the server is not allowed to resume service immediately as soon as a new message arrives during these idle periods.
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_0", "@cite_2", "@cite_5", "@cite_15", "@cite_16", "@cite_10" ], "mid": [ "2105518356", "1976263021", "2121522469", "2414390159", "1974879209", "", "2099066336", "" ], "abstract": [ "", "Sarkar and Zangwill (1991) showed by numerical examples that reduction in setup times can, surprisingly, actually increase work in process in some cyclic production systems (that is, reduction in switchover times can increase waiting times in some polling models). We present, for polling models with exhaustive and gated service disciplines, some explicit formulas that provide additional insight and characterization of this anomaly. More specifically, we show that, for both of these models, there exist simple formulas that define for each queue a critical value z* of the mean total setup time z per cycle such that, if z < z*, then the expected waiting time at that queue will be minimized if the server is forced to idle for a constant length of time z*- z every cycle; also, for the symmetric polling model, we give a simple explicit formula for the expected waiting time and the critical value z* that minimizes it.", "This paper considers polling systems with an autonomous server that remains at a queue for an exponential amount of time before moving to a next queue incurring a generally distributed switch-over time. The server remains at a queue until the exponential visit time expires, also when the queue becomes empty. If the queue is not empty when the visit time expires, service is preempted upon server departure, and repeated when the server returns to the queue. The paper first presents a necessary and sufficient condition for stability, and subsequently analyzes the joint queue-length distribution via an embedded Markov chain approach. 
As the autonomous exponential visit times may seem to result in a system that closely resembles a system of independent queues, we explicitly investigate the approximation of our system via a system of independent vacation queues. This approximation is accurate for short visit times only.", "A recent interesting paper by Cooper, Niu, and Srinivasan [2] shows how for some cyclic production systems reducing setup times can surprisingly increase work in process. There the authors show how in these situations the introduction of forced idle time can be used to optimize system performance. Here we show that introducing forced idle time at different points during the production cycle can further improve performance in these situations and also in situations where the suggestions from [2] yield no improvement.",
We provide explicit procedures for finding the optimal variable idle time as a function of setup time when the latter follows any finite discrete distribution. We also show how to implement our policy and show that our approach can improve waiting time even when other currently known approaches cannot.", "", "We consider two-queue polling models with the special feature that a timer mechanism is employed at Q1: whenever the server polls Q1 and finds it empty, it activates a timer and remains dormant, waiting for the first arrival. If such an arrival occurs before the timer expires, a busy period starts in accordance with Q1's service discipline. However, if the timer is shorter than the interarrival time to Q1, the server does not wait any more and switches back to Q2. We consider three configurations: (i) Q1 is controlled by the 1-limited protocol while Q2 is served exhaustively, (ii) Q1 employs the exhaustive regime while Q2 follows the 1-limited procedure, and (iii) both queues are served exhaustively. In all cases, we assume Poisson arrivals and allow general service and switchover time distributions. Our main results include the queue length distributions at polling instants, the waiting time distributions and the distribution of the total workload in the system.", "" ] }
1612.04557
2615529094
We consider polling models in the sense of Takagi (MIT Press, 1986). In our case, the feature of the server is that it may be forced to wait idly for new messages at an empty queue instead of switching to the next station. We propose four different wait-and-see strategies that govern these waiting periods. We assume Poisson arrivals for new messages and allow general service and switchover time distributions. The results are formulas for the mean average queueing delay and characterisations of the cases where the wait-and-see strategies yield a lower delay compared to the exhaustive strategy.
Furthermore, there are several works that investigate polling models with time-limited service. There, messages are served at a station for a certain period of time or until the queue is empty, whichever occurs first. If there is still work at the station when the timer expires, the server either finishes all the present work, or completes only the service of the currently served message, or stops working immediately at this station and switches to the next station. We refer to @cite_11 @cite_3 @cite_12 @cite_6 @cite_13 for random time limits (in particular exponentially distributed timers). In @cite_4 and @cite_8, deterministic time limits are studied.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_6", "@cite_3", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2145630138", "73065067", "2099320354", "2140659572", "2403757330", "2056553365", "2110323780" ], "abstract": [ "Polling systems have long been the subject of study and are of particular interest in the analysis of high-speed communications networks. There are many options for the scheduling policies that can be used at each polling station (exhaustive, gated, customer limited, etc.). In addition, one can impose an upper bound on the total service time delivered to customers at a station per server visit. In the most common case the upper bound is a constant for each polling station, and the resulting system model is not Markovian even when service times and interarrival times are exponential. In the paper, a comprehensive solution is developed for the major scheduling policies with time limits for each polling station. The approach is based on studying the embedded Markov chain defined at the sequence of epochs when the server arrives at each polling station. The computation of transition probabilities for the embedded chain requires transient analysis of the Markov process describing the system evolution between epochs. Uniformization methods are used to develop efficient algorithms for the transition probabilities and for system performance measures. Example problems are solved using the techniques developed to illustrate the utility of the results. >", "In this paper, we consider a cyclic polling system in which arrivals are governed by the Markovian arrival process. Each queue is visited according to the exhaustive time-limited service discipline to coincide with IEEE 802.5 and 802.4 standards. Using the decomposition approach, each queue is analyzed as a single server queue with vacation. By exploiting the properties of the discrete time phase distribution we construct the vacation period from the visit period of the other queues in the polling system. 
Using an iterative procedure we were able to compute the queue length distribution and the average waiting time for polling systems with finite capacity (or infinite capacity) in all queues. Comparison of the mean waiting time with simulation results shows that the proposed models give reasonable results.", "We analyze an asymmetric cyclic-service system with multiple queues and nonpreemptive, time-limited service. The time limit for a server visit at each queue is exponentially distributed. Customer service times and changeover times have general distributions. Using discrete Fourier transforms, the queue-length and delay distributions are solved. >", "This thesis presents models for the performance analysis of a recent communication paradigm: mobile ad hoc networking. The objective of mobile ad hoc networking is to provide wireless connectivity between stations in a highly dynamic environment. These dynamics are driven by the mobility of stations and by breakdowns of stations, and may lead to temporary disconnectivity of parts of the network. Applications of this novel paradigm can be found in telecommunication services, but also in manufacturing systems, road-traffic control, animal monitoring and emergency networking. The performance of mobile ad hoc networks in terms of buffer occupancy and delay is quantified in this thesis by employing specific queueing models, viz., time-limited polling models. These polling models capture the uncontrollable characteristic of link availability in mobile ad hoc networks. Particularly, a novel, so-called pure exponential time-limited, service discipline is introduced in the context of polling systems. The highlighted performance characteristics for these polling systems include the stability, the queue lengths and the sojourn times of the customers. 
Stability conditions prescribe limits on the amount of traffic that can be sustained by the system, so that the establishment of these conditions is a fundamental keystone in the analysis of polling models. Moreover, both exact and approximate analysis is presented for the queue length and sojourn time in time-limited polling systems with a single server. These exact analytical techniques are extended to multi-server polling systems operating under the pure time-limited service discipline. Such polling systems with multiple servers may effectively reflect large communication networks with multiple simultaneously active links, while the systems with a single server represent performance models for small networks in which a single communication link can be active at a time.", "In this paper, we consider a two-queue polling model with a Timer and a Randomly-Timed Gated (RTG) mechanism. At queue Q1, we employ a Timer T(1): whenever the server polls queue Q1 and finds it empty, it activates a Timer. If a customer arrives before the Timer expires, a busy period starts in accordance with the exhaustive service discipline. However, if the Timer is shorter than the interarrival time to queue Q1, the server does not wait any more and switches back to queue Q2. At queue Q2, we operate an RTG mechanism T(2), that is, whenever the server reenters queue Q2, an exponential time T(2) is activated. If the server empties the queue before T(2), it immediately leaves for queue Q1. Otherwise, the server completes all the work accumulated up to time T(2) and leaves. Under the assumption of Poisson arrivals, general service and switchover time distributions, we obtain the probability generating function (PGF) of the queue lengths at polling instants and the mean cycle length and the Laplace-Stieltjes transform (LST) of the workload.", "Polling systems under the Randomly Timed Gated (RTG) regime are studied and analyzed, and various performance measures are derived.
The RTG protocol operates as follows: whenever the server enters a station, a Timer is activated. If the server empties the queue before the Timer's expiration, it moves on to the next node. Otherwise (i.e., if there is still work in the station when the Timer expires), the server obeys one of the following rules, each leading to a different model: (1) The server completes all the work accumulated up to the Timer's expiration and then moves on to the next node. (2) The server completes only the service of the job currently being served, and moves on. (3) The server stops working immediately and moves on. The RTG protocol defines a wide class of time-based control mechanisms: under rule (1) it spans an entire spectrum of regimes lying between the Gated and the Exhaustive, while under rule (2) it spans an entire spectrum between the 1-Limited and the Exhaustive protocols", "In this paper, we develop a general framework to analyze polling systems with either the autonomous-server or the time-limited service discipline. According to the autonomous-server discipline, the server continues servicing a queue for a certain period of time. According to the time-limited service discipline, the server continues servicing a queue for a certain period of time or until the queue becomes empty, whichever occurs first. We consider Poisson batch arrivals and phase-type service times. It is known that these disciplines do not satisfy the well-known branching property in polling systems. Therefore, hardly any exact results exist in the literature. Our strategy is to apply an iterative scheme that is based on relating in closed-form the joint queue-lengths at the beginning and the end of a server visit to a queue. These kernel relations are derived using the theory of absorbing Markov chains." ] }
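The time-limited discipline discussed in this record (serve until an exponential timer expires or the queue empties, whichever comes first, nonpreemptively) is easy to probe by simulation. The toy sketch below models a single polled queue only, with a fixed switchover period standing in for the server's time at the other stations; arrival rate, service time, and switchover length are arbitrary illustrative choices, not parameters from the cited papers.

```python
import random

def simulate(timer_rate, lam=0.2, service=1.0, switchover=2.0,
             horizon=50_000, seed=1):
    """Single polled queue under nonpreemptive time-limited service:
    the server works until an Exp(timer_rate) timer expires or the
    queue empties, then spends `switchover` time away (standing in
    for the visits to the other stations). Returns mean waiting time."""
    rng = random.Random(seed)
    t, next_arrival = 0.0, rng.expovariate(lam)
    queue, waits = [], []
    while t < horizon:
        while next_arrival <= t:          # admit arrivals up to now
            queue.append(next_arrival)
            next_arrival += rng.expovariate(lam)
        limit = t + rng.expovariate(timer_rate)
        while queue and t < limit:        # serve until timer or empty
            waits.append(t - queue.pop(0))
            t += service                  # nonpreemptive: job completes
            while next_arrival <= t:      # arrivals during the service
                queue.append(next_arrival)
                next_arrival += rng.expovariate(lam)
        t += switchover                   # server away from this queue
    return sum(waits) / len(waits)

# Long timers approximate the exhaustive discipline; very short timers
# force extra switchovers per served message and, here, raise the delay.
print("near-exhaustive mean wait:", round(simulate(timer_rate=0.01), 2))
print("short-timer mean wait:    ", round(simulate(timer_rate=5.0), 2))
```

With these parameters the near-exhaustive regime yields the lower mean wait, illustrating why the relative merit of timer-based disciplines versus exhaustive service is a nontrivial question in the works cited above.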
1612.04357
2952010110
In this paper, we propose a novel generative model named Stacked Generative Adversarial Networks (SGAN), which is trained to invert the hierarchical representations of a bottom-up discriminative network. Our model consists of a top-down stack of GANs, each learned to generate lower-level representations conditioned on higher-level representations. A representation discriminator is introduced at each feature hierarchy to encourage the representation manifold of the generator to align with that of the bottom-up discriminative network, leveraging the powerful discriminative representations to guide the generative model. In addition, we introduce a conditional loss that encourages the use of conditional information from the layer above, and a novel entropy loss that maximizes a variational lower bound on the conditional entropy of generator outputs. We first train each stack independently, and then train the whole model end-to-end. Unlike the original GAN that uses a single noise vector to represent all the variations, our SGAN decomposes variations into multiple levels and gradually resolves uncertainties in the top-down generative process. Based on visual inspection, Inception scores and visual Turing test, we demonstrate that SGAN is able to generate images of much higher quality than GANs without stacking.
Deep Generative Image Models. There has been a large body of work on generative image modeling with deep learning. Some early efforts include Restricted Boltzmann Machines @cite_31 and Deep Belief Networks @cite_50 . More recently, several successful paradigms of deep generative models have emerged, including the auto-regressive models @cite_43 @cite_38 @cite_52 @cite_9 @cite_51 @cite_57 , Variational Auto-encoders (VAEs) @cite_30 @cite_21 @cite_41 @cite_17 @cite_34 , and Generative Adversarial Networks (GANs) @cite_22 @cite_8 @cite_28 @cite_18 @cite_63 @cite_35 . Our work builds upon the GAN framework, which employs a generator that transforms a noise vector into an image and a discriminator that distinguishes between real and generated images.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_22", "@cite_41", "@cite_43", "@cite_38", "@cite_18", "@cite_8", "@cite_21", "@cite_52", "@cite_17", "@cite_28", "@cite_57", "@cite_50", "@cite_34", "@cite_9", "@cite_63", "@cite_31", "@cite_51" ], "mid": [ "", "2202109488", "2099471712", "1909320841", "2135181320", "2952838738", "", "", "2949416428", "2953250761", "2963567641", "", "2949595773", "2100495367", "1850742715", "2953318193", "2432004435", "2116064496", "2423557781" ], "abstract": [ "", "We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. 
There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent approximate posterior distributions, and that acts as a stochastic encoder of the data. We develop stochastic back-propagation -- rules for back-propagation through stochastic variables -- and use this to develop an algorithm that allows for joint optimisation of the parameters of both the generative and recognition model. We demonstrate on several real-world data sets that the model generates realistic samples, provides accurate imputations of missing data and is a useful tool for high-dimensional data visualisation.", "We describe a new approach for modeling the distribution of high-dimensional vectors of discrete variables. This model is inspired by the restricted Boltzmann machine (RBM), which has been shown to be a powerful model of such distributions. However, an RBM typically does not provide a tractable distribution estimator, since evaluating the probability it assigns to some given observation requires the computation of the so-called partition function, which itself is intractable for RBMs of even moderate size. Our model circumvents this difficulty by decomposing the joint distribution of observations into tractable conditional distributions and modeling each conditional using a non-linear function similar to a conditional of an RBM. Our model can also be interpreted as an autoencoder wired such that its output can be used to assign valid probabilities to observations.
We show that this new model outperforms other multivariate binary distribution estimators on several datasets and performs similarly to a large (but intractable) RBM.", "There has been a lot of recent interest in designing neural network models to estimate a distribution from a set of examples. We introduce a simple modification for autoencoder neural networks that yields powerful generative models. Our method masks the autoencoder's parameters to respect autoregressive constraints: each input is reconstructed only from previous inputs in a given ordering. Constrained this way, the autoencoder outputs can be interpreted as a set of conditional probabilities, and their product, the full joint probability. We can also train a single network that can decompose the joint probability in multiple different orderings. Our simple framework can be applied to multiple architectures, including deep ones. Vectorized implementations, such as on GPUs, are simple and fast. Experiments demonstrate that this approach is competitive with state-of-the-art tractable distribution estimators. At test time, the method is significantly faster and scales better than other autoregressive estimators.", "", "", "The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation from small labelled data sets to large unlabelled ones. Generative approaches have thus far been either inflexible, inefficient or non-scalable. 
We show that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.", "Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multi-dimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.", "This paper investigates a novel problem of generating images from visual attributes. We model the image as a composite of foreground and background and develop a layered generative model with disentangled latent variables that can be learned end-to-end using a variational auto-encoder. We experiment with natural images of faces and birds and demonstrate that the proposed models are capable of generating realistic and diverse samples with disentangled latent representations. We use a general energy minimization algorithm for posterior inference of latent variables given novel images. Therefore, the learned generative models show excellent quantitative and visual results in the tasks of attribute-conditioned image reconstruction and completion.", "", "We introduce a deep, generative autoencoder capable of learning hierarchies of distributed representations from data. 
Successive deep stochastic hidden layers are equipped with autoregressive connections, which enable the model to be sampled from quickly and exactly via ancestral sampling. We derive an efficient approximate parameter estimation method based on the minimum description length (MDL) principle, which can be seen as maximising a variational lower bound on the log-likelihood, with a feedforward neural network implementing approximate inference. We demonstrate state-of-the-art generative performance on a number of classic data sets: several UCI data sets, MNIST and Atari 2600 games.", "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.", "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.", "Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. 
We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.", "We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3 . We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.", "It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. 
This way of combining individual \"expert\" models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called \"contrastive divergence\" whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data.", "This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost." ] }
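The related-work passage above ends by summarizing the GAN framework: a generator maps noise to samples while a discriminator separates real from generated data. A minimal 1-D sketch of that adversarial game is given below; the Gaussian target, the affine generator G(z) = mu + sigma*z, the logistic discriminator, the hand-derived gradients, and all learning rates are illustrative assumptions, not the architecture of any cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Real data: a 1-D Gaussian the generator should learn to imitate.
def sample_real(n):
    return rng.normal(4.0, 1.5, n)

# Generator G(z) = mu + sigma_g * z ; Discriminator D(x) = sigmoid(a*x + b).
mu, sigma_g, a, b, lr = 0.0, 1.0, 0.1, 0.0, 0.05

for step in range(3000):
    z = rng.normal(size=64)
    x_real, x_fake = sample_real(64), mu + sigma_g * z

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(a * x_real + b), sigmoid(a * x_fake + b)
    a += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient descent on the non-saturating loss -log D(fake).
    d_fake = sigmoid(a * (mu + sigma_g * z) + b)
    dloss_dg = -(1 - d_fake) * a          # dL/dG(z), backprop by hand
    mu      -= lr * np.mean(dloss_dg)
    sigma_g -= lr * np.mean(dloss_dg * z)

print(f"generator mean after training: {mu:.2f} (data mean 4.0)")
```

Even this toy version shows the characteristic dynamic: the discriminator's slope tells the generator which direction moves its samples toward the data, and the generator's mean settles near the data mean.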
1612.04357
2952010110
In this paper, we propose a novel generative model named Stacked Generative Adversarial Networks (SGAN), which is trained to invert the hierarchical representations of a bottom-up discriminative network. Our model consists of a top-down stack of GANs, each learned to generate lower-level representations conditioned on higher-level representations. A representation discriminator is introduced at each feature hierarchy to encourage the representation manifold of the generator to align with that of the bottom-up discriminative network, leveraging the powerful discriminative representations to guide the generative model. In addition, we introduce a conditional loss that encourages the use of conditional information from the layer above, and a novel entropy loss that maximizes a variational lower bound on the conditional entropy of generator outputs. We first train each stack independently, and then train the whole model end-to-end. Unlike the original GAN that uses a single noise vector to represent all the variations, our SGAN decomposes variations into multiple levels and gradually resolves uncertainties in the top-down generative process. Based on visual inspection, Inception scores and visual Turing test, we demonstrate that SGAN is able to generate images of much higher quality than GANs without stacking.
However, due to the vast variations in image content, it is still challenging for GANs to generate diverse images with sufficient details. To this end, several works have attempted to factorize a GAN into a series of GANs, decomposing the difficult task into several more tractable sub-tasks. Denton et al. @cite_6 propose a LAPGAN model that factorizes the generative process into multi-resolution GANs, with each GAN generating a higher-resolution residual conditioned on a lower-resolution image. Although both LAPGAN and SGAN consist of a sequence of GANs each working at one scale, LAPGAN focuses on generating from coarse to fine while our SGAN aims at modeling from abstract to specific. Wang and Gupta @cite_42 propose a @math -GAN, using one GAN to generate surface normals and another GAN to generate images conditioned on surface normals. Surface normals can be viewed as a specific type of image representation, capturing the underlying 3D structure of an indoor scene. On the other hand, our framework can leverage the more general and powerful multi-level representations in a pre-trained discriminative DNN.
{ "cite_N": [ "@cite_42", "@cite_6" ], "mid": [ "2298992465", "2951523806" ], "abstract": [ "Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network ( ( S ^2 )-GAN). Our ( S ^2 )-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our ( S ^2 )-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations.", "In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (). Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40 of the time, compared to 10 for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset." ] }
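The coarse-to-fine factorization behind LAPGAN rests on the Laplacian-pyramid identity: an image equals the upsampled coarse level plus a residual, so one GAN can model the coarse image and the next can model the residual. The sketch below illustrates only that decomposition, on a random 8x8 "image", using mean pooling and nearest-neighbour upsampling as simple stand-ins for the blur/subsample operators.

```python
import numpy as np

def down(img):                       # 2x2 mean pooling
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):                         # nearest-neighbour upsampling
    return img.repeat(2, axis=0).repeat(2, axis=1)

rng = np.random.default_rng(0)
image = rng.random((8, 8))

coarse   = down(image)               # what a low-resolution GAN would model
residual = image - up(coarse)        # what the next GAN generates,
                                     # conditioned on the coarse level

reconstructed = up(coarse) + residual
print(np.allclose(reconstructed, image))   # exact by construction
```

Because the residual is defined as exactly what upsampling loses, the reconstruction is lossless; LAPGAN's generators only have to learn the two (easier) pieces rather than the full image at once.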
1612.04357
2952010110
In this paper, we propose a novel generative model named Stacked Generative Adversarial Networks (SGAN), which is trained to invert the hierarchical representations of a bottom-up discriminative network. Our model consists of a top-down stack of GANs, each learned to generate lower-level representations conditioned on higher-level representations. A representation discriminator is introduced at each feature hierarchy to encourage the representation manifold of the generator to align with that of the bottom-up discriminative network, leveraging the powerful discriminative representations to guide the generative model. In addition, we introduce a conditional loss that encourages the use of conditional information from the layer above, and a novel entropy loss that maximizes a variational lower bound on the conditional entropy of generator outputs. We first train each stack independently, and then train the whole model end-to-end. Unlike the original GAN that uses a single noise vector to represent all the variations, our SGAN decomposes variations into multiple levels and gradually resolves uncertainties in the top-down generative process. Based on visual inspection, Inception scores and visual Turing test, we demonstrate that SGAN is able to generate images of much higher quality than GANs without stacking.
There are several works that use a pre-trained discriminative model to aid the training of a generator. @cite_0 @cite_39 add a regularization term that encourages the reconstructed image to be similar to the original image in the feature space of a discriminative network. @cite_14 @cite_29 use an additional "style loss" based on Gram matrices of feature activations. Different from our method, all the works above only add loss terms to regularize the generator's output, without regularizing its intermediate representations.
{ "cite_N": [ "@cite_0", "@cite_29", "@cite_14", "@cite_39" ], "mid": [ "2269752429", "2950689937", "2952226636", "" ], "abstract": [ "We explore the question of whether the representations learned by classifiers can be used to enhance the quality of generative models. Our conjecture is that labels correspond to characteristics of natural data which are most salient to humans: identity in faces, objects in images, and utterances in speech. We propose to take advantage of this by using the representations from discriminative classifiers to augment the objective function corresponding to a generative model. In particular we enhance the objective function of the variational autoencoder, a popular generative model, with a discriminative regularization term. We show that enhancing the objective function in this way leads to samples that are clearer and have higher visual quality than the samples from the standard variational autoencoders.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. 
We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.", "" ] }
1612.04499
2585949068
Product Community Question Answering (PCQA) provides useful information about products and their features (aspects) that may not be well addressed by product descriptions and reviews. We observe that a product's compatibility issues with other products are frequently discussed in PCQA and such issues are more frequently addressed in accessories, i.e., via a yes no question "Does this mouse work with windows 10?". In this paper, we address the problem of extracting compatible and incompatible products from yes no questions in PCQA. This problem can naturally have a two-stage framework: first, we perform Complementary Entity (product) Recognition (CER) on yes no questions; second, we identify the polarities of yes no answers to assign the complementary entities a compatibility label (compatible, incompatible or unknown). We leverage an existing unsupervised method for the first stage and a 3-class classifier by combining a distant PU-learning method (learning from positive and unlabeled examples) together with a binary classifier for the second stage. The benefit of using distant PU-learning is that it can help to expand more implicit yes no answers without using any human annotated data. We conduct experiments on 4 products to show that the proposed method is effective.
The problem of Complementary Entity Recognition (CER) was first proposed by Xu et al. @cite_6 . However, our previous work focuses on product reviews and treats CER as a special kind of aspect extraction problem @cite_0 , where determining the polarity of compatibility reduces to a traditional sentiment classification problem. This paper focuses on yes/no QAs in PCQA, where determining the polarity of compatibility becomes a yes/no answer classification problem.
{ "cite_N": [ "@cite_0", "@cite_6" ], "mid": [ "2397482367", "2560298007" ], "abstract": [ "1. Introduction 2. The problem of sentiment analysis 3. Document sentiment classification 4. Sentence subjectivity and sentiment classification 5. Aspect sentiment classification 6. Aspect and entity extraction 7. Sentiment lexicon generation 8. Analysis of comparative opinions 9. Opinion summarization and search 10. Analysis of debates and comments 11. Mining intentions 12. Detecting fake or deceptive opinions 13. Quality of reviews.", "Product reviews contain a lot of useful information about product features and customer opinions. One important product feature is the complementary entity (products) that may potentially work together with the reviewed product. Knowing complementary entities of the reviewed product is very important because customers want to buy compatible products and avoid incompatible ones. In this paper, we address the problem of Complementary Entity Recognition (CER). Since no existing method can solve this problem, we first propose a novel unsupervised method to utilize syntactic dependency paths to recognize complementary entities. Then we expand category-level domain knowledge about complementary entities using only a few general seed verbs on a large amount of unlabeled reviews. The domain knowledge helps the unsupervised method to adapt to different products and greatly improves the precision of the CER task. The advantage of the proposed method is that it does not require any labeled data for training. We conducted experiments on 7 popular products with about 1200 reviews in total to demonstrate that the proposed approach is effective." ] }
1612.04499
2585949068
Product Community Question Answering (PCQA) provides useful information about products and their features (aspects) that may not be well addressed by product descriptions and reviews. We observe that a product's compatibility issues with other products are frequently discussed in PCQA and such issues are more frequently addressed in accessories, i.e., via a yes no question "Does this mouse work with windows 10?". In this paper, we address the problem of extracting compatible and incompatible products from yes no questions in PCQA. This problem can naturally have a two-stage framework: first, we perform Complementary Entity (product) Recognition (CER) on yes no questions; second, we identify the polarities of yes no answers to assign the complementary entities a compatibility label (compatible, incompatible or unknown). We leverage an existing unsupervised method for the first stage and a 3-class classifier by combining a distant PU-learning method (learning from positive and unlabeled examples) together with a binary classifier for the second stage. The benefit of using distant PU-learning is that it can help to expand more implicit yes no answers without using any human annotated data. We conduct experiments on 4 products to show that the proposed method is effective.
CER is closely related to entity recognition (e.g., the Named Entity Recognition (NER) problem @cite_8 @cite_4 ). The major differences are that many complementary entities are not named entities and that CER heavily relies on the context of an entity (e.g., "iPhone" in "I like my iPhone" is not a complementary entity). Complementary entities are also studied as a social network problem in recommender systems @cite_5 @cite_11 . We discussed the benefits of CER over the social network formulation in @cite_6 , so we omit that discussion here but keep a performance comparison in Section .
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_6", "@cite_5", "@cite_11" ], "mid": [ "2123512824", "", "2560298007", "1967959545", "2949988325" ], "abstract": [ "This paper proposes a Hidden Markov Model (HMM) and an HMM-based chunk tagger, from which a named entity (NE) recognition (NER) system is built to recognize and classify names, times and numerical quantities. Through the HMM, our system is able to apply and integrate four types of internal and external evidences: 1) simple deterministic internal feature of the words, such as capitalization and digitalization; 2) internal semantic feature of important triggers; 3) internal gazetteer feature; 4) external macro context feature. In this way, the NER problem can be resolved effectively. Evaluation of our system on MUC-6 and MUC-7 English NE tasks achieves F-measures of 96.6 and 94.1 respectively. It shows that the performance is significantly better than reported by any other machine-learning system. Moreover, the performance is even consistently better than those based on handcrafted rules.", "", "Product reviews contain a lot of useful information about product features and customer opinions. One important product feature is the complementary entity (products) that may potentially work together with the reviewed product. Knowing complementary entities of the reviewed product is very important because customers want to buy compatible products and avoid incompatible ones. In this paper, we address the problem of Complementary Entity Recognition (CER). Since no existing method can solve this problem, we first propose a novel unsupervised method to utilize syntactic dependency paths to recognize complementary entities. Then we expand category-level domain knowledge about complementary entities using only a few general seed verbs on a large amount of unlabeled reviews. The domain knowledge helps the unsupervised method to adapt to different products and greatly improves the precision of the CER task. 
The advantage of the proposed method is that it does not require any labeled data for training. We conducted experiments on 7 popular products with about 1200 reviews in total to demonstrate that the proposed approach is effective.", "In this paper, we introduce the method tagging substitute-complement attributes on miscellaneous recommending relations, and elaborate how this step contributes to electronic merchandising. There are already decades of works in building recommender systems. Steadily outperforming previous algorithms is difficult under the conventional framework. However, in real merchandising scenarios, we find describing the weight of recommendation simply as a scalar number is hardly expressive, which hinders the further progress of recommender systems. We study a large log of user browsing data, revealing the typical substitute complement relations among items that can further extend recommender systems in enriching the presentation and improving the practical quality. Finally, we provide an experimental analysis and sketch an online prototype to show that tagging attributes can grant more intelligence to recommender systems by differentiating recommended candidates to fit respective scenarios.", "In a modern recommender system, it is important to understand how products relate to each other. For example, while a user is looking for mobile phones, it might make sense to recommend other phones, but once they buy a phone, we might instead want to recommend batteries, cases, or chargers. These two types of recommendations are referred to as substitutes and complements: substitutes are products that can be purchased instead of each other, while complements are products that can be purchased in addition to each other. Here we develop a method to infer networks of substitutable and complementary products. We formulate this as a supervised link prediction task, where we learn the semantics of substitutes and complements from data associated with products. 
The primary source of data we use is the text of product reviews, though our method also makes use of features such as ratings, specifications, prices, and brands. Methodologically, we build topic models that are trained to automatically discover topics from text that are successful at predicting and explaining such relationships. Experimentally, we evaluate our system on the Amazon product catalog, a large dataset consisting of 9 million products, 237 million links, and 144 million reviews." ] }
1612.04499
2585949068
Product Community Question Answering (PCQA) provides useful information about products and their features (aspects) that may not be well addressed by product descriptions and reviews. We observe that a product's compatibility issues with other products are frequently discussed in PCQA and such issues are more frequently addressed in accessories, i.e., via a yes no question "Does this mouse work with windows 10?". In this paper, we address the problem of extracting compatible and incompatible products from yes no questions in PCQA. This problem can naturally have a two-stage framework: first, we perform Complementary Entity (product) Recognition (CER) on yes no questions; second, we identify the polarities of yes no answers to assign the complementary entities a compatibility label (compatible, incompatible or unknown). We leverage an existing unsupervised method for the first stage and a 3-class classifier by combining a distant PU-learning method (learning from positive and unlabeled examples) together with a binary classifier for the second stage. The benefit of using distant PU-learning is that it can help to expand more implicit yes no answers without using any human annotated data. We conduct experiments on 4 products to show that the proposed method is effective.
Community Question Answering (CQA) has been well studied in the literature @cite_15 @cite_2 @cite_9 @cite_3 . More specifically, Product Community Question Answering (PCQA) is studied in @cite_7 @cite_1 , both of which try to relate reviews to questions. @cite_7 takes questions from PCQA as queries and retrieves relevant reviews that can answer those queries. @cite_1 treats questions in PCQA as summaries of reviews that help customers identify relevant reviews.
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_1", "@cite_3", "@cite_2", "@cite_15" ], "mid": [ "2243869100", "2036226015", "2338709924", "2134406267", "2036127744", "2161152375" ], "abstract": [ "Online reviews are often our first port of call when considering products and purchases online. When evaluating a potential purchase, we may have a specific query in mind, e.g. will this baby seat fit in the overhead compartment of a 747?' or will I like this album if I liked Taylor Swift's 1989?'. To answer such questions we must either wade through huge volumes of consumer reviews hoping to find one that is relevant, or otherwise pose our question directly to the community via a Q A system. In this paper we hope to fuse these two paradigms: given a large volume of previously answered queries about products, we hope to automatically learn whether a review of a product is relevant to a given query. We formulate this as a machine learning problem using a mixture-of-experts-type framework---here each review is an expert' that gets to vote on the response to a particular query; simultaneously we learn a relevance function such that relevant' reviews are those that vote correctly. At test time this learned relevance function allows us to surface reviews that are relevant to new queries on-demand. We evaluate our system, Moqa, on a novel corpus of 1.4 million questions (and answers) and 13 million reviews. We show quantitatively that it is effective at addressing both binary and open-ended queries, and qualitatively that it surfaces reviews that human evaluators consider to be relevant.", "Community Question Answering (CQA) service provides a platform for increasing number of users to ask and answer for their own needs but unanswered questions still exist within a fixed period. To address this, the paper aims to route questions to the right answerers who have a top rank in accordance of their previous answering performance. 
In order to rank the answerers, we propose a framework called Question Routing (QR) which consists of four phases: (1) performance profiling, (2) expertise estimation, (3) availability estimation, and (4) answerer ranking. Applying the framework, we conduct experiments with Yahoo! Answers dataset and the results demonstrate that on average each of 1,713 testing questions obtains at least one answer if it is routed to the top 20 ranked answerers.", "Product reviews have become an important resource for customers before they make purchase decisions. However, the abundance of reviews makes it difficult for customers to digest them and make informed choices. In our study, we aim to help customers who want to quickly capture the main idea of a lengthy product review before they read the details. In contrast with existing work on review analysis and document summarization, we aim to retrieve a set of real-world user questions to summarize a review. In this way, users would know what questions a given review can address and they may further read the review only if they have similar questions about the product. Specifically, we design a two-stage approach which consists of question retrieval and question diversification. We first propose probabilistic retrieval models to locate candidate questions that are relevant to a review. We then design a set function to re-rank the questions with the goal of rewarding diversity in the final question set. The set function satisfies submodularity and monotonicity, which results in an efficient greedy algorithm of submodular optimization. Evaluation on product reviews from two categories shows that the proposed approach is effective for discovering meaningful questions that are representative for individual reviews.", "Question answering (Q&A) websites are now large repositories of valuable knowledge. 
While most Q&A sites were initially aimed at providing useful answers to the question asker, there has been a marked shift towards question answering as a community-driven knowledge creation process whose end product can be of enduring value to a broad audience. As part of this shift, specific expertise and deep knowledge of the subject at hand have become increasingly important, and many Q&A sites employ voting and reputation mechanisms as centerpieces of their design to help users identify the trustworthiness and accuracy of the content. To better understand this shift in focus from one-off answers to a group knowledge-creation process, we consider a question together with its entire set of corresponding answers as our fundamental unit of analysis, in contrast with the focus on individual question-answer pairs that characterized previous work. Our investigation considers the dynamics of the community activity that shapes the set of answers, both how answers and voters arrive over time and how this influences the eventual outcome. For example, we observe significant assortativity in the reputations of co-answerers, relationships between reputation and answer speed, and that the probability of an answer being chosen as the best one strongly depends on temporal characteristics of answer arrivals. We then show that our understanding of such properties is naturally applicable to predicting several important quantities, including the long-term value of the question and its answers, as well as whether a question requires a better answer. Finally, we discuss the implications of these results for the design of Q&A sites.", "Large general-purposed community question-answering sites are becoming popular as a new venue for generating knowledge and helping users in their information needs. In this paper we analyze the characteristics of knowledge generation and user participation behavior in the largest question-answering online community in South Korea, Naver Knowledge-iN. 
We collected and analyzed over 2.6 million question answer pairs from fifteen categories between 2002 and 2007, and have interviewed twenty six users to gain insights into their motivations,roles, usage and expertise. We find altruism, learning, and competency are frequent motivations for top answerers to participate, but that participation is often highly intermittent. Using a simple measure of user performance, we find that higher levels of participation correlate with better performance. We also observe that users are motivated in part through a point system to build a comprehensive knowledge database. These and other insights have significant implications for future knowledge generating online communities.", "Question answering communities such as Naver and Yahoo! Answers have emerged as popular, and often effective, means of information seeking on the web. By posting questions for other participants to answer, information seekers can obtain specific answers to their questions. Users of popular portals such as Yahoo! Answers already have submitted millions of questions and received hundreds of millions of answers from other participants. However, it may also take hours --and sometime days-- until a satisfactory answer is posted. In this paper we introduce the problem of predicting information seeker satisfaction in collaborative question answering communities, where we attempt to predict whether a question author will be satisfied with the answers submitted by the community participants. We present a general prediction model, and develop a variety of content, structure, and community-focused features for this task. Our experimental results, obtained from a largescale evaluation over thousands of real questions and user ratings, demonstrate the feasibility of modeling and predicting asker satisfaction. 
We complement our results with a thorough investigation of the interactions and information seeking patterns in question answering communities that correlate with information seeker satisfaction. Our models and predictions could be useful for a variety of applications such as user intent inference, answer ranking, interface design, and query suggestion and routing." ] }
1612.04499
2585949068
Product Community Question Answering (PCQA) provides useful information about products and their features (aspects) that may not be well addressed by product descriptions and reviews. We observe that a product's compatibility issues with other products are frequently discussed in PCQA and such issues are more frequently addressed in accessories, i.e., via a yes no question "Does this mouse work with windows 10?". In this paper, we address the problem of extracting compatible and incompatible products from yes no questions in PCQA. This problem can naturally have a two-stage framework: first, we perform Complementary Entity (product) Recognition (CER) on yes no questions; second, we identify the polarities of yes no answers to assign the complementary entities a compatibility label (compatible, incompatible or unknown). We leverage an existing unsupervised method for the first stage and a 3-class classifier by combining a distant PU-learning method (learning from positive and unlabeled examples) together with a binary classifier for the second stage. The benefit of using distant PU-learning is that it can help to expand more implicit yes no answers without using any human annotated data. We conduct experiments on 4 products to show that the proposed method is effective.
Extracting compatible/incompatible products from PCQA is very important. Based on our experience annotating PCQA, we notice that PCQA usually addresses compatibility issues that are not well covered by product descriptions. This is because the number of complementary products for a target product can be unlimited, so it is impractical to cover all of them. We also bring in the test dataset used in @cite_6 for comparison (Section ). We notice that PCQA addresses compatibility issues from a different perspective than product reviews. PCQA tends to be specific about compatibility issues, whereas reviews are free to describe general experiences (e.g., opinions on features/aspects). For example, customers tend to ask specific questions like "Will it work with Surface Pro 3?" rather than "Will it work with my tablet?", since the latter question is pointless; reviews, in contrast, are the typical datasets for opinion mining and aspect extraction @cite_0 . Also, it is common to see general complementary products like "It works with my tablet." in reviews, since reviewers do not need to specify which tablet they have.
{ "cite_N": [ "@cite_0", "@cite_6" ], "mid": [ "2397482367", "2560298007" ], "abstract": [ "1. Introduction 2. The problem of sentiment analysis 3. Document sentiment classification 4. Sentence subjectivity and sentiment classification 5. Aspect sentiment classification 6. Aspect and entity extraction 7. Sentiment lexicon generation 8. Analysis of comparative opinions 9. Opinion summarization and search 10. Analysis of debates and comments 11. Mining intentions 12. Detecting fake or deceptive opinions 13. Quality of reviews.", "Product reviews contain a lot of useful information about product features and customer opinions. One important product feature is the complementary entity (products) that may potentially work together with the reviewed product. Knowing complementary entities of the reviewed product is very important because customers want to buy compatible products and avoid incompatible ones. In this paper, we address the problem of Complementary Entity Recognition (CER). Since no existing method can solve this problem, we first propose a novel unsupervised method to utilize syntactic dependency paths to recognize complementary entities. Then we expand category-level domain knowledge about complementary entities using only a few general seed verbs on a large amount of unlabeled reviews. The domain knowledge helps the unsupervised method to adapt to different products and greatly improves the precision of the CER task. The advantage of the proposed method is that it does not require any labeled data for training. We conducted experiments on 7 popular products with about 1200 reviews in total to demonstrate that the proposed approach is effective." ] }
1612.04499
2585949068
Product Community Question Answering (PCQA) provides useful information about products and their features (aspects) that may not be well addressed by product descriptions and reviews. We observe that a product's compatibility issues with other products are frequently discussed in PCQA and such issues are more frequently addressed in accessories, i.e., via a yes no question "Does this mouse work with windows 10?". In this paper, we address the problem of extracting compatible and incompatible products from yes no questions in PCQA. This problem can naturally have a two-stage framework: first, we perform Complementary Entity (product) Recognition (CER) on yes no questions; second, we identify the polarities of yes no answers to assign the complementary entities a compatibility label (compatible, incompatible or unknown). We leverage an existing unsupervised method for the first stage and a 3-class classifier by combining a distant PU-learning method (learning from positive and unlabeled examples) together with a binary classifier for the second stage. The benefit of using distant PU-learning is that it can help to expand more implicit yes no answers without using any human annotated data. We conduct experiments on 4 products to show that the proposed method is effective.
Determining the polarity of a yes/no answer is closely related to the answer summarization subtask (subtask B) of SemEval-2015 Task 3 @cite_10 . The proposed problem differs from subtask B in that we indirectly use the polarity of an answer to classify a complementary entity, rather than directly summarizing the usefulness of an answer to a question. McAuley et al. @cite_7 classify the polarity of a PCQA answer by simply training an SVM on unigrams of labeled answers. From their predictions, we observe that they may only label explicit yes/no answers (e.g., answers beginning with "Yes" or "No") and leave many implicit answers (e.g., "I think it works." implies a yes answer) as unknown. Identifying more implicit yes or no answers is crucial to the proposed problem, since a complementary entity does not provide much information without its compatibility label (compatible, incompatible, or unknown).
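A minimal sketch (our own toy simplification, not the authors' or McAuley et al.'s classifier) of mapping answer polarity onto a compatibility label for an extracted complementary entity; it also illustrates the limitation discussed above, since the rule-based polarity step leaves implicit answers as unknown:

```python
def answer_polarity(answer):
    """Toy polarity rule: catch only explicit leading yes/no.
    A learned classifier (e.g., one trained with distant PU-learning)
    would be needed to catch implicit answers like "I think it works."."""
    head = answer.strip().lower()
    if head.startswith("yes"):
        return "yes"
    if head.startswith("no"):
        return "no"
    return "unknown"

def compatibility_label(entity, answer):
    """Assign a compatibility label to a complementary entity from a yes/no answer."""
    polarity = answer_polarity(answer)
    label = {"yes": "compatible", "no": "incompatible"}.get(polarity, "unknown")
    return (entity, label)
```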
{ "cite_N": [ "@cite_10", "@cite_7" ], "mid": [ "2252217313", "2243869100" ], "abstract": [ "Community Question Answering (cQA) provides new interesting research directions to the traditional Question Answering (QA) field, e.g., the exploitation of the interaction between users and the structure of related posts. In this context, we organized SemEval2015 Task 3 on Answer Selection in cQA, which included two subtasks: (a) classifying answers as good, bad, or potentially relevant with respect to the question, and (b) answering a YES NO question with yes, no, or unsure, based on the list of all answers. We set subtask A for Arabic and English on two relatively different cQA domains, i.e., the Qatar Living website for English, and a Quran-related website for Arabic. We used crowdsourcing on Amazon Mechanical Turk to label a large English training dataset, which we released to the research community. Thirteen teams participated in the challenge with a total of 61 submissions: 24 primary and 37 contrastive. The best systems achieved an official score (macro-averaged F1) of 57.19 and 63.7 for the English subtasks A and B, and 78.55 for the Arabic subtask A.", "Online reviews are often our first port of call when considering products and purchases online. When evaluating a potential purchase, we may have a specific query in mind, e.g. will this baby seat fit in the overhead compartment of a 747?' or will I like this album if I liked Taylor Swift's 1989?'. To answer such questions we must either wade through huge volumes of consumer reviews hoping to find one that is relevant, or otherwise pose our question directly to the community via a Q A system. In this paper we hope to fuse these two paradigms: given a large volume of previously answered queries about products, we hope to automatically learn whether a review of a product is relevant to a given query. 
We formulate this as a machine learning problem using a mixture-of-experts-type framework---here each review is an expert' that gets to vote on the response to a particular query; simultaneously we learn a relevance function such that relevant' reviews are those that vote correctly. At test time this learned relevance function allows us to surface reviews that are relevant to new queries on-demand. We evaluate our system, Moqa, on a novel corpus of 1.4 million questions (and answers) and 13 million reviews. We show quantitatively that it is effective at addressing both binary and open-ended queries, and qualitatively that it surfaces reviews that human evaluators consider to be relevant." ] }
1612.04520
2951352024
In this paper, we propose a novel single image action recognition algorithm which is based on the idea of semantic body part actions. Unlike existing bottom up methods, we argue that the human action is a combination of meaningful body part actions. In detail, we divide human body into five parts: head, torso, arms, hands and legs. And for each of the body parts, we define several semantic body part actions, e.g., hand holding, hand waving. These semantic body part actions are strongly related to the body actions, e.g., writing, and jogging. Based on the idea, we propose a deep neural network based system: first, body parts are localized by a Semi-FCN network. Second, for each body parts, a Part Action Res-Net is used to predict semantic body part actions. And finally, we use SVM to fuse the body part actions and predict the entire body action. Experiments on two dataset: PASCAL VOC 2012 and Stanford-40 report mAP improvement from the state-of-the-art by 3.8 and 2.6 respectively.
There are three main existing strategies for single image action recognition: context-based, part-based, and template-based approaches. For context-based approaches, cues from interactive objects are critical. Gkioxari et al. @cite_4 employ object proposals @cite_2 to find proper interactive objects. Zhang et al. @cite_13 propose a method that segments out the precise regions of underlying human–object interactions with minimal annotation effort.
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_2" ], "mid": [ "", "2950209802", "2088049833" ], "abstract": [ "", "There are multiple cues in an image which reveal what action a person is performing. For example, a jogger has a pose that is characteristic for jogging, but the scene (e.g. road, trail) and the presence of other joggers can be an additional source of information. In this work, we exploit the simple observation that actions are accompanied by contextual cues to build a strong action recognition system. We adapt RCNN to use more than one region for classification while still maintaining the ability to localize the action. We call our system R*CNN. The action-specific models and the feature maps are trained jointly, allowing for action specific representations to emerge. R*CNN achieves 90.2 mean AP on the PASAL VOC Action dataset, outperforming all other approaches in the field by a significant margin. Last, we show that R*CNN is not limited to action recognition. In particular, R*CNN can also be used to tackle fine-grained tasks such as attribute classification. We validate this claim by reporting state-of-the-art performance on the Berkeley Attributes of People dataset.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. 
The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html )." ] }
1612.04468
2585683604
Whereas CNNs have demonstrated immense progress in many vision problems, they suffer from a dependence on monumental amounts of labeled training data. On the other hand, dictionary learning does not scale to the size of problems that CNNs can handle, despite being very effective at low-level vision tasks such as denoising and inpainting. Recently, interest has grown in adapting dictionary learning methods for supervised tasks such as classification and inverse problems. We propose two new network layers that are based on dictionary learning: a sparse factorization layer and a convolutional sparse factorization layer, analogous to fully-connected and convolutional layers, respectively. Using our derivations, these layers can be dropped in to existing CNNs, trained together in an end-to-end fashion with back-propagation, and leverage semisupervision in ways classical CNNs cannot. We experimentally compare networks with these two new layers against a baseline CNN. Our results demonstrate that networks with either of the sparse factorization layers are able to outperform classical CNNs when supervised data are few. They also show performance improvements in certain tasks when compared to the CNN with no sparse factorization layers with the same exact number of parameters.
We first focus our discussion on methods seeking to bridge artificial neural networks and sparse representation modeling. Greedy deep dictionary learning @cite_35 involves a layered, generative architecture that encodes inputs through a series of sparse factorizations, trained layer by layer to reconstruct the input faithfully. However, it has no mechanism for end-to-end training and is unsupervised, which limits its usefulness for tasks such as classification. Sparse autoencoders @cite_7 @cite_11 do the same using linear operations and sparsifying transforms in place of sparse factorization. Although these models can be trained in an end-to-end manner, we are not aware of any work that incorporates them into supervised or semisupervised tasks. In contrast, the SF layer that we propose subsumes these capabilities: it can be trained end-to-end and can be embedded in supervised, semisupervised and unsupervised networks.
{ "cite_N": [ "@cite_35", "@cite_7", "@cite_11" ], "mid": [ "2254140767", "", "44815768" ], "abstract": [ "In this work we propose a new deep learning tool called deep dictionary learning. Multi-level dictionaries are learnt in a greedy fashion, one layer at a time. This requires solving a simple (shallow) dictionary learning problem, the solution to this is well known. We apply the proposed technique on some benchmark deep learning datasets. We compare our results with other deep learning tools like stacked autoencoder and deep belief network; and state of the art supervised dictionary learning tools like discriminative KSVD and label consistent KSVD. Our method yields better results than all.", "", "Restricted Boltzmann machines (RBMs) have been used as generative models of many different types of data. RBMs are usually trained using the contrastive divergence learning procedure. This requires a certain amount of practical experience to decide how to set the values of numerical meta-parameters. Over the last few years, the machine learning group at the University of Toronto has acquired considerable expertise at training RBMs and this guide is an attempt to share this expertise with other machine learning researchers." ] }
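The inner problem a sparse factorization layer must solve for each input — a LASSO-style sparse code over a fixed dictionary — can be sketched with the classical ISTA iteration. This is a generic numpy illustration, not the cited layer's actual forward/backward computation; all names are illustrative:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_sparse_code(D, x, lam=0.1, n_iter=100):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA.
    D: (m, k) dictionary; x: (m,) input signal."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L)
    return a
```

With an orthonormal dictionary the iteration reduces to a single soft-thresholding of the input, which makes the sparsifying effect of `lam` easy to see.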
1612.04468
2585683604
Whereas CNNs have demonstrated immense progress in many vision problems, they suffer from a dependence on monumental amounts of labeled training data. On the other hand, dictionary learning does not scale to the size of problems that CNNs can handle, despite being very effective at low-level vision tasks such as denoising and inpainting. Recently, interest has grown in adapting dictionary learning methods for supervised tasks such as classification and inverse problems. We propose two new network layers that are based on dictionary learning: a sparse factorization layer and a convolutional sparse factorization layer, analogous to fully-connected and convolutional layers, respectively. Using our derivations, these layers can be dropped in to existing CNNs, trained together in an end-to-end fashion with back-propagation, and leverage semisupervision in ways classical CNNs cannot. We experimentally compare networks with these two new layers against a baseline CNN. Our results demonstrate that networks with either of the sparse factorization layers are able to outperform classical CNNs when supervised data are few. They also show performance improvements in certain tasks when compared to the CNN with no sparse factorization layers with the same exact number of parameters.
Recent work in convolutional sparse coding @cite_18 @cite_5 strives to produce a sparse representation of an image by encoding the image spatially as a whole, instead of in isolated patches, which has been the standard way of incorporating sparse representations into image analysis. This approach is similar to our proposed convolutional sparse factorization layer: whereas they embed the sparse coding operation in the convolution itself, we are convolutional only in spirit, applying the sparse factorization over every patch in the image. Although this work is promising and could potentially be used to extend the ideas in our paper, we are not aware of a result indicating that these methods can be incorporated into artificial neural networks, as ours can.
{ "cite_N": [ "@cite_5", "@cite_18" ], "mid": [ "2551644333", "2169488311" ], "abstract": [ "Sparse coding has become an increasingly popular method in learning and vision for a variety of classification, reconstruction and coding tasks. The canonical approach intrinsically assumes independence between observations during learning. For many natural signals however, sparse coding is applied to sub-elements ( i.e. patches) of the signal, where such an assumption is invalid. Convolutional sparse coding explicitly models local interactions through the convolution operator, however the resulting optimization problem is considerably more complex than traditional sparse coding. In this paper, we draw upon ideas from signal processing and Augmented Lagrange Methods (ALMs) to produce a fast algorithm with globally optimal sub problems and super-linear convergence.", "We propose an unsupervised method for learning multi-stage hierarchies of sparse convolutional features. While sparse coding has become an increasingly popular method for learning visual features, it is most often trained at the patch level. Applying the resulting filters convolutionally results in highly redundant codes because overlapping patches are encoded in isolation. By training convolutionally over large image windows, our method reduces the redudancy between feature vectors at neighboring locations and improves the efficiency of the overall representation. In addition to a linear decoder that reconstructs the image from sparse features, our method trains an efficient feed-forward encoder that predicts quasi-sparse features from the input. While patch-based training rarely produces anything but oriented edge detectors, we show that convolutional training produces highly diverse filters, including center-surround filters, corner detectors, cross detectors, and oriented grating detectors. 
We show that using these filters in multistage convolutional network architecture improves performance on a number of visual recognition and detection tasks." ] }
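The patch-wise alternative contrasted with convolutional coding above can be made concrete: encoding "in isolated patches" means flattening every overlapping window and coding each row independently, which is precisely the redundancy convolutional training removes (neighboring rows share most of their pixels). A minimal sketch with illustrative names, not the cited implementation:

```python
import numpy as np

def extract_patches(img, k):
    """All overlapping k-by-k patches of a 2-D image, flattened to rows.
    Patch-wise sparse coding encodes each row independently; convolutional
    sparse coding instead encodes the whole image at once."""
    H, W = img.shape
    rows = []
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            rows.append(img[i:i + k, j:j + k].ravel())
    return np.stack(rows)
```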
1612.04468
2585683604
Whereas CNNs have demonstrated immense progress in many vision problems, they suffer from a dependence on monumental amounts of labeled training data. On the other hand, dictionary learning does not scale to the size of problems that CNNs can handle, despite being very effective at low-level vision tasks such as denoising and inpainting. Recently, interest has grown in adapting dictionary learning methods for supervised tasks such as classification and inverse problems. We propose two new network layers that are based on dictionary learning: a sparse factorization layer and a convolutional sparse factorization layer, analogous to fully-connected and convolutional layers, respectively. Using our derivations, these layers can be dropped in to existing CNNs, trained together in an end-to-end fashion with back-propagation, and leverage semisupervision in ways classical CNNs cannot. We experimentally compare networks with these two new layers against a baseline CNN. Our results demonstrate that networks with either of the sparse factorization layers are able to outperform classical CNNs when supervised data are few. They also show performance improvements in certain tasks when compared to the CNN with no sparse factorization layers with the same exact number of parameters.
Lastly, hierarchical and multi-layered sparse coding of images @cite_9 @cite_34 @cite_38 has been performed in both greedy and end-to-end fashions. However, these methods typically rely on heuristics and lack a principled calculus for embedding them within the artificial neural network framework, and hence are incompatible with traditional CNN components. In contrast, both of our new layers can be dropped into various artificial (and convolutional) neural networks and trained within the same architecture with back-propagation.
{ "cite_N": [ "@cite_38", "@cite_9", "@cite_34" ], "mid": [ "2155284717", "2020719522", "" ], "abstract": [ "Extracting good representations from images is essential for many computer vision tasks. In this paper, we propose hierarchical matching pursuit (HMP), which builds a feature hierarchy layer-by-layer using an efficient matching pursuit encoder. It includes three modules: batch (tree) orthogonal matching pursuit, spatial pyramid max pooling, and contrast normalization. We investigate the architecture of HMP, and show that all three components are critical for good performance. To speed up the orthogonal matching pursuit, we propose a batch tree orthogonal matching pursuit that is particularly suitable to encode a large number of observations that share the same large dictionary. HMP is scalable and can efficiently handle full-size images. In addition, HMP enables linear support vector machines (SVM) to match the performance of nonlinear SVM while being scalable to large datasets. We compare HMP with many state-of-the-art algorithms including convolutional deep belief networks, SIFT based single layer sparse coding, and kernel based feature learning. HMP consistently yields superior accuracy on three types of image classification problems: object recognition (Caltech-101), scene recognition (MIT-Scene), and static event recognition (UIUC-Sports).", "We present a method for learning image representations using a two-layer sparse coding scheme at the pixel level. The first layer encodes local patches of an image. After pooling within local regions, the first layer codes are then passed to the second layer, which jointly encodes signals from the region. Unlike traditional sparse coding methods that encode local patches independently, this approach accounts for high-order dependency among patterns in a local image neighborhood. 
We develop algorithms for data encoding and codebook learning, and show in experiments that the method leads to more invariant and discriminative image representations. The algorithm gives excellent results for hand-written digit recognition on MNIST and object recognition on the Caltech101 benchmark. This marks the first time that such accuracies have been achieved using automatically learned features from the pixel level, rather than using hand-designed descriptors.", "" ] }
1612.04468
2585683604
Whereas CNNs have demonstrated immense progress in many vision problems, they suffer from a dependence on monumental amounts of labeled training data. On the other hand, dictionary learning does not scale to the size of problems that CNNs can handle, despite being very effective at low-level vision tasks such as denoising and inpainting. Recently, interest has grown in adapting dictionary learning methods for supervised tasks such as classification and inverse problems. We propose two new network layers that are based on dictionary learning: a sparse factorization layer and a convolutional sparse factorization layer, analogous to fully-connected and convolutional layers, respectively. Using our derivations, these layers can be dropped in to existing CNNs, trained together in an end-to-end fashion with back-propagation, and leverage semisupervision in ways classical CNNs cannot. We experimentally compare networks with these two new layers against a baseline CNN. Our results demonstrate that networks with either of the sparse factorization layers are able to outperform classical CNNs when supervised data are few. They also show performance improvements in certain tasks when compared to the CNN with no sparse factorization layers with the same exact number of parameters.
We relate our work to past methods that directly induce sparsity in artificial neural networks. Various methods @cite_27 @cite_17 @cite_25 have indirectly induced sparsity in the parameters or activations by incorporating sparsity-inducing penalties into network loss functions. Techniques like Dropout @cite_41 and DropConnect @cite_10 artificially simulate activation sparsity during inference to prevent redundancy and improve performance. @cite_19 argue for the use of the rectified linear unit (ReLU), a sparsifying activation function, to improve performance in CNNs; it works primarily by limiting the sensitivity of the activations to small changes in the input, thereby forming a more robust and error-tolerant network. However, these methods either lack a theoretical basis, offer no means of controlling the activations' sparsity, or require that all nonzero activations be positive, unlike our work.
{ "cite_N": [ "@cite_41", "@cite_19", "@cite_27", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "2095705004", "", "2111719156", "4919037", "2108665656", "2004915807" ], "abstract": [ "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "", "This article proposes the use of a penalty function for pruning feedforward neural network by weight elimination. The penalty function proposed consists of two terms. The first term is to discourage the use of unnecessary connections, and the second term is to prevent the weights of the connections from taking excessively large values. Simple criteria for eliminating weights from the network are also given. The effectiveness of this penalty function is tested on three well-known problems: the contiguity problem, the parity problems, and the monks problems. 
The resulting pruned networks obtained for many of these problems have fewer connections than previously reported in the literature.", "We introduce DropConnect, a generalization of Dropout (, 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models.", "Unsupervised learning algorithms aim to discover the structure hidden in the data, and to learn representations that are more suitable as input to a supervised machine than the raw input. Many unsupervised methods are based on reconstructing the input from the representation, while constraining the representation to have certain desirable properties (e.g. low dimension, sparsity, etc). Others are based on approximating density by stochastically reconstructing the input from the representation. We describe a novel and efficient algorithm to learn sparse representations, and compare it theoretically and experimentally with a similar machine trained probabilistically, namely a Restricted Boltzmann Machine. We propose a simple criterion to compare and select different unsupervised machines based on the trade-off between the reconstruction error and the information content of the representation. We demonstrate this method by extracting features from a dataset of handwritten numerals, and from a dataset of natural image patches. 
We show that by stacking multiple levels of such machines and by training sequentially, high-order dependencies between the input observed variables can be captured.", "We consider supervised learning in the presence of very many irrelevant features, and study two different regularization methods for preventing overfitting. Focusing on logistic regression, we show that using L 1 regularization of the parameters, the sample complexity (i.e., the number of training examples required to learn \"well,\") grows only logarithmically in the number of irrelevant features. This logarithmic rate matches the best known bounds for feature selection, and indicates that L 1 regularized logistic regression can be effective even if there are exponentially many irrelevant features as there are training examples. We also give a lower-bound showing that any rotationally invariant algorithm---including logistic regression with L 2 regularization, SVMs, and neural networks trained by backpropagation---has a worst case sample complexity that grows at least linearly in the number of irrelevant features." ] }
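As a concrete illustration of how Dropout simulates activation sparsity, here is a minimal sketch of the standard inverted-dropout mask (generic textbook form, not tied to any cited implementation):

```python
import numpy as np

def dropout(x, p_drop, rng, training=True):
    """Inverted dropout: zero each activation with probability p_drop at
    train time and rescale the survivors, so no change is needed at test
    time (the expected activation is preserved)."""
    if not training or p_drop == 0.0:
        return x
    keep = rng.random(x.shape) >= p_drop   # Bernoulli keep mask
    return np.where(keep, x / (1.0 - p_drop), 0.0)
```

Roughly half the activations are zeroed at `p_drop=0.5`, while the mean activation stays close to its undropped value thanks to the `1/(1-p_drop)` rescaling.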
1612.04005
2565779225
This paper studies a wireless network consisting of multiple transmitter-receiver pairs sharing the same spectrum where interference is regarded as noise. Previously, the throughput region of such a network was characterized for either one time slot or an infinite time horizon. This work aims to close the gap by investigating the throughput region for transmissions over a finite time horizon. We derive an efficient algorithm to examine the achievability of any given rate in the finite-horizon throughput region and provide the rate-achieving policy. The computational efficiency of our algorithm comes from the use of A* search with a carefully chosen heuristic function and a tree pruning strategy. We also show that the celebrated max-weight algorithm which finds all achievable rates in the infinite-horizon throughput region fails to work for the finite-horizon throughput region.
Analyzing the throughput region under a given modulation and coding strategy is an important issue for studying network capacity from a network-layer perspective @cite_10 . Such studies commonly assume that interference in the network is treated as noise, so the capacity of each link is determined by its signal-to-interference-plus-noise ratio (SINR). In this work, we take the same network-layer approach and study the throughput region of a wireless network with multiple transmitter-receiver pairs. The key difference between point-to-point systems and multi-user networks is the consideration of multiple time slots. For point-to-point systems, knowing the achievable rate and the rate-achieving transmission policy in one time slot is sufficient to derive the achievable rates for any number of time slots. This is not the case for multi-user networks, where the throughput region over multiple time slots differs from that in a single time slot. In fact, the multi-slot throughput region is generally larger than the single-slot throughput region @cite_12 @cite_11 .
{ "cite_N": [ "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2070317877", "2138993731", "" ], "abstract": [ "We consider dynamic routing and power allocation for a wireless network with time-varying channels. The network consists of power constrained nodes that transmit over wireless links with adaptive transmission rates. Packets randomly enter the system at each node and wait in output queues to be transmitted through the network to their destinations. We establish the capacity region of all rate matrices ( spl lambda sub ij ) that the system can stably support-where spl lambda sub ij represents the rate of traffic originating at node i and destined for node j. A joint routing and power allocation policy is developed that stabilizes the system and provides bounded average delay guarantees whenever the input rates are within this capacity region. Such performance holds for general arrival and channel state processes, even if these processes are unknown to the network controller. We then apply this control algorithm to an ad hoc wireless network, where channel variations are due to user mobility. Centralized and decentralized implementations are compared, and the stability region of the decentralized algorithm is shown to contain that of the mobile relay strategy developed by Grossglauser and Tse (2002).", "Information flow in a telecommunication network is accomplished through the interaction of mechanisms at various design layers with the end goal of supporting the information exchange needs of the applications. In wireless networks in particular, the different layers interact in a nontrivial manner in order to support information transfer. In this text we will present abstract models that capture the cross-layer interaction from the physical to transport layer in wireless network architectures including cellular, ad-hoc and sensor networks as well as hybrid wireless-wireline. 
The model allows for arbitrary network topologies as well as traffic forwarding modes, including datagrams and virtual circuits. Furthermore the time varying nature of a wireless network, due either to fading channels or to changing connectivity due to mobility, is adequately captured in our model to allow for state dependent network control policies. Quantitative performance measures that capture the quality of service requirements in these systems depending on the supported applications are discussed, including throughput maximization, energy consumption minimization, rate utility function maximization as well as general performance functionals. Cross-layer control algorithms with optimal or suboptimal performance with respect to the above measures are presented and analyzed. A detailed exposition of the related analysis and design techniques is provided.", "" ] }
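The interference-as-noise link model can be made concrete: each link's rate is log2(1+SINR), and simple time-sharing already shows why the multi-slot region can strictly contain the single-slot region. A minimal sketch with illustrative gains and powers:

```python
import numpy as np

def sinr_rates(G, p, noise=1.0):
    """Per-link rates log2(1 + SINR) with interference treated as noise.
    G[i, j]: channel gain from transmitter j to receiver i; p[j]: power."""
    rx = G @ p                       # total received power at each receiver
    signal = np.diag(G) * p          # desired-link component
    interference = rx - signal       # everything else is noise
    return np.log2(1.0 + signal / (noise + interference))
```

With symmetric unit gains and strong cross-interference, transmitting in alternating slots (TDMA over two slots) gives each pair a higher average rate than transmitting simultaneously in every slot — a two-slot rate pair outside the one-slot region.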
1612.04005
2565779225
This paper studies a wireless network consisting of multiple transmitter-receiver pairs sharing the same spectrum where interference is regarded as noise. Previously, the throughput region of such a network was characterized for either one time slot or an infinite time horizon. This work aims to close the gap by investigating the throughput region for transmissions over a finite time horizon. We derive an efficient algorithm to examine the achievability of any given rate in the finite-horizon throughput region and provide the rate-achieving policy. The computational efficiency of our algorithm comes from the use of A* search with a carefully chosen heuristic function and a tree pruning strategy. We also show that the celebrated max-weight algorithm which finds all achievable rates in the infinite-horizon throughput region fails to work for the finite-horizon throughput region.
A number of studies have investigated the achievable rates in multi-user wireless networks over an infinite number of time slots. (In this paper, the term 'infinite horizon' refers to an infinite number of time slots and 'finite horizon' to a finite number of time slots.) The seminal work on the infinite-horizon throughput region was introduced in @cite_6 @cite_5 and further generalized in @cite_12 @cite_10 @cite_7 @cite_2 @cite_9 @cite_0 @cite_4 . These studies revealed the relationship between the exogenous data rate, i.e., the rate at which data arrives in the data queue of each transmitter, and the infinite-horizon throughput region formed by all rates achievable over an infinite number of time slots. If a given exogenous rate lies in the infinite-horizon throughput region, there exists a rate-achieving transmission policy that keeps the data queues stable. It has also been shown that the infinite-horizon throughput region is convex @cite_6 @cite_5 @cite_12 @cite_10 .
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_9", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "2003684435", "2123893388", "", "2105177639", "", "2137152139", "", "2070317877", "2138993731" ], "abstract": [ "In this paper, we propose a distributed cross-layer scheduling algorithm for wireless networks with single-hop transmissions that can guarantee finite buffer sizes and meet minimum utility requirements. The algorithm can achieve a utility arbitrarily close to the optimal value with a tradeoff in the buffer sizes. The finite buffer property is not only important from an implementation perspective, but, along with the algorithm, also yields superior delay performance. In addition, another extended algorithm is provided to help construct the upper bounds of per-flow average packet delays. A novel structure of Lyapunov function is employed to prove the utility optimality of the algorithm with the introduction of novel virtual queue structures. Unlike traditional back-pressure-based optimal algorithms, our proposed algorithm does not need centralized computation and achieves fully local implementation without global message passing. Compared to other recent throughput utility-optimal CSMA distributed algorithms, we illustrate through rigorous numerical and implementation results that our proposed algorithm achieves far better delay performance for comparable throughput utility levels.", "This tutorial paper overviews recent developments in optimization-based approaches for resource allocation problems in wireless systems. We begin by overviewing important results in the area of opportunistic (channel-aware) scheduling for cellular (single-hop) networks, where easily implementable myopic policies are shown to optimize system performance. We then describe key lessons learned and the main obstacles in extending the work to general resource allocation problems for multihop wireless networks. 
Towards this end, we show that a clean-slate optimization-based approach to the multihop resource allocation problem naturally results in a \"loosely coupled\" cross-layer solution. That is, the algorithms obtained map to different layers [transport, network, and medium access control physical (MAC PHY)] of the protocol stack, and are coupled through a limited amount of information being passed back and forth. It turns out that the optimal scheduling component at the MAC layer is very complex, and thus needs simpler (potentially imperfect) distributed solutions. We demonstrate how to use imperfect scheduling in the cross-layer framework and describe recently developed distributed algorithms along these lines. We conclude by describing a set of open research problems", "", "The stability of a queueing network with interdependent servers is considered. The dependency among the servers is described by the definition of their subsets that can be activated simultaneously. Multihop radio networks provide a motivation for the consideration of this system. The problem of scheduling the server activation under the constraints imposed by the dependency among servers is studied. The performance criterion of a scheduling policy is its throughput that is characterized by its stability region, that is, the set of vectors of arrival and service rates for which the system is stable. A policy is obtained which is optimal in the sense that its stability region is a superset of the stability region of every other scheduling policy, and this stability region is characterized. The behavior of the network is studied for arrival rates that lie outside the stability region. Implications of the results in certain types of concurrent database and parallel processing systems are discussed. >", "", "This text presents a modern theory of analysis, control, and optimization for dynamic networks. 
Mathematical techniques of Lyapunov drift and Lyapunov optimization are developed and shown to enable constrained optimization of time averages in general stochastic systems. The focus is on communication and queueing systems, including wireless networks with time-varying channels, mobility, and randomly arriving traffic. A simple drift-plus-penalty framework is used to optimize time averages such as throughput, throughput-utility, power, and distortion. Explicit performance-delay tradeoffs are provided to illustrate the cost of approaching optimality. This theory is also applicable to problems in operations research and economics, where energy-efficient and profit-maximizing decisions must be made without knowing the future. Topics in the text include the following: - Queue stability theory - Backpressure, max-weight, and virtual queue methods - Primal-dual methods for non-convex stochastic utility maximization - Universal scheduling theory for arbitrary sample paths - Approximate and randomized scheduling theory - Optimization of renewal systems and Markov decision systems Detailed examples and numerous problem set questions are provided to reinforce the main concepts. Table of Contents: Introduction Introduction to Queues Dynamic Scheduling Example Optimizing Time Averages Optimizing Functions of Time Averages Approximate Scheduling Optimization of Renewal Systems Conclusions", "", "We consider dynamic routing and power allocation for a wireless network with time-varying channels. The network consists of power constrained nodes that transmit over wireless links with adaptive transmission rates. Packets randomly enter the system at each node and wait in output queues to be transmitted through the network to their destinations. We establish the capacity region of all rate matrices ( spl lambda sub ij ) that the system can stably support-where spl lambda sub ij represents the rate of traffic originating at node i and destined for node j. 
A joint routing and power allocation policy is developed that stabilizes the system and provides bounded average delay guarantees whenever the input rates are within this capacity region. Such performance holds for general arrival and channel state processes, even if these processes are unknown to the network controller. We then apply this control algorithm to an ad hoc wireless network, where channel variations are due to user mobility. Centralized and decentralized implementations are compared, and the stability region of the decentralized algorithm is shown to contain that of the mobile relay strategy developed by Grossglauser and Tse (2002).", "Information flow in a telecommunication network is accomplished through the interaction of mechanisms at various design layers with the end goal of supporting the information exchange needs of the applications. In wireless networks in particular, the different layers interact in a nontrivial manner in order to support information transfer. In this text we will present abstract models that capture the cross-layer interaction from the physical to transport layer in wireless network architectures including cellular, ad-hoc and sensor networks as well as hybrid wireless-wireline. The model allows for arbitrary network topologies as well as traffic forwarding modes, including datagrams and virtual circuits. Furthermore the time varying nature of a wireless network, due either to fading channels or to changing connectivity due to mobility, is adequately captured in our model to allow for state dependent network control policies. Quantitative performance measures that capture the quality of service requirements in these systems depending on the supported applications are discussed, including throughput maximization, energy consumption minimization, rate utility function maximization as well as general performance functionals. 
Cross-layer control algorithms with optimal or suboptimal performance with respect to the above measures are presented and analyzed. A detailed exposition of the related analysis and design techniques is provided." ] }
1612.04005
2565779225
This paper studies a wireless network consisting of multiple transmitter-receiver pairs sharing the same spectrum where interference is regarded as noise. Previously, the throughput region of such a network was characterized for either one time slot or an infinite time horizon. This work aims to close the gap by investigating the throughput region for transmissions over a finite time horizon. We derive an efficient algorithm to examine the achievability of any given rate in the finite-horizon throughput region and provide the rate-achieving policy. The computational efficiency of our algorithm comes from the use of A* search with a carefully chosen heuristic function and a tree pruning strategy. We also show that the celebrated max-weight algorithm which finds all achievable rates in the infinite-horizon throughput region fails to work for the finite-horizon throughput region.
Despite the theoretical importance of the infinite-horizon throughput region result, it does not provide sufficient insights into the throughput region or rate-achieving policy over a finite horizon, i.e., a finite number of time slots. In wireless networks, the network traffic, channel condition and even network topology change with time @cite_12 . Transmission should always be designed for a finite time duration, i.e., a relatively small number of time slots, such that the network and channel information used in the design is not outdated when the actual transmission happens. In addition, achieving real-time quality of service (QoS) also requires design over a finite horizon instead of an infinite horizon. To the best of our knowledge, the finite-horizon throughput region of a multi-user wireless network has not yet been investigated.
{ "cite_N": [ "@cite_12" ], "mid": [ "2138993731" ], "abstract": [ "Information flow in a telecommunication network is accomplished through the interaction of mechanisms at various design layers with the end goal of supporting the information exchange needs of the applications. In wireless networks in particular, the different layers interact in a nontrivial manner in order to support information transfer. In this text we will present abstract models that capture the cross-layer interaction from the physical to transport layer in wireless network architectures including cellular, ad-hoc and sensor networks as well as hybrid wireless-wireline. The model allows for arbitrary network topologies as well as traffic forwarding modes, including datagrams and virtual circuits. Furthermore the time varying nature of a wireless network, due either to fading channels or to changing connectivity due to mobility, is adequately captured in our model to allow for state dependent network control policies. Quantitative performance measures that capture the quality of service requirements in these systems depending on the supported applications are discussed, including throughput maximization, energy consumption minimization, rate utility function maximization as well as general performance functionals. Cross-layer control algorithms with optimal or suboptimal performance with respect to the above measures are presented and analyzed. A detailed exposition of the related analysis and design techniques is provided." ] }
1612.03928
2949829435
Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcing it to mimic the attention maps of a powerful teacher network. To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures. Code and models for our experiments are available at this https URL
Visualizing attention maps in deep convolutional neural networks is an open problem. The simplest gradient-based approach is to compute the Jacobian of the network output w.r.t. the input, as for example in @cite_9; this yields attention visualizations that are not necessarily class-discriminative. Another approach, proposed by @cite_0, consists of attaching a network called "deconvnet" that shares weights with the original network and is used to project certain features onto the image plane. A number of methods were proposed to improve gradient-based attention as well, for example guided backpropagation @cite_1, which modifies how the gradient w.r.t. the previous layer's output is computed in certain layers. Attention maps obtained with guided backpropagation are non-class-discriminative too. Among existing methods for visualizing attention, we should also mention class activation maps (CAM) @cite_24, which are based on removing the top average-pooling layer and converting the linear classification layer into a convolutional layer, producing one attention map per class. Grad-CAM @cite_18 combines guided backpropagation with CAM, adding image-level detail to class-discriminative attention maps.
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_1", "@cite_0", "@cite_24" ], "mid": [ "", "2962851944", "2123045220", "2952186574", "2950328304" ], "abstract": [ "", "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].", "Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). 
To analyze the network we introduce a new variant of the \"deconvolution approach\" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them" ] }
1612.03770
2564752792
Neural machine learning methods, such as deep neural networks (DNN), have achieved remarkable success in a number of complex data processing tasks. These methods have arguably had their strongest impact on tasks such as image and audio processing - data processing domains in which humans have long held clear advantages over conventional algorithms. In contrast to biological neural systems, which are capable of learning continuously, deep artificial networks have a limited ability for incorporating new information in an already trained network. As a result, methods for continuous learning are potentially highly impactful in enabling the application of deep networks to dynamic data sets. Here, inspired by the process of adult neurogenesis in the hippocampus, we explore the potential for adding new neurons to deep layers of artificial neural networks in order to facilitate their acquisition of novel information while preserving previously trained data representations. Our results on the MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes lower and upper case letters and digits, demonstrate that neurogenesis is well suited for addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms.
An adaptive DNN architecture by Calandra et al. @cite_23 shows how deep learning can be applied to data unseen by a trained network. Their approach hinges on incrementally re-training deep belief networks (DBNs) whenever concept drift emerges in a monitored stream of data, and it operates within constant memory bounds. They utilize the generative capability of DBNs to provide training samples of previously learned classes. Class-conditional sampling from trained networks has biological inspiration @cite_16 @cite_11 @cite_29 @cite_24 as well as historical artificial neural network implementations @cite_15 @cite_4 @cite_7 @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_29", "@cite_24", "@cite_23", "@cite_15", "@cite_16", "@cite_11" ], "mid": [ "", "", "", "2086162146", "2098580305", "1552248124", "1993845689", "1974367810", "2119624849" ], "abstract": [ "", "", "", "The concept of ‘sleeping on a problem’ is familiar to most of us. But with myriad stages of sleep, forms of memory and processes of memory encoding and consolidation, sorting out how sleep contributes to memory has been anything but straightforward. Nevertheless, converging evidence, from the molecular to the phenomenological, leaves little doubt that offline memory reprocessing during sleep is an important component of how our memories are formed and ultimately shaped.", "In recent years, many new cortical areas have been identified in the macaque monkey. The number of identified connections between areas has increased even more dramatically. We report here on (1) a summary of the layout of cortical areas associated with vision and with other modalities, (2) a computerized database for storing and representing large amounts of information on connectivity patterns, and (3) the application of these data to the analysis of hierarchical organization of the cerebral cortex. Our analysis concentrates on the visual system, which includes 25 neocortical areas that are predominantly or exclusively visual in function, plus an additional 7 areas that we regard as visual-association areas on the basis of their extensive visual inputs. A total of 305 connections among these 32 visual and visual-association areas have been reported. This represents 31% of the possible number of pathways if each area were connected with all others. The actual degree of connectivity is likely to be closer to 40%. The great majority of pathways involve reciprocal connections between areas. 
There are also extensive connections with cortical areas outside the visual system proper, including the somatosensory cortex, as well as neocortical, transitional, and archicortical regions in the temporal and frontal lobes. In the somatosensory/motor system, there are 62 identified pathways linking 13 cortical areas, suggesting an overall connectivity of about 40%. Based on the laminar patterns of connections between areas, we propose a hierarchy of visual areas and of somatosensory/motor areas that is more comprehensive than those suggested in other recent studies. The current version of the visual hierarchy includes 10 levels of cortical processing. Altogether, it contains 14 levels if one includes the retina and lateral geniculate nucleus at the bottom as well as the entorhinal cortex and hippocampus at the top. Within this hierarchy, there are multiple, intertwined processing streams, which, at a low level, are related to the compartmental organization of areas V1 and V2 and, at a high level, are related to the distinction between processing centers in the temporal and parietal lobes. However, there are some pathways and relationships (about 10% of the total) whose descriptions do not fit cleanly into this hierarchical scheme for one reason or another. In most instances, though, it is unclear whether these represent genuine exceptions to a strict hierarchy rather than inaccuracies or uncertainties in the reported assignment.", "Deep learning has proven to be beneficial for complex tasks such as classifying images. However, this approach has been mostly applied to static datasets. The analysis of non-stationary (e.g., concept drift) streams of data involves specific issues connected with the temporal and changing nature of the data. In this paper, we propose a proof-of-concept method, called Adaptive Deep Belief Networks, of how deep learning can be generalized to learn online from changing streams of data. 
We do so by exploiting the generative properties of the model to incrementally re-train the Deep Belief Network whenever new data are collected. This approach eliminates the need to store past observations and, therefore, requires only constant memory consumption. Hence, our approach can be valuable for life-long learning from non-stationary data streams.", "An unsupervised learning algorithm for a multilayer network of stochastic neurons is described. Bottom-up \"recognition\" connections convert the input into representations in successive hidden layers, and top-down \"generative\" connections reconstruct the representation in one layer from the representation in the layer above. In the \"wake\" phase, neurons are driven by recognition connections, and generative connections are adapted to increase the probability that they would reconstruct the correct activity vector in the layer below. In the \"sleep\" phase, neurons are driven by generative connections, and recognition connections are adapted to increase the probability that they would produce the correct activity vector in the layer above.", "Replay is the sequential reactivation of hippocampal place cells that represent previously experienced behavioral trajectories. Although first studied during sleep, recent work suggests that replay occurs frequently in the awake state and could be a potential substrate for memory consolidation and retrieval.", "Abstract Human dreaming occurs during rapid eye movement (REM) sleep. To investigate the structure of neural activity during REM sleep, we simultaneously recorded the activity of multiple neurons in the rat hippocampus during both sleep and awake behavior. We show that temporally sequenced ensemble firing rate patterns reflecting tens of seconds to minutes of behavioral experience are reproduced during REM episodes at an equivalent timescale. 
Furthermore, within such REM episodes behavior-dependent modulation of the subcortically driven theta rhythm is also reproduced. These results demonstrate that long temporal sequences of patterned multineuronal activity suggestive of episodic memory traces are reactivated during REM sleep. Such reactivation may be important for memory processing and provides a basis for the electrophysiological examination of the content of dream states." ] }
1612.03770
2564752792
Neural machine learning methods, such as deep neural networks (DNN), have achieved remarkable success in a number of complex data processing tasks. These methods have arguably had their strongest impact on tasks such as image and audio processing - data processing domains in which humans have long held clear advantages over conventional algorithms. In contrast to biological neural systems, which are capable of learning continuously, deep artificial networks have a limited ability for incorporating new information in an already trained network. As a result, methods for continuous learning are potentially highly impactful in enabling the application of deep networks to dynamic data sets. Here, inspired by the process of adult neurogenesis in the hippocampus, we explore the potential for adding new neurons to deep layers of artificial neural networks in order to facilitate their acquisition of novel information while preserving previously trained data representations. Our results on the MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes lower and upper case letters and digits, demonstrate that neurogenesis is well suited for addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms.
Yosinski et al. @cite_2 evaluated transfer capability via high-level layer reuse in specific DNNs. Transferring learning in this way increased recipient network performance, and the closer the target task was to the base task, the better the transfer; transferring more specific layers, however, could actually cause performance degradation. Likewise, Kandaswamy et al. @cite_21 used layer transfer as a means to transfer capability in convolutional neural networks and stacked denoising autoencoders (AEs). Transferring capability in this way resulted in a reduction in overall computation time and lower classification errors. These papers use fixed-size DNNs, except for additional output nodes for new classes, and demonstrate that features in early layers are more general than features in later layers and thus more transferable to new classes.
{ "cite_N": [ "@cite_21", "@cite_2" ], "mid": [ "2096422372", "2949667497" ], "abstract": [ "Transfer Learning is a paradigm in machine learning to solve a target problem by reusing the learning with minor modifications from a different but related source problem. In this paper we propose a novel feature transference approach, especially when the source and the target problems are drawn from different distributions. We use deep neural networks to transfer either low or middle or higher-layer features for a machine trained in either unsupervised or supervised way. Applying this feature transference approach on Convolutional Neural Network and Stacked Denoising Autoencoder on four different datasets, we achieve lower classification error rate with significant reduction in computation time with lower-layer features trained in supervised way and higher-layer features trained in unsupervised way for classifying images of uppercase and lowercase letters dataset.", "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. 
In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset." ] }
1612.03777
2610937929
CNN-based optical flow estimation has attracted attention recently, mainly due to its impressively high frame rates. These networks perform well on synthetic datasets, but they are still far behind the classical methods in real-world videos. This is because there is no ground truth optical flow for training these networks on real data. In this paper, we boost CNN-based optical flow estimation in real scenes with the help of the freely available self-supervised task of next-frame prediction. To this end, we train the network in a hybrid way, providing it with a mixture of synthetic and real videos. With the help of a sample-variant multi-tasking architecture, the network is trained on different tasks depending on the availability of ground-truth. We also experiment with the prediction of "next-flow" instead of estimation of the current flow, which is intuitively closer to the task of next-frame prediction and yields favorable results. We demonstrate the improvement in optical flow estimation on the real-world KITTI benchmark. Additionally, we test the optical flow indirectly in an action classification scenario. As a side product of this work, we report significant improvements over state-of-the-art in the task of next-frame prediction.
The FlowNet by Dosovitskiy et al. @cite_20 was the first deep network trained end-to-end on optical flow estimation. It was followed by Teney et al. @cite_15 and Tran et al. @cite_14 . These supervised learning methods require training data with optical flow annotations. In @cite_20 and @cite_33 synthetic datasets were introduced to provide such data. Tran et al. @cite_14 applied an existing variational method to create pseudo-ground truth data.
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_33", "@cite_20" ], "mid": [ "2951845494", "2272842615", "2259424905", "2951309005" ], "abstract": [ "This paper shows how to extract dense optical flow from videos with a convolutional neural network (CNN). The proposed model constitutes a potential building block for deeper architectures to allow using motion without resorting to an external algorithm, for recognition in videos. We derive our network architecture from signal processing principles to provide desired invariances to image contrast, phase and texture. We constrain weights within the network to enforce strict rotation invariance and substantially reduce the number of parameters to learn. We demonstrate end-to-end training on only 8 sequences of the Middlebury dataset, orders of magnitude less than competing CNN-based motion estimation methods, and obtain comparable performance to classical methods on the Middlebury benchmark. Importantly, our method outputs a distributed representation of motion that allows representing multiple, transparent motions, and dynamic textures. Our contributions on network design and rotation invariance offer insights nonspecific to motion estimation.", "Over the last few years deep learning methods have emerged as one of the most prominent approaches for video analysis. However, so far their most successful applications have been in the area of video classification and detection, i.e., problems involving the prediction of a single class label or a handful of output variables per video. Furthermore, while deep networks are commonly recognized as the best models to use in these domains, there is a widespread perception that in order to yield successful results they often require time-consuming architecture search, manual tweaking of parameters and computationally intensive pre-processing or post-processing methods. 
In this paper we challenge these views by presenting a deep 3D convolutional architecture trained end to end to perform voxel-level prediction, i.e., to output a variable at every voxel of the video. Most importantly, we show that the same exact architecture can be used to achieve competitive results on three widely different voxel-prediction tasks: video semantic segmentation, optical flow estimation, and video coloring. The three networks learned on these problems are trained from raw video without any form of preprocessing and their outputs do not require post-processing to achieve outstanding performance. Thus, they offer an efficient alternative to traditional and much more computationally expensive methods in these video domains.", "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.", "Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. 
In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps." ] }
1612.03777
2610937929
CNN-based optical flow estimation has attracted attention recently, mainly due to its impressively high frame rates. These networks perform well on synthetic datasets, but they are still far behind the classical methods in real-world videos. This is because there is no ground truth optical flow for training these networks on real data. In this paper, we boost CNN-based optical flow estimation in real scenes with the help of the freely available self-supervised task of next-frame prediction. To this end, we train the network in a hybrid way, providing it with a mixture of synthetic and real videos. With the help of a sample-variant multi-tasking architecture, the network is trained on different tasks depending on the availability of ground-truth. We also experiment with the prediction of "next-flow" instead of estimation of the current flow, which is intuitively closer to the task of next-frame prediction and yields favorable results. We demonstrate the improvement in optical flow estimation on the real-world KITTI benchmark. Additionally, we test the optical flow indirectly in an action classification scenario. As a side product of this work, we report significant improvements over state-of-the-art in the task of next-frame prediction.
Instead, Ahmadi and Patras @cite_27 and Yu et al. @cite_8 formulated the task as an unsupervised learning problem. To this end, they used a cost function based on the classical color constancy assumption, as it is used in variational techniques.
{ "cite_N": [ "@cite_27", "@cite_8" ], "mid": [ "2275385910", "2951933753" ], "abstract": [ "Traditional methods for motion estimation estimate the motion field F between a pair of images as the one that minimizes a predesigned cost function. In this paper, we propose a direct method and train a Convolutional Neural Network (CNN) that when, at test time, is given a pair of images as input it produces a dense motion field F at its output layer. In the absence of large datasets with ground truth motion that would allow classical supervised training, we propose to train the network in an unsupervised manner. The proposed cost function that is optimized during training, is based on the classical optical flow constraint. The latter is differentiable with respect to the motion field and, therefore, allows backpropagation of the error to previous layers of the network. Our method is tested on both synthetic and real image sequences and performs similarly to the state-of-the-art methods.", "Recently, convolutional networks (convnets) have proven useful for predicting optical flow. Much of this success is predicated on the availability of large datasets that require expensive and involved data acquisition and laborious labeling. To bypass these challenges, we propose an unsupervised approach (i.e., without leveraging groundtruth flow) to train a convnet end-to-end for predicting optical flow between two images. We use a loss function that combines a data term that measures photometric constancy over time with a spatial term that models the expected variation of flow across the image. Together these losses form a proxy measure for losses based on the groundtruth flow. Empirically, we show that a strong convnet baseline trained with the proposed unsupervised approach outperforms the same network trained with supervision on the KITTI dataset." ] }
1612.03777
2610937929
CNN-based optical flow estimation has attracted attention recently, mainly due to its impressively high frame rates. These networks perform well on synthetic datasets, but they are still far behind the classical methods in real-world videos. This is because there is no ground truth optical flow for training these networks on real data. In this paper, we boost CNN-based optical flow estimation in real scenes with the help of the freely available self-supervised task of next-frame prediction. To this end, we train the network in a hybrid way, providing it with a mixture of synthetic and real videos. With the help of a sample-variant multi-tasking architecture, the network is trained on different tasks depending on the availability of ground-truth. We also experiment with the prediction of "next-flow" instead of estimation of the current flow, which is intuitively closer to the task of next-frame prediction and yields favorable results. We demonstrate the improvement in optical flow estimation on the real-world KITTI benchmark. Additionally, we test the optical flow indirectly in an action classification scenario. As a side product of this work, we report significant improvements over state-of-the-art in the task of next-frame prediction.
Video prediction has been very popular recently @cite_10 @cite_35 @cite_31 @cite_23 @cite_6 @cite_2 @cite_18 @cite_11 . Although some of these works focus on prediction as the main objective @cite_10 @cite_35 , most of them use it as an auxiliary task. Finn et al. @cite_31 proposed an action-conditioned video prediction model to facilitate unsupervised learning for physical interaction. Patraucean et al. @cite_11 learn optical flow by warping the current frame to the next one. Lotter et al. @cite_23 use prediction to learn representations for object recognition.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_6", "@cite_23", "@cite_2", "@cite_31", "@cite_10", "@cite_11" ], "mid": [ "2470475590", "2742479045", "", "2401640538", "2521071105", "2400532028", "2248556341", "2175030374" ], "abstract": [ "We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose a novel approach that models future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. Future frame synthesis is challenging, as it involves low- and high-level image and motion understanding. We propose a novel network structure, namely a Cross Convolutional Network to aid in synthesizing future frames; this network structure encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world videos. We also show that our model can be applied to tasks such as visual analogy-making, and present an analysis of the learned network representations.", "In this paper, we propose a generative model, Temporal Generative Adversarial Nets (TGAN), which can learn a semantic representation of unlabeled videos, and is capable of generating videos. Unlike existing Generative Adversarial Nets (GAN)-based methods that generate videos with a single generator consisting of 3D deconvolutional layers, our model exploits two different types of generators: a temporal generator and an image generator. The temporal generator takes a single latent variable as input and outputs a set of latent variables, each of which corresponds to an image frame in a video. The image generator transforms a set of such latent variables into a video. To deal with instability in training of GAN with such advanced networks, we adopt a recently proposed model, Wasserstein GAN, and propose a novel method to train it stably in an end-to-end manner. The experimental results demonstrate the effectiveness of our methods.", "", "While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning - leveraging unlabeled examples to learn about the structure of a domain - remains a difficult unsolved challenge. Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world. We describe a predictive neural network (\"PredNet\") architecture that is inspired by the concept of \"predictive coding\" from the neuroscience literature. These networks learn to predict future frames in a video sequence, with each layer in the network making local predictions and only forwarding deviations from those predictions to subsequent network layers. We show that these networks are able to robustly learn to predict the movement of synthetic (rendered) objects, and that in doing so, the networks learn internal representations that are useful for decoding latent object parameters (e.g. pose) that support object recognition with fewer training views. We also show that these networks can scale to complex natural image streams (car-mounted camera videos), capturing key aspects of both egocentric movement and the movement of objects in the visual scene, and the representation learned in this setting is useful for estimating the steering angle. Altogether, these results suggest that prediction represents a powerful framework for unsupervised learning, allowing for implicit learning of object and scene structure.", "We consider the problem of next frame prediction from video input. A recurrent convolutional neural network is trained to predict depth from monocular video input, which, along with the current video image and the camera trajectory, can then be used to compute the next frame. Unlike prior next-frame prediction approaches, we take advantage of the scene geometry and use the predicted depth for generating the next frame prediction. Our approach can produce rich next frame predictions which include depth information attached to each pixel. Another novel aspect of our approach is that it predicts depth from a sequence of images (e.g. in a video), rather than from a single still image. We evaluate the proposed approach on the KITTI dataset, a standard dataset for benchmarking tasks relevant to autonomous driving. The proposed method produces results which are visually and numerically superior to existing methods that directly predict the next frame. We show that the accuracy of depth prediction improves as more prior frames are considered.", "A core challenge for an agent learning to interact with the world is to predict how its actions affect objects in its environment. Many existing methods for learning the dynamics of physical interactions require labeled object information. However, to scale real-world interaction learning to a variety of scenes and objects, acquiring labeled data becomes increasingly impractical. To learn about physical object motion without labels, we develop an action-conditioned video prediction model that explicitly models pixel motion, by predicting a distribution over pixel motion from previous frames. Because our model explicitly predicts motion, it is partially invariant to object appearance, enabling it to generalize to previously unseen objects. To explore video prediction for real-world interactive agents, we also introduce a dataset of 59,000 robot interactions involving pushing motions, including a test set with novel objects. In this dataset, accurate prediction of videos conditioned on the robot's future actions amounts to learning a \"visual imagination\" of different futures based on different courses of action. Our experiments show that our proposed method produces more accurate video predictions both quantitatively and qualitatively, when compared to prior methods.", "Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectories. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset", "We describe a new spatio-temporal video autoencoder, based on a classic spatial image autoencoder and a novel nested temporal autoencoder. The temporal encoder is represented by a differentiable visual memory composed of convolutional long short-term memory (LSTM) cells that integrate changes over time. Here we target motion changes and use as temporal decoder a robust optical flow prediction module together with an image sampler serving as built-in feedback loop. The architecture is end-to-end differentiable. At each time step, the system receives as input a video frame, predicts the optical flow based on the current observation and the LSTM memory state as a dense transformation map, and applies it to the current frame to generate the next frame. By minimising the reconstruction error between the predicted next frame and the corresponding ground truth next frame, we train the whole system to extract features useful for motion estimation without any supervision effort. We present one direct application of the proposed framework in weakly-supervised semantic segmentation of videos through label propagation using optical flow." ] }
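Next-frame prediction quality in this line of work is usually scored against the true future frame, e.g. with PSNR. The sketch below (toy data and function names of my own, not from any cited paper) shows why a motion-aware prediction beats the naive copy-the-last-frame baseline on a translating scene.

```python
import numpy as np

def psnr(pred, target, peak=1.0):
    """Peak signal-to-noise ratio, a standard next-frame prediction score."""
    mse = float(np.mean((pred - target) ** 2))
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def frame(t, size=16):
    """Toy sequence: a bright square translating one pixel per time step."""
    f = np.zeros((size, size))
    f[4:8, 2 + t:6 + t] = 1.0
    return f

truth = frame(2)
copy_baseline = frame(1)                      # "predict" by repeating the last frame
motion_aware = np.roll(frame(1), 1, axis=1)   # shift by the (here known) motion
```

On this toy sequence the motion-aware prediction is exact, while the copy baseline pays a PSNR penalty at the leading and trailing edges of the moving square.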
1612.03777
2610937929
CNN-based optical flow estimation has attracted attention recently, mainly due to its impressively high frame rates. These networks perform well on synthetic datasets, but they are still far behind the classical methods in real-world videos. This is because there is no ground truth optical flow for training these networks on real data. In this paper, we boost CNN-based optical flow estimation in real scenes with the help of the freely available self-supervised task of next-frame prediction. To this end, we train the network in a hybrid way, providing it with a mixture of synthetic and real videos. With the help of a sample-variant multi-tasking architecture, the network is trained on different tasks depending on the availability of ground-truth. We also experiment with the prediction of "next-flow" instead of estimation of the current flow, which is intuitively closer to the task of next-frame prediction and yields favorable results. We demonstrate the improvement in optical flow estimation on the real-world KITTI benchmark. Additionally, we test the optical flow indirectly in an action classification scenario. As a side product of this work, we report significant improvements over state-of-the-art in the task of next-frame prediction.
The works by Pintea et al. @cite_30 , Walker et al. @cite_32 @cite_7 , Jayaraman et al. @cite_4 , and Vondrick et al. @cite_5 focus on motion prediction. Their predicted motion is conditioned on a single input frame. In contrast, we model future motion based on current motion and the scene content by making explicit use of two consecutive frames as input.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_7", "@cite_32", "@cite_5" ], "mid": [ "", "2346437620", "1930563420", "2952390294", "2951242004" ], "abstract": [ "", "Visual recognition systems mounted on autonomous moving agents face the challenge of unconstrained data, but simultaneously have the opportunity to improve their performance by moving to acquire new views of test data. In this work, we first show how a recurrent neural network-based system may be trained to perform end-to-end learning of motion policies suited for this \"active recognition\" setting. Further, we hypothesize that active vision requires an agent to have the capacity to reason about the effects of its motions on its view of the world. To verify this hypothesis, we attempt to induce this capacity in our active recognition pipeline, by simultaneously learning to forecast the effects of the agent's motions on its internal representation of the environment conditional on all past views. Results across two challenging datasets confirm both that our end-to-end system successfully learns meaningful policies for active category recognition, and that \"learning to look ahead\" further boosts recognition performance.", "Given a scene, what is going to move, and in what direction will it move? Such a question could be considered a non-semantic form of action prediction. In this work, we present a convolutional neural network (CNN) based approach for motion prediction. Given a static image, this CNN predicts the future motion of each and every pixel in the image in terms of optical flow. Our CNN model leverages the data in tens of thousands of realistic videos to train our model. Our method relies on absolutely no human labeling and is able to predict motion based on the context of the scene. Because our CNN model makes no assumptions about the underlying scene, it can predict future optical flow on a diverse set of scenarios. We outperform all previous approaches by large margins.", "In a given scene, humans can often easily predict a set of immediate future events that might happen. However, generalized pixel-level anticipation in computer vision systems is difficult because machine learning struggles with the ambiguity inherent in predicting the future. In this paper, we focus on predicting the dense trajectory of pixels in a scene, specifically what will move in the scene, where it will travel, and how it will deform over the course of one second. We propose a conditional variational autoencoder as a solution to this problem. In this framework, direct inference from the image shapes the distribution of possible trajectories, while latent variables encode any necessary information that is not available in the image. We show that our method is able to successfully predict events in a wide variety of scenes and can produce multiple different predictions when the future is ambiguous. Our algorithm is trained on thousands of diverse, realistic videos and requires absolutely no human labeling. In addition to non-semantic action prediction, we find that our method learns a representation that is applicable to semantic vision tasks.", "Anticipating actions and objects before they start or appear is a difficult problem in computer vision with several real-world applications. This task is challenging partly because it requires leveraging extensive knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently learning this knowledge is through readily available unlabeled video. We present a framework that capitalizes on temporal structure in unlabeled video to learn to anticipate human actions and objects. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. Visual representations are a promising prediction target because they encode images at a higher semantic level than pixels yet are automatic to compute. We then apply recognition algorithms on our predicted representation to anticipate objects and actions. We experimentally validate this idea on two datasets, anticipating actions one second in the future and objects five seconds in the future." ] }
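The contrast drawn in the related-work paragraph above (conditioning on two consecutive frames rather than a single one) can be illustrated with a brute-force translation estimate: two frames pin down the current motion, which can then be extrapolated under a constant-velocity assumption. All names here are hypothetical; this is only a sketch of the intuition, not any cited method.

```python
import numpy as np

def estimate_shift(f0, f1, max_shift=3):
    """Brute-force translation estimate between two frames (minimise SSD over shifts)."""
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = float(np.sum((np.roll(f0, (dy, dx), axis=(0, 1)) - f1) ** 2))
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

f0 = np.zeros((16, 16))
f0[5:9, 5:9] = 1.0
f1 = np.roll(f0, (0, 2), axis=(0, 1))          # scene moved two pixels right
dy, dx = estimate_shift(f0, f1)
f2_pred = np.roll(f1, (dy, dx), axis=(0, 1))   # constant-velocity extrapolation
```

From a single frame the shift is unrecoverable; from the pair it is exact, and applying it once more yields a future-frame (or "next-flow") prediction.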
1612.03362
2565909455
Understanding community structure in social media is critical due to its broad applications such as friend recommendation, link prediction and collaborative filtering. However, there is no widely accepted definition of community in the literature. Existing work uses structure-related metrics such as modularity and function-related metrics such as ground truth to measure the performance of community detection algorithms, while ignoring an important metric: the size of the community. [1] suggests that the size of a community with strong ties in social media should be limited to 150. As we discovered in this paper, the majority of the communities obtained by many popular community detection algorithms are either very small or very large. Communities that are too small have little practical value, and communities that are too large contain weak connections and are therefore not stable. In this paper, we compare various community detection algorithms considering the following metrics: size of the communities, coverage of the communities, extended modularity, triangle participation ratio, and user interest in the same community. We also propose a simple clique-based algorithm for community detection as a baseline for the comparison. Experimental results show that both our proposed algorithm and the well-accepted disjoint algorithm InfoMap perform well on all the metrics.
There exist many community detection algorithms in the literature. We can categorize them into disjoint algorithms and overlapping algorithms, based on whether the identified communities overlap or not. Infomap @cite_10 stands out as the most popular and widely used disjoint algorithm. The Infomap algorithm is based on random walks on networks combined with coding theory, with the intent of understanding how information flows within a network. Multilevel @cite_14 is a heuristic algorithm based on modularity optimization. Multilevel first assigns every node to a separate community, then selects a node and checks its neighboring nodes, grouping a neighbor with the selected node into a community if the grouping results in an increase in modularity. Newman's Leading Eigenvector @cite_2 moves the maximization process to the eigenspectrum, maximizing modularity by using a matrix known as the modularity matrix. Fast Greedy @cite_6 is based on modularity as well; it uses a greedy approach to optimize modularity. In the category of overlapping algorithms, the Clique Percolation Method @cite_9 is the most prominent; it merges two cliques into a community if they overlap by more than a threshold.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_6", "@cite_2", "@cite_10" ], "mid": [ "", "2120698362", "2089458547", "2015953751", "2164998314" ], "abstract": [ "", "In this paper we introduce a non-fuzzy measure which has been designed to rank the partitions of a network's nodes into overlapping communities. Such a measure can be useful for both quantifying clusters detected by various methods and during finding the overlapping community structure by optimization methods. The theoretical problem referring to the separation of overlapping modules is discussed, and an example for possible applications is given as well.", "Many networks display community structure---groups of vertices within which connections are dense but between which they are sparser---and sensitive computer algorithms have in recent years been developed for detecting this structure. These algorithms, however, are computationally demanding, which limits their application to small networks. Here we describe an algorithm which gives excellent results when tested on both computer-generated and real-world networks and is much faster, typically thousands of times faster, than previous algorithms. We give several example applications, including one to a collaboration network of more than 50 000 physicists.", "We consider the problem of detecting communities or modules in networks, groups of vertices with a higher-than-average density of edges connecting them. Previous work indicates that a robust approach to this problem is the maximization of the benefit function known as \"modularity\" over possible divisions of a network. Here we show that this maximization process can be written in terms of the eigenspectrum of a matrix we call the modularity matrix, which plays a role in community detection similar to that played by the graph Laplacian in graph partitioning calculations. This result leads us to a number of possible algorithms for detecting community structure, as well as several other results, including a spectral measure of bipartite structure in networks and a new centrality measure that identifies those vertices that occupy central positions within the communities to which they belong. The algorithms and measures proposed are illustrated with applications to a variety of real-world complex networks.", "To comprehend the multipartite organization of large-scale biological and social systems, we introduce an information theoretic approach that reveals community structure in weighted and directed networks. We use the probability flow of random walks on a network as a proxy for information flows in the real system and decompose the network into modules by compressing a description of the probability flow. The result is a map that both simplifies and highlights the regularities in the structure and their relationships. We illustrate the method by making a map of scientific communication as captured in the citation patterns of >6,000 journals. We discover a multicentric organization with fields that vary dramatically in size and degree of integration into the network of science. Along the backbone of the network—including physics, chemistry, molecular biology, and medicine—information flows bidirectionally, but the map reveals a directional pattern of citation from the applied fields to the basic sciences." ] }
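Several of the algorithms listed above (Multilevel, Fast Greedy, Leading Eigenvector) optimise the same quantity, Newman-Girvan modularity Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j). Below is a minimal numpy sketch of Q; `modularity` is my own helper, not taken from any of the cited implementations.

```python
import numpy as np

def modularity(adj, labels):
    """Newman-Girvan modularity of a community assignment on an undirected graph."""
    adj = np.asarray(adj, dtype=float)
    k = adj.sum(axis=1)                        # node degrees
    two_m = adj.sum()                          # 2m (each edge counted twice)
    same = np.equal.outer(labels, labels)      # delta(c_i, c_j)
    return float(((adj - np.outer(k, k) / two_m) * same).sum() / two_m)

# Two triangles joined by one bridge edge: the natural two-community split
# scores Q = 5/14, while putting every node in one community scores Q = 0.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
q_split = modularity(A, [0, 0, 0, 1, 1, 1])
q_one = modularity(A, [0, 0, 0, 0, 0, 0])
```

The trivial one-community partition always scores zero, which is why greedy and spectral maximisation of Q can discriminate between candidate splits.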
1612.03433
2494868292
Consensus formation is investigated for multi-agent systems in which agents' beliefs are both vague and uncertain. Vagueness is represented by a third truth state meaning borderline. This is combined with a probabilistic model of uncertainty. A belief combination operator is then proposed, which exploits borderline truth values to enable agents with conflicting beliefs to reach a compromise. A number of simulation experiments are carried out, in which agents apply this operator in pairwise interactions, under the bounded confidence restriction that the two agents' beliefs must be sufficiently consistent with each other before agreement can be reached. As well as studying the consensus operator in isolation, we also investigate scenarios in which agents are influenced either directly or indirectly by the state of the world. For the former, we conduct simulations that combine consensus formation with belief updating based on evidence. For the latter, we investigate the effect of assuming that the closer an agent's beliefs are to the truth the more visible they are in the consensus building process. In all cases, applying the consensus operators results in the population converging to a single shared belief that is both crisp and certain. Furthermore, simulations that combine consensus formation with evidential updating converge more quickly to a shared opinion, which is closer to the actual state of the world than those in which beliefs are only changed as a result of directly receiving new evidence. Finally, if agent interactions are guided by belief quality measured as similarity to the true state of the world, then applying the consensus operator alone results in the population converging to a high-quality shared belief.
One common feature of most of the studies mentioned above is that, either explicitly or implicitly, they interpret the third truth value as meaning 'uncertain' or 'unknown'. In contrast, as stated in section 1, we intend the middle truth value to refer to borderline cases resulting from the underlying vagueness of the language. So, for example, giving the proposition 'Ethel is short' the intermediate truth value means that Ethel's height is borderline short/not short, rather than meaning that Ethel's height is unknown. This approach allows us to distinguish between vagueness and uncertainty so that, for instance, based on their knowledge of Ethel's height an agent could be certain that she is borderline short. A more detailed analysis of the difference between these two possible interpretations of the third truth state is given in @cite_20 .
{ "cite_N": [ "@cite_20" ], "mid": [ "1989393769" ], "abstract": [ "In this paper we compare the expressive power of elementary representation formats for vague, incomplete or conflicting information. These include Boolean valuation pairs introduced by Lawry and Gonzalez-Rodriguez, orthopairs of sets of variables, Boolean possibility and necessity measures, three-valued valuations, supervaluations. We make explicit their connections with strong Kleene logic and with Belnap logic of conflicting information. The formal similarities between 3-valued approaches to vagueness and formalisms that handle incomplete information often lead to a confusion between degrees of truth and degrees of uncertainty. Yet there are important differences that appear at the interpretive level: while truth-functional logics of vagueness are accepted by a part of the scientific community (even if questioned by supervaluationists), the truth-functionality assumption of three-valued calculi for handling incomplete information looks questionable, compared to the non-truth-functional approaches based on Boolean possibility-necessity pairs. This paper aims to clarify the similarities and differences between the two situations. We also study to what extent operations for comparing and merging information items in the form of orthopairs can be expressed by means of operations on valuation pairs, three-valued valuations and underlying possibility distributions. We explore the connections between several representations of imperfect information.In each case we compare the expressive power of these formalisms.In each case we study how to express aggregation operations.We demonstrate the formal similarities among these approaches.We point out the differences in interpretations between these approaches." ] }
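The borderline reading of the middle value is easy to state operationally: with F = 0, B = 1/2, T = 1, the strong Kleene connectives discussed in the cited comparison are just min, max, and 1 − x, and a borderline proposition conjoined with its own negation stays borderline rather than becoming false. A minimal sketch (my own encoding of the standard strong Kleene tables):

```python
# Strong Kleene three-valued logic with F = 0.0, B = 0.5 (borderline), T = 1.0.
F, B, T = 0.0, 0.5, 1.0

def k_not(a):
    return 1.0 - a

def k_and(a, b):
    return min(a, b)

def k_or(a, b):
    return max(a, b)

# "Ethel is short" = B makes "Ethel is short and not short" = B as well:
# borderline cases weaken the law of non-contradiction without voiding it.
contradiction_at_borderline = k_and(B, k_not(B))
contradiction_at_true = k_and(T, k_not(T))
```

For classical inputs (F and T only) these connectives reduce to ordinary Boolean logic, which is what makes the middle value a conservative extension rather than a replacement.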
1612.03433
2494868292
Consensus formation is investigated for multi-agent systems in which agents' beliefs are both vague and uncertain. Vagueness is represented by a third truth state meaning borderline. This is combined with a probabilistic model of uncertainty. A belief combination operator is then proposed, which exploits borderline truth values to enable agents with conflicting beliefs to reach a compromise. A number of simulation experiments are carried out, in which agents apply this operator in pairwise interactions, under the bounded confidence restriction that the two agents' beliefs must be sufficiently consistent with each other before agreement can be reached. As well as studying the consensus operator in isolation, we also investigate scenarios in which agents are influenced either directly or indirectly by the state of the world. For the former, we conduct simulations that combine consensus formation with belief updating based on evidence. For the latter, we investigate the effect of assuming that the closer an agent's beliefs are to the truth the more visible they are in the consensus building process. In all cases, applying the consensus operators results in the population converging to a single shared belief that is both crisp and certain. Furthermore, simulations that combine consensus formation with evidential updating converge more quickly to a shared opinion, which is closer to the actual state of the world than those in which beliefs are only changed as a result of directly receiving new evidence. Finally, if agent interactions are guided by belief quality measured as similarity to the true state of the world, then applying the consensus operator alone results in the population converging to a high-quality shared belief.
The idea of bounded confidence @cite_9 @cite_2 has been proposed as a mechanism by which agents limit their interactions with others, so that they only combine their beliefs with individuals holding opinions which are sufficiently similar to their current view. A version of bounded confidence is also used in our proposed model, where each agent measures the relative inconsistency of their beliefs with those of others, and is then only willing to combine beliefs with agents whose inconsistency measure is below a certain threshold.
{ "cite_N": [ "@cite_9", "@cite_2" ], "mid": [ "2113096089", "1582135188" ], "abstract": [ "When does opinion formation within an interacting group lead to consensus, polarization or fragmentation? The article investigates various models for the dynamics of continuous opinions by analytical methods as well as by computer simulations. Section 2 develops within a unified framework the classical model of consensus formation, the variant of this model due to Friedkin and Johnsen, a time-dependent version and a nonlinear version with bounded confidence of the agents. Section 3 presents for all these models major analytical results. Section 4 gives an extensive exploration of the nonlinear model with bounded confidence by a series of computer simulations. An appendix supplies needed mathematical definitions, tools, and theorems.", "Abstract: We model opinion dynamics in populations of agents with continuous opinion and uncertainty. The opinions and uncertainties are modified by random pair interactions. We propose a new model of interactions, called relative agreement model, which is a variant of the previously discussed bounded confidence. In this model, uncertainty as well as opinion can be modified by interactions. We introduce extremist agents by attributing a much lower uncertainty (and thus higher persuasion) to a small proportion of agents at the extremes of the opinion distribution. We study the evolution of the opinion distribution submitted to the relative agreement model. Depending upon the choice of parameters, the extremists can have a very local influence or attract the whole population. We propose a qualitative analysis of the convergence process based on a local field notion. The genericity of the observed results is tested on several variants of the bounded confidence model." ] }
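The bounded-confidence restriction described above has a standard minimal form in the style of the Deffuant-Weisbuch model of @cite_2 : two randomly chosen agents average toward each other only if their opinions already differ by less than a threshold eps. The parameter names and toy population below are mine, and the sketch uses scalar opinions rather than the paper's vague, uncertain beliefs.

```python
import random

def bounded_confidence_step(x, eps=0.3, mu=0.5):
    """One pairwise interaction: agents i and j move toward each other
    only if their opinions are within eps of one another."""
    i, j = random.sample(range(len(x)), 2)
    if abs(x[i] - x[j]) < eps:
        x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])

random.seed(0)
opinions = [i / 49 for i in range(50)]         # opinions spread over [0, 1]
mean_before = sum(opinions) / len(opinions)
var_before = sum((v - mean_before) ** 2 for v in opinions) / len(opinions)
for _ in range(20000):
    bounded_confidence_step(opinions)
mean_after = sum(opinions) / len(opinions)
var_after = sum((v - mean_after) ** 2 for v in opinions) / len(opinions)
```

Symmetric pairwise averaging conserves the population mean while shrinking the spread, so the population collapses onto a small number of opinion clusters whose count depends on eps.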
1612.03540
2565375433
Let D be a Jordan domain in the plane. We consider a pursuit-evasion, contamination clearing, or sensor sweep problem in which the pursuer at each point in time is modeled by a continuous curve, called the sensor curve. Both time and space are continuous, and the intruders are invisible to the pursuer. Given D, what is the shortest length of a sensor curve necessary to provide a sweep of domain D, so that no continuously-moving intruder in D can avoid being hit by the curve? We define this length to be the sweeping cost of D. We provide an analytic formula for the sweeping cost of any Jordan domain in terms of the geodesic Frechet distance between two curves on the boundary of D with non-equal winding numbers. As a consequence, we show that the sweeping cost of any convex domain is equal to its width, and that a convex domain of unit area with maximal sweeping cost is the equilateral triangle.
A wide variety of pursuit-evasion problems have appeared in the mathematics, computer science, engineering, and robotics literature; see @cite_17 for a survey. Space can be modeled in a discrete fashion, for example by a graph @cite_24 @cite_19 , or as a continuous domain in Euclidean space, as we consider here. Time can similarly be discrete (turn-based) or continuous, as in our case. See @cite_13 @cite_18 @cite_6 @cite_10 @cite_1 @cite_16 for a selection of such problems.
{ "cite_N": [ "@cite_18", "@cite_10", "@cite_1", "@cite_6", "@cite_24", "@cite_19", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2597894505", "2167485994", "2019094329", "2110670509", "1561296992", "2395279694", "2137766675", "2139676984", "40650588" ], "abstract": [ "In a pursuit-evasion game, a team of pursuers attempt to capture an evader. The players alternate turns, move with equal speed, and have full information about the state of the game. We consider the most restrictive capture condition: a pursuer must become colocated with the evader to win the game. We prove two general results about this adversarial motion planning problem in geometric spaces. First, we show that one pursuer has a winning strategy in any compact CAT(0) space. This complements a result of Alexander, Bishop and Ghrist, who provide a winning strategy for a game with positive capture radius. Second, we consider the game played in a compact domain in Euclidean two-space with piecewise analytic boundary and arbitrary Euler characteristic. We show that three pursuers always have a winning strategy by extending recent work of Bhadauria, Klein, Isler and Suri from polygonal environments to our more general setting.", "This paper describes decentralized control laws for the coordination of multiple vehicles performing spatially distributed tasks. The control laws are based on a gradient descent scheme applied to a class of decentralized utility functions that encode optimal coverage and sensing policies. These utility functions are studied in geographical optimization problems and they arise naturally in vector quantization and in sensor allocation tasks. The approach exploits the computational geometry of spatial structures such as Voronoi diagrams.", "Previous work on the coverage of mobile sensor networks focuses on algorithms to reposition sensors in order to achieve a static configuration with an enlarged covered area. In this paper, we study the dynamic aspects of the coverage of a mobile sensor network that depend on the process of sensor movement. As time goes by, a position is more likely to be covered; targets that might never be detected in a stationary sensor network can now be detected by moving sensors. We characterize the area coverage at specific time instants and during time intervals, as well as the time it takes to detect a randomly located stationary target. Our results show that sensor mobility can be exploited to compensate for the lack of sensors and improve network coverage. For mobile targets, we take a game theoretic approach and derive optimal mobility strategies for sensors and targets from their own perspectives.", "We study the problem of a mobile target (the mouse) trying to evade detection by one or more mobile sensors (we call such a sensor a cat) in a closed network area. We view our problem as a game between two players: the mouse, and the collection of cats forming a single (meta-)player. The game ends when the mouse falls within the sensing range of one or more cats. A cat tries to determine its optimal strategy to minimize the worst case expected detection time of the mouse. The mouse tries to determine an optimal counter movement strategy to maximize the expected detection time. We divide the problem into two cases based on the relative sensing capabilities of the cats and the mouse. When the mouse has a sensing range smaller than or equal to the cats', we develop a dynamic programming solution for the mouse's optimal strategy, assuming high level information about the cats' movement model. We discuss how the cats' chosen movement model will affect its presence matrix in the network, and hence its payoff in the game. When the mouse has a larger sensing range than the cats, we show how the mouse can determine its optimal movement strategy based on local observations of the cats' movements. We further present a coordination protocol for the cats to collaboratively catch the mouse by: 1) forming opportunistically a cohort to limit the mouse's degree of freedom in escaping detection; and 2) minimizing the overlap in the spatial coverage of the cohort's members. Extensive experimental results verify and illustrate the analytical results, and evaluate the game's payoffs as a function of several important system parameters.", "This paper surveys some of the work done on trying to capture an intruder in a graph. If the intruder may be located only at vertices, the term searching is employed. If the intruder may be located at vertices or along edges, the term sweeping is employed. There are a wide variety of applications for searching and sweeping. Old results, new results and active research directions are discussed.", "", "", "Given a lion and a man, their initial positions, and restrictions on their ranges and speeds, how quickly can the lion get within a given distance from the man? We consider the case in which the lion and man are restricted to the interior of a circle and each is limited to the same speed. INFORMS Journal on Computing, ISSN 1091-9856, was published as ORSA Journal on Computing from 1989 to 1995 under ISSN 0899-1499.", "This paper surveys recent results in pursuit-evasion and autonomous search relevant to applications in mobile robotics. We provide a taxonomy of search problems that highlights the differences resulting from varying assumptions on the searchers, targets, and the environment. We then list a number of fundamental results in the areas of pursuit-evasion and probabilistic search, and we discuss field implementations on mobile robotic systems. In addition, we highlight current open problems in the area and explore avenues for future work." ] }
1612.03540
2565375433
Let D be a Jordan domain in the plane. We consider a pursuit-evasion, contamination clearing, or sensor sweep problem in which the pursuer at each point in time is modeled by a continuous curve, called the sensor curve. Both time and space are continuous, and the intruders are invisible to the pursuer. Given D, what is the shortest length of a sensor curve necessary to provide a sweep of domain D, so that no continuously-moving intruder in D can avoid being hit by the curve? We define this length to be the sweeping cost of D. We provide an analytic formula for the sweeping cost of any Jordan domain in terms of the geodesic Frechet distance between two curves on the boundary of D with non-equal winding numbers. As a consequence, we show that the sweeping cost of any convex domain is equal to its width, and that a convex domain of unit area with maximal sweeping cost is the equilateral triangle.
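As a quick numerical companion to the claim above that a convex domain's sweeping cost equals its width: for a convex polygon the width is attained perpendicular to some edge, so it can be computed by brute force over edges. The helper `polygon_width` is our own illustration, not code from the paper.

```python
import math

def polygon_width(pts):
    """Width of a convex polygon (vertices in order): the minimum, over all
    edge directions, of the farthest vertex distance to that edge's line."""
    best = float('inf')
    n = len(pts)
    for i in range(n):
        (x1, y1), (x2, y2) = pts[i], pts[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        length = math.hypot(ex, ey)
        # distance from each vertex to the supporting line of this edge
        h = max(abs((x - x1) * ey - (y - y1) * ex) / length for x, y in pts)
        best = min(best, h)
    return best
```

For instance, the unit square has width 1, so by the stated result its sweeping cost would also be 1.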
A further important distinction in a pursuit-evasion problem is whether information is complete (pursuers and intruders know each other's locations), incomplete (pursuers and intruders are invisible to each other), or somewhere in between. Our problem can be considered one in which the pursuer has no knowledge of the intruders' movements, whereas the intruders have complete knowledge of the pursuer's current position and future movements. In other words, the pursuer must catch every possible intruder. Evasion problems in which the pursuer has no information and the intruders have complete information can equivalently be cast as contamination-clearing problems; see for example @cite_15 @cite_3 @cite_21 . Indeed, the contaminated region of the domain at a particular time includes all locations where an intruder could currently be located, and the uncontaminated region is necessarily free of intruders. It is the task of the pursuer to clear the entire domain of contamination, so that no possible intruder remains undetected.
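The contamination-clearing view lends itself to a simple discrete sketch. Below is a toy grid analogue of our own devising (not any cited paper's algorithm): a pursuer occupying a full column sweeps left to right, clearing cells it visits while contamination spreads one step per time unit in every unblocked direction.

```python
def sweep_grid(n):
    """Toy contamination-clearing sweep on an n-by-n grid.

    contaminated[x][y] == True means an undetected intruder could be at (x, y).
    The pursuer occupies one full column per time step, moving left to right.
    """
    contaminated = [[True] * n for _ in range(n)]
    for col in range(n):
        barrier = {(col, y) for y in range(n)}  # cells occupied by the pursuer
        for (x, y) in barrier:                   # occupied cells are cleared
            contaminated[x][y] = False
        # contamination spreads one step in each direction not blocked
        new = [row[:] for row in contaminated]
        for x in range(n):
            for y in range(n):
                if contaminated[x][y]:
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if 0 <= nx < n and 0 <= ny < n and (nx, ny) not in barrier:
                            new[nx][ny] = True
        contaminated = new
    return contaminated
```

Because the moving column separates the cleared region from the contaminated one, no cell behind it is ever recontaminated, mirroring how a sweeping sensor curve certifies the absence of intruders.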
{ "cite_N": [ "@cite_15", "@cite_21", "@cite_3" ], "mid": [ "2082754619", "2125650083", "2110541499" ], "abstract": [ "We consider a pursuit-evasion problem where some lions have the task to clear a grid graph whose nodes are initially contaminated. The contamination spreads one step per time unit in each direction not blocked by a lion. A vertex is cleared from its contamination whenever a lion moves to it. [5] showed that n 2 lions are not enough to clear the n x n-grid. In this paper, we consider the same problem in dimension d > 2 and prove that Θ(nd-1 √d) lions are necessary and sufficient to clear the nd-grid. Furthermore, we analyze a problem variant where the lions are also allowed to jump from grid vertices to non-adjacent grid vertices.", "Suppose that ball-shaped sensors wander in a bounded domain. A sensor does not know its location but does know when it overlaps a nearby sensor. We say that an evasion path exists in this sensor network if a moving intruder can avoid detection. In 'Coordinate-free coverage in sensor networks with controlled boundaries via homology', Vin de Silva and Robert Ghrist give a necessary condition, depending only on the time-varying connectivity data of the sensors, for an evasion path to exist. Using zigzag persistent homology, we provide an equivalent condition that moreover can be computed in a streaming fashion. However, no method with time-varying connectivity data as input can give necessary and sufficient conditions for the existence of an evasion path. Indeed, we show that the existence of an evasion path depends not only on the fibrewise homotopy type of the region covered by sensors but also on its embedding in spacetime. For planar sensors that also measure weak rotation and distance information, we provide necessary and sufficient conditions for the existence of an evasion path.", "Tools from computational homology are introduced to verify coverage in an idealized sensor network. 
These methods are unique in that, while they are coordinate-free and assume no localization or orientation capabilities for the nodes, there are also no probabilistic assumptions. The key ingredient is the theory of homology from algebraic topology. The robustness of these tools is demonstrated by adapting them to a variety of settings, including static planar coverage, 3-D barrier coverage, and time-dependent sweeping coverage. Results are also given on hole repair, error tolerance, optimal coverage, and variable radii. An overview of implementation is given." ] }
1612.03540
2565375433
Let D be a Jordan domain in the plane. We consider a pursuit-evasion, contamination clearing, or sensor sweep problem in which the pursuer at each point in time is modeled by a continuous curve, called the sensor curve. Both time and space are continuous, and the intruders are invisible to the pursuer. Given D, what is the shortest length of a sensor curve necessary to provide a sweep of domain D, so that no continuously-moving intruder in D can avoid being hit by the curve? We define this length to be the sweeping cost of D. We provide an analytic formula for the sweeping cost of any Jordan domain in terms of the geodesic Frechet distance between two curves on the boundary of D with non-equal winding numbers. As a consequence, we show that the sweeping cost of any convex domain is equal to its width, and that a convex domain of unit area with maximal sweeping cost is the equilateral triangle.
The paper @cite_2 introduces the geodesic width between two polylines, a notion that is very relevant to our problem. The geodesic width between two curves @math , @math is the same as the strong geodesic Fréchet distance between them (Definition ) when domain @math is chosen to be a region with boundary consisting of curves @math , @math , and the two shortest paths connecting the endpoints of @math and @math . If curves @math and @math are polylines with @math vertices in total, then @cite_2 gives an @math algorithm for computing the geodesic width between them. The goal of our paper is instead to rigorously prove an analytic formula for the sweeping cost of a domain, Theorem , that is closely related to geodesic widths. Indeed, the right hand side of in Theorem is unchanged if we replace the geodesic distance between two curves (Definition ) with the weak geodesic Fréchet distance between them (Definition ). The paper @cite_2 also studies sweeps of planar domains by piecewise linear curves in which the cost of a sweep is measured not by length but by the number of vertices or joints in the curve.
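The geodesic Fréchet distance is the key quantity here. As a purely illustrative relative, the classical discrete Fréchet distance between two polylines, with straight-line rather than geodesic distances, can be computed by the Eiter-Mannila dynamic program:

```python
from functools import lru_cache
from math import hypot

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polylines P and Q
    (Eiter-Mannila dynamic program over coupled index pairs)."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = hypot(P[i][0] - Q[j][0], P[i][1] - Q[j][1])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # advance along P, along Q, or along both; keep the cheapest leash
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)
```

For two parallel unit-separated segments the value is 1, matching the intuition of a taut leash of length 1.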
{ "cite_N": [ "@cite_2" ], "mid": [ "2062556295" ], "abstract": [ "We introduce two new related metrics, the geodesic width and the link width, for measuring the \"distance\" between two nonintersecting polylines in the plane. If the two polylines have n vertices in total, we present algorithms to compute the geodesic width of the two polylines in O(n 2 log n) time using O(n 2) space and the link width in O(n 3 log n) time using O(n 2) working space where n is the total number of edges of the polylines. Our computation of these metrics relies on two closely related combinatorial strutures: the shortest-path diagram and the link diagram of a simple polygon. The shortest-path (resp., link) diagram encodes the Euclidean (resp., link) shortest path distance between all pairs of points on the boundary of the polygon. We use these algorithms to solve two problems:" ] }
1612.03540
2565375433
Let D be a Jordan domain in the plane. We consider a pursuit-evasion, contamination clearing, or sensor sweep problem in which the pursuer at each point in time is modeled by a continuous curve, called the sensor curve. Both time and space are continuous, and the intruders are invisible to the pursuer. Given D, what is the shortest length of a sensor curve necessary to provide a sweep of domain D, so that no continuously-moving intruder in D can avoid being hit by the curve? We define this length to be the sweeping cost of D. We provide an analytic formula for the sweeping cost of any Jordan domain in terms of the geodesic Frechet distance between two curves on the boundary of D with non-equal winding numbers. As a consequence, we show that the sweeping cost of any convex domain is equal to its width, and that a convex domain of unit area with maximal sweeping cost is the equilateral triangle.
Related notions to the geodesic width include the isotopic Fréchet distance @cite_14 and the minimum deformation area @cite_7 between two curves @math and @math . Whereas the geodesic width considers deformations between @math and @math such that no two intermediate curves intersect, this restriction is not present for the isotopic Fréchet distance, which can therefore be defined between intersecting curves. The paper @cite_7 considers a distance between two curves on a 2-manifold which is instead an area: the minimal total surface area swept out by any deformation between the two curves. If the curves are piecewise linear in the plane, have @math total vertices, and have @math intersection points, then @cite_7 gives an @math algorithm to compute the minimum deformation area between them.
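For intuition about the minimum deformation area, consider the simplest planar case: two non-crossing curves sharing both endpoints. Any deformation must sweep the region enclosed between them, which is computable by the shoelace formula. The sketch below handles only this special case; the algorithm of @cite_7 treats intersecting curves and general surfaces.

```python
def shoelace(poly):
    """Signed area of a closed polygon given by its vertices in order."""
    a = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        a += x1 * y2 - x2 * y1
    return a / 2.0

def deformation_area_simple(P, Q):
    """Area swept deforming polyline P into Q when the two curves share
    endpoints and do not cross: the absolute area enclosed by the loop
    formed by P followed by Q reversed."""
    assert P[0] == Q[0] and P[-1] == Q[-1]
    loop = P + Q[-2:0:-1]  # append Q's interior vertices in reverse
    return abs(shoelace(loop))
```

For a unit segment deformed into the three other sides of the unit square, the swept area is the square's area, 1.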
{ "cite_N": [ "@cite_14", "@cite_7" ], "mid": [ "2188165015", "1999272835" ], "abstract": [ "We present a variant of the Frechet distance (as well as geodesic and homotopic Frechet distance) which forces the motion between the input objects to fol- low an ambient isotopy. This provides a measure of how much you need to continuously deform one shape into another while maintaining topologically equiva- lently shapes throughout the deformation.", "Measuring the similarity of curves is a fundamental problem arising in many application fields. There has been considerable interest in several such measures, both in Euclidean space and in more general setting such as curves on Riemannian surfaces or curves in the plane minus a set of obstacles. However, so far, efficiently computable similarity measures for curves on general surfaces remain elusive. This paper aims at developing a natural curve similarity measure that can be easily extended and computed for curves on general orientable 2-manifolds. Specifically, we measure similarity between homotopic curves based on how hard it is to deform one curve into the other one continuously, and define this \"hardness\" as the minimum possible surface area swept by a homotopy between the curves. We consider cases where curves are embedded in the plane or on a triangulated orientable surface with genus @math , and we present efficient algorithms (which are either quadratic or near linear time, depending on the setting) for both cases." ] }
1612.03557
2565165987
Visual attention plays an important role in understanding images and has proven effective in generating natural language descriptions of images. On the other hand, recent studies show that language associated with an image can steer visual attention in the scene during our cognitive process. Inspired by this, we introduce a text-guided attention model for image captioning, which learns to drive visual attention using associated captions. For this model, we propose an exemplar-based learning approach that retrieves captions associated with each image from the training data and uses them to learn attention on visual features. Our attention model makes it possible to describe a detailed state of scenes by distinguishing small or confusable objects effectively. We validate our model on the MS-COCO Captioning benchmark and achieve state-of-the-art performance in standard metrics.
Most recent methods for image captioning are based on the encoder-decoder framework @cite_13 @cite_1 @cite_26 @cite_12 . This approach encodes an image using a CNN and transforms the encoded representation into a caption using an RNN. @cite_26 introduces a basic encoder-decoder model that extracts an image feature using GoogLeNet @cite_5 and feeds the feature into the Long Short-Term Memory (LSTM) decoder as if it were the first word. Instead of feeding the image feature into the RNN, @cite_1 proposes a technique to transform the image feature and the hidden state of the RNN into an intermediate embedding space; each word is predicted based on the aggregation of the embedded features at each time step. Alternatively, @cite_12 learns image attributes and gives the presence of the attributes, instead of an encoded image feature, to the LSTM as input to generate captions. In @cite_13 , a multi-modal space between an image and a caption is learned, and an embedded image feature is fed to a language model as visual information.
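To make the "feed the image feature first" scheme of the basic encoder-decoder concrete, here is a minimal pure-Python sketch with untrained toy weights (so the emitted word indices are arbitrary); `lstm_step`, `greedy_decode`, and the weight layout are our own illustrative choices, not the cited models.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    return [sum(w * u for w, u in zip(row, v)) for row in W]

def lstm_step(x, h, c, W):
    """One LSTM step; W maps gate name -> matrix acting on [x ; h]."""
    z = x + h  # concatenation of input and previous hidden state
    i = [sigmoid(u) for u in matvec(W['i'], z)]
    f = [sigmoid(u) for u in matvec(W['f'], z)]
    o = [sigmoid(u) for u in matvec(W['o'], z)]
    g = [math.tanh(u) for u in matvec(W['g'], z)]
    c = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, g)]
    h = [oj * math.tanh(cj) for oj, cj in zip(o, c)]
    return h, c

def greedy_decode(image_feat, W, W_out, embed, max_len=5):
    """Feed the image feature to the LSTM first, then greedily emit words."""
    d = len(image_feat)
    h, c = [0.0] * d, [0.0] * d
    h, c = lstm_step(image_feat, h, c, W)  # image feature as the first input
    words, x = [], embed[0]                # index 0 plays the start token
    for _ in range(max_len):
        h, c = lstm_step(x, h, c, W)
        scores = matvec(W_out, h)
        w = max(range(len(scores)), key=scores.__getitem__)
        words.append(w)
        x = embed[w]
    return words
```

In a real system the weights come from training by maximum likelihood on image-caption pairs; the control flow above is the part the surveyed papers vary.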
{ "cite_N": [ "@cite_26", "@cite_1", "@cite_5", "@cite_13", "@cite_12" ], "mid": [ "2951912364", "1811254738", "2950179405", "1527575280", "2404394533" ], "abstract": [ "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.", "In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. 
In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu junhua.mao m-RNN.html .", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. 
We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* - \"blue\" + \"red\" is near images of red cars. Sample captions generated for 800 images are made available for comparison.", "Much of the recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. We propose here a method of incorporating high-level concepts into the very successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art performance in both image captioning and visual question answering. We also show that the same mechanism can be used to introduce external semantic information and that doing so further improves performance. In doing so we provide an analysis of the value of high level semantic information in V2L problems." ] }
1612.03557
2565165987
Visual attention plays an important role in understanding images and has proven effective in generating natural language descriptions of images. On the other hand, recent studies show that language associated with an image can steer visual attention in the scene during our cognitive process. Inspired by this, we introduce a text-guided attention model for image captioning, which learns to drive visual attention using associated captions. For this model, we propose an exemplar-based learning approach that retrieves captions associated with each image from the training data and uses them to learn attention on visual features. Our attention model makes it possible to describe a detailed state of scenes by distinguishing small or confusable objects effectively. We validate our model on the MS-COCO Captioning benchmark and achieve state-of-the-art performance in standard metrics.
Inspired by the perceptual process of human beings, attention mechanisms have been developed for a variety of tasks such as object recognition @cite_22 @cite_17 , image generation @cite_15 , semantic segmentation @cite_6 and visual question answering @cite_3 @cite_27 @cite_0 @cite_21 . In image captioning, there are two representative methods adopting a form of visual attention. In @cite_9 , spatial attention for each word is estimated using the hidden state of the LSTM at each time step, while the algorithm in @cite_29 computes semantic attention over word candidates for the caption and uses it to refine the input and output of the LSTM at each step. While these methods demonstrate the effectiveness of visual attention in image captioning, they do not consider a more direct use of the captions available in training data. Our method leverages the captions in both learning and inference, exploiting them as high-level guidance for visual attention. To the best of our knowledge, the proposed method is the first work for image captioning that combines visual attention with guidance from associated text.
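The spatial-attention idea can be sketched as a soft weighting of region features by the decoder state. A minimal illustrative version with dot-product scoring (the cited models typically use a small learned scorer instead) might look like:

```python
import math

def soft_attention(h, regions):
    """Soft spatial attention: weight each region feature by its
    relevance (dot product) to the current decoder state h."""
    scores = [sum(hj * vj for hj, vj in zip(h, v)) for v in regions]
    m = max(scores)                   # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    Z = sum(exps)
    alphas = [e / Z for e in exps]    # attention weights, sum to 1
    context = [sum(a * v[j] for a, v in zip(alphas, regions))
               for j in range(len(regions[0]))]
    return alphas, context
```

The context vector, a convex combination of region features, is what gets fed back into the LSTM at each step in attention-based captioners.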
{ "cite_N": [ "@cite_22", "@cite_9", "@cite_21", "@cite_29", "@cite_3", "@cite_6", "@cite_0", "@cite_27", "@cite_15", "@cite_17" ], "mid": [ "1484210532", "2950178297", "2171810632", "2953022248", "2416885651", "2257483379", "2255577267", "2439787475", "1850742715", "2519284461" ], "abstract": [ "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. 
Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.", "Automatically generating a natural language description of an image has attracted interests recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.", "Visual question answering is fundamentally compositional in nature---a question like \"where is the dog?\" shares substructure with questions like \"what color is the dog?\" and \"where is the cat?\" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. 
We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural \"modules\" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.", "We propose a novel weakly-supervised semantic segmentation algorithm based on Deep Convolutional Neural Network (DCNN). Contrary to existing weakly-supervised approaches, our algorithm exploits auxiliary segmentation annotations available for different categories to guide segmentations on images with only image-level class labels. To make segmentation knowledge transferrable across categories, we design a decoupled encoder-decoder architecture with attention model. In this architecture, the model generates spatial highlights of each category presented in images using an attention model, and subsequently performs binary segmentation for each highlighted region using decoder. Combining attention model, the decoder trained with segmentation annotations in different categories boosts accuracy of weakly-supervised semantic segmentation. The proposed algorithm demonstrates substantially improved performance compared to the state-of-theart weakly-supervised techniques in PASCAL VOC 2012 dataset when our model is trained with the annotations in 60 exclusive categories in Microsoft COCO dataset.", "We address the problem of Visual Question Answering (VQA), which requires joint image and language understanding to answer a question about a given photograph. 
Recent approaches have applied deep image captioning methods based on convolutional-recurrent networks to this problem, but have failed to model spatial inference. To remedy this, we propose a model we call the Spatial Memory Network and apply it to the VQA task. Memory networks are recurrent neural networks with an explicit attention mechanism that selects certain parts of the information stored in memory. Our Spatial Memory Network stores neuron activations from different spatial regions of the image in its memory, and uses the question to choose relevant regions for computing the answer, a process of which constitutes a single \"hop\" in the network. We propose a novel spatial attention architecture that aligns words with image patches in the first hop, and obtain improved results by adding a second attention hop which considers the whole question to choose visual evidence based on the results of the first hop. To better understand the inference process learned by the network, we design synthetic questions that specifically require spatial inference and visualize the attention weights. We evaluate our model on two published visual question answering datasets, DAQUAR [1] and VQA [2], and obtain improved results compared to a strong deep baseline model (iBOWIMG) which concatenates image and question features to predict the answer [3].", "We propose a novel algorithm for visual question answering based on a recurrent deep neural network, where every module in the network corresponds to a complete answering unit with attention mechanism by itself. The network is optimized by minimizing loss aggregated from all the units, which share model parameters while receiving different information to compute attention probability. For training, our model attends to a region within image feature map, updates its memory based on the question and attended image feature, and answers the question based on its memory state. This procedure is performed to compute loss in each step. 
The motivation of this approach is our observation that multi-step inferences are often required to answer questions while each problem may have a unique desirable number of steps, which is difficult to identify in practice. Hence, we always make the first unit in the network solve problems, but allow it to learn the knowledge from the rest of units by backpropagation unless it degrades the model. To implement this idea, we early-stop training each unit as soon as it starts to overfit. Note that, since more complex models tend to overfit on easier questions quickly, the last answering unit in the unfolded recurrent neural network is typically killed first while the first one remains last. We make a single-step prediction for a new question using the shared model. This strategy works better than the other options within our framework since the selected model is trained effectively from all units without overfitting. The proposed algorithm outperforms other multi-step attention based approaches using a single step prediction in VQA dataset.", "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.", "We aim to localize objects in images using image-level supervision only. Previous approaches to this problem mainly focus on discriminative object regions and often fail to locate precise object boundaries. 
We address this problem by introducing two types of context-aware guidance models, additive and contrastive models, that leverage their surrounding context regions to improve localization. The additive model encourages the predicted object region to be supported by its surrounding context region. The contrastive model encourages the predicted object region to be outstanding from its surrounding context region. Our approach benefits from the recent success of convolutional neural networks for object recognition and extends Fast R-CNN to weakly supervised object localization. Extensive experimental evaluation on the PASCAL VOC 2007 and 2012 benchmarks shows that our context-aware approach significantly improves weakly supervised localization and detection." ] }
1612.03412
2561585905
Spectral dimensionality reduction algorithms are widely used in numerous domains, including for recognition, segmentation, tracking and visualization. However, despite their popularity, these algorithms suffer from a major limitation known as the “repeated eigen-directions” phenomenon. That is, many of the embedding coordinates they produce typically capture the same direction along the data manifold. This leads to redundant and inefficient representations that do not reveal the true intrinsic dimensionality of the data. In this paper, we propose a general method for avoiding redundancy in spectral algorithms. Our approach relies on replacing the orthogonality constraints underlying those methods by unpredictability constraints. Specifically, we require that each embedding coordinate be unpredictable (in the statistical sense) from all previous ones. We prove that these constraints necessarily prevent redundancy, and provide a simple technique to incorporate them into existing methods. As we illustrate on challenging high-dimensional scenarios, our approach produces significantly more informative and compact representations, which improve visualization and classification tasks.
Here, Projection @math is still redundant. The right column shows the points after subtracting their prediction from previous projections, which causes them to fall off the manifold. (c) The projections obtained with the algorithm of @cite_19 . Here, the algorithm halts after one projection. The right column shows the points after the advection process along the manifold, which results in two clusters forming an unconnected graph. (d) The projections obtained with our non-redundant version of LEM. Our algorithm extracts a non-redundant third projection, which captures progression along the inner angle of the ring.
{ "cite_N": [ "@cite_19" ], "mid": [ "2050429304" ], "abstract": [ "Non-linear dimensionality reduction of noisy data is a challenging problem encountered in a variety of data analysis applications. Recent results in the literature show that spectral decomposition, as used for example by the Laplacian Eigenmaps algorithm, provides a powerful tool for non-linear dimensionality reduction and manifold learning. In this paper, we discuss a significant shortcoming of these approaches, which we refer to as the repeated eigendirections problem. We propose a novel approach that combines successive 1-dimensional spectral embeddings with a data advection scheme that allows us to address this problem. The proposed method does not depend on a non-linear optimization scheme; hence, it is not prone to local minima. Experiments with artificial and real data illustrate the advantages of the proposed method over existing approaches. We also demonstrate that the approach is capable of correctly learning manifolds corrupted by significant amounts of noise." ] }
1612.03412
2561585905
Spectral dimensionality reduction algorithms are widely used in numerous domains, including for recognition, segmentation, tracking and visualization. However, despite their popularity, these algorithms suffer from a major limitation known as the “repeated eigen-directions” phenomenon. That is, many of the embedding coordinates they produce typically capture the same direction along the data manifold. This leads to redundant and inefficient representations that do not reveal the true intrinsic dimensionality of the data. In this paper, we propose a general method for avoiding redundancy in spectral algorithms. Our approach relies on replacing the orthogonality constraints underlying those methods by unpredictability constraints. Specifically, we require that each embedding coordinate be unpredictable (in the statistical sense) from all previous ones. We prove that these constraints necessarily prevent redundancy, and provide a simple technique to incorporate them into existing methods. As we illustrate on challenging high-dimensional scenarios, our approach produces significantly more informative and compact representations, which improve visualization and classification tasks.
A more sophisticated approach, suggested by Gerber et al. @cite_19 , is to collapse the data points in the direction of the gradient of the previous projection. In this approach, the points always remain on the manifold. However, this method fails whenever a projection is a non-monotonic function of some coordinate along the manifold. This happens, for example, in the ring manifold of Fig. . In this case, the first projection extracted by LEM corresponds to @math , where @math is the outer angle of the ring. Therefore, before computing the second projection, the advection process moves the points along the @math coordinate towards the locations at which @math attains its mean value, which is @math . This causes the points with @math to collapse to @math , and the points with @math to collapse to @math . The two resulting clusters form an unconnected graph, so that LEM cannot be applied once more. An additional drawback of this method is that it requires a priori knowledge of the manifold dimension. Furthermore, it is very computationally intensive and thus impractical for high-dimensional big-data applications.
{ "cite_N": [ "@cite_19" ], "mid": [ "2050429304" ], "abstract": [ "Non-linear dimensionality reduction of noisy data is a challenging problem encountered in a variety of data analysis applications. Recent results in the literature show that spectral decomposition, as used for example by the Laplacian Eigenmaps algorithm, provides a powerful tool for non-linear dimensionality reduction and manifold learning. In this paper, we discuss a significant shortcoming of these approaches, which we refer to as the repeated eigendirections problem. We propose a novel approach that combines successive 1-dimensional spectral embeddings with a data advection scheme that allows us to address this problem. The proposed method does not depend on a non-linear optimization scheme; hence, it is not prone to local minima. Experiments with artificial and real data illustrate the advantages of the proposed method over existing approaches. We also demonstrate that the approach is capable of correctly learning manifolds corrupted by significant amounts of noise." ] }
1612.03618
2566679709
One of the first steps to perform most of the software maintenance activities, such as updating features or fixing bugs, is to have a relatively good understanding of the program's source code which is often written by other developers. A code summary is a description about a program's entities (e.g., its methods) which helps developers have a better comprehension of the code in a shorter period of time. However, generating code summaries can be a challenging task. To mitigate this problem, in this article, we introduce CrowdSummarizer, a code summarization platform that benefits from the concepts of crowdsourcing, gamification, and natural language processing to automatically generate a high level summary for the methods of a Java program. We have implemented CrowdSummarizer as an Eclipse plugin together with a web-based code summarization game that can be played by the crowd. The results of two empirical studies that evaluate the applicability of the approach and the quality of generated summaries indicate that CrowdSummarizer is effective in generating quality results.
Describing high-level actions using topic modelling approaches is also considered in @cite_3 and @cite_17 . Eddy et al. @cite_3 replicated and expanded the approach proposed by Haiduc et al. @cite_4 by introducing the Hierarchical Pachinko Allocation Model (hPAM) as a new topic modelling technique. McBurney et al. @cite_17 proposed an approach to extract topics in source code using the program's call graph and a topic modelling approach named Hierarchical Document Topic Model (HDTM). A recent work in @cite_15 helps programmers understand the role a method plays in a program. In particular, it summarizes the context of a method, such as how it is called or how its output is used. Consequently, it does not summarize the method itself. There are other works that generate descriptions for other software artifacts, such as bug reports @cite_33 and execution traces @cite_34 .
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_3", "@cite_15", "@cite_34", "@cite_17" ], "mid": [ "2133333349", "2110374486", "2018844270", "2294980783", "2129879174", "1995423397" ], "abstract": [ "During maintenance developers cannot read the entire code of large systems. They need a way to get a quick understanding of source code entities (such as, classes, methods, packages, etc.), so they can efficiently identify and then focus on the ones related to their task at hand. Sometimes reading just a method header or a class name does not tell enough about its purpose and meaning, while reading the entire implementation takes too long. We study a solution which mitigates the two approaches, i.e., short and accurate textual descriptions that illustrate the software entities without having to read the details of the implementation. We create such descriptions using techniques from automatic text summarization. The paper presents a study that investigates the suitability of various such techniques for generating source code summaries. The results indicate that a combination of text summarization techniques is most appropriate for source code summarization and that developers generally agree with the summaries produced.", "Many software artifacts are created, maintained and evolved as part of a software development project. As software developers work on a project, they interact with existing project artifacts, performing such activities as reading previously filed bug reports in search of duplicate reports. These activities often require a developer to peruse a substantial amount of text. In this paper, we investigate whether it is possible to summarize software artifacts automatically and effectively so that developers could consult smaller summaries instead of entire artifacts. To provide focus to our investigation, we consider the generation of summaries for bug reports. 
We found that existing conversation-based generators can produce better results than random generators and that a generator trained specifically on bug reports can perform statistically better than existing conversation-based generators. We demonstrate that humans also find these generated summaries reasonable indicating that summaries might be used effectively for many tasks.", "During software evolution a developer must investigate source code to locate then understand the entities that must be modified to complete a change task. To help developers in this task, proposed text summarization based approaches to the automatic generation of class and method summaries, and via a study of four developers, they evaluated source code summaries generated using their techniques. In this paper we propose a new topic modeling based approach to source code summarization, and via a study of 14 developers, we evaluate source code summaries generated using the proposed technique. Our study partially replicates the original study by in that it uses the objects, the instruments, and a subset of the summaries from the original study, but it also expands the original study in that it includes more subjects and new summaries. The results of our study both support the findings of the original and provide new insights into the processes and criteria that developers use to evaluate source code summaries. Based on our results, we suggest future directions for research on source code summarization.", "Source code summarization is the task of creating readable summaries that describe the functionality of software. Source code summarization is a critical component of documentation generation, for example as Javadocs formed from short paragraphs attached to each method in a Java program. At present, a majority of source code summarization is manual, in that the paragraphs are written by human experts. However, new automated technologies are becoming feasible. 
These automated techniques have been shown to be effective in select situations, though a key weakness is that they do not explain the source code's context. That is, they can describe the behavior of a Java method, but not why the method exists or what role it plays in the software. In this paper, we propose a source code summarization technique that writes English descriptions of Java methods by analyzing how those methods are invoked. We then performed two user studies to evaluate our approach. First, we compared our generated summaries to summaries written manually by experts. Then, we compared our summaries to summaries written by a state-of-the-art automatic summarization tool. We found that while our approach does not reach the quality of human-written summaries, we do improve over the state-of-the-art summarization tool in several dimensions by a statistically-significant margin.", "In this paper, we present a semi-automatic approach for summarizing the content of large execution traces. Similar to text summarization, where abstracts can be extracted from large documents, the aim of trace summarization is to take an execution trace as input and return a summary of its main content as output. The resulting summary can then be converted into a UML sequence diagram and used by software engineers to understand the main behavioural aspects of the system. Our approach to trace summarization is based on the removal of implementation details such as utilities from execution traces. To achieve our goal, we have developed a metric based on fan-in and fan-out to rank the system components according to whether they implement key system concepts or they are mere implementation details. We applied our approach to a trace generated from an object-oriented system called Weka that initially contains 97413 method calls. We succeeded to extract a summary from this trace that contains 453 calls. 
According to the developers of the Weka system, the resulting summary is an adequate high-level representation of the main interactions of the traced scenario.", "In this paper, we present an emerging source code summarization technique that uses topic modeling to select keywords and topics as summaries for source code. Our approach organizes the topics in source code into a hierarchy, with more general topics near the top of the hierarchy. In this way, we present the software's highest-level functionality first, before lower-level details. This is an advantage over previous approaches based on topic models, that only present groups of related keywords without a hierarchy. We conducted a preliminary user study that found our approach selects keywords and topics that the participants found to be accurate in a majority of cases." ] }
1612.03618
2566679709
One of the first steps to perform most of the software maintenance activities, such as updating features or fixing bugs, is to have a relatively good understanding of the program's source code which is often written by other developers. A code summary is a description about a program's entities (e.g., its methods) which helps developers have a better comprehension of the code in a shorter period of time. However, generating code summaries can be a challenging task. To mitigate this problem, in this article, we introduce CrowdSummarizer, a code summarization platform that benefits from the concepts of crowdsourcing, gamification, and natural language processing to automatically generate a high level summary for the methods of a Java program. We have implemented CrowdSummarizer as an Eclipse plugin together with a web-based code summarization game that can be played by the crowd. The results of two empirical studies that evaluate the applicability of the approach and the quality of generated summaries indicate that CrowdSummarizer is effective in generating quality results.
The key difference between our approach (as well as other summarization approaches) and these existing approaches is that documentation techniques such as JavaDocs @cite_13 combine the short paragraphs attached to each method into a code document and ignore how these short paragraphs are created (manually, i.e., human-written, or automatically), while code summarization techniques generate descriptions of a piece of code (i.e., the short paragraph itself). Indeed, code summaries can be considered the main components of a program's documentation.
{ "cite_N": [ "@cite_13" ], "mid": [ "1994683471" ], "abstract": [ "This paper describes in a general way the process we went through to determine the goals, principles, audience, content and style for writing comments in source code for the Java platform at the Java Software division of Sun Microsystems. This includes how the documentation comments evolved to become the home of the Java platform API specification, and the guidelines we developed to make it practical for this document to reside in the same files as the source code." ] }
1612.03583
2950628396
Systematic literature studies have received much attention in empirical software engineering in recent years. They have become a powerful tool to collect and structure reported knowledge in a systematic and reproducible way. We distinguish systematic literature reviews to systematically analyze reported evidence in depth, and systematic mapping studies to structure a field of interest in a broader, usually quantified manner. Due to the rapidly increasing body of knowledge in software engineering, researchers who want to capture the published work in a domain often face an extensive amount of publications, which need to be screened, rated for relevance, classified, and eventually analyzed. Although there are several guidelines to conduct literature studies, they do not yet help researchers coping with the specific difficulties encountered in the practical application of these guidelines. In this article, we present an experience-based guideline to aid researchers in designing systematic literature studies with special emphasis on the data collection and selection procedures. Our guideline aims at providing a blueprint for a practical and pragmatic path through the plethora of currently available practices and deliverables capturing the dependencies among the single steps. The guideline emerges from various mapping studies and literature reviews conducted by the authors and provides recommendations for the general study design, data collection, and study selection procedures. Finally, we share our experiences and lessons learned in applying the different practices of the proposed guideline.
The search and selection procedures also include the definition and use of inclusion and exclusion criteria. However, @cite_38 found only five out of 10 guidelines explicitly addressing this topic, and there has so far been no attempt to craft a set of standard inclusion and exclusion criteria. Similarly to standard research questions, standard data collection workflows, and standard study selection procedures, we have proposed a set of standard inclusion and exclusion criteria to support a quick start of the study and to lay the foundation for the development of further study-specific criteria.
{ "cite_N": [ "@cite_38" ], "mid": [ "1999798506" ], "abstract": [ "Abstract Context Systematic mapping studies are used to structure a research area, while systematic reviews are focused on gathering and synthesizing evidence. The most recent guidelines for systematic mapping are from 2008. Since that time, many suggestions have been made of how to improve systematic literature reviews (SLRs). There is a need to evaluate how researchers conduct the process of systematic mapping and identify how the guidelines should be updated based on the lessons learned from the existing systematic maps and SLR guidelines. Objective To identify how the systematic mapping process is conducted (including search, study selection, analysis and presentation of data, etc.); to identify improvement potentials in conducting the systematic mapping process and updating the guidelines accordingly. Method We conducted a systematic mapping study of systematic maps, considering some practices of systematic review guidelines as well (in particular in relation to defining the search and to conduct a quality assessment). Results In a large number of studies multiple guidelines are used and combined, which leads to different ways in conducting mapping studies. The reason for combining guidelines was that they differed in the recommendations given. Conclusion The most frequently followed guidelines are not sufficient alone. Hence, there was a need to provide an update of how to conduct systematic mapping studies. New guidelines have been proposed consolidating existing findings." ] }
1612.03639
2565044528
Social graphs, representing online friendships among users, are one of the fundamental types of data for many applications, such as recommendation, virality prediction and marketing in social media. However, this data may be unavailable due to the privacy concerns of users, or kept private by social network operators, which makes such applications difficult. Inferring users' interests and discovering users' connections through their shared multimedia content has attracted more and more attention in recent years. This paper proposes a Gaussian relational topic model for connection discovery using user shared images in social media. The proposed model not only models users' interests as latent variables through their shared images, but also considers the connections between users as a result of their shared images. It explicitly relates user shared images to user connections in a hierarchical, systematic and supervisory way and provides an end-to-end solution for the problem. This paper also derives efficient variational inference and learning algorithms for the posterior of the latent variables and model parameters. It is demonstrated through experiments with over 200k images from Flickr that the proposed method significantly outperforms the methods in previous works.
One content-based approach to discovering connections through images is to generate a label for an image based on its visual elements @cite_2 @cite_9 . However, determining the relationship between the visual elements and the label is not a trivial task, because the same object can be visually different across images, and denoting each image by a single label loses a lot of information. @cite_6 and @cite_4 propose to first generate labels for user-shared images by clustering and to describe users' characteristics by counting the occurrences of image clusters appearing in their collections. The prediction is then made by analyzing the similarities among the users' histograms. Both methods assume each image contains one and only one topic, neglecting potentially rich information. On the other hand, @cite_0 and @cite_8 represent users' interests by summing the feature vectors of their images, achieving a similar effect to the other methods. However, all these methods either do not connect shared images to the links between users at all, or connect shared images to links in only one direction, and not the other way around.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_6", "@cite_0", "@cite_2" ], "mid": [ "2281238579", "2018749988", "2062258476", "1557969627", "2210558857", "2167760907" ], "abstract": [ "Social graphs, representing the online friendships among users, are one of the most fundamental types of data for many social media applications, such as recommendation, virality prediction and marketing. However, this data may be unavailable due to the privacy concerns of users, or kept privately by social network operators, which makes such applications difficult. One of the possible solutions to discover user connections is to use shared content, especially images on online social networks, such as Flickr and Instagram. This paper investigates how non-user generated labels annotated on shared images can be used for connection discovery with different color-based and feature-based methods. The label distribution is computed to represent users, and followee follower relationships are recommended based on the distribution similarity. These methods are evaluated with over 200k images from Flickr and it is proven that with non-user generated labels, user connections can be discovered, regardless of the method used. Feature-based methods are also proven to be 95 better than color-based methods, and 65 better than tag-based methods.", "Modeling continuous social strength rather than conventional binary social ties in the social network can lead to a more precise and informative description of social relationship among people. In this paper, we study the problem of social strength modeling (SSM) for the users in a social media community, who are typically associated with diverse form of data. 
In particular, we take Flickr---the most popular online photo sharing community---as an example, in which users are sharing their experiences through substantial amounts of multimodal contents (e.g., photos, tags, geo-locations, friend lists) and social behaviors (e.g., commenting and joining interest groups). Such heterogeneous data in Flickr bring opportunities yet challenges to the research community for SSM. One of the key issues in SSM is how to effectively explore the heterogeneous data and how to optimally combine them to measure the social strength. In this paper, we present a kernel-based learning to rank framework for inferring the social strength of Flickr users, which involves two learning stages. The first stage employs a kernel target alignment algorithm to integrate the heterogeneous data into a holistic similarity space. With the learned kernel, the second stage rectifies the pair-wise learning to rank approach to estimating the social strength. By learning the social strength graph, we are able to conduct collaborative recommendation and collective classification. The promising results show that the learning-based approach is effective for SSM. Despite being focused on Flickr, our technique can be applied to model social strength of users in any other social media community.", "Social image tagging is becoming increasingly popular with the development of social website, where images are annotated with arbitrary keywords called tags. Most of present image tagging approaches are mainly based on the visual similarity or mapping between visual feature and tags. However, in the social media environment, images are always associated with multi-type of object information (i.e., visual content, tags, and user contact information) which makes this task more challenging. In this paper, we propose to fuse multi-type of information to tag social image. 
Specifically, we model social image tagging as a ''ranking and reinforcement'' problem, and a novel graph-based reinforcement algorithm for interrelated multi-type objects is proposed. When a user issue a tagging request for a query image, a candidate tag set is derived and a set of friends of the query user is selected. Then a graph which contains three types of objects (i.e., visual features of the query image, candidate tags, and friend users) is constructed, and each type of objects are initially ranked based on their weight and intra-relation. Finally, candidate tags are re-ranked by our graph-based reinforcement algorithm which takes into consideration both inter-relation with visual features and friend users, and the top ranked tags are saved. Experiments on real-life dataset demonstrate that our algorithm significantly outperforms state-of-the-art algorithms.", "Billions of user-shared images are generated by individuals in many social networks today, and this particular form of user data is widely accessible to others due to the nature of online social sharing. When user social graphs are only accessible to exclusive parties, these user-shared images are proved to be an easier and effective alternative to discover user connections. This work investigated over 360 000 user shared images from two social networks, Skyrock and 163 Weibo, in which 3 million follower followee relationships are involved. It is observed that the shared images from users with a follower followee relationship show relatively higher similarities . A multimedia big data system that utilizes this observed phenomenon is proposed as an alternative to user- generated tags and social graphs for follower followee recommendation and gender identification. To the best of our knowledge, this is the first attempt in this field to prove and formulate such a phenomenon for mass user-shared images along with more practical prediction methods. 
These findings are useful for information or services recommendations in any social network with intensive image sharing, as well as for other interesting personalization applications, particularly when there is no access to those exclusive user social graphs.", "User preference profiling is an important task in modern online social networks (OSN). With the proliferation of image-centric social platforms, such as Pinterest, visual contents have become one of the most informative data streams for understanding user preferences. Traditional approaches usually treat visual content analysis as a general classification problem where one or more labels are assigned to each image. Although such an approach simplifies the process of image analysis, it misses the rich context and visual cues that play an important role in people's perception of images. In this paper, we explore the possibilities of learning a user's latent visual preferences directly from image contents. We propose a distance metric learning method based on Deep Convolutional Neural Networks (CNN) to directly extract similarity information from visual contents and use the derived distance metric to mine individual users' fine-grained visual preferences. Through our preliminary experiments using data from 5,790 Pinterest users, we show that even for the images within the same category, each user possesses distinct and individually-identifiable visual preferences that are consistent over their lifetime. Our results underscore the untapped potential of finer-grained visual preference profiling in understanding users' preferences.", "Large collaborative datasets offer the challenging opportunity of creating systems capable of extracting knowledge in the presence of noisy data. In this work we explore the ability to automatically learn tag semantics by mining a global georeferenced image collection crawled from Flickr with the aim of improving an automatic annotation system. 
We are able to categorize sets of tags as places, landmarks, and visual descriptors. By organizing our dataset of more than 1.69 million images using a quadtree we can efficiently find geographic areas with sufficient density to provide useful results for place and landmark extraction. Precision-recall curves for our techniques compared with previous existing work used to identify place tags and manual groundtruth landmark annotation show the merit of our methods applied on a world scale." ] }
1612.03236
2566793769
Co-localization is the problem of localizing objects of the same class using only the set of images that contain them. This is a challenging task because the object detector must be built without negative examples that can lead to more informative supervision signals. The main idea of our method is to cluster the feature space of a generically pre-trained CNN, to find a set of CNN features that are consistently and highly activated for an object category, which we call category-consistent CNN features. Then, we propagate their combined activation map using superpixel geodesic distances for co-localization. In our first set of experiments, we show that the proposed method achieves state-of-the-art performance on three related benchmarks: PASCAL 2007, PASCAL-2012, and the Object Discovery dataset. We also show that our method is able to detect and localize truly unseen categories, on six held-out ImageNet categories with accuracy that is significantly higher than previous state-of-the-art. Our intuitive approach achieves this success without any region proposals or object detectors and can be based on a CNN that was pre-trained purely on image classification tasks without further fine-tuning.
One challenge of co-localization is defining criteria for discovering objects without any negative examples. To fill this gap, state-of-the-art co-localization methods such as @cite_3 @cite_36 @cite_27 @cite_18 employ object proposals as part of their object discovery and co-localization pipelines. One line of work uses the measure of objectness @cite_0 to generate multiple candidate bounding boxes for each image, followed by an objective function that simultaneously optimizes the image-level and box-level labels. This setting allows the use of a discriminative cost function @cite_28 , and the same idea is applied to co-localization on video frames @cite_18 . Another method also starts from object proposals, sharing the same spirit as deformable part models @cite_34 , where objects are discovered and localized by matching common object parts. Most recently, the confidence score distribution of a supervised object detector over the set of object proposals has been studied to define an objective function that learns a common object detector with a similar confidence score distribution. All of the aforementioned methods depend heavily on the quality of the object proposals.
{ "cite_N": [ "@cite_18", "@cite_36", "@cite_28", "@cite_3", "@cite_0", "@cite_27", "@cite_34" ], "mid": [ "95926497", "1966601141", "2086052791", "2298532145", "2066624635", "1919709169", "2168356304" ], "abstract": [ "In this paper, we tackle the problem of performing efficient co-localization in images and videos. Co-localization is the problem of simultaneously localizing (with bounding boxes) objects of the same class across a set of distinct images or videos. Building upon recent state-of-the-art methods, we show how we are able to naturally incorporate temporal terms and constraints for video co-localization into a quadratic programming framework. Furthermore, by leveraging the Frank-Wolfe algorithm (or conditional gradient), we show how our optimization formulations for both images and videos can be reduced to solving a succession of simple integer programs, leading to increased efficiency in both memory and speed. To validate our method, we present experimental results on the PASCAL VOC 2007 dataset for images and the YouTube-Objects dataset for videos, as well as a joint combination of the two.", "In this paper, we tackle the problem of co-localization in real-world images. Co-localization is the problem of simultaneously localizing (with bounding boxes) objects of the same class across a set of distinct images. Although similar problems such as co-segmentation and weakly supervised localization have been previously studied, we focus on being able to perform co-localization in real-world settings, which are typically characterized by large amounts of intra-class variation, inter-class diversity, and annotation noise. To address these issues, we present a joint image-box formulation for solving the co-localization problem, and show how it can be relaxed to a convex quadratic program which can be efficiently solved. 
We perform an extensive evaluation of our method compared to previous state-of-the-art approaches on the challenging PASCAL VOC 2007 and Object Discovery datasets. In addition, we also present a large-scale study of co-localization on ImageNet, involving ground-truth annotations for 3, 624 classes and approximately 1 million images.", "Purely bottom-up, unsupervised segmentation of a single image into foreground and background regions remains a challenging task for computer vision. Co-segmentation is the problem of simultaneously dividing multiple images into regions (segments) corresponding to different object classes. In this paper, we combine existing tools for bottom-up image segmentation such as normalized cuts, with kernel methods commonly used in object recognition. These two sets of techniques are used within a discriminative clustering framework: the goal is to assign foreground background labels jointly to all images, so that a supervised classifier trained with these labels leads to maximal separation of the two classes. In practice, we obtain a combinatorial optimization problem which is relaxed to a continuous convex optimization problem, that can itself be solved efficiently for up to dozens of images. We illustrate the proposed method on images with very similar foreground objects, as well as on more challenging problems with objects with higher intra-class variations.", "Given a set of images containing objects from the same category, the task of image co-localization is to identify and localize each instance. This paper shows that this problem can be solved by a simple but intriguing idea, that is, a common object detector can be learnt by making its detection confidence scores distributed like those of a strongly supervised detector. 
More specifically, we observe that given a set of object proposals extracted from an image that contains the object of interest, an accurate strongly supervised object detector should give high scores to only a small minority of proposals, and low scores to most of them. Thus, we devise an entropy-based objective function to enforce the above property when learning the common object detector. Once the detector is learnt, we resort to a segmentation approach to refine the localization. We show that despite its simplicity, our approach outperforms state-of-the-arts.", "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. 
As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.", "This paper addresses unsupervised discovery and localization of dominant objects from a noisy image collection with multiple object classes. The setting of this problem is fully unsupervised, without even image-level annotations or any assumption of a single dominant class. This is far more general than typical colocalization, cosegmentation, or weakly-supervised localization tasks. We tackle the discovery and localization problem using a part-based region matching approach: We use off-the-shelf region proposals to form a set of candidate bounding boxes for objects and object parts. These regions are efficiently matched across images using a probabilistic Hough transform that evaluates the confidence for each candidate correspondence considering both appearance and spatial consistency. Dominant objects are discovered and localized by comparing the scores of candidate regions and selecting those that stand out over other regions containing them. Extensive experimental evaluations on standard benchmarks demonstrate that the proposed approach significantly outperforms the current state of the art in colocalization, and achieves robust object discovery in challenging mixed-class datasets.", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. 
Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function." ] }
1612.02962
2951757466
Network management protocols often require timely and meaningful insight about per flow network traffic. This paper introduces Randomized Admission Policy (RAP) - a novel algorithm for the frequency and top-k estimation problems, which are fundamental in network monitoring. We demonstrate space reductions compared to the alternatives by a factor of up to 32 on real packet traces and up to 128 on heavy-tailed workloads. For top-k identification, RAP exhibits memory savings by a factor of between 4 and 64 depending on the skew of the workload. These empirical results are backed by formal analysis, indicating the asymptotic space improvement of our probabilistic admission approach. Additionally, we present d-Way RAP, a hardware friendly variant of RAP that empirically maintains its space and accuracy benefits.
Counter based algorithms are usually designed for software implementations and maintain a table of monitored items. The differences between these algorithms lie in the question of admission and eviction of entries to and from the table. From a networking perspective, counter based algorithms maintain an explicit flow to counter mapping for monitored items. For a stream of @math events and an accuracy parameter @math , the goal is to approximate a given flow's frequency to within an additive error of @math . For this task, @math counters are required @cite_17 , and this is achieved by some of the algorithms below.
{ "cite_N": [ "@cite_17" ], "mid": [ "2795088498" ], "abstract": [ "We propose an integrated approach for solving both problems of finding the most popular k elements, and finding frequent elements in a data stream. Our technique is efficient and exact if the alphabet under consideration is small. In the more practical large alphabet case, our solution is space efficient and reports both top-k and frequent elements with tight guarantees on errors. For general data distributions, our top-k algorithm can return a set of k' elements, where k' ≃ k, which are guaranteed to be the top-k' elements; and we use minimal space for calculating frequent elements. For realistic Zipfian data, our space requirement for the frequent elements problem decreases dramatically with the parameter of the distribution; and for top-k queries, we ensure that only the top-k elements, in the correct order, are reported. Our experiments show significant space reductions with no loss in accuracy." ] }
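The related-work paragraph above describes counter-based algorithms as a small table of monitored flows with an admission/eviction rule, needing on the order of 1/ε counters for an additive error of εN. A classic instance of this family is the Misra–Gries summary, sketched below for illustration; it is not RAP itself, which replaces deterministic eviction with a randomized admission policy.

```python
# Misra-Gries frequency summary: a counter-based table with a simple
# eviction rule. With k counters over a stream of N items, each stored
# count undercounts the true frequency by at most N / (k + 1).
# (Shown for illustration; RAP uses randomized admission instead.)
def misra_gries(stream, k):
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1          # monitored flow: just count
        elif len(counters) < k:
            counters[item] = 1           # admit while the table has room
        else:
            # Table full: decrement every counter, evicting zeros.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# Heavy hitters survive eviction; each count is a lower bound on the truth.
summary = misra_gries(list("aaaaabbbbccd"), 2)
```

On this toy stream (N = 12, k = 2) the frequent items "a" and "b" remain in the table, and each estimate is within N/(k+1) = 4 of its true frequency, matching the additive-error guarantee stated above.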
1612.02962
2951757466
Network management protocols often require timely and meaningful insight about per flow network traffic. This paper introduces Randomized Admission Policy (RAP) - a novel algorithm for the frequency and top-k estimation problems, which are fundamental in network monitoring. We demonstrate space reductions compared to the alternatives by a factor of up to 32 on real packet traces and up to 128 on heavy-tailed workloads. For top-k identification, RAP exhibits memory savings by a factor of between 4 and 64 depending on the skew of the workload. These empirical results are backed by formal analysis, indicating the asymptotic space improvement of our probabilistic admission approach. Additionally, we present d-Way RAP, a hardware friendly variant of RAP that empirically maintains its space and accuracy benefits.
In @cite_12 @cite_20 , the least significant bits of each counter are stored in SRAM and the most significant bits in DRAM. This way, the space allocated to each flow in SRAM is small. However, the SRAM counters have to be periodically synchronized with the DRAM counters, which increases contention on the memory bus. Further, estimating a flow's frequency requires accessing DRAM, so these schemes cannot be used for online network monitoring.
{ "cite_N": [ "@cite_20", "@cite_12" ], "mid": [ "2074529643", "2108909367" ], "abstract": [ "A network device stores and updates statistics counters. Using an optimal counter management algorithm minimizes required SRAM size and ensures correct line-rate operation for many counters. We use a well known architecture for storing and updating statistics counters. This approach maintains smaller-size counters in fast (potentially on-chip) SRAM, while maintaining full-size counters in a large, slower DRAM. Our goal is to ensure that the system always correctly maintains counter values at line rate. An optimal counter management algorithm (CMA) minimizes the required SRAM size while ensuring correct line-rate operation for a large number of counters.", "Internet routers and switches need to maintain millions of (e.g., per prefix) counters at up to OC-768 speeds that are essential for traffic engineering. Unfortunately, the speed requirements require the use of large amounts of expensive SRAM memory. [1]introduced a cheaper statistics counter architecture that uses a much smaller amount of SRAM by using the SRAM as a cache together with a (cheap) backing DRAM that stores the complete counters. Counters in SRAM are periodically updated to the DRAM before they overflow under the control of a counter management algorithm. [1] also devised a counter management algorithm called LCF that they prove uses an optimal amount of SRAM. Unfortunately, it is difficult to implement LCF at high speeds because it requires sorting to evict the largest counter in the SRAM. This paper removes this bottleneck in [1] by proposing a counter management algorithm called LR(T) (Largest Recent with thresh-old T) that avoids sorting by only keeping a bitmap that tracks counters that are larger than threshold T. This allows LR(T) to be practically realizable using only at most 2 bits extra per counter and a simple pipelined data structure. 
Despite this, we show through a formal analysis, that for a particular value of the threshold T, the LR(T) requires an optimal amount of SRAM, matching LCF. Further,we also describe an implementation, based on a novel data structure called aggregated bitmap, that allows the LR(T) algorithm to be realized at line rates." ] }
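The hybrid SRAM/DRAM scheme discussed above keeps a small counter per flow in fast memory and flushes it to a full-width counter in slow memory before it overflows. The sketch below is a deliberately simplified model: it flushes exactly on overflow, whereas the cited papers schedule flushes with a counter-management algorithm (LCF, LR(T)); all names and parameters here are assumptions.

```python
# Simplified model of hybrid statistics counters: a b-bit counter per flow
# in fast "SRAM", flushed into a full-width "DRAM" counter on overflow.
# (Illustrative only; the cited CMAs schedule flushes more cleverly.)
class HybridCounters:
    def __init__(self, sram_bits=4):
        self.cap = (1 << sram_bits) - 1   # max value of the small counter
        self.sram = {}                    # flow -> small, fast counter
        self.dram = {}                    # flow -> full, slow counter
        self.flushes = 0                  # number of slow-memory writes

    def increment(self, flow):
        v = self.sram.get(flow, 0) + 1
        if v > self.cap:                  # would overflow: flush to DRAM
            self.dram[flow] = self.dram.get(flow, 0) + self.cap
            self.flushes += 1
            v = 1
        self.sram[flow] = v

    def read(self, flow):
        # An exact read needs both memories -- which is exactly why such
        # schemes are unsuited to online per-flow frequency estimation.
        return self.dram.get(flow, 0) + self.sram.get(flow, 0)
```

With 4-bit SRAM counters, 100 increments to one flow trigger only 6 slow-memory writes, illustrating how the scheme shrinks per-flow SRAM at the cost of periodic synchronization traffic.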
1612.02962
2951757466
Network management protocols often require timely and meaningful insight about per flow network traffic. This paper introduces Randomized Admission Policy (RAP) - a novel algorithm for the frequency and top-k estimation problems, which are fundamental in network monitoring. We demonstrate space reductions compared to the alternatives by a factor of up to 32 on real packet traces and up to 128 on heavy-tailed workloads. For top-k identification, RAP exhibits memory savings by a factor of between 4 and 64 depending on the skew of the workload. These empirical results are backed by formal analysis, indicating the asymptotic space improvement of our probabilistic admission approach. Additionally, we present d-Way RAP, a hardware friendly variant of RAP that empirically maintains its space and accuracy benefits.
Brick @cite_15 uses an efficient encoding in order to reduce the number of bits allocated per counter. Brick enables storing more counters, under the assumption that the total value of increments is known in advance. Brick is most effective when there are many very small flows.
{ "cite_N": [ "@cite_15" ], "mid": [ "2125472096" ], "abstract": [ "In this paper, we present an exact active statistics counter architecture called BRICK (Bucketized Rank Indexed Counters) that can efficiently store per-flow variable-width statistics counters entirely in SRAM while supporting both fast updates and lookups (e.g., 40 Gb s line rates). BRICK exploits statistical multiplexing by randomly bundling counters into small fixed-size buckets and supports dynamic sizing of counters by employing an innovative indexing scheme called rank-indexing. Experiments with Internet traces show that our solution can indeed maintain large arrays of exact active statistics counters with moderate amounts of SRAM." ] }
1612.02962
2951757466
Network management protocols often require timely and meaningful insight about per flow network traffic. This paper introduces Randomized Admission Policy (RAP) - a novel algorithm for the frequency and top-k estimation problems, which are fundamental in network monitoring. We demonstrate space reductions compared to the alternatives by a factor of up to 32 on real packet traces and up to 128 on heavy-tailed workloads. For top-k identification, RAP exhibits memory savings by a factor of between 4 and 64 depending on the skew of the workload. These empirical results are backed by formal analysis, indicating the asymptotic space improvement of our probabilistic admission approach. Additionally, we present d-Way RAP, a hardware friendly variant of RAP that empirically maintains its space and accuracy benefits.
Estimators use small, fixed-size counters to represent large numbers. These methods trade precision for space, allowing more counters to fit in SRAM. The idea was first introduced in @cite_40 and later adapted to networking devices @cite_36 @cite_3 @cite_39 @cite_0 . The downside of estimators is that they still require storing a flow-to-counter mapping for every flow, which carries significant overhead. Sampling techniques are another alternative that trades accuracy for space; unfortunately, they can only monitor large flows that are frequent enough to be sampled @cite_13 @cite_5 .
{ "cite_N": [ "@cite_36", "@cite_3", "@cite_39", "@cite_0", "@cite_40", "@cite_5", "@cite_13" ], "mid": [ "2101583649", "", "2085174050", "1495100589", "2064710146", "", "1994663326" ], "abstract": [ "The need for efficient counter architecture has arisen for the following two reasons. Firstly, a number of data streaming algorithms and network management applications require a large number of counters in order to identify important traffic characteristics. And secondly, at high speeds, current memory devices have significant limitations in terms of speed (DRAM) and size (SRAM). For some applications no information on counters is needed on a per-packet basis and several methods have been proposed to handle this problem with low SRAM memory requirements. However, for a number of applications it is essential to have the counter information on every packet arrival. In this paper we propose two, computationally and memory efficient, randomized algorithms for approximating the counter values. We prove that proposed estimators are unbiased and give variance bounds. A case study on multistage filters (MSF) over the real Internet traces shows a significant improvement by using the active counters architecture.", "", "Network management applications require large numbers of counters in order to collect traffic characteristics for each network flow. However, these counters often barely fit into on-chip SRAM memories. Past papers have proposed using counter estimators instead, thus trading off counter precision for a lower number of bits. But these estimators do not achieve optimal estimation error, and cannot always scale to arbitrary counter values.", "Measurement capabilities are essential for a variety of network applications, such as load balancing, routing, fairness and intrusion detection. These capabilities require large counter arrays in order to monitor the traffic of all network flows. 
While commodity SRAM memories are capable of operating at line speed, they are too small to accommodate large counter arrays. Previous works suggested estimators, which trade precision for reduced space. However, in order to accurately estimate the largest counter, these methods compromise the accuracy of the rest of the counters. In this work we present a closed form representation of the optimal estimation function. We then introduce Independent Counter Estimation Buckets (ICE-Buckets), a novel algorithm that improves estimation accuracy for all counters. This is achieved by separating the flows to buckets and configuring the optimal estimation function according to each bucket's counter scale. We prove an improved upper bound on the relative error and demonstrate an accuracy improvement of up to 57 times on real Internet packet traces.", "It is possible to use a small counter to keep approximate counts of large numbers. The resulting expected error can be rather precisely controlled. An example is given in which 8-bit counters (bytes) are used to keep track of as many as 130,000 events with a relative error which is substantially independent of the number n of events. This relative error can be expected to be 24 percent or less 95 percent of the time (i.e. s = n 8). The techniques could be used to advantage in multichannel counting hardware or software used for the monitoring of experiments or processes.", "", "Timely detection of changes in traffic load is critical for initiating appropriate traffic engineering mechanisms. Accurate measurement of traffic is essential since the efficacy of change detection depends on the accuracy of traffic estimation. However, precise traffic measurement involves inspecting every packet traversing a link, resulting in significant overhead, particularly on high speed links. Sampling techniques for traffic load estimation are proposed as a way to limit the measurement overhead. 
In this paper, we address the problem of bounding sampling error within a pre-specified tolerance level and propose an adaptive random sampling technique that determines the minimum sampling probability adaptively according to traffic dynamics. Using real network traffic traces, we show that the proposed adaptive random sampling technique indeed produces the desired accuracy, while also yielding significant reduction in the amount of traffic samples. We also investigate the impact of sampling errors on the performance of load change detection." ] }
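The small-counter estimator introduced in the @cite_40 abstract above is the classic Morris approximate counter: keep only an exponent c, increment it with probability 2^-c, and estimate the count as 2^c - 1. A minimal sketch, under the assumption of an unbiased basic Morris counter (the networking variants tune the base to control relative error):

```python
import random

# Morris approximate counter: c stores roughly log2 of the true count,
# so an 8-bit counter can track hundreds of thousands of events, at the
# price of variance. This is the basic scheme, not the tuned networking
# variants from the later citations.
class MorrisCounter:
    def __init__(self):
        self.c = 0

    def increment(self):
        # Bump the exponent with probability 2^-c.
        if random.random() < 2.0 ** -self.c:
            self.c += 1

    def estimate(self):
        # E[2^c - 1] equals the number of increments (unbiased).
        return 2 ** self.c - 1
```

Averaging many independent counters over the same number of events recovers the true count closely, while each counter stores only a handful of bits, which is exactly the precision-for-space trade the paragraph describes.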
1612.03225
2564159105
Decision trees have been a very popular class of predictive models for decades due to their interpretability and good performance on categorical features. However, they are not always robust and tend to overfit the data. Additionally, if allowed to grow large, they lose interpretability. In this paper, we present a novel mixed integer programming formulation to construct optimal decision trees of specified size. We take special structure of categorical features into account and allow combinatorial decisions (based on subsets of values of such a feature) at each node. We show that very good accuracy can be achieved with small trees using moderately-sized training sets. The optimization problems we solve are easily tractable with modern solvers.
In the 1990s several papers considered optimization formulations for optimal decision tree learning, but deliberately relaxed the inherently integer nature of the problem. In particular, in @cite_8 , a large-scale linear optimization problem, which can be viewed as a relaxation, is solved to global optimality via a specialized tabu search method over the extreme points of the linear polytope. In @cite_2 , a similar formulation is used, but this time combined with the use of support-vector machine techniques such as generalized kernels for multivariate decisions, yielding a convex nonlinear optimization problem which admits a favorable dual structure. More recent work @cite_5 has employed a stochastic gradient method to minimize a continuous upper bound on misclassification error made by a deep decision tree. None of these methods, though, guarantee optimal decision trees, since they do not consider the exact (integer) formulations, such as the one discussed in this paper.
{ "cite_N": [ "@cite_5", "@cite_2", "@cite_8" ], "mid": [ "2179503016", "2109638028", "" ], "abstract": [ "Decision trees and randomized forests are widely used in computer vision and machine learning. Standard algorithms for decision tree induction optimize the split functions one node at a time according to some splitting criteria. This greedy procedure often leads to suboptimal trees. In this paper, we present an algorithm for optimizing the split functions at all levels of the tree jointly with the leaf parameters, based on a global objective. We show that the problem of finding optimal linear-combination (oblique) splits for decision trees is related to structured prediction with latent variables, and we formulate a convex-concave upper bound on the tree's empirical loss. Computing the gradient of the proposed surrogate objective with respect to each training exemplar is O(d2), where d is the tree depth, and thus training deep trees is feasible. The use of stochastic gradient descent for optimization enables effective training with large datasets. Experiments on several classification benchmarks demonstrate that the resulting non-greedy decision trees outperform greedy decision tree baselines.", "Key ideas from statistical learning theory and support vector machines are generalized to decision trees. A support vector machine is used for each decision in the tree. The \"optimal\" decision tree is characterized, and both a primal and dual space formulation for constructing the tree are proposed. The result is a method for generating logically simple decision trees with multivariate linear, nonlinear or linear decisions. By varying the kernel function used, the decisions may consist of linear threshold units, polynomials, sigmoidal neural networks, or radial basis function networks. The preliminary results indicate that the method produces simple trees that generalize well with respect to other decision tree algorithms and single support vector machines.", "" ] }
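Unlike the relaxed formulations surveyed above, optimality for a fixed, tiny tree can be certified by exhaustive enumeration. The sketch below finds the provably best depth-1 tree ("stump") with a combinatorial split over subsets of a categorical feature's values, echoing the subset-based decisions the abstract mentions; it is a brute-force illustration, not the paper's mixed-integer programming formulation.

```python
from itertools import combinations

def majority_errors(ys):
    """Misclassifications when a leaf predicts its majority label."""
    if not ys:
        return 0
    best_label = max(set(ys), key=ys.count)
    return len(ys) - ys.count(best_label)

def best_stump(values, labels):
    """Optimal depth-1 tree with a combinatorial categorical split:
    route x left iff its feature value lies in subset S.
    Brute force over nonempty proper subsets (illustration, not MIP)."""
    cats = sorted(set(values))
    best_split, best_err = None, len(labels) + 1
    for r in range(1, len(cats)):
        for S in combinations(cats, r):
            S = set(S)
            left = [y for v, y in zip(values, labels) if v in S]
            right = [y for v, y in zip(values, labels) if v not in S]
            err = majority_errors(left) + majority_errors(right)
            if err < best_err:
                best_split, best_err = S, err
    return best_split, best_err

split, err = best_stump(
    ["red", "red", "blue", "green", "green", "blue"],
    [1, 1, 0, 1, 1, 0])
```

Enumeration is exponential in the number of category values, which is why the paper encodes the subset choice with binary variables inside a MIP instead; the toy version above just makes the "optimal tree of specified size" objective concrete.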
1612.03225
2564159105
Decision trees have been a very popular class of predictive models for decades due to their interpretability and good performance on categorical features. However, they are not always robust and tend to overfit the data. Additionally, if allowed to grow large, they lose interpretability. In this paper, we present a novel mixed integer programming formulation to construct optimal decision trees of specified size. We take special structure of categorical features into account and allow combinatorial decisions (based on subsets of values of such a feature) at each node. We show that very good accuracy can be achieved with small trees using moderately-sized training sets. The optimization problems we solve are easily tractable with modern solvers.
We would now like to remark on other relevant uses of integer optimization in classification settings. In particular, @cite_7 considered the problem of learning optimal "or's of and's", which fits into the problem of learning optimal disjunctive normal forms (DNFs), where optimality is measured by a trade-off between the misclassification rate and the number of literals that appear in the "or of and's". The work in @cite_7 remarks on the relationship between this problem and learning optimal decision trees. In @cite_7 , for the sake of computational efficiency, the authors ultimately resort to optimally selecting from a subset of candidate suboptimal DNFs learned by heuristic means, rather than solving their proposed mixed-integer optimization problem. Similarly, @cite_12 proposes learning DNF-like rules via integer optimization, with a formulation that can be viewed as boolean compressed sensing, lending theoretical credibility to solving a linear programming relaxation of the integer problem. Another integer model that minimizes misclassification error by choosing general partitions in feature space was proposed in @cite_13 , but when solving that model, global optimality certificates were not easily obtained on moderately-sized classification datasets, and the learned partition classifiers rarely outperformed CART, according to the overlapping author in @cite_9 .
{ "cite_N": [ "@cite_9", "@cite_13", "@cite_12", "@cite_7" ], "mid": [ "", "2164092299", "134253518", "2257638929" ], "abstract": [ "", "Motivated by the significant advances in integer optimization in the past decade, we introduce mixed-integer optimization methods to the classical statistical problems of classification and regression and construct a software package called CRIO (classification and regression via integer optimization). CRIO separates data points into different polyhedral regions. In classification each region is assigned a class, while in regression each region has its own distinct regression coefficients. Computational experimentations with generated and real data sets show that CRIO is comparable to and often outperforms the current leading methods in classification and regression. We hope that these results illustrate the potential for significant impact of integer optimization methods on computational statistics and data mining.", "We propose an interpretable rule-based classification system based on ideas from Boolean compressed sensing. We represent the problem of learning individual conjunctive clauses or individual disjunctive clauses as a Boolean group testing problem, and apply a novel linear programming relaxation to find solutions. We derive results for exact rule recovery which parallel the conditions for exact recovery of sparse signals in the compressed sensing literature: although the general rule recovery problem is NP-hard, under some conditions on the Boolean 'sensing' matrix, the rule can be recovered exactly. This is an exciting development in rule learning where most prior work focused on heuristic solutions. Furthermore we construct rule sets from these learned clauses using set covering and boosting. We show competitive classification accuracy using the proposed approach.", "Or's of And's (OA) models are comprised of a small number of disjunctions of conjunctions, also called disjunctive normal form. 
An example of an OA model is as follows: If ( @math 'blue' AND @math 'middle') OR ( @math 'yellow'), then predict @math , else predict @math . Or's of And's models have the advantage of being interpretable to human experts, since they are a set of conditions that concisely capture the characteristics of a specific subset of data. We present two optimization-based machine learning frameworks for constructing OA models, Optimized OA (OOA) and its faster version, Optimized OA with Approximations (OOAx). We prove theoretical bounds on the properties of patterns in an OA model. We build OA models as a diagnostic screening tool for obstructive sleep apnea, that achieves high accuracy with a substantial gain in interpretability over other methods." ] }
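As a concrete illustration of the "or's of and's" (DNF) classifiers discussed above, the sketch below evaluates such a rule on a dictionary of categorical features. The feature names and the example rule are hypothetical stand-ins, not rules learned by the cited optimization methods.

```python
# Minimal sketch (not the cited MIO formulations): evaluating an
# "or of and's" (DNF) rule. A rule is a list of clauses; each clause
# is a list of (feature, value) literals joined by AND; clauses are
# joined by OR. Feature names and the rule itself are hypothetical.

def eval_dnf(rule, example):
    """Return True if any clause has all of its literals satisfied."""
    return any(all(example.get(f) == v for f, v in clause) for clause in rule)

# "If (color = blue AND position = middle) OR (color = yellow), predict 1."
rule = [[("color", "blue"), ("position", "middle")],
        [("color", "yellow")]]

print(eval_dnf(rule, {"color": "blue", "position": "middle"}))  # True
print(eval_dnf(rule, {"color": "blue", "position": "left"}))    # False
```

The number of literals that the cited work penalizes is simply `sum(len(c) for c in rule)` (three in this example), which makes the accuracy/interpretability trade-off explicit.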
1612.03094
2567215512
Following the gaze of people inside videos is an important signal for understanding people and their actions. In this paper, we present an approach for following gaze across views by predicting where a particular person is looking throughout a scene. We collect VideoGaze, a new dataset which we use as a benchmark to both train and evaluate models. Given one view with a person in it and a second view of the scene, our model estimates a density for gaze location in the second view. A key aspect of our approach is an end-to-end model that solves the following sub-problems: saliency, gaze pose, and geometric relationships between views. Although our model is supervised only with gaze, we show that the model learns to solve these subproblems automatically without supervision. Experiments suggest that our approach follows gaze better than standard baselines and produces plausible results for everyday situations.
This paper builds upon a previous gaze-following model for static images @cite_20 . However, the previous work focuses only on cases where a person, within the image, is looking at another object in the same image. In this work, we remove this restriction and extend gaze following to cases where a person may be looking outside the current view. The model proposed in this paper deals with the situation where the person is looking at another view of the scene.
{ "cite_N": [ "@cite_20" ], "mid": [ "2184540135" ], "abstract": [ "Humans have the remarkable ability to follow the gaze of other people to identify what they are looking at. Following eye gaze, or gaze-following, is an important ability that allows us to understand what other people are thinking, the actions they are performing, and even predict what they might do next. Despite the importance of this topic, this problem has only been studied in limited scenarios within the computer vision community. In this paper, we propose a deep neural network-based approach for gaze-following and a new benchmark dataset, GazeFollow, for thorough evaluation. Given an image and the location of a head, our approach follows the gaze of the person and identifies the object being looked at. Our deep network is able to discover how to extract head pose and gaze orientation, and to select objects in the scene that are in the predicted line of sight and likely to be looked at (such as televisions, balls and food). The quantitative evaluation shows that our approach produces reliable results, even when viewing only the back of the head. While our method outperforms several baseline approaches, we are still far from reaching human performance on this task. Overall, we believe that gaze-following is a challenging and important problem that deserves more attention from the community." ] }
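The decomposition into a saliency pathway ("what is lookable") and a gaze pathway ("what lies along the person's line of sight") described in the abstract above can be illustrated with a toy multiplicative combination. The grid size, the hand-made maps, and the sharpness exponent below are placeholders, not outputs of the trained model.

```python
import numpy as np

# Toy sketch of combining a saliency map with a gaze cone into a gaze
# density, in the spirit of the gaze-following decomposition above.
H = W = 5
saliency = np.zeros((H, W))
saliency[1, 3] = 1.0   # a salient object in the person's line of sight
saliency[4, 0] = 0.5   # a salient object off to the side

head = np.array([2.0, 0.0])          # (row, col) of the person's head
gaze_dir = np.array([0.0, 1.0])      # unit vector: looking to the right

ys, xs = np.mgrid[0:H, 0:W]
offsets = np.stack([ys - head[0], xs - head[1]], -1).astype(float)
norms = np.linalg.norm(offsets, axis=-1) + 1e-8
cos = (offsets @ gaze_dir) / norms   # alignment with the gaze direction
gaze_cone = np.clip(cos, 0, None) ** 4  # sharp cone around the gaze ray

density = saliency * gaze_cone       # multiplicative combination
density /= density.sum()             # normalize to a probability map
print(np.unravel_index(density.argmax(), density.shape))
```

The salient-but-misaligned object at (4, 0) gets zero mass, so the predicted gaze target is the object at (1, 3) inside the cone.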
1612.03094
2567215512
Following the gaze of people inside videos is an important signal for understanding people and their actions. In this paper, we present an approach for following gaze across views by predicting where a particular person is looking throughout a scene. We collect VideoGaze, a new dataset which we use as a benchmark to both train and evaluate models. Given one view with a person in it and a second view of the scene, our model estimates a density for gaze location in the second view. A key aspect of our approach is an end-to-end model that solves the following sub-problems: saliency, gaze pose, and geometric relationships between views. Although our model is supervised only with gaze, we show that the model learns to solve these subproblems automatically without supervision. Experiments suggest that our approach follows gaze better than standard baselines and produces plausible results for everyday situations.
Unlike @cite_20 , we use parametrized geometry transformations that help the model to deal with the underlying geometry of the world. Neural networks have already been used to model transformations, such as in @cite_4 @cite_18 . Our work is also related to Spatial Transformers Networks @cite_21 , where a localization module generates the parameters of an affine transformation and warps the representation with bilinear interpolation. In this work, our model generates parameters of a 3D affine transformation, but the transformation is applied analytically without warping, which may be more stable. @cite_5 @cite_10 used 2D images to learn the underlying 3D structure. Similarly, we expect our model to learn the underlying 3D structure of the frame composition only using 2D images. Finally, @cite_17 provide efficient implementations for adding geometric transformations to convolutional neural networks.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_21", "@cite_5", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "112688168", "", "2951005624", "2469266052", "2439114332", "2184540135", "2951832501" ], "abstract": [ "A viewpoint-independent description of the shape of an object can be generated by imposing a canonical frame of reference on the object and describing the spatial dispositions of the parts relative to this object-based frame. When a familiar object is in an unusual orientation, the deciding factor in the choice of the canonical object-based frame may be the fact that relative to this frame the object has a familiar shape description. This may suggest that we first hypothesise an object-based frame and then test the resultant shape description for familiarity. However, it is possible to organise the interactions between units in a parallel network so that the pattern of activity in the network simultaneously converges on a representation of the shape and a representation of the object-based frame of reference. The connections in the network are determined by the constraints inherent in the image formation process.", "", "Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. 
We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.", "A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.", "We present a deep convolutional neural network for estimating the relative homography between a pair of images. Our feed-forward network has 10 layers, takes two stacked grayscale images as input, and produces an 8 degree of freedom homography which can be used to map the pixels from the first image to the second. We present two convolutional neural network architectures for HomographyNet: a regression network which directly estimates the real-valued homography parameters, and a classification network which produces a distribution over quantized homographies. We use a 4-point homography parameterization which maps the four corners from one image into the second image. Our networks are trained in an end-to-end fashion using warped MS-COCO images. Our approach works without the need for separate local feature detection and transformation estimation stages. Our deep models are compared to a traditional homography estimator based on ORB features and we highlight the scenarios where HomographyNet outperforms the traditional technique. 
We also describe a variety of applications powered by deep homography estimation, thus showcasing the flexibility of a deep learning approach.", "Humans have the remarkable ability to follow the gaze of other people to identify what they are looking at. Following eye gaze, or gaze-following, is an important ability that allows us to understand what other people are thinking, the actions they are performing, and even predict what they might do next. Despite the importance of this topic, this problem has only been studied in limited scenarios within the computer vision community. In this paper, we propose a deep neural network-based approach for gaze-following and a new benchmark dataset, GazeFollow, for thorough evaluation. Given an image and the location of a head, our approach follows the gaze of the person and identifies the object being looked at. Our deep network is able to discover how to extract head pose and gaze orientation, and to select objects in the scene that are in the predicted line of sight and likely to be looked at (such as televisions, balls and food). The quantitative evaluation shows that our approach produces reliable results, even when viewing only the back of the head. While our method outperforms several baseline approaches, we are still far from reaching human performance on this task. Overall, we believe that gaze-following is a challenging and important problem that deserves more attention from the community.", "We introduce gvnn, a neural network library in Torch aimed towards bridging the gap between classic geometric computer vision and deep learning. Inspired by the recent success of Spatial Transformer Networks, we propose several new layers which are often used as parametric transformations on the data in geometric computer vision. 
These layers can be inserted within a neural network much in the spirit of the original spatial transformers and allow backpropagation to enable end-to-end learning of a network involving any domain knowledge in geometric computer vision. This opens up applications in learning invariance to 3D geometric transformation for place recognition, end-to-end visual odometry, depth estimation and unsupervised learning through warping with a parametric transformation for image reconstruction error." ] }
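The idea in the paragraph above of applying a predicted 3D transformation analytically, rather than warping the representation as Spatial Transformers do, can be sketched as follows. In the actual model the parameters come from a network; the rotation angle and translation here are hardcoded stand-ins.

```python
import numpy as np

# Sketch: apply a 3D affine transformation (R, t) between two views
# analytically. Directions (gaze vectors) rotate but do not translate;
# points (head positions) receive the full affine map. Parameters are
# illustrative, not predicted by a network as in the real model.

def rot_y(theta):
    """Rotation matrix about the y (vertical) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

gaze = np.array([0.0, 0.0, 1.0])   # person looks along +z in view 1
R = rot_y(np.pi / 2)               # view 2 rotated 90 degrees about y
t = np.array([1.0, 0.0, 0.0])      # translation between the views

gaze_view2 = R @ gaze              # direction: rotation only
head_view2 = R @ np.zeros(3) + t   # point: rotation plus translation
print(gaze_view2, head_view2)
```

Because the transform is applied in closed form, no bilinear interpolation or grid sampling is needed, which is the stability argument made above.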
1612.03094
2567215512
Following the gaze of people inside videos is an important signal for understanding people and their actions. In this paper, we present an approach for following gaze across views by predicting where a particular person is looking throughout a scene. We collect VideoGaze, a new dataset which we use as a benchmark to both train and evaluate models. Given one view with a person in it and a second view of the scene, our model estimates a density for gaze location in the second view. A key aspect of our approach is an end-to-end model that solves the following sub-problems: saliency, gaze pose, and geometric relationships between views. Although our model is supervised only with gaze, we show that the model learns to solve these subproblems automatically without supervision. Experiments suggest that our approach follows gaze better than standard baselines and produces plausible results for everyday situations.
Although related, gaze-following and free-viewing saliency refer to different problems. In gaze-following, we predict the location of the gaze of an observer in the scene, while in saliency we predict the fixations of an external observer free-viewing the image. Some authors have used gaze to improve saliency prediction, such as in @cite_3 . Furthermore, @cite_1 showed how gaze prediction can improve state-of-the-art saliency models. Although our approach is not intended to solve video saliency, nor does it use video as input, we believe it is worth mentioning some works on learning saliency for videos, such as @cite_8 @cite_27 @cite_6 .
{ "cite_N": [ "@cite_8", "@cite_1", "@cite_6", "@cite_3", "@cite_27" ], "mid": [ "2164869561", "2520859141", "2028771402", "1994463521", "2104996629" ], "abstract": [ "Recently, visual saliency has drawn great research interest in the field of computer vision and multimedia. Various approaches aiming at calculating visual saliency have been proposed. To evaluate these approaches, several datasets have been presented for visual saliency in images. However, there are few datasets to capture spatiotemporal visual saliency in video. Intuitively, visual saliency in video is strongly affected by temporal context and might vary significantly even in visually similar frames. In this paper, we present an extensive dataset with 7.5-hour videos to capture spatiotemporal visual saliency. The salient regions in frames sequentially sampled from these videos are manually labeled by 23 subjects and then averaged to generate the ground-truth saliency maps. We also present three metrics to evaluate competing approaches. Several typical algorithms were evaluated on the dataset. The experimental results show that this dataset is very suitable for evaluating visual saliency. We also discover some interesting findings that would be addressed in future research. Currently, the dataset is freely available online together with the source code for evaluation.", "Recently, large breakthroughs have been observed in saliency modeling. The top scores on saliency benchmarks have become dominated by neural network models of saliency, and some evaluation scores have begun to saturate. Large jumps in performance relative to previous models can be found across datasets, image types, and evaluation metrics. Have saliency models begun to converge on human performance? In this paper, we re-examine the current state-of-the-art using a fine-grained analysis on image types, individual images, and image regions. 
Using experiments to gather annotations for high-density regions of human eye fixations on images in two established saliency datasets, MIT300 and CAT2000, we quantify up to 60% of the remaining errors of saliency models. We argue that to continue to approach human-level performance, saliency models will need to discover higher-level concepts in images: text, objects of gaze and action, locations of motion, and expected locations of people in images. Moreover, they will need to reason about the relative importance of image regions, such as focusing on the most important person in the room or the most informative sign on the road. More accurately tracking performance will require finer-grained evaluations and metrics. Pushing performance further will require higher-level image understanding.", "We present a novel tracking method for surveillance videos where the object and camera motion could be irregular. The tracker is mainly based on the mechanism of motion saliency detection, where regions of interest are detected according to the motion saliency of the moving objects. Without estimating global and local motion explicitly, we generate motion saliency by using the rank deficiency of gray scale gradient tensors within the local region neighborhoods of the image. In addition, multiple spatial features of targets are integrated in the system to assist the region-based tracking task. Experiment results on many videos suggest that the proposed method tracks moving targets efficiently even in videos with unstable cameras.", "Previous studies have shown that gaze direction of actors in a scene influences eye movements of passive observers during free-viewing (Castelhano, Wieth, & Henderson, 2007; Borji, Parks, & Itti, 2014). However, no computational model has been proposed to combine bottom-up saliency with actor’s head pose and gaze direction for predicting where observers look. 
Here, we first learn probability maps that predict fixations leaving head regions (gaze following fixations), as well as fixations on head regions (head fixations), both dependent on the actor’s head size and pose angle. We then learn a combination of gaze following, head region, and bottom-up saliency maps with a Markov chain composed of head region and non-head region states. This simple structure allows us to inspect the model and make comments about the nature of eye movements originating from heads as opposed to other regions. Here, we assume perfect knowledge of actor head pose direction (from an oracle). The combined model, which we call the Dynamic Weighting of Cues model (DWOC), explains observers’ fixations significantly better than each of the constituent components. Finally, in a fully automatic combined model, we replace the oracle head pose direction data with detections from a computer vision model of head pose. Using these (imperfect) automated detections, we again find that the combined model significantly outperforms its individual components. Our work extends the engineering and scientific applications of saliency models and helps better understand mechanisms of visual attention.", "Motion saliency is the key component for the video saliency model, and attracts great research interest. However, there is few universal predictor of motion saliency. In this paper, a novel method for generation of motion saliency is proposed, in which motion saliency map is obtained through the multi reference frames, and enhanced by spatial saliency information. The proposal can obtain more detailed information about motion feature and extract the salient object more integrally. The experiment results shows that our proposal can achieve 95.4 of the ROC area of a human based control in motion channel, whereas the classical Itti's model achieves 78.5 ." ] }
1612.03268
2949811693
We present a Deep Convolutional Neural Network architecture which serves as a generic image-to-image regressor that can be trained end-to-end without any further machinery. Our proposed architecture: the Recursively Branched Deconvolutional Network (RBDN) develops a cheap multi-context image representation very early on using an efficient recursive branching scheme with extensive parameter sharing and learnable upsampling. This multi-context representation is subjected to a highly non-linear locality preserving transformation by the remainder of our network comprising of a series of convolutions/deconvolutions without any spatial downsampling. The RBDN architecture is fully convolutional and can handle variable sized images during inference. We provide qualitative/quantitative results on @math diverse tasks: relighting, denoising and colorization and show that our proposed RBDN architecture obtains comparable results to the state-of-the-art on each of these tasks when used off-the-shelf without any post processing or task-specific architectural modifications.
Deep End-2-End Voxel-2-Voxel prediction @cite_21 proposed a video-to-video regressor for solving @math tasks: semantic segmentation, optical flow and colorization. Their architecture consists of a VGG @cite_50 style network on which they add branches which upsample and merge activations. Unlike Hypercolumns @cite_56 , they make the upsampling learnable and perform it in a more efficient way with weight sharing. While @cite_21 use upsampling to recover local correspondences, DnCNN @cite_49 on the other hand entirely eliminates downsampling and uses a simple @math layer fully convolutional network with residual connections for handling @math tasks: denoising, super-resolution and jpeg-deblocking. Our proposed RBDN architecture can be viewed as a hybrid of @cite_49 @cite_21 . While we do utilize multi-scale activations like @cite_21 , we do so very early in the network and generate a cheap composite multi-context representation for the image. Subsequently, we pass the composite map to a linear convolution network like @cite_49 .
{ "cite_N": [ "@cite_21", "@cite_49", "@cite_50", "@cite_56" ], "mid": [ "2272842615", "2508457857", "1686810756", "1948751323" ], "abstract": [ "Over the last few years deep learning methods have emerged as one of the most prominent approaches for video analysis. However, so far their most successful applications have been in the area of video classification and detection, i.e., problems involving the prediction of a single class label or a handful of output variables per video. Furthermore, while deep networks are commonly recognized as the best models to use in these domains, there is a widespread perception that in order to yield successful results they often require time-consuming architecture search, manual tweaking of parameters and computationally intensive pre-processing or post-processing methods. In this paper we challenge these views by presenting a deep 3D convolutional architecture trained end to end to perform voxel-level prediction, i.e., to output a variable at every voxel of the video. Most importantly, we show that the same exact architecture can be used to achieve competitive results on three widely different voxel-prediction tasks: video semantic segmentation, optical flow estimation, and video coloring. The three networks learned on these problems are trained from raw video without any form of preprocessing and their outputs do not require post-processing to achieve outstanding performance. Thus, they offer an efficient alternative to traditional and much more computationally expensive methods in these video domains.", "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. 
Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. 
However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline." ] }
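The "cheap composite multi-context representation" discussed above (pooling an image to coarser scales, upsampling each scale back, and stacking the results as channels) can be sketched with plain numpy. Real models make the upsampling learnable; the nearest-neighbor `np.repeat` and the 4×4 toy image below are illustrative assumptions.

```python
import numpy as np

# Sketch of a hypercolumn-style multi-context representation: each
# channel carries the same spatial location at a different context size.

def avg_pool2(x):
    """2x2 average pooling of a (H, W) array with even H, W."""
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(x, factor):
    """Nearest-neighbor upsampling (a learnable layer in real models)."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

img = np.arange(16.0).reshape(4, 4)
coarse = avg_pool2(img)                         # 2x2 coarse context
multi = np.stack([img, upsample(coarse, 2)])    # (channels, H, W) = (2, 4, 4)
print(multi.shape)
```

Each pixel of `multi` now holds both its original value and the mean of its 2×2 neighborhood, i.e. a small multi-context descriptor that later convolutions can consume at full resolution.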
1612.03268
2949811693
We present a Deep Convolutional Neural Network architecture which serves as a generic image-to-image regressor that can be trained end-to-end without any further machinery. Our proposed architecture: the Recursively Branched Deconvolutional Network (RBDN) develops a cheap multi-context image representation very early on using an efficient recursive branching scheme with extensive parameter sharing and learnable upsampling. This multi-context representation is subjected to a highly non-linear locality preserving transformation by the remainder of our network comprising of a series of convolutions/deconvolutions without any spatial downsampling. The RBDN architecture is fully convolutional and can handle variable sized images during inference. We provide qualitative/quantitative results on @math diverse tasks: relighting, denoising and colorization and show that our proposed RBDN architecture obtains comparable results to the state-of-the-art on each of these tasks when used off-the-shelf without any post processing or task-specific architectural modifications.
In the field of Face Recognition/Verification, while most research focuses on extracting illumination-invariant features, a relatively less explored alternative @cite_37 is to directly apply illumination corrections/normalizations to an image. Traditional face relighting approaches used the Retinex @cite_47 / Lambertian Reflectance @cite_10 theory together with spherical @cite_62 @cite_10 / hemispherical @cite_41 harmonics, subspace-based @cite_22 @cite_29 or dictionary-based @cite_63 @cite_8 @cite_61 @cite_64 @cite_46 @cite_27 illumination corrections. Deep Lambertian Networks @cite_12 encoded lambertian illumination models directly into their network architecture. This however limited the expressive power of the network, particularly due to the strong lambertian assumptions of isotropicity and absence of specular highlights, which seldom hold true for face images. In section , we show that it is possible to train a well-performing relighting model without making any lambertian assumptions using our generic RBDN architecture.
{ "cite_N": [ "@cite_61", "@cite_37", "@cite_62", "@cite_64", "@cite_22", "@cite_8", "@cite_41", "@cite_46", "@cite_29", "@cite_27", "@cite_63", "@cite_47", "@cite_10", "@cite_12" ], "mid": [ "2070653370", "2171692249", "2108428911", "2035746673", "2167765850", "2027805700", "2092179111", "56688174", "", "2018746048", "2132467081", "2164847484", "", "2952204419" ], "abstract": [ "We merge illumination normalization and component features into the framework of Sparse Representation-based Classification (SRC) for face recognition across illumination. Unlike most SRC-based face recognition which constructs a dictionary from a training set with sufficient illumination variation, the proposed method adopts a dictionary with illumination-normalized training set. This can be the first attempt to show that illumination normalization can upgrade the performance of SRC-based face recognition. To further improve the performance, we add in schemes exploiting local features, and prove its effectiveness. Experiments on FERET and Multi-PIE databases show that the performance of the proposed method can be competitive to the state of the art.", "We consider the problem of determining functions of an image of an object that are insensitive to illumination changes. We first show that for an object with Lambertian reflectance there are no discriminative functions that are invariant to illumination. This result leads as to adopt a probabilistic approach in which we analytically determine a probability distribution for the image gradient as a function of the surface's geometry and reflectance. Our distribution reveals that the direction of the image gradient is insensitive to changes in illumination direction. We verify this empirically by constructing a distribution for the image gradient from more than 20 million samples of gradients in a database of 1,280 images of 20 inanimate objects taken under varying lighting condition. 
Using this distribution we develop an illumination insensitive measure of image comparison and test it on the problem of face recognition.", "In this paper, we present a new method to modify the appearance of a face image by manipulating the illumination condition, when the face geometry and albedo information is unknown. This problem is particularly difficult when there is only a single image of the subject available. Recent research demonstrates that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace using a spherical harmonic representation. Moreover, morphable models are statistical ensembles of facial properties such as shape and texture. In this paper, we integrate spherical harmonics into the morphable model framework by proposing a 3D spherical harmonic basis morphable model (SHBMM). The proposed method can represent a face under arbitrary unknown lighting and pose simply by three low-dimensional vectors, i.e., shape parameters, spherical harmonic basis parameters, and illumination coefficients, which are called the SHBMM parameters. However, when the image was taken under an extreme lighting condition, the approximation error can be large, thus making it difficult to recover albedo information. In order to address this problem, we propose a subregion-based framework that uses a Markov random field to model the statistical distribution and spatial coherence of face texture, which makes our approach not only robust to extreme lighting conditions, but also insensitive to partial occlusions. The performance of our framework is demonstrated through various experimental results, including the improved rates for face recognition under extreme lighting conditions.", "In this paper, we propose a discriminative low-rank dictionary learning algorithm for sparse representation. 
Sparse representation seeks the sparsest coefficients to represent the test signal as linear combination of the bases in an over-complete dictionary. Motivated by low-rank matrix recovery and completion, assume that the data from the same pattern are linearly correlated, if we stack these data points as column vectors of a dictionary, then the dictionary should be approximately low-rank. An objective function with sparse coefficients, class discrimination and rank minimization is proposed and optimized during dictionary learning. We have applied the algorithm for face recognition. Numerous experiments with improved performances over previous dictionary learning methods validate the effectiveness of the proposed algorithm.", "In this paper, we introduce the concept of intrinsic illumination subspace which is based on the intrinsic images. This intrinsic illumination subspace enables an analytic generation of the illumination images under varying lighting conditions. When objects of the same class are concerned, our method allows a class-based generic intrinsic illumination subspace to be constructed in advance. We propose a lighting normalization method based on the generic intrinsic illumination subspace, which is used as a bootstrap subspace for novel images. Face recognition experiments are performed to demonstrate the effectiveness of our method", "In a sparse-representation-based face recognition scheme, the desired dictionary should have good representational power (i.e., being able to span the subspace of all faces) while supporting optimal discrimination of the classes (i.e., different human subjects). We propose a method to learn an over-complete dictionary that attempts to simultaneously achieve the above two goals. 
The proposed method, discriminative K-SVD (D-KSVD), is based on extending the K-SVD algorithm by incorporating the classification error into the objective function, thus allowing the performance of a linear classifier and the representational power of the dictionary being considered at the same time by the same optimization procedure. The D-KSVD algorithm finds the dictionary and solves for the classifier using a procedure derived from the K-SVD algorithm, which has proven efficiency and performance. This is in contrast to most existing work that relies on iteratively solving sub-problems with the hope of achieving the global optimal through iterative approximation. We evaluate the proposed method using two commonly-used face databases, the Extended YaleB database and the AR database, with detailed comparison to 3 alternative approaches, including the leading state-of-the-art in the literature. The experiments show that the proposed method outperforms these competing methods in most of the cases. Further, using Fisher criterion and dictionary incoherence, we also show that the learned dictionary and the corresponding classifier are indeed better-posed to support sparse-representation-based recognition.", "As part of the face recognition task in a robust security system, we propose a novel approach for the illumination recovery of faces with cast shadows and specularities. Given a single 2D face image, we relight the face object by extracting the nine spherical harmonic bases and the face spherical illumination coefficients by using the face spherical spaces properties. First, an illumination training database is generated by computing the properties of the spherical spaces out of face albedo and normal values estimated from 2D training images. The training database is then discriminately divided into two directions in terms of the illumination quality and light direction of each image. 
Based on the generated multi-level illumination discriminative training space, we analyze the target face pixels and compare them with the appropriate training subspace using pre-generated tiles. When designing the framework, practical real-time processing speed and small image size were considered. In contrast to other approaches, our technique requires neither 3D face models nor restricted illumination conditions for the training process. Furthermore, the proposed approach uses one single face image to estimate the face albedo and face spherical spaces. In this work, we also provide the results of a series of experiments performed on publicly available databases to show the significant improvements in the face recognition rates.", "This paper focuses on enhancing Sparse Representation based Classifier (SRC) in single-sample face recognition tasks under varying illumination conditions. The major contribution is two-fold: firstly, we present an interesting observation based on Lambertian reflectance model: the identity information will be canceled out by the pair-wise difference images from the same subject in logarithmic domain, and only the subject-independent illumination variation retains. Secondly, inspired from this observation, we propose to “borrow” illumination variations from any generic subject by constructing an illumination variation dictionary composed of pair-wise difference images of generic subjects in logarithmic domain to cover the possible illumination variations between test and gallery samples. Experimental results on Extended Yale B and FERET face databases demonstrate the superiority of our method.", "", "In this paper, we present a face recognition method based on simultaneous sparse approximations under varying illumination. Our method consists of two main stages. In the first stage, a dictionary is learned for each face class based on given training examples which minimizes the representation error with a sparseness constraint. 
In the second stage, a test image is projected onto the span of the atoms in each learned dictionary. The resulting residual vectors are then used for classification. Furthermore, to handle changes in lighting conditions, we use a relighting approach based on a non-stationary stochastic filter to generate multiple images of the same person with different lighting. As a result, our algorithm has the ability to recognize human faces with good accuracy even when only a single or a very few images are provided for training. The effectiveness of the proposed method is demonstrated on publicly available databases and it is shown that this method is efficient and can perform significantly better than many competitive face recognition algorithms.", "As a recently proposed technique, sparse representation based classification (SRC) has been widely used for face recognition (FR). SRC first codes a testing sample as a sparse linear combination of all the training samples, and then classifies the testing sample by evaluating which class leads to the minimum representation error. While the importance of sparsity is much emphasized in SRC and many related works, the use of collaborative representation (CR) in SRC is ignored by most literature. However, is it really the l 1 -norm sparsity that improves the FR accuracy? This paper devotes to analyze the working mechanism of SRC, and indicates that it is the CR but not the l1-norm sparsity that makes SRC powerful for face classification. Consequently, we propose a very simple yet much more efficient face classification scheme, namely CR based classification with regularized least square (CRC_RLS). The extensive experiments clearly show that CRC_RLS has very competitive classification results, while it has significantly less complexity than SRC.", "Sensations of color show a strong correlation with reflectance, even though the amount of visible light reaching the eye depends on the product of reflectance and illumination. 
The visual system must achieve this remarkable result by a scheme that does not measure flux. Such a scheme is described as the basis of retinex theory. This theory assumes that there are three independent cone systems, each starting with a set of receptors peaking, respectively, in the long-, middle-, and short-wavelength regions of the visible spectrum. Each system forms a separate image of the world in terms of lightness that shows a strong correlation with reflectance within its particular band of wavelengths. These images are not mixed, but rather are compared to generate color sensations. The problem then becomes how the lightness of areas in these separate images can be independent of flux. This article describes the mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects", "", "Visual perception is a challenging problem in part due to illumination variations. A possible solution is to first estimate an illumination invariant representation before using it for recognition. The object albedo and surface normals are examples of such representations. In this paper, we introduce a multilayer generative model where the latent variables include the albedo, surface normals, and the light source. Combining Deep Belief Nets with the Lambertian reflectance assumption, our model can learn good priors over the albedo from 2D images. Illumination variations can be explained by changing only the lighting latent variable in our model. By transferring learned knowledge from similar objects, albedo and surface normals estimation from a single image is possible in our model. Experiments demonstrate that our model is able to generalize as well as improve over standard baselines in one-shot face recognition." ] }
1612.03268
2949811693
We present a Deep Convolutional Neural Network architecture which serves as a generic image-to-image regressor that can be trained end-to-end without any further machinery. Our proposed architecture: the Recursively Branched Deconvolutional Network (RBDN) develops a cheap multi-context image representation very early on using an efficient recursive branching scheme with extensive parameter sharing and learnable upsampling. This multi-context representation is subjected to a highly non-linear locality preserving transformation by the remainder of our network comprising of a series of convolutions/deconvolutions without any spatial downsampling. The RBDN architecture is fully convolutional and can handle variable sized images during inference. We provide qualitative/quantitative results on @math diverse tasks: relighting, denoising and colorization and show that our proposed RBDN architecture obtains comparable results to the state-of-the-art on each of these tasks when used off-the-shelf without any post processing or task-specific architectural modifications.
Denoising approaches typically assume an Additive White Gaussian Noise (AWGN) of known/unknown variance. Traditional denoising approaches include ClusteringSR @cite_18 , EPLL @cite_51 , BM3D @cite_42 , NL-Bayes @cite_6 , NCSR @cite_20 , WNNM @cite_0 . Among these, BM3D @cite_42 is the most popular, very well engineered and still widely used as the state-of-the-art denoising approach. Early DCNN based denoising approaches @cite_5 @cite_54 @cite_14 @cite_23 @cite_32 required a different model to be trained for each noise variance, which limited their practical use. Recently, a Gaussian-CRF based DCNN approach (DCGRF @cite_28 ) was proposed which could explicitly model the noise variance. DCGRF could however only reliably model noise levels within a reasonable range and had to use two models: low-noise DCGRF ( @math ) and high-noise DCGRF ( @math ). In section , we show that a single model of our proposed RBDN approach trained on a wide range of noise levels ( @math ) achieves competitive results and outperforms all the previously proposed approaches at all noise levels @math .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_28", "@cite_54", "@cite_42", "@cite_32", "@cite_6", "@cite_0", "@cite_23", "@cite_5", "@cite_51", "@cite_20" ], "mid": [ "2045079989", "2037642501", "2963366932", "2098477387", "2056370875", "2134311243", "2058005980", "2048695508", "2146337213", "", "2172275395", "1978749115" ], "abstract": [ "Where does the sparsity in image signals come from? Local and nonlocal image models have supplied complementary views toward the regularity in natural images — the former attempts to construct or learn a dictionary of basis functions that promotes the sparsity; while the latter connects the sparsity with the self-similarity of the image source by clustering. In this paper, we present a variational framework for unifying the above two views and propose a new denoising algorithm built upon clustering-based sparse representation (CSR). Inspired by the success of l 1 -optimization, we have formulated a double-header l 1 -optimization problem where the regularization involves both dictionary learning and structural structuring. A surrogate-function based iterative shrinkage solution has been developed to solve the double-header l 1 -optimization problem and a probabilistic interpretation of CSR model is also included. Our experimental results have shown convincing improvements over state-of-the-art denoising technique BM3D on the class of regular texture images. The PSNR performance of CSR denoising is at least comparable and often superior to other competing schemes including BM3D on a collection of 12 generic natural images.", "Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with a plain multi layer perceptron (MLP) applied to image patches. 
While this has been done before, we will show that by training on large image databases we are able to compete with the current state-of-the-art image denoising methods. Furthermore, our approach is easily adapted to less extensively studied types of noise (by merely exchanging the training data), for which we achieve excellent results as well.", "We propose a novel end-to-end trainable deep network architecture for image denoising based on a Gaussian Conditional Random Field (GCRF) model. In contrast to the existing discriminative denoising methods that train a separate model for each individual noise level, the proposed deep network explicitly models the input noise variance and hence is capable of handling a range of noise levels. Our deep network, which we refer to as deep GCRF network, consists of two sub-networks: (i) a parameter generation network that generates the pairwise potential parameters based on the noisy input image, and (ii) an inference network whose layers perform the computations involved in an iterative GCRF inference procedure. We train two deep GCRF networks (each network operates over a range of noise levels: one for low input noise levels and one for high input noise levels) discriminatively by maximizing the peak signal-to-noise ratio measure. Experiments on Berkeley segmentation and PASCALVOC datasets show that the proposed approach produces results on par with the state of-the-art without training a separate network for each individual noise level.", "We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. 
Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind de-noising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.", "We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative filtering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined.
Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.", "Images are often corrupted as a result of various factors that can occur during acquisition and transmission processes. Image denoising is aimed at removing or reducing noise, so that a good-quality image can be obtained for various applications. The paper presents a neural network based denoising method implemented in the wavelet transform domain. A noisy image is first wavelet transformed into four subbands, then a trained layered neural network is applied to each subband to generate noise-removed wavelet coefficients from their noisy ones. The denoised image is thereafter obtained through the inverse transform on the noise-removed wavelet coefficients. Simulation results demonstrate that this method is very efficient in removing noise. Compared with other methods performed in the wavelet domain, it requires no a priori knowledge about the noise and needs only one level of signal decomposition to obtain very good denoising results.", "Recent state-of-the-art image denoising methods use nonparametric estimation processes for @math patches and obtain surprisingly good denoising results. The mathematical and experimental evidence of two recent articles suggests that we might even be close to the best attainable performance in image denoising ever. This suspicion is supported by a remarkable convergence of all analyzed methods. 
Still more interestingly, most patch-based image denoising methods can be summarized in one paradigm, which unites the transform thresholding method and a Markovian Bayesian estimation. As the present paper shows, this unification is complete when the patch space is assumed to be a Gaussian mixture. Each Gaussian distribution is associated with its orthonormal basis of patch eigenvectors. Thus, transform thresholding (or a Wiener filter) is made on these local orthogonal bases. In this paper a simple patch-based Bayesian method is proposed, which on the one hand keeps most interesting features of former metho...", "As a convex relaxation of the low rank matrix factorization problem, the nuclear norm minimization has been attracting significant research interest in recent years. The standard nuclear norm minimization regularizes each singular value equally to pursue the convexity of the objective function. However, this greatly restricts its capability and flexibility in dealing with many practical problems (e.g., denoising), where the singular values have clear physical meanings and should be treated differently. In this paper we study the weighted nuclear norm minimization (WNNM) problem, where the singular values are assigned different weights. The solutions of the WNNM problem are analyzed under different weighting conditions. We then apply the proposed WNNM algorithm to image denoising by exploiting the image nonlocal self-similarity. Experimental results clearly show that the proposed WNNM algorithm outperforms many state-of-the-art denoising algorithms such as BM3D in terms of both quantitative measure and visual perception quality.", "We present a novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with denoising auto-encoder (DA). We propose an alternative training scheme that successfully adapts DA, originally designed for unsupervised feature learning, to the tasks of image denoising and blind inpainting. 
Our method's performance in the image denoising task is comparable to that of KSVD which is a widely used sparse coding technique. More importantly, in blind image inpainting task, the proposed method provides solutions to some complex problems that have not been tackled before. Specifically, we can automatically remove complex patterns like superimposed text from an image, rather than simple patterns like pixels missing at random. Moreover, the proposed method does not need the information regarding the region that requires inpainting to be given a priori. Experimental results demonstrate the effectiveness of the proposed method in the tasks of image denoising and blind inpainting. We also show that our new training scheme for DA is more effective and can improve the performance of unsupervised feature learning.", "", "Learning good image priors is of utmost importance for the study of vision, computer vision and image processing applications. Learning priors and optimizing over whole images can lead to tremendous computational challenges. In contrast, when we work with small image patches, it is possible to learn priors and perform patch restoration very efficiently. This raises three questions - do priors that give high likelihood to the data also lead to good performance in restoration? Can we use such patch based priors to restore a full image? Can we learn better patch priors? In this work we answer these questions. We compare the likelihood of several patch models and show that priors that give high likelihood to data perform better in patch restoration. Motivated by this result, we propose a generic framework which allows for whole image restoration using any patch based prior for which a MAP (or approximate MAP) estimate can be calculated. We show how to derive an appropriate cost function, how to optimize it and how to use it to restore whole images. 
Finally, we present a generic, surprisingly simple Gaussian Mixture prior, learned from a set of natural images. When used with the proposed framework, this Gaussian Mixture Model outperforms all other generic prior methods for image denoising, deblurring and inpainting.", "Sparse representation models code an image patch as a linear combination of a few atoms chosen out from an over-complete dictionary, and they have shown promising results in various image restoration applications. However, due to the degradation of the observed image (e.g., noisy, blurred, and or down-sampled), the sparse representations by conventional models may not be accurate enough for a faithful reconstruction of the original image. To improve the performance of sparse representation-based image restoration, in this paper the concept of sparse coding noise is introduced, and the goal of image restoration turns to how to suppress the sparse coding noise. To this end, we exploit the image nonlocal self-similarity to obtain good estimates of the sparse coding coefficients of the original image, and then centralize the sparse coding coefficients of the observed image to those estimates. The so-called nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model, while our extensive experiments on various types of image restoration problems, including denoising, deblurring and super-resolution, validate the generality and state-of-the-art performance of the proposed NCSR algorithm." ] }
1612.03239
2580281026
We consider the stabilization of an unstable discrete-time linear system that is observed over a channel corrupted by continuous multiplicative noise. Our main result shows that if the system growth is large enough, then the system cannot be stabilized in a second-moment sense. This is done by showing that the probability that the state magnitude remains bounded must go to zero with time. Our proof technique recursively bounds the conditional density of the system state (instead of focusing on the second moment) to bound the progress the controller can make. This sidesteps the difficulty encountered in using the standard data-rate theorem style approach; that approach does not work because the mutual information per round between the system state and the observation is potentially unbounded. It was known that a system with multiplicative observation noise can be stabilized using a simple memoryless linear strategy if the system growth is suitably bounded. In this paper, we show that while memory cannot improve the performance of a linear scheme, a simple non-linear scheme that uses one-step memory can do better than the best linear scheme.
The uncertainty threshold principle @cite_12 considers systems with Gaussian uncertainty on the system growth factor and the control gain, and provides limits for when the system is stabilizable in a second-moment sense. Our work complements this result by considering uncertainty on the observation gain.
{ "cite_N": [ "@cite_12" ], "mid": [ "2123486671" ], "abstract": [ "This note shows that the optimal control of dynamic systems with uncertain parameters has certain limitations. In particular, by means of a simple scalar linear-quadratic optimal control example, it is shown that the infinite horizon solution does not exist if the parameter uncertainty exceeds a certain quantifiable threshold; we call this the uncertainty threshold principle. The philosophical and design implications of this result are discussed." ] }
1612.03239
2580281026
We consider the stabilization of an unstable discrete-time linear system that is observed over a channel corrupted by continuous multiplicative noise. Our main result shows that if the system growth is large enough, then the system cannot be stabilized in a second-moment sense. This is done by showing that the probability that the state magnitude remains bounded must go to zero with time. Our proof technique recursively bounds the conditional density of the system state (instead of focusing on the second moment) to bound the progress the controller can make. This sidesteps the difficulty encountered in using the standard data-rate theorem style approach; that approach does not work because the mutual information per round between the system state and the observation is potentially unbounded. It was known that a system with multiplicative observation noise can be stabilized using a simple memoryless linear strategy if the system growth is suitably bounded. In this paper, we show that while memory cannot improve the performance of a linear scheme, a simple non-linear scheme that uses one-step memory can do better than the best linear scheme.
A related problem is that of estimating a linear system over multiplicative noise. While early work on this had been limited to exploring linear estimation strategies @cite_11 @cite_14 , some recent work shows a general converse result for the estimation problem over multiplicative noise for both linear and non-linear strategies @cite_17 . We note that our problem can also be interpreted as an "active" estimation problem for @math , and our impossibility result applies to both linear and non-linear control strategies. However, techniques from the estimation converse result or the data-rate theorems do not work for our setup here. Unlike the estimation problem, we cannot describe the distribution of @math in our problem since the control @math is arbitrary. For the same reason, we also cannot bound the range of @math or the rate across the observation channel to use a data-rate theorem approach.
{ "cite_N": [ "@cite_14", "@cite_17", "@cite_11" ], "mid": [ "", "2077227429", "2063695979" ], "abstract": [ "", "Control strategies for systems with information bottlenecks often follow an estimate-then-control paradigm. This paper presents a “non-coherent” system where this strategy cannot work and provides an alternative. The paper considers the estimation and control of a discrete-time linear system with continuous random observation gain, i.e. through a non-coherent channel. It is shown that such an unstable system is not mean-squared observable regardless of the density of the random observation gain: the mean-squared estimation error for any estimator must go to infinity. This is surprising in the context of threshold results for rate-limited estimation. In contrast to other results with rate-limited feedback, the paper shows that the system can be closed-loop mean-square stabilized in a certain parameter regime even though its open-loop counterpart is not mean-square observable. Finally, carry-free models (generalized deterministic models) provide an intuitive interpretation for the results.", "This paper considers optimum (MMSE) linear recursive estimation of stochastic signals in the presence of multiplicative noise in addition to measurement noise. Often problems associated with phenomena such as fading or reflection of the transmitted signal at an ionospheric layer, and also situations involving sampling, gating, or amplitude modulation, can be cast into such formulation. The different kinds of estimation problems treated include one-stage prediction, filtering, and smoothing. Algorithms are presented for discrete time as well as for continuous time estimation." ] }
1612.03242
2564591810
Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256x256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions.
Built upon these generative models, conditional image generation has also been studied. Most methods utilized simple conditioning variables such as attributes or class labels @cite_7 @cite_15 @cite_41 @cite_25 . There is also work conditioned on images to generate images, including photo editing @cite_19 @cite_24 , domain transfer @cite_33 @cite_36 and super-resolution @cite_40 @cite_38 . However, super-resolution methods @cite_40 @cite_38 can only add limited details to low-resolution images and cannot correct large defects as our proposed StackGAN does. Recently, several methods have been developed to generate images from unstructured text. Mansimov et al. @cite_20 built an AlignDRAW model by learning to estimate alignment between text and the generating canvas. Reed et al. @cite_32 used conditional PixelCNN to generate images using the text descriptions and object location constraints. Nguyen et al. @cite_34 used an approximate Langevin sampling approach to generate images conditioned on text. However, their sampling approach requires an inefficient iterative optimization process. With conditional GAN, Reed et al. @cite_28 successfully generated plausible 64 @math 64 images for birds and flowers based on text descriptions. Their follow-up work @cite_26 was able to generate 128 @math 128 images by utilizing additional annotations on object part locations.
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_33", "@cite_7", "@cite_41", "@cite_36", "@cite_28", "@cite_32", "@cite_24", "@cite_19", "@cite_40", "@cite_15", "@cite_34", "@cite_25", "@cite_20" ], "mid": [ "", "1893585201", "2553897675", "", "", "", "", "", "", "2524985544", "2950560720", "", "2952122856", "", "2155292833" ], "abstract": [ "", "We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task.", "We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domains, would remain unchanged. Other than the function f, the training data is unsupervised and consist of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. 
We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.", "", "", "", "", "", "", "The increasingly photorealistic sample quality of generative image models suggests their feasibility in applications beyond image generation. We present the Neural Photo Editor, an interface that leverages the power of generative neural networks to make large, semantically coherent changes to existing images. To tackle the challenge of achieving accurate reconstructions without loss of feature quality, we introduce the Introspective Adversarial Network, a novel hybridization of the VAE and GAN. Our model efficiently captures long-range dependencies through use of a computational block based on weight-shared dilated convolutions, and improves generalization performance with Orthogonal Regularization, a novel weight regularization method. We validate our contributions on CelebA, SVHN, and CIFAR-100, and produce samples and reconstructions with high visual fidelity.", "Image super-resolution (SR) is an underdetermined inverse problem, where a large number of plausible high-resolution images can explain the same downsampled image. Most current single image SR methods use empirical risk minimisation, often with a pixel-wise mean squared error (MSE) loss. However, the outputs from such methods tend to be blurry, over-smoothed and generally appear implausible. A more desirable approach would employ Maximum a Posteriori (MAP) inference, preferring solutions that always have a high probability under the image prior, and thus appear more plausible. Direct MAP estimation for SR is non-trivial, as it requires us to build a model for the image prior from samples. Furthermore, MAP inference is often performed via optimisation-based iterative algorithms which don't compare well with the efficiency of neural-network-based alternatives. 
Here we introduce new methods for amortised MAP inference whereby we calculate the MAP estimate directly using a convolutional neural network. We first introduce a novel neural network architecture that performs a projection to the affine subspace of valid SR solutions ensuring that the high resolution output of the network is always consistent with the low resolution input. We show that, using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models. We propose three methods to solve this optimisation problem: (1) Generative Adversarial Networks (GAN) (2) denoiser-guided SR which backpropagates gradient-estimates from denoising to train the network, and (3) a baseline method using a maximum-likelihood-trained image prior. Our experiments show that the GAN based approach performs best on real image data. Lastly, we establish a connection between GANs and amortised variational inference as in e.g. variational autoencoders.", "", "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. 
Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.", "", "Motivated by the recent progress in generative models, we introduce a model that generates images from natural language descriptions. The proposed model iteratively draws patches on a canvas, while attending to the relevant words in the description. After training on Microsoft COCO, we compare our model with several baseline generative models on image generation and retrieval tasks. We demonstrate that our model produces higher quality samples than other approaches and generates images with novel scene compositions corresponding to previously unseen captions in the dataset." ] }
1612.03242
2564591810
Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256x256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-art methods on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions.
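The Conditioning Augmentation idea mentioned in the abstract can be illustrated with a minimal NumPy sketch: the text embedding is projected to the mean and log-standard-deviation of a diagonal Gaussian, and the conditioning vector is sampled via the reparameterization trick. The projections `W_mu`, `W_sigma` and all dimensions below are illustrative placeholders, not the paper's actual layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditioning_augmentation(text_embedding, latent_dim, W_mu, W_sigma):
    """Sample c ~ N(mu(e), diag(sigma(e)^2)) from a text embedding e.

    W_mu and W_sigma are hypothetical linear maps standing in for the small
    fully connected layers used in practice.
    """
    mu = W_mu @ text_embedding            # mean of the conditioning Gaussian
    log_sigma = W_sigma @ text_embedding  # log std, for numerical stability
    eps = rng.standard_normal(latent_dim)  # reparameterization trick
    return mu + np.exp(log_sigma) * eps

# Toy dimensions: a 1024-d text embedding mapped to a 128-d conditioning vector.
embed_dim, latent_dim = 1024, 128
e = rng.standard_normal(embed_dim)
W_mu = rng.standard_normal((latent_dim, embed_dim)) * 0.01
W_sigma = rng.standard_normal((latent_dim, embed_dim)) * 0.01
c = conditioning_augmentation(e, latent_dim, W_mu, W_sigma)
print(c.shape)  # (128,)
```

Because `c` is sampled rather than fixed, nearby text embeddings yield overlapping conditioning distributions, which is what encourages the smoothness in the latent conditioning manifold that the abstract describes.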
Besides using a single GAN for generating images, there is also work @cite_27 @cite_17 @cite_16 that utilized a series of GANs for image generation. Wang et al. @cite_27 factorized the indoor scene generation process into structure generation and style generation with the proposed @math -GAN. In contrast, the second stage of our StackGAN aims to complete object details and correct defects of Stage-I results based on text descriptions. Denton et al. @cite_37 built a series of GANs within a Laplacian pyramid framework. At each level of the pyramid, a residual image was generated conditioned on the image of the previous stage and then added back to the input image to produce the input for the next stage. Concurrently with our work, Huang et al. @cite_16 also showed that they can generate better images by stacking several GANs to reconstruct the multi-level representations of a pre-trained discriminative model. However, they only succeeded in generating 32 @math 32 images, while our method utilizes a simpler architecture to generate 256 @math 256 images with photo-realistic details and sixty-four times more pixels.
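The Laplacian-pyramid sampling loop described above can be sketched in a few lines: each stage upsamples the previous output and adds a generated residual. The toy `generators` below are stand-in callables with shrinking residual magnitudes, purely to show the control flow, not the cited model.

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample(img):
    # Nearest-neighbour 2x upsampling, standing in for the pyramid expand step.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid_sample(generators, base):
    """LAPGAN-style sampling: every stage adds a generated residual to the
    upsampled output of the previous stage."""
    img = base
    for g in generators:
        coarse = upsample(img)
        z = rng.standard_normal(coarse.shape)  # per-stage noise input
        img = coarse + g(z, coarse)            # residual conditioned on coarse
    return img

# Toy stand-in generators: residual magnitude shrinks at finer scales.
gens = [lambda z, c, s=s: 0.1 / s * z for s in (1, 2, 4)]
out = laplacian_pyramid_sample(gens, rng.standard_normal((8, 8)))
print(out.shape)  # (64, 64)
```

Three stages take an 8x8 base to 64x64, which mirrors how stacking stages multiplies output resolution.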
{ "cite_N": [ "@cite_27", "@cite_16", "@cite_37", "@cite_17" ], "mid": [ "2298992465", "2952010110", "1909320841", "" ], "abstract": [ "Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network ( ( S ^2 )-GAN). Our ( S ^2 )-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our ( S ^2 )-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations.", "In this paper, we propose a novel generative model named Stacked Generative Adversarial Networks (SGAN), which is trained to invert the hierarchical representations of a bottom-up discriminative network. Our model consists of a top-down stack of GANs, each learned to generate lower-level representations conditioned on higher-level representations. A representation discriminator is introduced at each feature hierarchy to encourage the representation manifold of the generator to align with that of the bottom-up discriminative network, leveraging the powerful discriminative representations to guide the generative model. In addition, we introduce a conditional loss that encourages the use of conditional information from the layer above, and a novel entropy loss that maximizes a variational lower bound on the conditional entropy of generator outputs. 
We first train each stack independently, and then train the whole model end-to-end. Unlike the original GAN that uses a single noise vector to represent all the variations, our SGAN decomposes variations into multiple levels and gradually resolves uncertainties in the top-down generative process. Based on visual inspection, Inception scores and visual Turing test, we demonstrate that SGAN is able to generate images of much higher quality than GANs without stacking.", "We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent approximate posterior distributions, and that acts as a stochastic encoder of the data. We develop stochastic back-propagation -- rules for back-propagation through stochastic variables -- and use this to develop an algorithm that allows for joint optimisation of the parameters of both the generative and recognition model. We demonstrate on several real-world data sets that the model generates realistic samples, provides accurate imputations of missing data and is a useful tool for high-dimensional data visualisation.", "" ] }
1612.03205
2565321876
Language generation tasks that seek to mimic human ability to use language creatively are difficult to evaluate, since one must consider creativity, style, and other non-trivial aspects of the generated text. The goal of this paper is to develop evaluation methods for one such task, ghostwriting of rap lyrics, and to provide an explicit, quantifiable foundation for the goals and future directions of this task. Ghostwriting must produce text that is similar in style to the emulated artist, yet distinct in content. We develop a novel evaluation methodology that addresses several complementary aspects of this task, and illustrate how such evaluation can be used to meaningfully analyze system performance. We provide a corpus of lyrics for 13 rap artists, annotated for stylistic similarity, which allows us to assess the feasibility of manual evaluation for generated verse.
In the past few years there has been a significant amount of work dedicated to the evaluation of natural language generation @cite_2 , dealing with different aspects of evaluation methodology. However, most of this work focuses on simple tasks, such as referring expression generation. For example, Belz and Kow investigated the impact of continuous and discrete scales for generated weather descriptions, as well as simple image descriptions that typically consist of a few words (e.g., "the small blue fan").
{ "cite_N": [ "@cite_2" ], "mid": [ "2251762574" ], "abstract": [ "Interactive systems have become an increasingly important type of application for deployment of NLG technology over recent years. At present, we do not yet have commonly agreed terminology or methodology for evaluating NLG within interactive systems. In this paper, we take steps towards addressing this gap by presenting a set of principles for designing new evaluations in our comparative evaluation methodology. We start with presenting a categorisation framework, giving an overview of different categories of evaluation measures, in order to provide standard terminology for categorising existing and new evaluation techniques. Background on existing evaluation methodologies for NLG and interactive systems is presented. The comparative evaluation methodology is presented. Finally, a methodology for comparative evaluation of NLG components embedded within interactive systems is presented in terms of the comparative evaluation methodology, using a specific task for illustrative purposes." ] }
1612.03117
2565558311
Bayesian Optimization (BO) has become a core method for solving expensive black-box optimization problems. While much research focussed on the choice of the acquisition function, we focus on online length-scale adaption and the choice of kernel function. Instead of choosing hyperparameters in view of maximum likelihood on past data, we propose to use the acquisition function to decide on hyperparameter adaptation more robustly and in view of the future optimization progress. Further, we propose a particular kernel function that includes non-stationarity and local anisotropy and thereby implicitly integrates the efficiency of local convex optimization with global Bayesian optimization. Comparisons to state-of-the-art BO methods underline the efficiency of these mechanisms on global optimization benchmarks.
Classical online model adaption in BO through maximum likelihood or LOO-CV is compared in @cite_10 . LOO-CV turns out to be more robust under model misspecification, whereas maximum likelihood gains better results as long as the model is chosen 'well'. @cite_5 already discusses the problem of highly misleading initial objective function samples. In @cite_9 an idea earlier mentioned in @cite_5 is studied, where the hyperparameter adaption is combined with the EI calculation in one step. The authors show better performance in certain cases, while the resulting sub-optimization can get quite tedious, as described in @cite_8 . Another approach for improving the length-scale hyperparameter adaption is presented in @cite_4 . They try to limit local exploration by setting an upper bound on the length-scale. The bound they propose is independent of dimensionality and is chosen experimentally by hand for their applications. Those papers address the same problem, but all of them are modifications of the basic approach of adjusting hyperparameters to maximize data likelihood. In contrast, we adjust hyperparameters based on the acquisition function, aiming at optimization performance rather than data likelihood.
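To make the contrast with maximum-likelihood fitting concrete, here is a hedged sketch of acquisition-oriented length-scale selection: each candidate length-scale is scored by the best Expected Improvement it induces over a candidate grid, and the most promising one is kept. The RBF kernel, the grid, and the candidate set are illustrative, not the procedure of any specific cited paper.

```python
import numpy as np
from scipy.stats import norm

def gp_posterior(X, y, Xs, ls, noise=1e-5):
    """Posterior mean/std of a zero-mean GP with a unit-variance RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(Xs, X)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    sd = np.sqrt(np.clip(1.0 - (v ** 2).sum(axis=0), 1e-12, None))
    return mu, sd

def expected_improvement(mu, sd, y_best):
    z = (y_best - mu) / sd  # minimization convention
    return sd * (z * norm.cdf(z) + norm.pdf(z))

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(6, 1))
y = np.sin(3.0 * X[:, 0])
Xs = np.linspace(-1.0, 1.0, 200)[:, None]

# Score each candidate length-scale by the best EI it induces and keep the
# winner -- optimization-oriented selection instead of maximum likelihood.
candidates = [0.05, 0.2, 0.8]
best_ls = max(candidates,
              key=lambda ls: expected_improvement(*gp_posterior(X, y, Xs, ls),
                                                  y.min()).max())
print(best_ls)
```

A maximum-likelihood fit would instead score each length-scale by the marginal likelihood of the six observed points, which ignores how useful the resulting model is for choosing the next evaluation.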
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_5", "@cite_10" ], "mid": [ "1531831816", "", "2060312275", "2151238122", "2034831667" ], "abstract": [ "Optimising black-box functions is important in many disciplines, such as tuning machine learning models, robotics, finance and mining exploration. Bayesian optimisation is a state-of-the-art technique for the global optimisation of black-box functions which are expensive to evaluate. At the core of this approach is a Gaussian process prior that captures our belief about the distribution over functions. However, in many cases a single Gaussian process is not flexible enough to capture non-stationarity in the objective function. Consequently, heteroscedasticity negatively affects performance of traditional Bayesian methods. In this paper, we propose a novel prior model with hierarchical parameter learning that tackles the problem of non-stationarity in Bayesian optimisation. Our results demonstrate substantial improvements in a wide range of applications, including automatic machine learning and mining exploration.", "", "This paper introduces a new method of calculating the expected improvement infill criterion, which does not rely on accurate model parameter estimation. The parameter estimation is embedded within the search of the infill criterion, wherein parameter changes are assessed using likelihood ratio tests. Unlike the traditional expected improvement, a new formulation we present cannot be fooled' by unlucky sampling or deceptive functions. The new method is introduced both mathematically and illustratively using a one-variable test function. It is then shown to outperform traditional expected improvement when optimizing the geometry of a passive vibration isolating truss.", "This paper presents a taxonomy of existing approaches for using response surfaces for global optimization. Each method is illustrated with a simple numerical example that brings out its advantages and disadvantages. 
The central theme is that methods that seem quite reasonable often have non-obvious failure modes. Understanding these failure modes is essential for the development of practical algorithms that fulfill the intuitive promise of the response surface approach.", "The Maximum Likelihood (ML) and Cross Validation (CV) methods for estimating covariance hyper-parameters are compared, in the context of Kriging with a misspecified covariance structure. A two-step approach is used. First, the case of the estimation of a single variance hyper-parameter is addressed, for which the fixed correlation function is misspecified. A predictive variance based quality criterion is introduced and a closed-form expression of this criterion is derived. It is shown that when the correlation function is misspecified, the CV does better compared to ML, while ML is optimal when the model is well-specified. In the second step, the results of the first step are extended to the case when the hyper-parameters of the correlation function are also estimated from data." ] }
1612.03117
2565558311
Bayesian Optimization (BO) has become a core method for solving expensive black-box optimization problems. While much research focussed on the choice of the acquisition function, we focus on online length-scale adaption and the choice of kernel function. Instead of choosing hyperparameters in view of maximum likelihood on past data, we propose to use the acquisition function to decide on hyperparameter adaptation more robustly and in view of the future optimization progress. Further, we propose a particular kernel function that includes non-stationarity and local anisotropy and thereby implicitly integrates the efficiency of local convex optimization with global Bayesian optimization. Comparisons to state-of-the-art BO methods underline the efficiency of these mechanisms on global optimization benchmarks.
@cite_7 introduce the idea of length-scale adaption based on maximizing the acquisition function (EI) value, which, as they note, is not efficient (and differs from the cool-down we propose). Nevertheless, we endorse the underlying idea, since it is related to our motivation.
{ "cite_N": [ "@cite_7" ], "mid": [ "2293920654" ], "abstract": [ "The Efficient Global Optimization (EGO) algorithm uses a conditional Gaus-sian Process (GP) to approximate an objective function known at a finite number of observation points and sequentially adds new points which maximize the Expected Improvement criterion according to the GP. The important factor that controls the efficiency of EGO is the GP covariance function (or kernel) which should be chosen according to the objective function. Traditionally, a pa-rameterized family of covariance functions is considered whose parameters are learned through statistical procedures such as maximum likelihood or cross-validation. However, it may be questioned whether statistical procedures for learning covariance functions are the most efficient for optimization as they target a global agreement between the GP and the observations which is not the ultimate goal of optimization. Furthermore, statistical learning procedures are computationally expensive. The main alternative to the statistical learning of the GP is self-adaptation, where the algorithm tunes the kernel parameters based on their contribution to objective function improvement. After questioning the possibility of self-adaptation for kriging based optimizers, this paper proposes a novel approach for tuning the length-scale of the GP in EGO: At each iteration, a small ensemble of kriging models structured by their length-scales is created. All of the models contribute to an iterate in an EGO-like fashion. Then, the set of models is densified around the model whose length-scale yielded the best iterate and further points are produced. Numerical experiments are provided which motivate the use of many length-scales. The tested implementation does not perform better than the classical EGO algorithm in a sequential context but show the potential of the approach for parallel implementations." ] }
1612.03117
2565558311
Bayesian Optimization (BO) has become a core method for solving expensive black-box optimization problems. While much research focussed on the choice of the acquisition function, we focus on online length-scale adaption and the choice of kernel function. Instead of choosing hyperparameters in view of maximum likelihood on past data, we propose to use the acquisition function to decide on hyperparameter adaptation more robustly and in view of the future optimization progress. Further, we propose a particular kernel function that includes non-stationarity and local anisotropy and thereby implicitly integrates the efficiency of local convex optimization with global Bayesian optimization. Comparisons to state-of-the-art BO methods underline the efficiency of these mechanisms on global optimization benchmarks.
There are also concepts regarding locally defined kernels, e.g. @cite_1 . The idea of @cite_3 is closely related to ours, because they use a local and a global kernel function, an approach we believe is promising. They parametrize the location of the local kernel as well as its respective parameters. Consequently they end up with a large number of hyperparameters, which makes model selection very difficult. In contrast to their work, we are able to gain comparable or better performance on well-known benchmarks. At the same time we overcome the problem of many hyperparameters with a separate, efficient algorithm for determining the location of local minimum regions. Furthermore, we use an anisotropic kernel to better fit local minimum regions.
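One way to picture such a combined kernel is a sum of a stationary isotropic RBF (global) and an anisotropic RBF gated to a suspected local-minimum region. The gating below is an illustrative positive-semidefinite construction (the gate factorizes as g(x1)g(x2)), not the exact kernel of the cited papers.

```python
import numpy as np

def global_plus_local_kernel(x1, x2, ls_global, center, ls_local):
    """Isotropic global RBF plus an axis-anisotropic local RBF that is
    only active near `center` (a suspected local-minimum region).

    PSD because gate(x1, x2) = g(x1) * g(x2) is a rank-1 kernel and the
    product of PSD kernels is PSD.
    """
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    k_glob = np.exp(-0.5 * np.sum((x1 - x2) ** 2) / ls_global ** 2)
    gate = np.exp(-0.5 * (np.sum((x1 - center) ** 2)
                          + np.sum((x2 - center) ** 2)) / ls_global ** 2)
    k_loc = np.exp(-0.5 * np.sum(((x1 - x2) / ls_local) ** 2))
    return k_glob + gate * k_loc

center = np.array([0.3, -0.2])
ls_local = np.array([0.05, 0.2])  # per-axis length-scales: anisotropy
print(global_plus_local_kernel([0.3, -0.2], [0.31, -0.2], 1.0, center, ls_local))
```

Away from `center` the gate decays and the model behaves like a plain global RBF; near `center` the short per-axis length-scales add the extra resolution needed to fit a narrow valley.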
{ "cite_N": [ "@cite_1", "@cite_3" ], "mid": [ "2021774297", "1533803232" ], "abstract": [ "When monitoring spatial phenomena, such as the ecological condition of a river, deciding where to make observations is a challenging task. In these settings, a fundamental question is when an active learning, or sequential design, strategy, where locations are selected based on previous measurements, will perform significantly better than sensing at an a priori specified set of locations. For Gaussian Processes (GPs), which often accurately model spatial phenomena, we present an analysis and efficient algorithms that address this question. Central to our analysis is a theoretical bound which quantifies the performance difference between active and a priori design strategies. We consider GPs with unknown kernel parameters and present a nonmyopic approach for trading off exploration, i.e., decreasing uncertainty about the model parameters, and exploitation, i.e., near-optimally selecting observations when the parameters are (approximately) known. We discuss several exploration strategies, and present logarithmic sample complexity bounds for the exploration phase. We then extend our algorithm to handle nonstationary GPs exploiting local structure in the model. We also present extensive empirical evaluation on several real-world problems.", "Bayesian optimization has proven to be a highly effective methodology for the global optimization of unknown, expensive and multimodal functions. The ability to accurately model distributions over functions is critical to the effectiveness of Bayesian optimization. Although Gaussian processes provide a flexible prior over functions, there are various classes of functions that remain difficult to model. One of the most frequently occurring of these is the class of non-stationary functions. 
The optimization of the hyperparameters of machine learning algorithms is a problem domain in which parameters are often manually transformed a priori, for example by optimizing in \"log-space,\" to mitigate the effects of spatially-varying length scale. We develop a methodology for automatically learning a wide family of bijective transformations or warpings of the input space using the Beta cumulative distribution function. We further extend the warping framework to multi-task Bayesian optimization so that multiple tasks can be warped into a jointly stationary space. On a set of challenging benchmark optimization tasks, we observe that the inclusion of warping greatly improves on the state-of-the-art, producing better results faster and more reliably." ] }
1612.03000
2952467095
The increasing complexity of smartphone applications and services necessitates high battery consumption, but the growth of smartphones' battery capacity is not keeping pace with these increasing power demands. To overcome this problem, researchers gave birth to the Mobile Cloud Computing (MCC) research area. In this paper we advance on previous ideas, by proposing and implementing the first known Near Field Communication (NFC)-based computation offloading framework. This research is motivated by the advantages of NFC's short distance communication, with its better security, and its low battery consumption. We design a new NFC communication protocol that overcomes the limitations of the default protocol; removing the need for constant user interaction, the one-way communication restraint, and the limit on low data size transfer. We present experimental results of the energy consumption and the time duration of two computationally intensive representative applications: (i) RSA key generation and encryption, and (ii) gaming puzzles. We show that when the helper device is more powerful than the device offloading the computations, the execution time of the tasks is reduced. Finally, we show that devices that offload application parts considerably reduce their energy consumption due to the low-power NFC interface and the benefits of offloading.
Much work has been done on exploring the concept of computation offloading and applying it on mobile devices. MAUI @cite_16 supports code offloading from smartphones to nearby servers or devices in order to minimize energy consumption. The results show significant energy savings when Wi-Fi is used, whereas with 3G the results are less satisfactory. CloneCloud @cite_20 aims to benefit directly from the cloud, transforming a mobile application by migrating parts of its execution to a virtual machine on the cloud. ThinkAir @cite_18 combines the advantages of these frameworks and works with Wi-Fi and 3G offloading to nearby or remote surrogates. Furthermore, ThinkAir allows for the computational power to be dynamically scaled up or down on the cloud, enabling high levels of flexibility for the developers.
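All the frameworks above hinge on the same energy trade-off: offloading pays when the radio energy for shipping the task plus the idle energy while waiting for the helper undercuts the energy of computing locally. A minimal sketch of that standard decision rule follows; every parameter value is illustrative, not a measured NFC or Wi-Fi figure.

```python
def should_offload(cycles, local_power_w, local_speed_hz,
                   data_bytes, bandwidth_bps, tx_power_w,
                   idle_power_w, remote_speed_hz):
    """Return True when offloading is expected to save energy.

    Standard mobile-cloud cost model: local energy is power times compute
    time; offload energy is radio power during the transfer plus idle power
    while the helper computes.
    """
    e_local = local_power_w * cycles / local_speed_hz
    t_transfer = 8.0 * data_bytes / bandwidth_bps
    t_remote = cycles / remote_speed_hz
    e_offload = tx_power_w * t_transfer + idle_power_w * t_remote
    return e_offload < e_local

# A compute-heavy task with little data to ship: offloading pays off.
print(should_offload(cycles=1e9, local_power_w=2.0, local_speed_hz=1e9,
                     data_bytes=1e5, bandwidth_bps=1e5, tx_power_w=0.05,
                     idle_power_w=0.1, remote_speed_hz=2e9))  # True
```

The model also explains the 3G-versus-Wi-Fi gap mentioned above: a higher `tx_power_w` or lower `bandwidth_bps` inflates the transfer term, and a low-power interface such as NFC shrinks it.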
{ "cite_N": [ "@cite_18", "@cite_16", "@cite_20" ], "mid": [ "2088692353", "", "2023380813" ], "abstract": [ "Smartphones have exploded in popularity in recent years, becoming ever more sophisticated and capable. As a result, developers worldwide are building increasingly complex applications that require ever increasing amounts of computational power and energy. In this paper we propose ThinkAir, a framework that makes it simple for developers to migrate their smartphone applications to the cloud. ThinkAir exploits the concept of smartphone virtualization in the cloud and provides method-level computation offloading. Advancing on previous work, it focuses on the elasticity and scalability of the cloud and enhances the power of mobile cloud computing by parallelizing method execution using multiple virtual machine (VM) images. We implement ThinkAir and evaluate it with a range of benchmarks starting from simple micro-benchmarks to more complex applications. First, we show that the execution time and energy consumption decrease two orders of magnitude for a N-queens puzzle application and one order of magnitude for a face detection and a virus scan application. We then show that a parallelizable application can invoke multiple VMs to execute in the cloud in a seamless and on-demand manner such as to achieve greater reduction on execution time and energy consumption. We finally use a memory-hungry image combiner tool to demonstrate that applications can dynamically request VMs with more computational power in order to meet their computational requirements.", "", "Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. 
The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment. At runtime, the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud, executing there for the remainder of the partition, and re-integrating the migrated thread back to the mobile device. Our evaluation shows that CloneCloud can adapt application partitioning to different environments, and can help some applications achieve as much as a 20x execution speed-up and a 20-fold decrease of energy spent on the mobile device." ] }
1612.03000
2952467095
The increasing complexity of smartphone applications and services necessitates high battery consumption, but the growth of smartphones' battery capacity is not keeping pace with these increasing power demands. To overcome this problem, researchers gave birth to the Mobile Cloud Computing (MCC) research area. In this paper we advance on previous ideas, by proposing and implementing the first known Near Field Communication (NFC)-based computation offloading framework. This research is motivated by the advantages of NFC's short distance communication, with its better security, and its low battery consumption. We design a new NFC communication protocol that overcomes the limitations of the default protocol; removing the need for constant user interaction, the one-way communication restraint, and the limit on low data size transfer. We present experimental results of the energy consumption and the time duration of two computationally intensive representative applications: (i) RSA key generation and encryption, and (ii) gaming puzzles. We show that when the helper device is more powerful than the device offloading the computations, the execution time of the tasks is reduced. Finally, we show that devices that offload application parts considerably reduce their energy consumption due to the low-power NFC interface and the benefits of offloading.
More recently, Serendipity @cite_2 introduces the concept of mobile--to--mobile offloading in an environment with intermittent connectivity. This system is capable of conserving energy and increasing computation speed of low--power devices when these offload heavy computations to more powerful ones. Nevertheless, similar to other previous frameworks, Serendipity relies on Wi-Fi, and the paper does not specify how to search for and detect the available devices that are willing to help. OPENRP advances in this direction by collecting data from interactions between mobile users and building reputation scores per mobile user and application type @cite_7 . Honeybee @cite_6 is an offloading framework for mobile computing on Bluetooth channels. Without having to rely on Wi-Fi, it guarantees connectivity as long as other mobile devices equipped with Bluetooth are available as well.
{ "cite_N": [ "@cite_6", "@cite_7", "@cite_2" ], "mid": [ "2256899642", "2473552526", "2035309464" ], "abstract": [ "Although smartphones are increasingly becoming more and more powerful, enabling pervasiveness is severely hindered by the resource limitations of mobile devices. The combination of social interactions and mobile devices in the form of ‘crowd computing’ has the potential to surpass these limitations. In this paper, we introduce Honeybee; a crowd computing framework for mobile devices. Honeybee enables mobile devices to share work, utilize local resources and human collaboration in the mobile context. It employs ‘work stealing’ to effectively load balance tasks across nodes that are a priori unknown. We describe the design of Honeybee, and report initial experimental data from applications implemented using Honeybee.", "The concepts of wisdom of crowd and collective intelligence have been utilized by mobile application developers to achieve large-scale distributed computation, known as crowd computing. The profitability of this method heavily depends on users' social interactions and their willingness to share resources. Thus, different crowd computing applications need to adopt mechanisms that motivate peers to collaborate and defray the costs of participating ones who share their resources. In this article, we propose OPENRP, a novel, lightweight, and scalable system middleware that provides a unified interface to crowd computing and opportunistic networking applications. When an application wants to perform a device-to-device task, it delegates the task to the middleware, which takes care of choosing the best peers with whom to collaborate and sending the task to these peers. OPENRP evaluates and updates the reputation of participating peers based on their mutual opportunistic interactions. To show the benefits of the middleware, we simulated the behavior of two representative crowdsourcing applications: message forwarding and task offloading. 
Through extensive simulations on real human mobility traces, we show that the traffic generated by the applications is lower compared to two benchmark strategies. As a consequence, we show that when using our middleware, the energy consumed by the nodes is reduced. Finally, we show that when dividing the nodes into selfish and altruistic, the reputation scores of the altruistic peers increase with time, while those of the selfish ones decrease.", "Mobile devices are increasingly being relied on for services that go beyond simple connectivity and require more complex processing. Fortunately, a mobile device encounters, possibly intermittently, many entities capable of lending it computational resources. At one extreme is the traditional cloud-computing context where a mobile device is connected to remote cloud resources maintained by a service provider with which it has an established relationship. In this paper we consider the other extreme, where a mobile device's contacts are only with other mobile devices, where both the computation initiator and the remote computational resources are mobile, and where intermittent connectivity among these entities is the norm. We present the design and implementation of a system, Serendipity, that enables a mobile computation initiator to use remote computational resources available in other mobile systems in its environment to speedup computing and conserve energy. We propose a simple but powerful job structure that is suitable for such a system. Serendipity relies on the collaboration among mobile devices for task allocation and task progress monitoring functions. We develop algorithms that are designed to disseminate tasks among mobile devices by accounting for the specific properties of the available connectivity. We also undertake an extensive evaluation of our system, including experience with a prototype, that demonstrates Serendipity's performance." ] }
1612.03000
2952467095
The increasing complexity of smartphone applications and services necessitates high battery consumption, but the growth of smartphones' battery capacity is not keeping pace with these increasing power demands. To overcome this problem, researchers gave birth to the Mobile Cloud Computing (MCC) research area. In this paper we advance on previous ideas by proposing and implementing the first known Near Field Communication (NFC)-based computation offloading framework. This research is motivated by the advantages of NFC's short-distance communication, with its better security and low battery consumption. We design a new NFC communication protocol that overcomes the limitations of the default protocol: removing the need for constant user interaction, the one-way communication restraint, and the limit on low data size transfer. We present experimental results of the energy consumption and time duration of two computationally intensive representative applications: (i) RSA key generation and encryption, and (ii) gaming puzzles. We show that when the helper device is more powerful than the device offloading the computations, the execution time of the tasks is reduced. Finally, we show that devices that offload application parts considerably reduce their energy consumption, due to the low-power NFC interface and the benefits of offloading.
Now that mobile code offloading has become well accepted and its advantages have been acknowledged by many authors, researchers have lately been focusing on building more solid frameworks that consider long-neglected aspects such as security, fault tolerance, and caching @cite_21 . @cite_22 replicates mobile applications, split into execution phases, across mobile servers, efficiently selecting the proper replica to proceed to the next phase in order to improve the end users' quality of experience. The authors of @cite_17 propose a security protocol for authentication between NFC applications and proximal cloudlets, motivated by the fact that NFC applications can be computationally demanding.
{ "cite_N": [ "@cite_21", "@cite_22", "@cite_17" ], "mid": [ "1734019305", "", "2094922928" ], "abstract": [ "Modern applications face new challenges in managing today's highly distributed and heterogeneous environment. For example, they must stitch together code that crosses smartphones, tablets, personal devices, and cloud services, connected by variable wide-area networks, such as WiFi and 4G. This paper describes Sapphire, a distributed programming platform that simplifies the programming of today's mobile cloud applications. Sapphire's key design feature is its distributed runtime system, which supports a flexible and extensible deployment layer for solving complex distributed systems tasks, such as fault-tolerance, code-offloading, and caching. Rather than writing distributed systems code, programmers choose deployment managers that extend Sapphire's kernel to meet their applications' deployment requirements. In this way, each application runs on an underlying platform that is customized for its own distribution needs.", "", "The availability of NFC capabilities on smartphones has facilitated the development of a large number of related applications. Some of these applications may be resource-intensive tasks, and Cloudlets-based mobile computing are a good candidate to offload computation while being free of WAN delays, jitter, congestion, and failures. In this context, new use cases dedicated to NFC applications based on cloudlets are presented and a security protocol is proposed to authenticate the cloudlets by the mobile devices. The secure element of the mobile device is a trust environment used to store sensitive data and to perform cryptographic calculations." ] }
1612.03000
2952467095
The increasing complexity of smartphone applications and services necessitates high battery consumption, but the growth of smartphones' battery capacity is not keeping pace with these increasing power demands. To overcome this problem, researchers gave birth to the Mobile Cloud Computing (MCC) research area. In this paper we advance on previous ideas by proposing and implementing the first known Near Field Communication (NFC)-based computation offloading framework. This research is motivated by the advantages of NFC's short-distance communication, with its better security and low battery consumption. We design a new NFC communication protocol that overcomes the limitations of the default protocol: removing the need for constant user interaction, the one-way communication restraint, and the limit on low data size transfer. We present experimental results of the energy consumption and time duration of two computationally intensive representative applications: (i) RSA key generation and encryption, and (ii) gaming puzzles. We show that when the helper device is more powerful than the device offloading the computations, the execution time of the tasks is reduced. Finally, we show that devices that offload application parts considerably reduce their energy consumption, due to the low-power NFC interface and the benefits of offloading.
However, the current state of their project is quite preliminary and does not include any real implementation. Furthermore, the design is quite limited, since it requires the user to tap the device twice: once to offload the security computation and again to receive the result. Conversely, in this work we have redesigned the NFC communication protocol to eliminate the need for user intervention, which enables a convenient and automated offloading process. NFC-based computation offloading does not need to consider most of the problems that traditional offloading frameworks face. For example, the short-range communication capabilities of NFC eliminate the need for data encryption, an overhead that current frameworks have to deal with @cite_4 . Moreover, our architecture allows mobile devices to connect automatically with the powerful offloadee entities, since they will be in close NFC proximity, eliminating the long registration process presented in @cite_18 , which other works intentionally neglect.
{ "cite_N": [ "@cite_18", "@cite_4" ], "mid": [ "2088692353", "1569746202" ], "abstract": [ "Smartphones have exploded in popularity in recent years, becoming ever more sophisticated and capable. As a result, developers worldwide are building increasingly complex applications that require ever increasing amounts of computational power and energy. In this paper we propose ThinkAir, a framework that makes it simple for developers to migrate their smartphone applications to the cloud. ThinkAir exploits the concept of smartphone virtualization in the cloud and provides method-level computation offloading. Advancing on previous work, it focuses on the elasticity and scalability of the cloud and enhances the power of mobile cloud computing by parallelizing method execution using multiple virtual machine (VM) images. We implement ThinkAir and evaluate it with a range of benchmarks starting from simple micro-benchmarks to more complex applications. First, we show that the execution time and energy consumption decrease two orders of magnitude for a N-queens puzzle application and one order of magnitude for a face detection and a virus scan application. We then show that a parallelizable application can invoke multiple VMs to execute in the cloud in a seamless and on-demand manner such as to achieve greater reduction on execution time and energy consumption. We finally use a memory-hungry image combiner tool to demonstrate that applications can dynamically request VMs with more computational power in order to meet their computational requirements.", "Despite their steadily increasing capabilities, mobile end-user devices such as smart phones often suffer from reduced processing and storage resources. Cloud-based mobile augmentation (CMA) has recently emerged as a potential solution to this problem. CMA combines concepts of cloud computing and surrogate computing in order to offload resource-intensive tasks to external resources. 
During the past years, different CMA frameworks have been introduced that enable the development and usage of CMA-based applications. Unfortunately, these frameworks have usually not been designed with security in mind but instead mainly focus on efficient offloading and reintegration mechanisms. Hence, reliance on CMA concepts in security-critical fields of application is currently not advisable. To address this problem, this paper surveys currently available CMA frameworks and assesses their suitability and applicability in security-critical fields of application. For this purpose, relevant security requirements are identified and mapped to the surveyed CMA frameworks. Results obtained from this assessment show that none of the surveyed CMA framework is currently able to meet all relevant security requirements. By identifying security limitations of currently available CMA frameworks, this paper represents a first important step towards development of a secure CMA framework and hence paves the way for a use of CMA-based applications in security-critical fields of application." ] }
1612.02780
2562420335
We present a framework to understand GAN training as alternating density ratio estimation and approximate divergence minimization. This provides an interpretation for the mismatched GAN generator and discriminator objectives often used in practice, and explains the problem of poor sample diversity. We also derive a family of generator objectives that target arbitrary @math -divergences without minimizing a lower bound, and use them to train generative image models that target either improved sample quality or greater sample diversity.
Several recent papers have identified novel objectives for GAN generators. The authors of @cite_9 propose a generator objective corresponding to @math being the reverse KL divergence, and show that it improves performance on image super-resolution. @cite_2 identifies the generator objective that corresponds to minimizing the KL divergence, but does not evaluate this objective empirically.
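As a rough illustration of how different divergences yield different generator objectives, the sketch below writes three common per-sample generator losses as functions of the discriminator output D(G(z)). The reverse-KL form assumes an optimal discriminator and follows standard f-divergence reasoning; it is a textbook derivation, not a formula taken from the cited papers.

```python
import math

def gen_losses(d):
    """Per-sample generator losses as functions of the discriminator
    output d = D(G(z)) in (0, 1). With an optimal discriminator
    d* = p/(p+q), the quantity log((1-d)/d) estimates log(q(x)/p(x)),
    so its expectation under the generator targets KL(q || p)."""
    minimax = math.log(1.0 - d)                    # original saturating loss
    non_saturating = -math.log(d)                  # the trick used in practice
    reverse_kl = math.log(1.0 - d) - math.log(d)   # targets reverse KL
    return minimax, non_saturating, reverse_kl

mm, ns, rkl = gen_losses(0.5)
# At d = 0.5 the implied density-ratio estimate is 1, so the
# reverse-KL loss is exactly 0.
```

The point of the comparison is that all three objectives consume the same discriminator output but push the generator toward different divergences, which is precisely the mismatch the framework above makes explicit.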
{ "cite_N": [ "@cite_9", "@cite_2" ], "mid": [ "2950560720", "284754667" ], "abstract": [ "Image super-resolution (SR) is an underdetermined inverse problem, where a large number of plausible high-resolution images can explain the same downsampled image. Most current single image SR methods use empirical risk minimisation, often with a pixel-wise mean squared error (MSE) loss. However, the outputs from such methods tend to be blurry, over-smoothed and generally appear implausible. A more desirable approach would employ Maximum a Posteriori (MAP) inference, preferring solutions that always have a high probability under the image prior, and thus appear more plausible. Direct MAP estimation for SR is non-trivial, as it requires us to build a model for the image prior from samples. Furthermore, MAP inference is often performed via optimisation-based iterative algorithms which don't compare well with the efficiency of neural-network-based alternatives. Here we introduce new methods for amortised MAP inference whereby we calculate the MAP estimate directly using a convolutional neural network. We first introduce a novel neural network architecture that performs a projection to the affine subspace of valid SR solutions ensuring that the high resolution output of the network is always consistent with the low resolution input. We show that, using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models. We propose three methods to solve this optimisation problem: (1) Generative Adversarial Networks (GAN) (2) denoiser-guided SR which backpropagates gradient-estimates from denoising to train the network, and (3) a baseline method using a maximum-likelihood-trained image prior. Our experiments show that the GAN based approach performs best on real image data. Lastly, we establish a connection between GANs and amortised variational inference as in e.g. 
variational autoencoders.", "Two recently introduced criteria for estimation of generative models are both based on a reduction to binary classification. Noise-contrastive estimation (NCE) is an estimation procedure in which a generative model is trained to be able to distinguish data samples from noise samples. Generative adversarial networks (GANs) are pairs of generator and discriminator networks, with the generator network learning to generate samples by attempting to fool the discriminator network into believing its samples are real data. Both estimation procedures use the same function to drive learning, which naturally raises questions about how they are related to each other, as well as whether this function is related to maximum likelihood estimation (MLE). NCE corresponds to training an internal data model belonging to the discriminator network but using a fixed generator network. We show that a variant of NCE, with a dynamic generator network, is equivalent to maximum likelihood estimation. Since pairing a learned discriminator with an appropriate dynamically selected generator recovers MLE, one might expect the reverse to hold for pairing a learned generator with a certain discriminator. However, we show that recovering MLE for a learned generator requires departing from the distinguishability game. Specifically: (i) The expected gradient of the NCE discriminator can be made to match the expected gradient of MLE, if one is allowed to use a non-stationary noise distribution for NCE, (ii) No choice of discriminator network can make the expected gradient for the GAN generator match that of MLE, and (iii) The existing theory does not guarantee that GANs will converge in the non-convex case. This suggests that the key next step in GAN research is to determine whether GANs converge, and if not, to modify their training algorithm to force convergence." ] }
1612.02780
2562420335
We present a framework to understand GAN training as alternating density ratio estimation and approximate divergence minimization. This provides an interpretation for the mismatched GAN generator and discriminator objectives often used in practice, and explains the problem of poor sample diversity. We also derive a family of generator objectives that target arbitrary @math -divergences without minimizing a lower bound, and use them to train generative image models that target either improved sample quality or greater sample diversity.
Concurrent with our work, two papers propose closely related GAN training algorithms. In @cite_6 , the density ratio is estimated directly by optimizing a different discriminator objective, obtained by rewriting the discriminator in terms of the density ratio. This approach requires learning a network that directly outputs the density ratio, which can be very small or very large; in practice, the networks that parameterize the density ratio must be clipped @cite_6 . We found estimating a function of the density ratio to be more stable: in particular, with the GAN discriminator objective, the discriminator @math estimates @math . However, there are likely ways of combining these approaches in the future to directly estimate stable functions of the density ratio, independent of the discriminator divergence.
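A minimal sketch of the density-ratio view, assuming a logistic (sigmoid-output) discriminator: at optimum D = p/(p+q), so the ratio p/q is recoverable as D/(1-D), and the logit itself is the log ratio. This is an illustrative identity, not the clipped-ratio procedure of @cite_6.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def density_ratio_from_disc(logit):
    """If D(x) = sigmoid(f(x)) is an optimal GAN discriminator, then
    D = p/(p+q), so p/q = D/(1-D) = exp(f(x)): the logit is the log
    density ratio. The discriminator thus estimates a bounded function
    of the ratio, rather than the raw (possibly huge or tiny) ratio."""
    d = sigmoid(logit)
    return d / (1.0 - d)

# The ratio recovered from the discriminator output equals exp(logit):
print(density_ratio_from_disc(2.0))   # ≈ e^2 ≈ 7.389
```

This is one way to read the stability claim above: the network only ever outputs a value in (0, 1), and extreme ratios are reconstructed from it analytically.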
{ "cite_N": [ "@cite_6" ], "mid": [ "2529989861" ], "abstract": [ "Generative adversarial networks (GANs) are successful deep generative models. GANs are based on a two-player minimax game. However, the objective function derived in the original motivation is changed to obtain stronger gradients when learning the generator. We propose a novel algorithm that repeats the density ratio estimation and f-divergence minimization. Our algorithm offers a new perspective toward the understanding of GANs and is able to make use of multiple viewpoints obtained in the research of density ratio estimation, e.g. what divergence is stable and relative density ratio is useful." ] }
1612.02780
2562420335
We present a framework to understand GAN training as alternating density ratio estimation and approximate divergence minimization. This provides an interpretation for the mismatched GAN generator and discriminator objectives often used in practice, and explains the problem of poor sample diversity. We also derive a family of generator objectives that target arbitrary @math -divergences without minimizing a lower bound, and use them to train generative image models that target either improved sample quality or greater sample diversity.
More generically, the training process can be thought of as two interacting systems: one that identifies a statistic of the model and data, and another that uses that statistic to make the model closer to the data. @cite_1 discusses many approaches similar to the one presented here, but does not present experimental results.
{ "cite_N": [ "@cite_1" ], "mid": [ "2530741948" ], "abstract": [ "Generative adversarial networks (GANs) provide an algorithmic framework for constructing generative models with several appealing properties: they do not require a likelihood function to be specified, only a generating procedure; they provide samples that are sharp and compelling; and they allow us to harness our knowledge of building highly accurate neural network classifiers. Here, we develop our understanding of GANs with the aim of forming a rich view of this growing area of machine learning---to build connections to the diverse set of statistical thinking on this topic, of which much can be gained by a mutual exchange of ideas. We frame GANs within the wider landscape of algorithms for learning in implicit generative models---models that only specify a stochastic procedure with which to generate data---and relate these ideas to modelling problems in related fields, such as econometrics and approximate Bayesian computation. We develop likelihood-free inference methods and highlight hypothesis testing as a principle for learning in implicit generative models, using which we are able to derive the objective function used by GANs, and many other related objectives. The testing viewpoint directs our focus to the general problem of density ratio estimation. There are four approaches for density ratio estimation, one of which is a solution using classifiers to distinguish real from generated data. Other approaches such as divergence minimisation and moment matching have also been explored in the GAN literature, and we synthesise these views to form an understanding in terms of the relationships between them and the wider literature, highlighting avenues for future exploration and cross-pollination." ] }
1612.02541
2566001802
Hashing methods have attracted much attention for large-scale image retrieval. Some deep hashing methods have recently achieved promising results by taking advantage of the strong representation power of deep networks. However, existing deep hashing methods treat all hash bits equally. On one hand, a large number of images share the same distance to a query image due to the discrete Hamming distance, which raises a critical issue for image retrieval, where fine-grained rankings are very important. On the other hand, different hash bits actually contribute to the image retrieval differently, and treating them equally greatly affects the retrieval accuracy of images. To address the above two problems, we propose the query-adaptive deep weighted hashing (QaDWH) approach, which can perform fine-grained ranking for different queries by weighted Hamming distance. First, a novel deep hashing network is proposed to learn the hash codes and corresponding class-wise weights jointly, so that the learned weights can reflect the importance of different hash bits for different image classes. Second, a query-adaptive image retrieval method is proposed, which rapidly generates hash bit weights for different query images by fusing their semantic probability and the learned class-wise weights. Fine-grained image retrieval is then performed by the weighted Hamming distance, which can provide more accurate ranking than the traditional Hamming distance. Experiments on four widely used datasets show that the proposed approach outperforms eight state-of-the-art hashing methods.
Different from the two-stage framework of CNNH @cite_31 , Network in Network Hashing (NINH) @cite_8 performs image representation learning and hash code learning jointly. NINH builds its deep framework on the Network in Network architecture @cite_26 , with a shared sub-network composed of several stacked convolutional layers to extract image features, and a divide-and-encode module that uses a sigmoid activation function and a piece-wise threshold function to output binary hash codes. During the learning process, instead of generating approximate hash codes in advance, NINH utilizes a triplet ranking loss function that exploits the relative similarity of training images to directly guide hash learning: where @math and @math specify the triplet constraint that image @math is more similar to image @math than to image @math based on image labels; @math denotes the binary hash code, and @math denotes the Hamming distance. To ease the optimization of this loss, NINH applies two relaxation tricks: relaxing the integer constraint on the binary hash codes, and replacing the Hamming distance with the Euclidean distance.
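The relaxed triplet ranking loss described above can be sketched as follows; the margin value and code dimensionality are illustrative assumptions, not NINH's actual hyper-parameters.

```python
import numpy as np

def triplet_ranking_loss(h_q, h_pos, h_neg, margin=1.0):
    """Relaxed triplet ranking loss in the style of NINH: binary codes
    are relaxed to real-valued vectors and the Hamming distance is
    replaced by squared Euclidean distance. The loss is zero once the
    positive is closer than the negative by at least `margin`."""
    d_pos = np.sum((h_q - h_pos) ** 2, axis=-1)
    d_neg = np.sum((h_q - h_neg) ** 2, axis=-1)
    return np.maximum(0.0, margin + d_pos - d_neg)

h_q   = np.array([0.9, 0.1, 0.8])   # relaxed code of the query image
h_pos = np.array([0.8, 0.2, 0.9])   # similar image: small distance
h_neg = np.array([0.1, 0.9, 0.2])   # dissimilar image: large distance
print(triplet_ranking_loss(h_q, h_pos, h_neg))  # 0.0: constraint satisfied
```

Swapping the positive and negative codes violates the ranking constraint and yields a positive loss, which is what drives the gradient during training.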
{ "cite_N": [ "@cite_31", "@cite_26", "@cite_8" ], "mid": [ "2293824885", "", "1939575207" ], "abstract": [ "Hashing is a popular approximate nearest neighbor search approach for large-scale image retrieval. Supervised hashing, which incorporates similarity dissimilarity information on entity pairs to improve the quality of hashing function learning, has recently received increasing attention. However, in the existing supervised hashing methods for images, an input image is usually encoded by a vector of handcrafted visual features. Such hand-crafted feature vectors do not necessarily preserve the accurate semantic similarities of images pairs, which may often degrade the performance of hashing function learning. In this paper, we propose a supervised hashing method for image retrieval, in which we automatically learn a good image representation tailored to hashing as well as a set of hash functions. The proposed method has two stages. In the first stage, given the pairwise similarity matrix S over training images, we propose a scalable coordinate descent method to decompose S into a product of HHT where H is a matrix with each of its rows being the approximate hash code associated to a training image. In the second stage, we propose to simultaneously learn a good feature representation for the input images as well as a set of hash functions, via a deep convolutional network tailored to the learned hash codes in H and optionally the discrete class labels of the images. Extensive empirical evaluations on three benchmark datasets with different kinds of images show that the proposed method has superior performance gains over several state-of-the-art supervised and unsupervised hashing methods.", "", "Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. 
For most existing hashing methods, an image is first encoded as a vector of hand-engineering visual features, followed by another separate projection or quantization step that generates binary codes. However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hashing codes. In this paper, we propose a deep architecture for supervised hashing, in which images are mapped into binary codes via carefully designed deep neural networks. The pipeline of the proposed deep architecture consists of three building blocks: 1) a sub-network with a stack of convolution layers to produce the effective intermediate image features; 2) a divide-and-encode module to divide the intermediate image features into multiple branches, each encoded into one hash bit; and 3) a triplet ranking loss designed to characterize that one image is more similar to the second image than to the third one. Extensive evaluations on several benchmark image datasets show that the proposed simultaneous feature learning and hash coding pipeline brings substantial improvements over other state-of-the-art supervised or unsupervised hashing methods." ] }
1612.02541
2566001802
Hashing methods have attracted much attention for large-scale image retrieval. Some deep hashing methods have recently achieved promising results by taking advantage of the strong representation power of deep networks. However, existing deep hashing methods treat all hash bits equally. On one hand, a large number of images share the same distance to a query image due to the discrete Hamming distance, which raises a critical issue for image retrieval, where fine-grained rankings are very important. On the other hand, different hash bits actually contribute to the image retrieval differently, and treating them equally greatly affects the retrieval accuracy of images. To address the above two problems, we propose the query-adaptive deep weighted hashing (QaDWH) approach, which can perform fine-grained ranking for different queries by weighted Hamming distance. First, a novel deep hashing network is proposed to learn the hash codes and corresponding class-wise weights jointly, so that the learned weights can reflect the importance of different hash bits for different image classes. Second, a query-adaptive image retrieval method is proposed, which rapidly generates hash bit weights for different query images by fusing their semantic probability and the learned class-wise weights. Fine-grained image retrieval is then performed by the weighted Hamming distance, which can provide more accurate ranking than the traditional Hamming distance. Experiments on four widely used datasets show that the proposed approach outperforms eight state-of-the-art hashing methods.
There are several extensions of NINH. The Bit-scalable Deep Hashing method @cite_37 manipulates the hash code length by weighing each bit of the hash codes. Deep Hashing Network (DHN) @cite_5 additionally minimizes quantization errors besides the triplet ranking loss to improve retrieval precision. Deep semantic-preserving and ranking-based hashing (DSRH) @cite_41 introduces orthogonality constraints into the triplet ranking loss to make the hash bits independent. The above deep hashing methods treat all hash bits equally, which leads to a coarse ranking among images with the same Hamming distance and limits retrieval accuracy.
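To see why a weighted Hamming distance provides the finer-grained ranking that plain Hamming distance cannot, consider the sketch below. The bit weights are made up for illustration; in QaDWH they would come from the learned class-wise weights fused with the query's semantic probability.

```python
import numpy as np

def weighted_hamming(query_code, db_codes, bit_weights):
    """Weighted Hamming distance: each differing bit contributes its
    (query-adaptive) weight rather than 1, breaking ties among database
    codes that sit at the same plain Hamming distance from the query."""
    diff = query_code[None, :] != db_codes            # bit disagreement mask
    return (diff * bit_weights[None, :]).sum(axis=1)

q  = np.array([1, 0, 1, 1])
db = np.array([[1, 0, 1, 0],    # differs from q only in bit 3
               [0, 0, 1, 1]])   # differs from q only in bit 0
w  = np.array([0.9, 0.1, 0.5, 0.2])  # illustrative, not learned, weights
# Both database codes are at plain Hamming distance 1, but the weights
# rank them: disagreeing on an important bit (weight 0.9) costs more.
print(weighted_hamming(q, db, w))   # [0.2, 0.9]
```

With uniform weights the two candidates would be indistinguishable; the per-bit weights are exactly what turns a coarse distance-1 bucket into an ordered ranking.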
{ "cite_N": [ "@cite_41", "@cite_5", "@cite_37" ], "mid": [ "2531549126", "", "1951304353" ], "abstract": [ "Hashing techniques have been intensively investigated for large scale vision applications. Recent research has shown that leveraging supervised information can lead to high quality hashing. However, most existing supervised hashing methods only construct similarity-preserving hash codes. Observing that semantic structures carry complementary information, we propose the idea of cotraining for hashing, by jointly learning projections from image representations to hash codes and classification. Specifically, a novel deep semantic-preserving and ranking-based hashing (DSRH) architecture is presented, which consists of three components: a deep CNN for learning image representations, a hash stream of a binary mapping layer by evenly dividing the learnt representations into multiple bags and encoding each bag into one hash bit, and a classification stream. Mean-while, our model is learnt under two constraints at the top loss layer of hash stream: a triplet ranking loss and orthogonality constraint. The former aims to preserve the relative similarity ordering in the triplets, while the latter makes different hash bit as independent as possible. We have conducted experiments on CIFAR-10 and NUS-WIDE image benchmarks, demonstrating that our approach can provide superior image search accuracy than other state-of-the-art hashing techniques.", "", "Extracting informative image features and learning effective approximate hashing functions are two crucial steps in image retrieval. Conventional methods often study these two steps separately, e.g., learning hash functions from a predefined hand-crafted feature space. Meanwhile, the bit lengths of output hashing codes are preset in the most previous methods, neglecting the significance level of different bits and restricting their practical flexibility. 
To address these issues, we propose a supervised learning framework to generate compact and bit-scalable hashing codes directly from raw images. We pose hashing learning as a problem of regularized similarity learning. In particular, we organize the training images into a batch of triplet samples, each sample containing two images with the same label and one with a different label. With these triplet samples, we maximize the margin between the matched pairs and the mismatched pairs in the Hamming space. In addition, a regularization term is introduced to enforce the adjacency consistency, i.e., images of similar appearances should have similar codes. The deep convolutional neural network is utilized to train the model in an end-to-end fashion, where discriminative image features and hash functions are simultaneously optimized. Furthermore, each bit of our hashing codes is unequally weighted, so that we can manipulate the code lengths by truncating the insignificant bits. Our framework outperforms state-of-the-arts on public benchmarks of similar image search and also achieves promising results in the application of person re-identification in surveillance. It is also shown that the generated bit-scalable hashing codes well preserve the discriminative powers with shorter code lengths." ] }
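The triplet ranking loss that these deep hashing methods build on can be sketched as follows. This is a minimal NumPy version over relaxed, real-valued codes; the margin and the example codes are illustrative, not taken from any of the cited papers:

```python
import numpy as np

def triplet_ranking_loss(hq, hp, hn, margin=1.0):
    """Hinge-style triplet ranking loss on (relaxed) hash codes.

    Encourages the query code hq to be closer to the similar image's
    code hp than to the dissimilar image's code hn by at least `margin`.
    """
    d_pos = np.sum((hq - hp) ** 2)  # squared distance to the positive
    d_neg = np.sum((hq - hn) ** 2)  # squared distance to the negative
    return max(0.0, margin + d_pos - d_neg)

# A correctly ordered triplet incurs no loss; a violated one does.
q = np.array([1.0, -1.0, 1.0, -1.0])
pos = np.array([1.0, -1.0, 1.0, 1.0])   # differs from q in one bit
neg = np.array([-1.0, 1.0, -1.0, 1.0])  # differs from q in all four bits
```

With these codes, `triplet_ranking_loss(q, pos, neg)` is zero, while swapping the positive and negative yields a large penalty, which is the relative-similarity ordering the loss preserves.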
1612.02541
2566001802
Hashing methods have attracted much attention for large scale image retrieval. Some deep hashing methods have achieved promising results by taking advantage of the strong representation power of deep networks recently. However, existing deep hashing methods treat all hash bits equally. On one hand, a large number of images share the same distance to a query image due to the discrete Hamming distance, which raises a critical issue of image retrieval where fine-grained rankings are very important. On the other hand, different hash bits actually contribute to the image retrieval differently, and treating them equally greatly affects the retrieval accuracy of image. To address the above two problems, we propose the query-adaptive deep weighted hashing (QaDWH) approach, which can perform fine-grained ranking for different queries by weighted Hamming distance. First, a novel deep hashing network is proposed to learn the hash codes and corresponding class-wise weights jointly, so that the learned weights can reflect the importance of different hash bits for different image classes. Second, a query-adaptive image retrieval method is proposed, which rapidly generates hash bit weights for different query images by fusing its semantic probability and the learned class-wise weights. Fine-grained image retrieval is then performed by the weighted Hamming distance, which can provide more accurate ranking than the traditional Hamming distance. Experiments on four widely used datasets show that the proposed approach outperforms eight state-of-the-art hashing methods.
Hamming distances are discrete integers and therefore cannot provide a fine-grained ranking among images that share the same distance to a query image. Several hash-bit weighting methods @cite_12 @cite_29 @cite_3 @cite_4 @cite_44 @cite_16 have been proposed to address this issue. QaRank @cite_46 @cite_3 first learns class-specific weights by minimizing the intra-class similarity while maintaining the inter-class relations, and then generates query-adaptive weights using the labels of the top similar images. QsRank @cite_16 designs a ranking algorithm for PCA-based hashing that uses the probability that the @math -neighbors of a query @math map to a hash code @math to measure the ranking score of hash codes @math . WhRank @cite_44 proposes a weighted Hamming distance ranking algorithm based on data-adaptive and query-sensitive bitwise weights. QRank @cite_12 @cite_4 learns query-adaptive weights by exploiting both the discriminative power of each hash function and their complementarity for nearest neighbor search. The aforementioned traditional weighted hashing methods are all two-stage schemes: they take hash codes generated by other methods (such as LSH, SH, and ITQ) as input, and then learn the weights by analyzing the fixed hash codes and image features. Under this two-stage scheme, the learned weights cannot give feedback for learning better hash codes, which limits retrieval accuracy.
{ "cite_N": [ "@cite_4", "@cite_29", "@cite_3", "@cite_44", "@cite_46", "@cite_16", "@cite_12" ], "mid": [ "2247935935", "", "1964858027", "2170314267", "2100839471", "2015509951", "1987566020" ], "abstract": [ "Recent years have witnessed the success of binary hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually built using hashing to cover more desired results in the hit buckets of each table. However, rare work studies the unified approach to constructing multiple informative hash tables using any type of hashing algorithms. Meanwhile, for multiple table search, it also lacks of a generic query-adaptive and fine-grained ranking scheme that can alleviate the binary quantization loss suffered in the standard hashing techniques. To solve the above problems, in this paper, we first regard the table construction as a selection problem over a set of candidate hash functions. With the graph representation of the function set, we propose an efficient solution that sequentially applies normalized dominant set to finding the most informative and independent hash functions for each table. To further reduce the redundancy between tables, we explore the reciprocal hash tables in a boosting manner, where the hash function graph is updated with high weights emphasized on the misclassified neighbor pairs of previous hash tables. To refine the ranking of the retrieved buckets within a certain Hamming radius from the query, we propose a query-adaptive bitwise weighting scheme to enable fine-grained bucket ranking in each hash table, exploiting the discriminative power of its hash functions and their complement for nearest neighbor search. Moreover, we integrate such scheme into the multiple table search using a fast, yet reciprocal table lookup algorithm within the adaptive weighted Hamming radius. 
In this paper, both the construction method and the query-adaptive search method are general and compatible with different types of hashing algorithms using different feature spaces and or parameter settings. Our extensive experiments on several large-scale benchmarks demonstrate that the proposed techniques can significantly outperform both the naive construction methods and the state-of-the-art hashing algorithms.", "", "Scalable image search based on visual similarity has been an active topic of research in recent years. State-of-the-art solutions often use hashing methods to embed high-dimensional image features into Hamming space, where search can be performed in real-time based on Hamming distance of compact hash codes. Unlike traditional metrics (e.g., Euclidean) that offer continuous distances, the Hamming distances are discrete integer values. As a consequence, there are often a large number of images sharing equal Hamming distances to a query, which largely hurts search results where fine-grained ranking is very important. This paper introduces an approach that enables query-adaptive ranking of the returned images with equal Hamming distances to the queries. This is achieved by firstly offline learning bitwise weights of the hash codes for a diverse set of predefined semantic concept classes. We formulate the weight learning process as a quadratic programming problem that minimizes intra-class distance while preserving inter-class relationship captured by original raw image features. Query-adaptive weights are then computed online by evaluating the proximity between a query and the semantic concept classes. With the query-adaptive bitwise weights, returned images can be easily ordered by weighted Hamming distance at a finer-grained hash code level rather than the original Hamming distance level. 
Experiments on a Flickr image dataset show clear improvements from our proposed approach.", "Binary hashing has been widely used for efficient similarity search due to its query and storage efficiency. In most existing binary hashing methods, the high-dimensional data are embedded into Hamming space and the distance or similarity of two points are approximated by the Hamming distance between their binary codes. The Hamming distance calculation is efficient, however, in practice, there are often lots of results sharing the same Hamming distance to a query, which makes this distance measure ambiguous and poses a critical issue for similarity search where ranking is important. In this paper, we propose a weighted Hamming distance ranking algorithm (WhRank) to rank the binary codes of hashing methods. By assigning different bit-level weights to different hash bits, the returned binary codes are ranked at a finer-grained binary code level. We give an algorithm to learn the data-adaptive and query-sensitive weight for each hash bit. Evaluations on two large-scale image data sets demonstrate the efficacy of our weighted Hamming distance for binary code ranking.", "With the proliferation of images on the Web, fast search of visually similar images has attracted significant attention. State-of-the-art techniques often embed high-dimensional visual features into low-dimensional Hamming space, where search can be performed in real-time based on Hamming distance of compact binary codes. Unlike traditional metrics (e.g., Euclidean) of raw image features that produce continuous distance, the Hamming distances are discrete integer values. In practice, there are often a large number of images sharing equal Hamming distances to a query, resulting in a critical issue for image search where ranking is very important. In this paper, we propose a novel approach that facilitates query-adaptive ranking for the images with equal Hamming distance. 
We achieve this goal by firstly offline learning bit weights of the binary codes for a diverse set of predefined semantic concept classes. The weight learning process is formulated as a quadratic programming problem that minimizes intra-class distance while preserving interclass relationship in the original raw image feature space. Query-adaptive weights are then rapidly computed by evaluating the proximity between a query and the concept categories. With the adaptive bit weights, the returned images can be ordered by weighted Hamming distance at a finer-grained binary code level rather than at the original integer Hamming distance level. Experimental results on a Flickr image dataset show clear improvements from our query-adaptive ranking approach.", "Although binary hash code-based image indexing methods have been recently developed for large-scale applications, the problem of ranking such hash codes has been barely studied. In this paper, we propose a query sensitive ranking algorithm (QsRank) to rank PCA-based hash codes for the ∊-neighbor search problem. The QsRank algorithm takes the target neighborhood radius ∊ and the raw feature of a given query as input, and models the statistical properties of the target ∊-neighbors in the space of hash codes. Unlike the Hamming distance, the proposed algorithm does not compress query points to hash codes. Therefore, it suffers less information loss and is more effective than Hamming distance-based approaches. Based on the QsRank method, we developed an efficient indexing structure and retrieval algorithm for large-scale ∊-neighbor search. Evaluations on two datasets of 10 million web images and 10 million SIFT descriptors demonstrate that the proposed retrieval system achieves higher accuracy with less memory cost and faster speed.", "Recently hash-based nearest neighbor search has become attractive in many applications due to its compressed storage and fast query speed. 
However, the quantization in the hashing process usually degenerates its discriminative power when using Hamming distance ranking. To enable fine-grained ranking, hash bit weighting has been proved as a promising solution. Though achieving satisfying performance improvement, state-of-the-art weighting methods usually heavily rely on the projection's distribution assumption, and thus can hardly be directly applied to more general types of hashing algorithms. In this paper, we propose a new ranking method named QRank with query-adaptive bitwise weights by exploiting both the discriminative power of each hash function and their complement for nearest neighbor search. QRank is a general weighting method for all kinds of hashing algorithms without any strict assumptions. Experimental results on two well-known benchmarks MNIST and NUS-WIDE show that the proposed method can achieve up to 17.11 performance gains over state-of-the-art methods." ] }
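The core idea shared by these bit-weighting methods, ranking by a weighted Hamming distance to break ties among codes at the same integer Hamming distance, can be illustrated with a small sketch (the bit weights below are made up for illustration, not learned by any of the cited methods):

```python
import numpy as np

def hamming(a, b):
    """Plain integer Hamming distance between two binary codes."""
    return int(np.sum(a != b))

def weighted_hamming(a, b, w):
    """Each disagreeing bit contributes its (query-adaptive) weight."""
    return float(np.sum(w * (a != b)))

query = np.array([1, 0, 1, 1])
img1 = np.array([1, 0, 0, 1])  # differs from the query in bit 2
img2 = np.array([0, 0, 1, 1])  # differs from the query in bit 0
w = np.array([0.9, 0.3, 0.1, 0.5])  # hypothetical per-bit weights
```

Both images sit at integer Hamming distance 1 from the query and are indistinguishable under the plain distance, but the weighted distance ranks `img1` (disagreement on a low-weight bit) above `img2` (disagreement on a high-weight bit), giving the finer-grained ordering the methods above aim for.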
1612.02534
2585034924
Measuring visual similarity is critical for image understanding. But what makes two images similar? Most existing work on visual similarity assumes that images are similar because they contain the same object instance or category. However, the reason why images are similar is much more complex. For example, from the perspective of category, a black dog image is similar to a white dog image. However, in terms of color, a black dog image is more similar to a black horse image than the white dog image. This example serves to illustrate that visual similarity is ambiguous but can be made precise when given an explicit contextual perspective. Based on this observation, we propose the concept of contextual visual similarity. To be concrete, we examine the concept of contextual visual similarity in the application domain of image search. Instead of providing only a single image for image similarity search (e.g., Google image search), we require three images. Given a query image, a second positive image and a third negative image, dissimilar to the first two images, we define a contextualized similarity search criteria. In particular, we learn feature weights over all the feature dimensions of each image such that the distance between the query image and the positive image is small and their distances to the negative image are large after reweighting their features. The learned feature weights encode the contextualized visual similarity specified by the user and can be used for attribute specific image search. We also show the usefulness of our contextualized similarity weighting scheme for different tasks, such as answering visual analogy questions and unsupervised attribute discovery.
The problem of solving analogy questions has been well explored in NLP. For example, Latent Relational Analysis (LRA), introduced in @cite_9 , can be used to solve SAT word analogy questions. In the visual domain, @cite_4 proposes an analogy-preserving semantic embedding that proves useful for object categorization. @cite_13 and @cite_18 can synthesize an "analogous" image @math that relates to image @math in the same way as image @math relates to image @math . @cite_18 demonstrates the effectiveness of their model on synthetic data, where images are controlled and visual analogies mainly involve relatively low-level visual properties such as rotation. Instead of synthesizing new images, VISALOGY @cite_1 solves visual analogies by discovering the mapping from image A to image B and searching for an image D such that A is to B as C is to D. VISALOGY conducts experiments on natural image datasets, where visual analogies involve different high-level semantic properties. Our work is different in that we do not explicitly model the mapping between images A and B. Instead, we use cues derived from contextual visual similarity relationships to solve visual analogies.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_9", "@cite_1", "@cite_13" ], "mid": [ "2184218725", "2146937173", "2109830295", "2950771721", "2022692239" ], "abstract": [ "In addition to identifying the content within a single image, relating images and generating related images are critical tasks for image understanding. Recently, deep convolutional networks have yielded breakthroughs in predicting image labels, annotations and captions, but have only just begun to be used for generating high-quality images. In this paper we develop a novel deep network trained end-to-end to perform visual analogy making, which is the task of transforming a query image according to an example pair of related images. Solving this problem requires both accurately recognizing a visual relationship and generating a transformed query image accordingly. Inspired by recent advances in language modeling, we propose to solve visual analogies by learning to map images to a neural embedding in which analogical reasoning is simple, such as by vector subtraction and addition. In experiments, our model effectively models visual analogies on several datasets: 2D shapes, animated video game sprites, and 3D car models.", "In multi-class categorization tasks, knowledge about the classes' semantic relationships can provide valuable information beyond the class labels themselves. However, existing techniques focus on preserving the semantic distances between classes (e.g., according to a given object taxonomy for visual recognition), limiting the influence to pairwise structures. We propose to model analogies that reflect the relationships between multiple pairs of classes simultaneously, in the form \"p is to q, as r is to s\". We translate semantic analogies into higher-order geometric constraints called analogical parallelograms, and use them in a novel convex regularizer for a discriminatively learned label embedding. 
Furthermore, we show how to discover analogies from attribute-based class descriptions, and how to prioritize those likely to reduce inter-class confusion. Evaluating our Analogy-preserving Semantic Embedding (ASE) on two visual recognition datasets, we demonstrate clear improvements over existing approaches, both in terms of recognition accuracy and analogy completion.", "There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This article introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47 on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) The patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56 on the 374 analogy questions, statistically equivalent to the average human score of 57 . 
On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.", "In this paper, we study the problem of answering visual analogy questions. These questions take the form of image A is to image B as image C is to what. Answering these questions entails discovering the mapping from image A to image B and then extending the mapping to image C and searching for the image D such that the relation from A to B holds for C to D. We pose this problem as learning an embedding that encourages pairs of analogous images with similar transformations to be close together using convolutional neural networks with a quadruple Siamese architecture. We introduce a dataset of visual analogy questions in natural images, and show first results of its kind on solving analogy questions on natural images.", "We present an approach for image retrieval using a very large number of highly selective features and efficient learning of queries. Our approach is predicated on the assumption that each image is generated by a sparse set of visual “causes” and that images which are visually similar share causes. We propose a mechanism for computing a very large number of highly selective features which capture some aspects of this causal structure (in our implementation there are over 46,000 highly selective features). At query time a user selects a few example images, and the AdaBoost algorithm is used to learn a classification function which depends on a small number of the most appropriate features. This yields a highly efficient classification function. In addition we show that the AdaBoost framework provides a natural mechanism for the incorporation of relevance feedback. Finally we show results on a wide variety of image queries." ] }
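The feature-reweighting objective described in the abstract above (after reweighting, the query should be close to the positive image and far from the negative one) can be sketched as a simple hinge objective over non-negative per-dimension weights. This is an illustrative toy, not the paper's actual optimization:

```python
import numpy as np

def learn_context_weights(q, p, n, margin=1.0, lr=0.1, steps=200):
    """Learn non-negative per-dimension weights w so that the w-weighted
    squared distance d_w(q, p) is smaller than d_w(q, n) by `margin`.
    A minimal subgradient descent on a hinge objective; illustrative only.
    """
    d_pos = (q - p) ** 2  # per-dimension squared differences, query vs. positive
    d_neg = (q - n) ** 2  # per-dimension squared differences, query vs. negative
    w = np.ones_like(q, dtype=float)
    for _ in range(steps):
        violation = margin + w @ d_pos - w @ d_neg
        if violation > 0:
            w -= lr * (d_pos - d_neg)  # hinge subgradient step
        w = np.clip(w, 0.0, None)      # keep the weights non-negative
    return w

# Toy example: q and p agree in dimension 0, while q and n agree in
# dimension 1, so the learned weights should emphasize dimension 0.
q = np.array([1.0, 0.0])
p = np.array([1.0, 1.0])
n = np.array([0.0, 0.0])
```

Running `learn_context_weights(q, p, n)` on this toy example up-weights the dimension where the query and positive image agree but the negative disagrees, which is exactly the "contextual perspective" the weights are meant to encode.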
1612.02534
2585034924
Measuring visual similarity is critical for image understanding. But what makes two images similar? Most existing work on visual similarity assumes that images are similar because they contain the same object instance or category. However, the reason why images are similar is much more complex. For example, from the perspective of category, a black dog image is similar to a white dog image. However, in terms of color, a black dog image is more similar to a black horse image than the white dog image. This example serves to illustrate that visual similarity is ambiguous but can be made precise when given an explicit contextual perspective. Based on this observation, we propose the concept of contextual visual similarity. To be concrete, we examine the concept of contextual visual similarity in the application domain of image search. Instead of providing only a single image for image similarity search (e.g., Google image search), we require three images. Given a query image, a second positive image and a third negative image, dissimilar to the first two images, we define a contextualized similarity search criteria. In particular, we learn feature weights over all the feature dimensions of each image such that the distance between the query image and the positive image is small and their distances to the negative image are large after reweighting their features. The learned feature weights encode the contextualized visual similarity specified by the user and can be used for attribute specific image search. We also show the usefulness of our contextualized similarity weighting scheme for different tasks, such as answering visual analogy questions and unsupervised attribute discovery.
Attribute discovery has been studied in recent work @cite_19 @cite_12 @cite_6 @cite_21 . @cite_19 demonstrates that it is possible to identify attribute vocabularies and learn to recognize attributes automatically by mining text and image data collected from the Internet. @cite_21 proposes to automatically discover visual attributes from a noisy collection of image-text data by exploiting the relationship between attributes and neural activations in the deep network. @cite_6 proposes a novel training procedure with CNNs to discover multiple visual attributes in images in a weakly supervised scenario. All these works leverage textual description or partial attribute labels of images, whereas our approach does not require such side information. @cite_23 proposes a novel way of representing images as binary codes that balances discrimination and learnability of the codes. They show that their codes can be thought of as attributes. In contrast, we observe that one meaningful contextual visual similarity relationship entails semantic attributes and we propose to discover attributes by clustering contextual visual similarity relationships.
{ "cite_N": [ "@cite_21", "@cite_6", "@cite_19", "@cite_23", "@cite_12" ], "mid": [ "2482018248", "1918392599", "1528802670", "1953590900", "" ], "abstract": [ "How can a machine learn to recognize visual attributes emerging out of online community without a definitive supervised dataset? This paper proposes an automatic approach to discover and analyze visual attributes from a noisy collection of image-text data on the Web. Our approach is based on the relationship between attributes and neural activations in the deep network. We characterize the visual property of the attribute word as a divergence within weakly-annotated set of images. We show that the neural activations are useful for discovering and learning a classifier that well agrees with human perception from the noisy real-world Web data. The empirical study suggests the layered structure of the deep neural networks also gives us insights into the perceptual depth of the given word. Finally, we demonstrate that we can utilize highly-activating neurons for finding semantically relevant regions.", "Most of the approaches for discovering visual attributes in images demand significant supervision, which is cumbersome to obtain. In this paper, we aim to discover visual attributes in a weakly supervised setting that is commonly encountered with contemporary image search engines.", "It is common to use domain specific terminology - attributes - to describe the visual appearance of objects. In order to scale the use of these describable visual attributes to a large number of categories, especially those not well studied by psychologists or linguists, it will be necessary to find alternative techniques for identifying attribute vocabularies and for learning to recognize attributes without hand labeled training data. We demonstrate that it is possible to accomplish both these tasks automatically by mining text and image data sampled from the Internet. 
The proposed approach also characterizes attributes according to their visual representation: global or local, and type: color, texture, or shape. This work focuses on discovering attributes and their visual appearance, and is as agnostic as possible about the textual description.", "We present images with binary codes in a way that balances discrimination and learnability of the codes. In our method, each image claims its own code in a way that maintains discrimination while being predictable from visual data. Category memberships are usually good proxies for visual similarity but should not be enforced as a hard constraint. Our method learns codes that maximize separability of categories unless there is strong visual evidence against it. Simple linear SVMs can achieve state-of-the-art results with our short codes. In fact, our method produces state-of-the-art results on Caltech256 with only 128-dimensional bit vectors and outperforms state of the art by using longer codes. We also evaluate our method on ImageNet and show that our method outperforms state-of-the-art binary code methods on this large scale dataset. Lastly, our codes can discover a discriminative set of attributes.", "" ] }
1612.02466
2583303407
We consider wireless sensor networks secured by the heterogeneous random key predistribution scheme under an on/off channel model. The heterogeneous random key predistribution scheme considers the case when the network includes sensor nodes with varying levels of resources, features, or connectivity requirements; e.g., regular nodes vs. cluster heads, but does not incorporate the fact that wireless channels are unreliable. To capture the unreliability of the wireless medium, we use an on/off channel model, wherein each wireless channel is either on (with probability @math ) or off (with probability @math ) independently. We present conditions (in the form of zero-one laws) on how to scale the parameters of the network model so that with high probability the network is @math -connected, i.e., the network remains connected even if any @math nodes fail or leave the network. We also present numerical results to support these conditions in the finite-node regime.
Our paper completes the analysis that we started in @cite_24 concerning the minimum node degree of the intersection model @math . There, we presented conditions on how to scale the parameters of the intersection model @math so that its minimum node degree is no less than @math with high probability as the number of nodes @math grows large. Clearly, a graph cannot be @math -connected if its minimum node degree is less than @math . Thus, we readily obtain the zero-law of Theorem by virtue of the zero-law for the minimum node degree being no less than @math [Theorem 3.1] EletrebyISIT . Our paper completes the analysis of @cite_24 by establishing the one-law for @math -connectivity, thus obtaining a fuller understanding of the properties of the intersection model @math . In particular, it was conjectured in [Conjecture 3.2] EletrebyISIT that, under some additional conditions, the zero-one laws for @math -connectivity would resemble those for the minimum node degree being no less than @math . In our paper, we prove that this conjecture is correct and provide the extra conditions needed for it to hold.
{ "cite_N": [ "@cite_24" ], "mid": [ "2512707330" ], "abstract": [ "We consider wireless sensor networks under a heterogeneous random key predistribution scheme and an on-off channel model. The heterogeneous key predistribution scheme has recently been introduced by Yagan - as an extension to the Eschenauer and Gligor scheme - for the cases when the network consists of sensor nodes with varying level of resources and or connectivity requirements, e.g., regular nodes vs. cluster heads. The network is modeled by the intersection of the inhomogeneous random key graph (induced by the heterogeneous scheme) with an Erdős-Renyi graph (induced by the on off channel model). We present conditions (in the form of zero-one laws) on how to scale the parameters of the intersection model so that with high probability all of its nodes are connected to at least k other nodes; i.e., the minimum node degree of the graph is no less than k. We also present numerical results to support our results in the finite-node regime. The numerical results suggest that the conditions that ensure k-connectivity coincide with those ensuring the minimum node degree being no less than k." ] }
1612.02466
2583303407
We consider wireless sensor networks secured by the heterogeneous random key predistribution scheme under an on/off channel model. The heterogeneous random key predistribution scheme considers the case when the network includes sensor nodes with varying levels of resources, features, or connectivity requirements; e.g., regular nodes vs. cluster heads, but does not incorporate the fact that wireless channels are unreliable. To capture the unreliability of the wireless medium, we use an on/off channel model, wherein each wireless channel is either on (with probability @math ) or off (with probability @math ) independently. We present conditions (in the form of zero-one laws) on how to scale the parameters of the network model so that with high probability the network is @math -connected, i.e., the network remains connected even if any @math nodes fail or leave the network. We also present numerical results to support these conditions in the finite-node regime.
Our results also extend the work of @cite_7 on the homogeneous random key graph intersecting an ER graph to the heterogeneous setting. There, a zero-one law for the property that the graph is @math -connected was established for @math . Considering Theorem and setting @math , i.e., when all nodes belong to the same class and thus receive the same number @math of keys, our results recover Theorem 2 of @cite_7 (see [Theorem 2] Jun K-Connectivity).
{ "cite_N": [ "@cite_7" ], "mid": [ "2107866581" ], "abstract": [ "Random key graphs form a class of random intersection graphs that are naturally induced by the random key predistribution scheme of Eschenauer and Gligor for securing wireless sensor network (WSN) communications. Random key graphs have received much attention recently, owing in part to their wide applicability in various domains, including recommender systems, social networks, secure sensor networks, clustering and classification analysis, and cryptanalysis to name a few. In this paper, we study connectivity properties of random key graphs in the presence of unreliable links. Unreliability of graph links is captured by independent Bernoulli random variables, rendering them to be on or off independently from each other. The resulting model is an intersection of a random key graph and an Erdős–Renyi graph, and is expected to be useful in capturing various real-world networks; e.g., with secure WSN applications in mind, link unreliability can be attributed to harsh environmental conditions severely impairing transmissions. We present conditions on how to scale this model’s parameters so that: 1) the minimum node degree in the graph is at least @math and 2) the graph is @math -connected, both with high probability as the number of nodes becomes large. The results are given in the form of zero-one laws with critical thresholds identified and shown to coincide for both graph properties. These findings improve the previous results by Rybarczyk on @math -connectivity of random key graphs (with reliable links), as well as the zero-one laws by Yagan on one-connectivity of random key graphs with unreliable links." ] }
1612.02466
2583303407
We consider wireless sensor networks secured by the heterogeneous random key predistribution scheme under an on/off channel model. The heterogeneous random key predistribution scheme considers the case when the network includes sensor nodes with varying levels of resources, features, or connectivity requirements (e.g., regular nodes vs. cluster heads), but does not incorporate the fact that wireless channels are unreliable. To capture the unreliability of the wireless medium, we use an on/off channel model wherein each wireless channel is either on (with probability @math ) or off (with probability @math ) independently. We present conditions (in the form of zero-one laws) on how to scale the parameters of the network model so that with high probability the network is @math -connected, i.e., the network remains connected even if any @math nodes fail or leave the network. We also present numerical results to support these conditions in the finite-node regime.
In @cite_5 , Yağan established a zero-one law for @math -connectivity of the inhomogeneous random key graph @math under full visibility; i.e., when all pairs of nodes have a reliable communication channel in between. Our paper extends these results by considering more practical WSN scenarios where the unreliability of wireless communication channels is taken into account through the on/off channel model. We also investigate the @math -connectivity of the network for any non-negative constant integer @math ; i.e., by setting @math and @math for each @math , we recover Theorem 2 in @cite_5 .
{ "cite_N": [ "@cite_5" ], "mid": [ "2196704028" ], "abstract": [ "We introduce a new random key predistribution scheme for securing heterogeneous wireless sensor networks. Each of the @math sensors in the network is classified into @math classes according to some probability distribution @math . Before deployment, a class- @math sensor is assigned @math cryptographic keys that are selected uniformly at random from a common pool of @math keys. Once deployed, a pair of sensors can communicate securely if and only if they have a key in common. We model the communication topology of this network by a newly defined inhomogeneous random key graph. We establish scaling conditions on the parameters @math and @math so that this graph: 1) has no isolated nodes and 2) is connected, both with high probability. The results are given in the form of zero-one laws with the number of sensors @math growing unboundedly large; critical scalings are identified and shown to coincide for both graph properties. Our results are shown to complement and improve those given by and for the same model, therein referred to as the general random intersection graph." ] }
1612.02466
2583303407
We consider wireless sensor networks secured by the heterogeneous random key predistribution scheme under an on/off channel model. The heterogeneous random key predistribution scheme considers the case when the network includes sensor nodes with varying levels of resources, features, or connectivity requirements (e.g., regular nodes vs. cluster heads), but does not incorporate the fact that wireless channels are unreliable. To capture the unreliability of the wireless medium, we use an on/off channel model wherein each wireless channel is either on (with probability @math ) or off (with probability @math ) independently. We present conditions (in the form of zero-one laws) on how to scale the parameters of the network model so that with high probability the network is @math -connected, i.e., the network remains connected even if any @math nodes fail or leave the network. We also present numerical results to support these conditions in the finite-node regime.
Finally, our work improves upon the results by @cite_19 , who established zero-one laws for the @math -connectivity of inhomogeneous random key graphs (therein referred to as the general random intersection graph). Indeed, by setting @math for each @math , our result recovers the results in @cite_19 . We also remark that the additional conditions required by the main results of @cite_19 render them inapplicable in practical WSN implementations; this issue is explained in detail in Section 3.3 of Yağan's work on inhomogeneous random key graphs.
{ "cite_N": [ "@cite_19" ], "mid": [ "2060271434" ], "abstract": [ "Random intersection graphs have received much attention for nearly two decades, and currently have a wide range of applications ranging from key predistribution in wireless sensor networks to modeling social networks. In this paper, we investigate the strengths of connectivity and robustness in a general random intersection graph model. Specifically, we establish sharp asymptotic zero-one laws for k-connectivity and k-robustness, as well as the asymptotically exact probability of k-connectivity, for any positive integer k. The k-connectivity property quantifies how resilient is the connectivity of a graph against node or edge failures. On the other hand, k-robustness measures the effectiveness of local diffusion strategies (that do not use global graph topology information) in spreading information over the graph in the presence of misbehaving nodes. In addition to presenting the results under the general random intersection graph model, we consider two special cases of the general model, a binomial random intersection graph and a uniform random intersection graph, which both have numerous applications as well. For these two specialized graphs, our results on asymptotically exact probabilities of k-connectivity and asymptotic zero-one laws for k-robustness are also novel in the literature." ] }
1612.02468
2952994969
The base motivation of Mobile Cloud Computing was to empower mobile devices by offloading applications onto powerful cloud resources. However, this goal cannot be fully reached because of the high offloading cost imposed by the long physical distance between the mobile device and the cloud. To address this issue, we propose application offloading onto a nearby mobile cloud composed of the mobile devices in the vicinity: a Spontaneous Proximity Cloud. We introduce our proposed dynamic, ant-inspired, bi-objective offloading middleware, ACOMMA, and explain its extension to perform close mobile application offloading. With the learning-based offloading decision-making process of ACOMMA, combined with collaborative resource sharing, the mobile devices can cooperate for decision cache sharing. We evaluate the performance of ACOMMA in collaborative mode with real benchmarks (Face Recognition and Monte-Carlo algorithms) and achieve a 50% execution-time gain.
A few studies have focused on the use of adjacent mobile devices as offloading surrogates. Transient Cloud @cite_6 uses the collective capabilities of nearby devices in an ad-hoc network to meet the needs of the mobile device. A modified Hungarian method is applied as an assignment algorithm to assign tasks to devices according to their abilities. The execution of each task by any device imposes some cost, and the assignment algorithm aims to find the minimum-total-cost assignment. To that end, @cite_6 proposed a dynamic cost adjustment to balance the tasks across devices based on these costs. @cite_1 proposed an architecture named mCloud that runs resource-intensive applications on collections of cooperating mobile devices and discusses its advantages. @cite_7 went one step further and formulated a decision algorithm for global adaptive offloading: program components are distributed across a set of mobile devices to optimise Time to Failure (TTF) while taking into account constraints on the program's effectiveness. Having highlighted the benefits of collaboration for mobile task offloading, other works also implemented computational offloading schemes to maximise the longevity of mobile devices @cite_2 @cite_8 .
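The assignment step described above can be made concrete with a toy example. The sketch below uses brute-force enumeration as a stand-in for the Hungarian method (correct but exponential, fine for tiny inputs); the cost matrix is hypothetical, and none of this is code from @cite_6 .

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Exhaustive minimum-cost task-to-device assignment (tasks = rows,
    devices = columns). A brute-force stand-in for the Hungarian method,
    usable only for small square cost matrices."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[task][perm[task]] for task in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Hypothetical costs of running 3 tasks on 3 nearby devices
# (e.g., estimated energy/latency scores).
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]
assignment, total = min_cost_assignment(cost)
print(assignment, total)  # -> (1, 0, 2) 5
```

Here task 0 goes to device 1, task 1 to device 0, and task 2 to device 2, for a total cost of 5; the dynamic cost adjustment mentioned above would correspond to updating the entries of `cost` between assignment rounds.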
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_1", "@cite_6", "@cite_2" ], "mid": [ "1658697598", "2146848644", "2144058403", "2019153514", "" ], "abstract": [ "Mobile phones are set to become the universal interface to online services and cloud computing applications. However, using them for this purpose today is limited to two configurations: applications either run on the phone or run on the server and are remotely accessed by the phone. These two options do not allow for a customized and flexible service interaction, limiting the possibilities for performance optimization as well. In this paper we present a middleware platform that can automatically distribute different layers of an application between the phone and the server, and optimize a variety of objective functions (latency, data transferred, cost, etc.). Our approach builds on existing technology for distributed module management and does not require new infrastructures. In the paper we discuss how to model applications as a consumption graph, and how to process it with a number of novel algorithms to find the optimal distribution of the application modules. The application is then dynamically deployed on the phone in an efficient and transparent manner. We have tested and validated our approach with extensive experiments and with two different applications. The results indicate that the techniques we propose can significantly optimize the performance of cloud applications when used from mobile phones.", "Despite the increased capabilities of mobile devices, mobile application resource requirements can often transcend what can be accomplished on a single device. This has been addressed through several proposals for efficient computation offloading from mobile devices to remote cloud resources or closely located computing resources known as cloudlets. In this paper we consider an environment in which computational offloading is performed among a set of mobile devices. We call this environment a Mobile Device Cloud (MDC). We are interested in MDCs where nodes are highly collaborative. We develop computational offloading schemes that maximize the lifetime of the ensemble of mobile devices where we consider the network to be alive as long as no device has depleted its battery. As a secondary contribution in this work, we develop and use an experimentation platform that allows us to evaluate a range of computational models and profiles derived from a realistic testbed. We use this platform as a first step in an evaluation exercise that demonstrates the effectiveness of our computation offloading algorithms in extending the lifetime of an MDC.", "When we think of mobile cloud computing today, we typically refer to empowering mobile devices - in particular smartphones and tablets - with the capabilities of stationary resources residing in giant data centers. But what happens when these mobile devices become as powerful as our personal computers or more? This paper presents our vision of a future in which mobile devices become a core component of mobile cloud computing architectures. We envision a world where mobile devices will be capable of forming mobile clouds, or mClouds, to accomplish tasks locally without relying, when possible, on costly and, sometimes, inefficient backend communication. We discuss a possible mClouds architecture, its benefits and tradeoffs, and the user incentive scheme to support the mCloud design.", "Mobile devices are evolving into powerful systems due to recent advances in their communication, storage and computation technologies. They are poised to play a key role in providing a rich collaborative computing platform for various applications. This paper proposes \"Transient Clouds\" — a collaborative computing platform that allows nearby devices to form an ad-hoc network and provide various capabilities as cloud services. Transient Clouds utilize the collective capabilities of the devices present, along with their social and context awareness that cannot be provided efficiently by the traditional clouds. We present a modified algorithm of the Hungarian method for assigning tasks to devices in order to achieve various goals (e.g., load balancing, collocating executions, etc.). We evaluate the performance of our proposed algorithms through simulation and provide a real implementation on the Android platform using the Wi-Fi Direct framework. We envision Transient Clouds to be utilized in temporal scenarios in which the cloud is created on-the-fly by the devices present in an environment and would disappear as the devices leave the network.", "" ] }
1612.02742
2951803737
Hand detection is essential for many hand-related tasks, e.g. parsing hand pose and understanding gestures, which are extremely useful for robotics and human-computer interaction. However, hand detection in uncontrolled environments is challenging due to the flexibility of the wrist joint and cluttered backgrounds. We propose a deep-learning-based approach which detects hands and calibrates in-plane rotation under supervision at the same time. To guarantee recall, we propose a context-aware proposal generation algorithm which significantly outperforms selective search. We then design a convolutional neural network (CNN) which handles object rotation explicitly to jointly solve the object detection and rotation estimation tasks. Experiments show that our method achieves better results than state-of-the-art detection models on widely used benchmarks such as the Oxford and EgoHands databases. We further show that rotation estimation and classification can mutually benefit each other.
These methods build a skin model with either Gaussian mixture models @cite_28 or prior knowledge of skin color obtained from face detection @cite_4 . However, they often fail to generalize to hand detection in unconstrained conditions, because complex illumination causes large variations in skin color and makes the skin-color modelling problem challenging.
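To make the skin-color modelling idea concrete, here is a minimal sketch (my simplification, not code from @cite_28 ): it fits a single independent per-channel Gaussian to skin and background pixels and classifies a pixel by comparing log-likelihoods. A real system would use a full mixture model and adapt it over time; all pixel values below are hypothetical.

```python
import math

def fit_gaussian(samples):
    """Fit an independent per-channel Gaussian to a list of RGB pixels."""
    n = len(samples)
    means = [sum(p[c] for p in samples) / n for c in range(3)]
    # Small floor on the variance avoids division by zero for tiny samples.
    vars_ = [sum((p[c] - means[c]) ** 2 for p in samples) / n + 1e-6
             for c in range(3)]
    return means, vars_

def log_likelihood(pixel, model):
    means, vars_ = model
    return sum(-0.5 * math.log(2 * math.pi * vars_[c])
               - (pixel[c] - means[c]) ** 2 / (2 * vars_[c])
               for c in range(3))

def is_skin(pixel, skin_model, bg_model):
    # Classify a pixel as skin if it is more likely under the skin model.
    return log_likelihood(pixel, skin_model) > log_likelihood(pixel, bg_model)

# Toy training pixels (hypothetical values, not from any real dataset).
skin_pixels = [(224, 172, 150), (210, 160, 140), (232, 180, 158)]
bg_pixels = [(40, 90, 30), (60, 110, 50), (30, 80, 40)]

skin_model = fit_gaussian(skin_pixels)
bg_model = fit_gaussian(bg_pixels)
print(is_skin((220, 170, 148), skin_model, bg_model))  # skin-like pixel
print(is_skin((50, 95, 45), skin_model, bg_model))     # background-like pixel
```

The illumination problem noted above shows up directly here: a lighting change shifts real skin pixels away from the fitted means, so the likelihood ratio flips even though nothing about the hand changed.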
{ "cite_N": [ "@cite_28", "@cite_4" ], "mid": [ "2130462770", "1761390164" ], "abstract": [ "A novel approach for real-time skin segmentation in video sequences is described. The approach enables reliable skin segmentation despite wide variation in illumination during tracking. An explicit second order Markov model is used to predict evolution of the skin-color (HSV) histogram over time. Histograms are dynamically updated based on feedback from the current segmentation and predictions of the Markov model. The evolution of the skin-color distribution at each frame is parameterized by translation, scaling, and rotation in color space. Consequent changes in geometric parameterization of the distribution are propagated by warping and resampling the histogram. The parameters of the discrete-time dynamic Markov model are estimated using maximum likelihood estimation and also evolve over time. The accuracy of the new dynamic skin color segmentation algorithm is compared to that obtained via a static color model. Segmentation accuracy is evaluated using labeled ground-truth video sequences taken from staged experiments and popular movies. An overall increase in segmentation accuracy of up to 24 percent is observed in 17 out of 21 test sequences. In all but one case, the skin-color classification rates for our system were higher, with background classification rates comparable to those of the static segmentation.", "This paper describes a visual object detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features and yields extremely efficient classifiers [4]. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. A set of experiments in the domain of face detection are presented. The system yields face detection performance comparable to the best previous systems [16, 11, 14, 10, 1]. Implemented on a conventional desktop, face detection proceeds at 15 frames per second." ] }
1612.02742
2951803737
Hand detection is essential for many hand-related tasks, e.g. parsing hand pose and understanding gestures, which are extremely useful for robotics and human-computer interaction. However, hand detection in uncontrolled environments is challenging due to the flexibility of the wrist joint and cluttered backgrounds. We propose a deep-learning-based approach which detects hands and calibrates in-plane rotation under supervision at the same time. To guarantee recall, we propose a context-aware proposal generation algorithm which significantly outperforms selective search. We then design a convolutional neural network (CNN) which handles object rotation explicitly to jointly solve the object detection and rotation estimation tasks. Experiments show that our method achieves better results than state-of-the-art detection models on widely used benchmarks such as the Oxford and EgoHands databases. We further show that rotation estimation and classification can mutually benefit each other.
These methods usually learn a hand template or a mixture of deformable part models. They can be implemented with Haar features, as in Viola-Jones cascade detectors @cite_20 , with a HOG-SVM pipeline @cite_20 , or with mixtures of deformable part models (DPM) @cite_10 . A limitation of these methods is their use of weak features (usually HOG or Haar features). There are also methods that detect the human hand as a part of the human structure, using the human pictorial structure as spatial context for the hand position @cite_2 . However, these methods require that most parts of the body are visible, and occlusion of body parts makes hand detection difficult @cite_12 .
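To illustrate why HOG counts as a "weak" (hand-designed) feature, here is a minimal sketch of the orientation histogram for one HOG cell (my simplification, not the implementation of @cite_20 ; real pipelines add block normalization and a sliding-window SVM on top).

```python
import math

def hog_cell(patch, n_bins=9):
    """Unsigned gradient-orientation histogram for one cell of a grayscale
    patch (a list of rows of intensities). Border pixels are skipped so the
    central-difference gradients stay in bounds."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            # Unsigned orientation in [0, 180) degrees, accumulated by magnitude.
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / 180.0 * n_bins) % n_bins] += mag
    return hist

# A vertical edge: intensity jumps along x, so all gradients are horizontal
# and the histogram mass should land in the 0-degree bin.
patch = [[0, 0, 100, 100]] * 4
hist = hog_cell(patch)
print(hist.index(max(hist)))  # -> 0
```

The descriptor only encodes local edge statistics, which is exactly why such pipelines struggle with the appearance variation of hands compared to learned CNN features.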
{ "cite_N": [ "@cite_10", "@cite_12", "@cite_20", "@cite_2" ], "mid": [ "2000761799", "2071300943", "2161969291", "2113228424" ], "abstract": [ "", "Occlusion poses a significant difficulty for object recognition due to the combinatorial diversity of possible occlusion patterns. We take a strongly supervised, non-parametric approach to modeling occlusion by learning deformable models with many local part mixture templates using large quantities of synthetically generated training data. This allows the model to learn the appearance of different occlusion patterns including figure-ground cues such as the shapes of occluding contours as well as the co-occurrence statistics of occlusion between neighboring parts. The underlying part mixture-structure also allows the model to capture coherence of object support masks between neighboring parts and make compelling predictions of figure-ground-occluder segmentations. We test the resulting model on human pose estimation under heavy occlusion and find it produces improved localization accuracy.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "Detecting an object part relies on two sources of information - the appearance of the part itself and the context supplied by surrounding parts. In this paper we consider problems in which a target part cannot be recognized reliably using its own appearance, such as detecting low-resolution hands, and must be recognized using the context of surrounding parts. We develop the ‘chains model’ which can locate parts of interest in a robust and precise manner, even when the surrounding context is highly variable and deformable. In the proposed model, the relation between context features and the target part is modeled in a non-parametric manner using an ensemble of feature chains leading from parts in the context to the detection target. The method uses the configuration of the features in the image directly rather than through fitting an articulated 3-D model of the object. In addition, the chains are composable, meaning that new chains observed in the test image can be composed of sub-chains seen during training. Consequently, the model is capable of handling object poses which are infrequent, even non-existent, during training. We test the approach in different settings, including object parts detection, as well as complete object detection. The results show the advantages of the chains model for detecting and localizing parts of complex deformable objects." ] }