| aid (string, 9–15 chars) | mid (string, 7–10 chars) | abstract (string, 78–2.56k chars) | related_work (string, 92–1.77k chars) | ref_abstract (dict) |
|---|---|---|---|---|
1811.09575
|
2901040731
|
In recent years, sequence-to-sequence learning neural networks with attention mechanisms have achieved great progress. However, there are still challenges, especially for Neural Machine Translation (NMT), such as lower translation quality on long sentences. In this paper, we present a hierarchical deep neural network architecture to improve the translation quality of long sentences. The proposed network embeds sequence-to-sequence neural networks into a two-level category hierarchy following the coarse-to-fine paradigm. Long sentences are split into shorter sequences, which can be well processed by the coarse category network, since the long-distance dependencies of short sentences can be handled by a sequence-to-sequence neural network. The outputs are then concatenated and corrected by the fine category network. The experiments show that our method achieves superior results, with higher BLEU (Bilingual Evaluation Understudy) scores, lower perplexity, and better performance in imitating expression style and word usage than traditional networks.
|
One year later, in 2014, Sequence-to-Sequence learning with encoders and decoders @cite_17 @cite_0 , together with long short-term memory (LSTM) units @cite_2 , was introduced for NMT. With the help of the gate mechanism, the gradient explosion/vanishing problem is controlled, so the model can capture much longer "long-distance dependencies" within sentences (a minimal encoder-decoder sketch follows this row).
|
{
"cite_N": [
"@cite_0",
"@cite_2",
"@cite_17"
],
"mid": [
"2950635152",
"",
"2949888546"
],
"abstract": [
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.",
"",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier."
]
}
|
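Not from the dataset row above: a minimal, hypothetical sketch of the LSTM encoder-decoder idea the related-work passage describes, written in PyTorch. The class names, dimensions, and toy vocabulary sizes are illustrative assumptions, not the cited authors' implementation.

```python
# Hypothetical, minimal LSTM encoder-decoder sketch (PyTorch); not the cited authors' code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):                       # src: (batch, src_len) token ids
        outputs, (h, c) = self.lstm(self.embed(src))
        return h, c                               # fixed-size summary of the source

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.proj = nn.Linear(hid_dim, vocab_size)

    def forward(self, tgt, h, c):                 # tgt: (batch, tgt_len), teacher forcing
        out, _ = self.lstm(self.embed(tgt), (h, c))
        return self.proj(out)                     # (batch, tgt_len, vocab_size) logits

# Toy usage with random token ids
enc, dec = Encoder(1000), Decoder(1200)
src = torch.randint(0, 1000, (2, 17))             # two source sentences of length 17
tgt = torch.randint(0, 1200, (2, 12))
h, c = enc(src)
logits = dec(tgt, h, c)
print(logits.shape)                               # torch.Size([2, 12, 1200])
```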
1811.09575
|
2901040731
|
In recent years, sequence-to-sequence learning neural networks with attention mechanisms have achieved great progress. However, there are still challenges, especially for Neural Machine Translation (NMT), such as lower translation quality on long sentences. In this paper, we present a hierarchical deep neural network architecture to improve the translation quality of long sentences. The proposed network embeds sequence-to-sequence neural networks into a two-level category hierarchy following the coarse-to-fine paradigm. Long sentences are split into shorter sequences, which can be well processed by the coarse category network, since the long-distance dependencies of short sentences can be handled by a sequence-to-sequence neural network. The outputs are then concatenated and corrected by the fine category network. The experiments show that our method achieves superior results, with higher BLEU (Bilingual Evaluation Understudy) scores, lower perplexity, and better performance in imitating expression style and word usage than traditional networks.
|
At the same time, NMT turns into a "fixed-length vector" problem: regardless of the length of the source sentence, the encoder compresses it into a fixed-length vector, which brings more complexity and uncertainty into the decoding process, especially when the source sentence is very long @cite_0 (illustrated by the short sketch after this row).
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2950635152"
],
"abstract": [
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases."
]
}
|
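The "fixed-length vector" bottleneck mentioned in the row above can be illustrated with a tiny, hypothetical PyTorch snippet: whatever the source length, the encoder's final state has the same size.

```python
# Illustration of the fixed-length bottleneck: sentences of any length are
# compressed into the same-size encoder state. Hypothetical sketch, not the cited code.
import torch
import torch.nn as nn

encoder = nn.GRU(input_size=64, hidden_size=512, batch_first=True)
short_sentence = torch.randn(1, 5, 64)    # 5 tokens (already embedded)
long_sentence = torch.randn(1, 80, 64)    # 80 tokens

_, h_short = encoder(short_sentence)
_, h_long = encoder(long_sentence)
print(h_short.shape, h_long.shape)        # both torch.Size([1, 1, 512])
```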
1811.09575
|
2901040731
|
In recent years, sequence-to-sequence learning neural networks with attention mechanisms have achieved great progress. However, there are still challenges, especially for Neural Machine Translation (NMT), such as lower translation quality on long sentences. In this paper, we present a hierarchical deep neural network architecture to improve the translation quality of long sentences. The proposed network embeds sequence-to-sequence neural networks into a two-level category hierarchy following the coarse-to-fine paradigm. Long sentences are split into shorter sequences, which can be well processed by the coarse category network, since the long-distance dependencies of short sentences can be handled by a sequence-to-sequence neural network. The outputs are then concatenated and corrected by the fine category network. The experiments show that our method achieves superior results, with higher BLEU (Bilingual Evaluation Understudy) scores, lower perplexity, and better performance in imitating expression style and word usage than traditional networks.
|
In 2014, the "attention mechanism" for NMT @cite_13 was introduced to solve the "fixed-length vector" problem. When the decoder generates a word of the target sentence, only a small portion of the source sentence is relevant; therefore, a content-based attention mechanism can be applied to dynamically generate a weighted context vector over the source sentence, and the network then predicts words based on this context vector rather than on a single fixed-length vector (a minimal sketch follows this row). Since then, the performance of NMT has improved significantly, and the encoder-decoder neural network with attention mechanism has become the dominant model in the NMT field, giving state-of-the-art performance.
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"2133564696"
],
"abstract": [
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition."
]
}
|
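Below is a minimal, hypothetical sketch of the content-based attention described in the row above, using dot-product scoring as a simplified stand-in for the additive scoring of @cite_13 ; the function name and dimensions are assumptions.

```python
# Hypothetical sketch of content-based (dot-product) attention: the decoder
# state queries the encoder outputs and the result is a weighted context
# vector, recomputed for every target word. Not the cited authors' code.
import torch
import torch.nn.functional as F

def attention_context(decoder_state, encoder_outputs):
    # decoder_state:   (batch, hid)
    # encoder_outputs: (batch, src_len, hid)
    scores = torch.bmm(encoder_outputs, decoder_state.unsqueeze(2)).squeeze(2)  # (batch, src_len)
    weights = F.softmax(scores, dim=1)                                          # attention weights
    context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)       # (batch, hid)
    return context, weights

enc_out = torch.randn(2, 30, 512)    # 30 source positions
dec_state = torch.randn(2, 512)
context, weights = attention_context(dec_state, enc_out)
print(context.shape, weights.shape)  # torch.Size([2, 512]) torch.Size([2, 30])
```

The decoder would call `attention_context` once per generated word, so the context vector adapts to each target position instead of reusing one fixed summary of the source.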
1811.09575
|
2901040731
|
In recent years, sequence-to-sequence learning neural networks with attention mechanisms have achieved great progress. However, there are still challenges, especially for Neural Machine Translation (NMT), such as lower translation quality on long sentences. In this paper, we present a hierarchical deep neural network architecture to improve the translation quality of long sentences. The proposed network embeds sequence-to-sequence neural networks into a two-level category hierarchy following the coarse-to-fine paradigm. Long sentences are split into shorter sequences, which can be well processed by the coarse category network, since the long-distance dependencies of short sentences can be handled by a sequence-to-sequence neural network. The outputs are then concatenated and corrected by the fine category network. The experiments show that our method achieves superior results, with higher BLEU (Bilingual Evaluation Understudy) scores, lower perplexity, and better performance in imitating expression style and word usage than traditional networks.
|
Meanwhile, there are also other network structures for machine translation. In 2017, Facebook AI Research (FAIR) announced a CNN-based approach to translation that achieves performance similar to RNN-based NMT @cite_20 @cite_11 but at a speed roughly nine times faster. In response, Google released in June a completely new model, the Transformer @cite_10 , which uses neither CNNs nor RNNs and is based entirely on the attention mechanism. Inspired by Generative Adversarial Networks (GANs) @cite_19 , the method of generative adversarial learning was also introduced into machine translation for the first time, and a new machine translation model based on generative adversarial learning and deep reinforcement learning was proposed.
|
{
"cite_N": [
"@cite_19",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"2099471712",
"2626778328",
"2950855294",
"2613904329"
],
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.",
"The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence. In this paper we present a faster and simpler architecture based on a succession of convolutional layers. This allows to encode the entire source sentence simultaneously compared to recurrent networks for which computation is constrained by temporal dependencies. On WMT'16 English-Romanian translation we achieve competitive accuracy to the state-of-the-art and we outperform several recently published results on the WMT'15 English-German task. Our models obtain almost the same accuracy as a very deep LSTM setup on WMT'14 English-French translation. Our convolutional encoder speeds up CPU decoding by more than two times at the same or higher accuracy as a strong bi-directional LSTM baseline.",
"The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU."
]
}
|
1811.09575
|
2901040731
|
In recent years, sequence-to-sequence learning neural networks with attention mechanisms have achieved great progress. However, there are still challenges, especially for Neural Machine Translation (NMT), such as lower translation quality on long sentences. In this paper, we present a hierarchical deep neural network architecture to improve the translation quality of long sentences. The proposed network embeds sequence-to-sequence neural networks into a two-level category hierarchy following the coarse-to-fine paradigm. Long sentences are split into shorter sequences, which can be well processed by the coarse category network, since the long-distance dependencies of short sentences can be handled by a sequence-to-sequence neural network. The outputs are then concatenated and corrected by the fine category network. The experiments show that our method achieves superior results, with higher BLEU (Bilingual Evaluation Understudy) scores, lower perplexity, and better performance in imitating expression style and word usage than traditional networks.
|
Because parallel data is limited, making use of monolingual data can boost translation performance. Recently, researchers have been exploring unsupervised methods for machine translation @cite_23 @cite_14 , which rely on a combination of parallel and monolingual data or on monolingual data alone. The keys to unsupervised learning are: first, initializing the model with a bilingual dictionary; second, building a denoising autoencoder @cite_9 to learn useful information from the input data; and finally, generating sentence pairs via back-translation @cite_22 , which turns the unsupervised problem into a supervised one (a small noising sketch follows this row).
|
{
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_22",
"@cite_23"
],
"mid": [
"2025768430",
"2765961751",
"2963216553",
"2766182427"
],
"abstract": [
"Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.",
"Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores up to 32.8, without using even a single parallel sentence at training time.",
"",
"In spite of the recent success of neural machine translation (NMT) in standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs. There have been several proposals to alleviate this issue with, for instance, triangulation and semi-supervised learning techniques, but they still require a strong cross-lingual signal. In this work, we completely remove the need of parallel data and propose a novel method to train an NMT system in a completely unsupervised manner, relying on nothing but monolingual corpora. Our model builds upon the recent work on unsupervised embedding mappings, and consists of a slightly modified attentional encoder-decoder model that can be trained on monolingual corpora alone using a combination of denoising and backtranslation. Despite the simplicity of the approach, our system obtains 15.56 and 10.21 BLEU points in WMT 2014 French-to-English and German-to-English translation. The model can also profit from small parallel corpora, and attains 21.81 and 15.24 points when combined with 100,000 parallel sentences, respectively. Our implementation is released as an open source project."
]
}
|
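A small, hypothetical sketch of the sentence-noising step behind the denoising-autoencoder idea in the row above (word dropout plus a local shuffle); the exact noise model of the cited works may differ.

```python
# Hypothetical sketch: corrupt a sentence by dropping some words and locally
# shuffling the rest; the denoising autoencoder is then trained to reconstruct
# the clean sentence from the noisy one.
import random

def add_noise(tokens, drop_prob=0.1, shuffle_window=3, seed=None):
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > drop_prob]       # word dropout
    # local shuffle: each remaining token may move at most `shuffle_window` positions
    keys = [i + rng.uniform(0, shuffle_window) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda p: p[0])]

clean = "the quick brown fox jumps over the lazy dog".split()
print(add_noise(clean, seed=0))
```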
1811.09012
|
2963230212
|
In this work we propose a novel approach to remove undesired objects from RGB-D sequences captured with freely moving cameras, which enables static 3D reconstruction. Our method jointly uses existing information from multiple frames and generates new information via inpainting techniques. We use balanced rules to select source frames, a local homography based image warping method for alignment, and a Markov random field (MRF) based approach for combining existing information. For the remaining holes, we employ an exemplar based multi-view inpainting method to deal with the color image and coherently use it as guidance to complete the corresponding depth. Experiments show that our approach is capable of removing the undesired objects and inpainting the holes.
|
Exemplar based methods are popular in single image inpainting for their ability to preserve texture. They can be traced back to the work of Criminisi @cite_38 , in which the masked region is completed by searching for similar patches in the rest of the image and copying them in. The PatchMatch algorithm proposed by Barnes @cite_14 uses random search to quickly find approximate nearest-neighbor matches between patches and is widely employed as the basis of follow-up work because it is several orders of magnitude faster. Kawai @cite_30 extend the energy function to account for brightness changes and the spatial locality of texture in order to deal with unnatural matches. Lee @cite_32 propose using a Laplacian pyramid as an error term in patch synthesis to protect edges. In summary, single image inpainting approaches leverage information from the image itself; in contrast, we use the other frames as additional sources (a simplified exemplar search sketch follows this row).
|
{
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_14",
"@cite_32"
],
"mid": [
"1544005643",
"2105038642",
"1993120651",
"2463205272"
],
"abstract": [
"Image inpainting techniques have been widely investigated to remove undesired objects in an image. Conventionally, missing parts in an image are completed by optimizing the objective function using pattern similarity. However, unnatural textures are easily generated due to two factors: (1) available samples in the image are quite limited, and (2) pattern similarity is one of the required conditions but is not sufficient for reproducing natural textures. In this paper, we propose a new energy function based on the pattern similarity considering brightness changes of sample textures (for (1)) and introducing spatial locality as an additional constraint (for (2)). The effectiveness of the proposed method is successfully demonstrated by qualitative and quantitative evaluation. Furthermore, the evaluation methods used in much inpainting research are discussed.",
"A new algorithm is proposed for removing large objects from digital images. The challenge is to fill in the hole that is left behind in a visually plausible way. In the past, this problem has been addressed by two classes of algorithms: 1) \"texture synthesis\" algorithms for generating large image regions from sample textures and 2) \"inpainting\" techniques for filling in small image gaps. The former has been demonstrated for \"textures\"-repeating two-dimensional patterns with some stochasticity; the latter focus on linear \"structures\" which can be thought of as one-dimensional patterns, such as lines and object contours. This paper presents a novel and efficient algorithm that combines the advantages of these two approaches. We first note that exemplar-based texture synthesis contains the essential process required to replicate both texture and structure; the success of structure propagation, however, is highly dependent on the order in which the filling proceeds. We propose a best-first algorithm in which the confidence in the synthesized pixel values is propagated in a manner similar to the propagation of information in inpainting. The actual color values are computed using exemplar-based synthesis. In this paper, the simultaneous propagation of texture and structure information is achieved by a single , efficient algorithm. Computational efficiency is achieved by a block-based sampling process. A number of examples on real and synthetic images demonstrate the effectiveness of our algorithm in removing large occluding objects, as well as thin scratches. Robustness with respect to the shape of the manually selected target region is also demonstrated. Our results compare favorably to those obtained by existing techniques.",
"This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. This one simple algorithm forms the basis for a variety of tools -- image retargeting, completion and reshuffling -- that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods.",
"Patch-based image synthesis has been enriched with global optimization on the image pyramid. Successively, the gradient-based synthesis has improved structural coherence and details. However, the gradient operator is directional and inconsistent and requires computing multiple operators. It also introduces a significantly heavy computational burden to solve the Poisson equation that often accompanies artifacts in non-integrable gradient fields. In this paper, we propose a patch-based synthesis using a Laplacian pyramid to improve searching correspondence with enhanced awareness of edge structures. Contrary to the gradient operators, the Laplacian pyramid has the advantage of being isotropic in detecting changes to provide more consistent performance in decomposing the base structure and the detailed localization. Furthermore, it does not require heavy computation as it employs approximation by the differences of Gaussians. We examine the potentials of the Laplacian pyramid for enhanced edge-aware correspondence search. We demonstrate the effectiveness of the Laplacian-based approach over the state-of-the-art patchbased image synthesis methods."
]
}
|
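A greatly simplified, hypothetical sketch of exemplar-based completion in the spirit of the row above: for a single target patch overlapping the hole, exhaustively find the most similar fully-known patch by SSD over the known pixels and copy it in. Real systems add fill-order priorities and PatchMatch-style random search; all names and the toy data are assumptions.

```python
# Simplified exemplar-based fill for one patch; not the cited authors' code.
import numpy as np

def fill_one_patch(img, mask, cy, cx, p=4):
    """img: (H, W) float array, mask: (H, W) bool (True = missing),
    (cy, cx): centre of the target patch, p: patch half-size."""
    H, W = img.shape
    tgt = img[cy - p:cy + p + 1, cx - p:cx + p + 1]
    tgt_known = ~mask[cy - p:cy + p + 1, cx - p:cx + p + 1]
    best, best_cost = None, np.inf
    for y in range(p, H - p):
        for x in range(p, W - p):
            if mask[y - p:y + p + 1, x - p:x + p + 1].any():
                continue                                    # source patch must be fully known
            src = img[y - p:y + p + 1, x - p:x + p + 1]
            cost = np.sum((src[tgt_known] - tgt[tgt_known]) ** 2)  # SSD on known pixels
            if cost < best_cost:
                best, best_cost = src, cost
    out = img.copy()
    hole = mask[cy - p:cy + p + 1, cx - p:cx + p + 1]
    patch = tgt.copy()
    patch[hole] = best[hole]                                # copy source into the missing pixels
    out[cy - p:cy + p + 1, cx - p:cx + p + 1] = patch
    return out

# Toy usage: a repeating texture with a small square hole
rng = np.random.default_rng(0)
img = np.tile(rng.random((8, 8)), (8, 8))                   # 64x64 repeating texture
mask = np.zeros_like(img, dtype=bool)
mask[30:34, 30:34] = True
img[mask] = 0.0
filled = fill_one_patch(img, mask, 32, 32)
```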
1811.09012
|
2963230212
|
In this work we propose a novel approach to remove undesired objects from RGB-D sequences captured with freely moving cameras, which enables static 3D reconstruction. Our method jointly uses existing information from multiple frames and generates new information via inpainting techniques. We use balanced rules to select source frames, a local homography based image warping method for alignment, and a Markov random field (MRF) based approach for combining existing information. For the remaining holes, we employ an exemplar based multi-view inpainting method to deal with the color image and coherently use it as guidance to complete the corresponding depth. Experiments show that our approach is capable of removing the undesired objects and inpainting the holes.
|
Video completion deals with color image sequences. Some work requires manual interaction: Klose @cite_16 propose to inpaint a given video by using SfM and manually drawing 3D masks. Other methods either directly copy information from other frames or generate new textures by searching in them. Granados @cite_13 enable free camera movement by using multiple homographies to estimate the geometric registration between frames; the applicability of this approach is limited because it assumes that the missing pixels of the target frame can be fully recovered from the other frames. Newson @cite_5 propose an exemplar based method that searches for similar patches over a group of aligned source frames, which is time consuming because a global energy function must be minimized. Similarly, Ebdelli @cite_34 shrink the search range by considering only a small number of aligned neighboring frames of the target. Unlike these methods, which handle color videos, our approach is designed for RGB-D sequences; we also take advantage of both direct copying and multi-view searching and are therefore more suitable for highly textured scenes (a homography-based fill sketch follows this row).
|
{
"cite_N": [
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_34"
],
"mid": [
"2069237980",
"1990556043",
"1487937094",
"2296495263"
],
"abstract": [
"We propose an automatic video inpainting algorithm which relies on the optimization of a global, patch-based functional. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects, and moving background. Furthermore, we achieve this in an order of magnitude less execution time with respect to the state-of-the-art. We are also able to achieve good quality results on high-definition videos. Finally, we provide specific algorithmic details to make implementation of our algorithm as easy as possible. The resulting algorithm requires no segmentation or manual input other than the definition of the inpainting mask and can deal with a wider variety of situations than is handled by previous work.",
"Many compelling video processing effects can be achieved if per-pixel depth information and 3D camera calibrations are known. However, the success of such methods is highly dependent on the accuracy of this \"scene-space\" information. We present a novel, sampling-based framework for processing video that enables high-quality scene-space video effects in the presence of inevitable errors in depth and camera pose estimation. Instead of trying to improve the explicit 3D scene representation, the key idea of our method is to exploit the high redundancy of approximate scene information that arises due to most scene points being visible multiple times across many frames of video. Based on this observation, we propose a novel pixel gathering and filtering approach. The gathering step is general and collects pixel samples in scene-space, while the filtering step is application-specific and computes a desired output video from the gathered sample sets. Our approach is easily parallelizable and has been implemented on GPU, allowing us to take full advantage of large volumes of video data and facilitating practical runtimes on HD video using a standard desktop computer. Our generic scene-space formulation is able to comprehensively describe a multitude of video processing applications such as denoising, deblurring, super resolution, object removal, computational shutter functions, and other scene-space camera effects. We present results for various casually captured, hand-held, moving, compressed, monocular videos depicting challenging scenes recorded in uncontrolled environments.",
"We propose a method for removing marked dynamic objects from videos captured with a free-moving camera, so long as the objects occlude parts of the scene with a static background. Our approach takes as input a video, a mask marking the object to be removed, and a mask marking the dynamic objects to remain in the scene. To inpaint a frame, we align other candidate frames in which parts of the missing region are visible. Among these candidates, a single source is chosen to fill each pixel so that the final arrangement is color-consistent. Intensity differences between sources are smoothed using gradient domain fusion. Our frame alignment process assumes that the scene can be approximated using piecewise planar geometry: A set of homographies is estimated for each frame pair, and one each is selected for aligning pixels such that the color-discrepancy is minimized and the epipolar constraints are maintained. We provide experimental validation with several real-world video sequences to demonstrate that, unlike in previous work, inpainting videos shot with free-moving cameras does not necessarily require estimation of absolute camera positions and per-frame per-pixel depth maps.",
"In this paper, we propose a new video inpainting method which applies to both static or free-moving camera videos. The method can be used for object removal, error concealment, and background reconstruction applications. To limit the computational time, a frame is inpainted by considering a small number of neighboring pictures which are grouped into a group of pictures (GoP). More specifically, to inpaint a frame, the method starts by aligning all the frames of the GoP. This is achieved by a region-based homography computation method which allows us to strengthen the spatial consistency of aligned frames. Then, from the stack of aligned frames, an energy function based on both spatial and temporal coherency terms is globally minimized. This energy function is efficient enough to provide high quality results even when the number of pictures in the GoP is rather small, e.g. 20 neighboring frames. This drastically reduces the algorithm complexity and makes the approach well suited for near real-time video editing applications as well as for loss concealment applications. Experiments with several challenging video sequences show that the proposed method provides visually pleasing results for object removal, error concealment, and background reconstruction context."
]
}
|
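A hypothetical sketch of the "copy from an aligned neighbouring frame" idea in the row above, using OpenCV: register a source frame to the target with a single homography and paste the warped pixels into the masked region. The cited works use multiple or local homographies and MRF-based blending; the function and variable names here are assumptions.

```python
# Hypothetical single-homography frame alignment and masked copy (OpenCV).
import cv2
import numpy as np

def fill_from_frame(target, source, mask):
    """target, source: HxWx3 uint8 frames; mask: HxW bool (True = pixels to fill)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(source, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = orb.detectAndCompute(cv2.cvtColor(target, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src_pts = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst_pts = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    h, w = target.shape[:2]
    warped = cv2.warpPerspective(source, H, (w, h))   # source aligned to the target view
    out = target.copy()
    out[mask] = warped[mask]                          # copy only the masked pixels
    return out
```

The sketch assumes enough feature matches exist between the two frames; in a real pipeline the estimated homography would also be validated before copying.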
1811.09012
|
2963230212
|
In this work we propose a novel approach to remove undesired objects from RGB-D sequences captured with freely moving cameras, which enables static 3D reconstruction. Our method jointly uses existing information from multiple frames and generates new information via inpainting techniques. We use balanced rules to select source frames, a local homography based image warping method for alignment, and a Markov random field (MRF) based approach for combining existing information. For the remaining holes, we employ an exemplar based multi-view inpainting method to deal with the color image and coherently use it as guidance to complete the corresponding depth. Experiments show that our approach is capable of removing the undesired objects and inpainting the holes.
|
Multi-view inpainting techniques leverage information from multiple source frames. Hays and Efros @cite_21 gather photos from the Internet into a huge database to help with image completion. Similarly, Whyte @cite_24 cover an undesired region of the query image with Internet photographs of the same scene, using multiple homographies and photometric registration to achieve geometric registration between the query image and the source images; a Markov random field optimization @cite_37 is then employed to select the optimal proposals (a simplified per-pixel selection sketch follows this row). These methods are rather unstable, since the masked objects are easily reintroduced because there is no mechanism to filter the source information.
|
{
"cite_N": [
"@cite_24",
"@cite_37",
"@cite_21"
],
"mid": [
"1997903019",
"2001933992",
"2171011251"
],
"abstract": [
"Often when we review our holiday photos, we notice things we wish we could have avoided, such as vehicles, construction work, or simply other tourists. We cannot go back and retake the photo, so what can we do if we want to remove these things from our photos? We want to replace these sections of the image in a convincing way, preferably with what would really have been seen there without the occlusions. Previous work on this problem, often referred to as “inpainting”, are mainly applicable to small image regions, and rely largely on models of the local behaviour of natural images, e.g. [3, 5]. Recently, replacing large occlusions in photographs has been approached using images from the Internet [2, 7] or by combining several images captured at approximately the same time [1, 9]. In this paper we leverage recent advances in viewpoint invariant image search [8] to find other images of the same scene on the Internet. Beginning with a query image containing a target region to be replaced, we first use an online image search engine to retrieve images of the same scene, and take these to be a set of oracles. Since these images may have significant variations in viewpoint and lighting, we register each oracle to the query image using multiple homographies and a simple global photometric correction. We then use each oracle to propose a solution, by copying image data into the target region using Poisson blending. Finally, we use a Markov random field (MRF) formulation to combine the proposals into a single, occlusion-free result. Figure 1 shows the main stages of our system, and compares our result to those of two other methods.",
"We describe an interactive, computer-assisted framework for combining parts of a set of photographs into a single composite picture, a process we call \"digital photomontage.\" Our framework makes use of two techniques primarily: graph-cut optimization, to choose good seams within the constituent images so that they can be combined as seamlessly as possible; and gradient-domain fusion, a process based on Poisson equations, to further reduce any remaining visible artifacts in the composite. Also central to the framework is a suite of interactive tools that allow the user to specify a variety of high-level image objectives, either globally across the image, or locally through a painting-style interface. Image objectives are applied independently at each pixel location and generally involve a function of the pixel values (such as \"maximum contrast\") drawn from that same location in the set of source images. Typically, a user applies a series of image objectives iteratively in order to create a finished composite. The power of this framework lies in its generality; we show how it can be used for a wide variety of applications, including \"selective composites\" (for instance, group photos in which everyone looks their best), relighting, extended depth of field, panoramic stitching, clean-plate production, stroboscopic visualization of movement, and time-lapse mosaics.",
"What can you do with a million images? In this paper we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data-driven, requiring no annotations or labelling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of results for each input image and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches."
]
}
|
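A greatly simplified, hypothetical stand-in for the proposal-selection step in the row above: per missing pixel, choose the warped proposal closest to the per-pixel median (a unary consensus cost only). The cited works add a pairwise smoothness term and solve the labelling with MRF/graph-cut optimisation; names and toy data are assumptions.

```python
# Simplified per-pixel proposal selection; a stand-in for MRF-based selection.
import numpy as np

def select_proposals(proposals):
    stack = np.stack(proposals)                         # (K, H, W) candidate fills
    median = np.median(stack, axis=0)                   # per-pixel consensus value
    labels = np.argmin(np.abs(stack - median), axis=0)  # chosen source index per pixel
    filled = np.take_along_axis(stack, labels[None], axis=0)[0]
    return filled, labels

# Toy usage: fill a hole in a gradient image from three noisy warped proposals
rng = np.random.default_rng(1)
target = np.linspace(0, 1, 32 * 32).reshape(32, 32)
mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 10:20] = True
proposals = [target + rng.normal(0, 0.05, target.shape) for _ in range(3)]
filled, labels = select_proposals(proposals)
out = target.copy()
out[mask] = filled[mask]                                # paste only the missing pixels
```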
1811.09012
|
2963230212
|
In this work we propose a novel approach to remove undesired objects from RGB-D sequences captured with freely moving cameras, which enables static 3D reconstruction. Our method jointly uses existing information from multiple frames and generates new information via inpainting techniques. We use balanced rules to select source frames, a local homography based image warping method for alignment, and a Markov random field (MRF) based approach for combining existing information. For the remaining holes, we employ an exemplar based multi-view inpainting method to deal with the color image and coherently use it as guidance to complete the corresponding depth. Experiments show that our approach is capable of removing the undesired objects and inpainting the holes.
|
Recent research has begun to show interest in using the geometric connections among different views. Baek @cite_22 present a multi-view method that completes a user-defined region by jointly inpainting the color and depth images, taking advantage of SfM to achieve geometric registration among the views. Similarly, Thonat @cite_0 enable free-viewpoint image based rendering with information reprojected from neighboring views. A refined method is proposed in follow-up work @cite_10 that performs inpainting on intermediate, local planes in order to preserve perspective and ensure multi-view coherence. In contrast, we use local homography to obtain pixel-wise correspondences, which effectively avoids the information loss caused by SfM. These methods also assume that the input images are of high quality, whereas our approach targets more common scenarios, such as sequences taken with moving cameras that therefore suffer from motion blur.
|
{
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_22"
],
"mid": [
"",
"2794656430",
"2433236772"
],
"abstract": [
"",
"Image-Based Rendering (IBR) allows high-fidelity free-viewpoint navigation using only a set of photographs and 3D reconstruction as input. It is often necessary or convenient to remove objects from the captured scenes, allowing a form of scene editing for IBR. This requires multi-view inpainting of the input images. Previous methods suffer from several major limitations: they lack true multi-view coherence, resulting in artifacts such as blur, they do not preserve perspective during inpainting, provide inaccurate depth completion and can only handle scenes with a few tens of images. Our approach addresses these limitations by introducing a new multi-view method that performs inpainting in intermediate, locally common planes. Use of these planes results in correct perspective and multi-view coherence of inpainting results. For efficient treatment of large scenes, we present a fast planar region extraction method operating on small image clusters. We adapt the resolution of inpainting to that required in each input image of the multi-view dataset, and carefully handle image resampling between the input images and rectified planes. We show results on large indoors and outdoors environments.",
"We present a multiview image completion method that provides geometric consistency among different views by propagating space structures. Since a user specifies the region to be completed in one of multiview photographs casually taken in a scene, the proposed method enables us to complete the set of photographs with geometric consistency by creating or removing structures on the specified region. The proposed method incorporates photographs to estimate dense depth maps. We initially complete color as well as depth from a view, and then facilitate two stages of structure propagation and structure-guided completion. Structure propagation optimizes space topology in the scene across photographs, while structure-guide completion enhances, and completes local image structure of both depth and color in multiple photographs with structural coherence by searching nearest neighbor fields in relevant views. We demonstrate the effectiveness of the proposed method in completing multiview images."
]
}
|
1811.09128
|
2900883569
|
Recognizing driver behaviors is becoming vital for in-vehicle systems that seek to reduce the incidence of car accidents rooted in cognitive distraction. In this paper, we harness the exceptional feature extraction abilities of deep learning and propose a dedicated Interwoven Deep Convolutional Neural Network (InterCNN) architecture to tackle the accurate classification of driver behaviors in real-time. The proposed solution exploits information from multi-stream inputs, i.e., in-vehicle cameras with different fields of view and optical flows computed based on recorded images, and merges the abstract features it extracts through multiple fusion layers. This builds a tight ensembling system, which significantly improves the robustness of the model. We further introduce a temporal voting scheme based on historical inference instances, in order to enhance accuracy. Experiments conducted with a real world dataset that we collect in a mock-up car environment demonstrate that the proposed InterCNN with MobileNet convolutional blocks can classify 9 different behaviors with 73.97% accuracy, and 5 aggregated behaviors with 81.66% accuracy. Our architecture is highly computationally efficient, as it performs inferences within 15 ms, which satisfies the real-time constraints of intelligent cars. In addition, our InterCNN is robust to lossy input, as the classification remains accurate when two input streams are occluded.
|
The majority of driver behavior classification systems are based on in-vehicle vision instruments (i.e., cameras or eye-tracking devices), which constantly monitor the movements of the driver @cite_10 . The core of such systems is therefore a computer vision problem, whose objective is to classify actions performed by drivers, using sequences of images acquired in real-time. Existing research can be categorized into two main classes: non-deep-learning approaches and deep learning approaches.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2540183746"
],
"abstract": [
"Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be completed with more visual cues (e.g., hands or body information) or even, distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper shows a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed."
]
}
|
1811.09128
|
2900883569
|
Recognizing driver behaviors is becoming vital for in-vehicle systems that seek to reduce the incidence of car accidents rooted in cognitive distraction. In this paper, we harness the exceptional feature extraction abilities of deep learning and propose a dedicated Interwoven Deep Convolutional Neural Network (InterCNN) architecture to tackle the accurate classification of driver behaviors in real-time. The proposed solution exploits information from multi-stream inputs, i.e., in-vehicle cameras with different fields of view and optical flows computed based on recorded images, and merges the abstract features it extracts through multiple fusion layers. This builds a tight ensembling system, which significantly improves the robustness of the model. We further introduce a temporal voting scheme based on historical inference instances, in order to enhance accuracy. Experiments conducted with a real world dataset that we collect in a mock-up car environment demonstrate that the proposed InterCNN with MobileNet convolutional blocks can classify 9 different behaviors with 73.97% accuracy, and 5 aggregated behaviors with 81.66% accuracy. Our architecture is highly computationally efficient, as it performs inferences within 15 ms, which satisfies the real-time constraints of intelligent cars. In addition, our InterCNN is robust to lossy input, as the classification remains accurate when two input streams are occluded.
|
In @cite_36 , Liu employ Laplacian Support Vector Machine (SVM) and extreme learning machine techniques to detect driver distraction, using labelled data that captures vehicle dynamics and the driver's eye and head movements. Experiments show that this semi-supervised approach can achieve an accuracy of up to 97.2%. Ragab compare the prediction accuracy of different machine learning methods for driving distraction detection @cite_7 , showing that Random Forests perform best and require only 0.05 s per inference. Liao consider driver distraction in two different scenarios, i.e., stop-controlled intersections and speed-limited highways @cite_25 . They design an SVM classifier with Recursive Feature Elimination (RFE) to detect driving distraction (an RFE sketch follows this row). The evaluation results suggest that fusing eye movement and driving performance information improves classification accuracy in the stop-controlled intersection setting.
|
{
"cite_N": [
"@cite_36",
"@cite_25",
"@cite_7"
],
"mid": [
"2308230922",
"2343672207",
"208765980"
],
"abstract": [
"Real-time driver distraction detection is the core to many distraction countermeasures and fundamental for constructing a driver-centered driver assistance system. While data-driven methods demonstrate promising detection performance, a particular challenge is how to reduce the considerable cost for collecting labeled data. This paper explored semi-supervised methods for driver distraction detection in real driving conditions to alleviate the cost of labeling training data. Laplacian support vector machine and semi-supervised extreme learning machine were evaluated using eye and head movements to classify two driver states: attentive and cognitively distracted. With the additional unlabeled data, the semi-supervised learning methods improved the detection performance ( @math -mean) by 0.0245, on average, over all subjects, as compared with the traditional supervised methods. As unlabeled training data can be collected from drivers' naturalistic driving records with little extra resource, semi-supervised methods, which utilize both labeled and unlabeled data, can enhance the efficiency of model development in terms of time and cost.",
"Driver distraction has been identified as one major cause of unsafe driving. The existing studies on cognitive distraction detection mainly focused on high-speed driving situations, but less on low-speed traffic in urban driving. This paper presents a method for the detection of driver cognitive distraction at stop-controlled intersections and compares its feature subsets and classification accuracy with that on a speed-limited highway. In the simulator study, 27 subjects were recruited to participate. Driver cognitive distraction is induced by the clock task that taxes visuospatial working memory. The support vector machine (SVM) recursive feature elimination algorithm is used to extract an optimal feature subset out of features constructed from driving performance and eye movement. After feature extraction, the SVM classifier is trained and cross-validated within subjects. On average, the classifier based on the fusion of driving performance and eye movement yields the best correct rate and F-measure ( @math for stop-controlled intersections and @math for a speed-limited highway) among four types of the SVM model based on different candidate features. The comparisons of extracted optimal feature subsets and the SVM performance between two typical driving scenarios are presented.",
"Driver distraction and fatigue are considered the main cause of most car accidents today. This paper compares the performance of Random Forest and a number of other well-known classifiers for driver distraction detection and recognition problems. A non-intrusive system, which consists of hardware components for capturing the driver’s driving sessions on a car simulator, using infrared and Kinect cameras, combined with a software component for monitoring some visual behaviors that reflect a driver’s level of distraction, was used in this work."
]
}
|
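A hypothetical scikit-learn sketch of the SVM-with-RFE classifier and the Random Forest baseline discussed in the row above, run on synthetic data as a stand-in for the real driving features (eye movement, driving performance measures) used in the cited studies.

```python
# Hypothetical SVM+RFE vs. Random Forest comparison on synthetic "driving" features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-in for extracted driver features: 40 features, 10 of them informative
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           random_state=0)

# Linear SVM wrapped in Recursive Feature Elimination (keeps 10 features)
svm_rfe = RFE(estimator=SVC(kernel="linear"), n_features_to_select=10)
print("SVM+RFE accuracy:", cross_val_score(svm_rfe, X, y, cv=5).mean())

# Random Forest baseline, as in the comparison study
forest = RandomForestClassifier(n_estimators=200, random_state=0)
print("Random Forest accuracy:", cross_val_score(forest, X, y, cv=5).mean())
```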
1811.08965
|
2901139370
|
Whilst recent face-recognition (FR) techniques have made significant progress on recognising constrained high-resolution web images, the same cannot be said of natively unconstrained low-resolution images at large scales. In this work, we examine systematically this under-studied FR problem, and introduce a novel Complement Super-Resolution and Identity (CSRI) joint deep learning method with a unified end-to-end network architecture. We further construct a new large-scale dataset TinyFace of native unconstrained low-resolution face images from selected public datasets, because no benchmark of this nature exists in the literature. With extensive experiments we show that there is a significant gap between the reported FR performances on popular benchmarks and the results on TinyFace, as well as the advantages of the proposed CSRI over a variety of state-of-the-art FR and super-resolution deep models in solving this largely ignored FR scenario. The TinyFace dataset is released publicly at: this https URL.
|
FR has achieved significant progress, moving from hand-crafted feature based methods @cite_8 @cite_3 @cite_18 to deep learning models @cite_23 @cite_16 @cite_37 @cite_46 @cite_4 . One main driving force behind recent advances is the availability of large-sized FR benchmarks and datasets. Earlier FR benchmarks were small, consisting of a limited number of identities and images @cite_3 @cite_21 @cite_7 @cite_22 @cite_6 @cite_12 . Since 2007, Labeled Faces in the Wild (LFW) @cite_43 has shifted the FR community towards recognising more unconstrained celebrity faces at larger scales. Since then, a number of large FR training datasets and test evaluation benchmarks have been introduced, such as VGGFace @cite_46 , CASIA @cite_25 , CelebA @cite_44 , MS-Celeb-1M @cite_20 , MegaFace @cite_23 , and MegaFace2 @cite_34 . Benefiting from large-scale training data and deep learning techniques, the best FR model has achieved 99.087% on the current largest 1:N face identification evaluation (with 1,000,000 distractors), MegaFace ( http://megaface.cs.washington.edu/results/facescrub.html ).
|
{
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_44",
"@cite_43",
"@cite_34",
"@cite_23",
"@cite_46",
"@cite_16",
"@cite_20",
"@cite_25",
"@cite_12"
],
"mid": [
"2114588272",
"2963466847",
"2520774990",
"2133295669",
"",
"2163808566",
"2123921160",
"",
"2103560185",
"1834627138",
"1782590233",
"2610739092",
"2963671154",
"2325939864",
"1949778830",
"2515770085",
"1509966554",
"2155759509"
],
"abstract": [
"Making a high-dimensional (e.g., 100K-dim) feature for face recognition seems not a good idea because it will bring difficulties on consequent training, computation, and storage. This prevents further exploration of the use of a high dimensional feature. In this paper, we study the performance of a high dimensional feature. We first empirically show that high dimensionality is critical to high performance. A 100K-dim feature, based on a single-type Local Binary Pattern (LBP) descriptor, can achieve significant improvements over both its low-dimensional version and the state-of-the-art. We also make the high-dimensional feature practical. With our proposed sparse projection method, named rotated sparse regression, both computation and model storage can be reduced by over 100 times without sacrificing accuracy quality.",
"This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.",
"Convolutional neural networks (CNNs) have been widely used in computer vision community, significantly improving the state-of-the-art. In most of the available CNNs, the softmax loss function is used as the supervision signal to train the deep model. In order to enhance the discriminative power of the deeply learned features, this paper proposes a new supervision signal, called center loss, for face recognition task. Specifically, the center loss simultaneously learns a center for deep features of each class and penalizes the distances between the deep features and their corresponding class centers. More importantly, we prove that the proposed center loss function is trainable and easy to optimize in the CNNs. With the joint supervision of softmax loss and center loss, we can train a robust CNNs to obtain the deep features with the two key learning objectives, inter-class dispension and intra-class compactness as much as possible, which are very essential to face recognition. It is encouraging to see that our CNNs (with such joint supervision) achieve the state-of-the-art accuracy on several important face recognition benchmarks, Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge. Especially, our new approach achieves the best results on MegaFace (the largest public domain face benchmark) under the protocol of small training set (contains under 500000 images and under 20000 persons), significantly improving the previous results and setting new state-of-the-art for both face recognition and face verification tasks.",
"This paper describes the large-scale experimental results from the Face Recognition Vendor Test (FRVT) 2006 and the Iris Challenge Evaluation (ICE) 2006. The FRVT 2006 looked at recognition from high-resolution still frontal face images and 3D face images, and measured performance for still frontal face images taken under controlled and uncontrolled illumination. The ICE 2006 evaluation reported verification performance for both left and right irises. The images in the ICE 2006 intentionally represent a broader range of quality than the ICE 2006 sensor would normally acquire. This includes images that did not pass the quality control software embedded in the sensor. The FRVT 2006 results from controlled still and 3D images document at least an order-of-magnitude improvement in recognition performance over the FRVT 2002. The FRVT 2006 and the ICE 2006 compared recognition performance from high-resolution still frontal face images, 3D face images, and the single-iris images. On the FRVT 2006 and the ICE 2006 data sets, recognition performance was comparable for high-resolution frontal face, 3D face, and the iris images. In an experiment comparing human and algorithms on matching face identity across changes in illumination on frontal face images, the best performing algorithms were more accurate than humans on unfamiliar faces.",
"",
"This paper presents a novel and efficient facial image representation based on local binary pattern (LBP) texture features. The face image is divided into several regions from which the LBP feature distributions are extracted and concatenated into an enhanced feature vector to be used as a face descriptor. The performance of the proposed method is assessed in the face recognition problem under different challenges. Other applications and several extensions are also discussed",
"We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions.",
"",
"Recent work on face identification using continuous density Hidden Markov Models (HMMs) has shown that stochastic modelling can be used successfully to encode feature information. When frontal images of faces are sampled using top-bottom scanning, there is a natural order in which the features appear and this can be conveniently modelled using a top-bottom HMM. However, a top-bottom HMM is characterised by different parameters, the choice of which has so far been based on subjective intuition. This paper presents a set of experimental results in which various HMM parameterisations are analysed. >",
"Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts.",
"Most face databases have been created under controlled conditions to facilitate the study of specific parameters on the face recognition problem. These parameters include such variables as position, pose, lighting, background, camera quality, and gender. While there are many applications for face recognition technology in which one can control the parameters of image acquisition, there are also many applications in which the practitioner has little or no control over such parameters. This database, Labeled Faces in the Wild, is provided as an aid in studying the latter, unconstrained, recognition problem. The database contains labeled face photographs spanning the range of conditions typically encountered in everyday life. The database exhibits “natural” variability in factors such as pose, lighting, race, accessories, occlusions, and background. In addition to describing the details of the database, we provide specific experimental paradigms for which the database is suitable. This is done in an effort to make research performed with the database as consistent and comparable as possible. We provide baseline results, including results of a state of the art face recognition system combined with a face alignment system. To facilitate experimentation on the database, we provide several parallel databases, including an aligned version.",
"Face recognition has the perception of a solved problem, however when tested at the million-scale exhibits dramatic variation in accuracies across the different algorithms. Are the algorithms very different? Is access to good big training data their secret weapon? Where should face recognition improve? To address those questions, we created a benchmark, MF2, that requires all algorithms to be trained on same data, and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7M photos created with the goal to level playing field for large scale face recognition. We contrast our results with findings from the other two large-scale benchmarks MegaFace Challenge and MS-Celebs-1M where groups were allowed to train on any private public big small set. Some key discoveries: 1) algorithms, trained on MF2, were able to achieve state of the art and comparable results to algorithms trained on massive private sets, 2) some outperformed themselves once trained on MF2, 3) invariance to aging suffers from low accuracies as in MegaFace, identifying the need for larger age variations possibly within identities or adjustment of algorithms in future testings.",
"Recent face recognition experiments on a major benchmark (LFW [15]) show stunning performance–a number of algorithms achieve near to perfect score, surpassing human recognition rates. In this paper, we advocate evaluations at the million scale (LFW includes only 13K photos of 5K people). To this end, we have assembled the MegaFace dataset and created the first MegaFace challenge. Our dataset includes One Million photos that capture more than 690K different individuals. The challenge evaluates performance of algorithms with increasing numbers of \"distractors\" (going from 10 to 1M) in the gallery set. We present both identification and verification performance, evaluate performance with respect to pose and a persons age, and compare as a function of training data size (#photos and #people). We report results of state of the art and baseline algorithms. The MegaFace dataset, baseline code, and evaluation scripts, are all publicly released for further experimentations1.",
"The goal of this paper is face recognition – from either a single photograph or from a set of faces tracked in a video. Recent progress in this area has been due to two factors: (i) end to end learning for the task using a convolutional neural network (CNN), and (ii) the availability of very large scale training datasets. We make two contributions: first, we show how a very large scale dataset (2.6M images, over 2.6K people) can be assembled by a combination of automation and human in the loop, and discuss the trade off between data purity and time; second, we traverse through the complexities of deep network training and face recognition to present methods and procedures to achieve comparable state of the art results on the standard LFW and YTF face benchmarks.",
"Rapid progress in unconstrained face recognition has resulted in a saturation in recognition accuracy for current benchmark datasets. While important for early progress, a chief limitation in most benchmark datasets is the use of a commodity face detector to select face imagery. The implication of this strategy is restricted variations in face pose and other confounding factors. This paper introduces the IARPA Janus Benchmark A (IJB-A), a publicly available media in the wild dataset containing 500 subjects with manually localized face images. Key features of the IJB-A dataset are: (i) full pose variation, (ii) joint use for face recognition and face detection benchmarking, (iii) a mix of images and videos, (iv) wider geographic variation of subjects, (v) protocols supporting both open-set identification (1∶N search) and verification (1∶1 comparison), (vi) an optional protocol that allows modeling of gallery subjects, and (vii) ground truth eye and nose locations. The dataset has been developed using 1,501,267 million crowd sourced annotations. Baseline accuracies for both face detection and face recognition from commercial and open source algorithms demonstrate the challenge offered by this new unconstrained benchmark.",
"In this paper, we design a benchmark task and provide the associated datasets for recognizing face images and link them to corresponding entity keys in a knowledge base. More specifically, we propose a benchmark task to recognize one million celebrities from their face images, by using all the possibly collected face images of this individual on the web as training data. The rich information provided by the knowledge base helps to conduct disambiguation and improve the recognition accuracy, and contributes to various real-world applications, such as image captioning and news video analysis. Associated with this task, we design and provide concrete measurement set, evaluation protocol, as well as training data. We also present in details our experiment setup and report promising baseline results. Our benchmark task could lead to one of the largest classification problems in computer vision. To the best of our knowledge, our training dataset, which contains 10M images in version 1, is the largest publicly available one in the world.",
"Pushing by big data and deep convolutional neural network (CNN), the performance of face recognition is becoming comparable to human. Using private large scale training datasets, several groups achieve very high performance on LFW, i.e., 97 to 99 . While there are many open source implementations of CNN, none of large scale face dataset is publicly available. The current situation in the field of face recognition is that data is more important than algorithm. To solve this problem, this paper proposes a semi-automatical way to collect face images from Internet and builds a large scale dataset containing about 10,000 subjects and 500,000 images, called CASIAWebFace. Based on the database, we use a 11-layer CNN to learn discriminative representation and obtain state-of-theart accuracy on LFW and YTF. The publication of CASIAWebFace will attract more research groups entering this field and accelerate the development of face recognition in the wild.",
"Between October 2000 and December 2000, we collected a database of over 40,000 facial images of 68 people. Using the CMU (Carnegie Mellon University) 3D Room, we imaged each person across 13 different poses, under 43 different illumination conditions, and with four different expressions. We call this database the CMU Pose, Illumination and Expression (PIE) database. In this paper, we describe the imaging hardware, the collection procedure, the organization of the database, several potential uses of the database, and how to obtain the database."
]
}
|
1811.08965
|
2901139370
|
Whilst recent face-recognition (FR) techniques have made significant progress on recognising constrained high-resolution web images, the same cannot be said of natively unconstrained low-resolution images at large scales. In this work, we systematically examine this under-studied FR problem, and introduce a novel Complement Super-Resolution and Identity (CSRI) joint deep learning method with a unified end-to-end network architecture. We further construct a new large-scale dataset, TinyFace, of native unconstrained low-resolution face images from selected public datasets, because no benchmark of this nature exists in the literature. With extensive experiments we show that there is a significant gap between the reported FR performances on popular benchmarks and the results on TinyFace, and demonstrate the advantages of the proposed CSRI over a variety of state-of-the-art FR and super-resolution deep models on this largely ignored FR scenario. The TinyFace dataset is released publicly at: this https URL.
|
Despite great strides in FR on HR web images, little attention has been paid to native LR face images. We found that state-of-the-art deep FR models trained on HR constrained face images do not generalise well to natively unconstrained LR face images (Table ), yet generalise much better to synthetic LR data (Table ). In this study, the newly created TinyFace benchmark provides for the first time a large-scale native LRFR test for validating current deep learning FR models. TinyFace images were captured from real-world web social-media data. This complements the QMUL-SurvFace benchmark that is characterised by poor-quality surveillance facial imagery captured from real-life security cameras deployed in open public spaces @cite_32 .
|
{
"cite_N": [
"@cite_32"
],
"mid": [
"2798814955"
],
"abstract": [
"Face recognition (FR) is one of the most extensively investigated problems in computer vision. Significant progress in FR has been made due to the recent introduction of the larger scale FR challenges, particularly with constrained social media web images, e.g. high-resolution photos of celebrity faces taken by professional photo-journalists. However, the more challenging FR in unconstrained and low-resolution surveillance images remains largely under-studied. To facilitate more studies on developing FR models that are effective and robust for low-resolution surveillance facial images, we introduce a new Surveillance Face Recognition Challenge, which we call the QMUL-SurvFace benchmark. This new benchmark is the largest and more importantly the only true surveillance FR benchmark to our best knowledge, where low-resolution images are not synthesised by artificial down-sampling of native high-resolution images. This challenge contains 463,507 face images of 15,573 distinct identities captured in real-world uncooperative surveillance scenes over wide space and time. As a consequence, it presents an extremely challenging FR benchmark. We benchmark the FR performance on this challenge using five representative deep learning face recognition models, in comparison to existing benchmarks. We show that the current state of the arts are still far from being satisfactory to tackle the under-investigated surveillance FR problem in practical forensic scenarios. Face recognition is generally more difficult in an open-set setting which is typical for surveillance scenarios, owing to a large number of non-target people (distractors) appearing open spaced scenes. This is evidently so that on the new Surveillance FR Challenge, the top-performing CentreFace deep learning FR model on the MegaFace benchmark can now only achieve 13.2 success rate (at Rank-20) at a 10 false alarm rate."
]
}
|
1811.08965
|
2901139370
|
Whilst recent face-recognition (FR) techniques have made significant progress on recognising constrained high-resolution web images, the same cannot be said of natively unconstrained low-resolution images at large scales. In this work, we systematically examine this under-studied FR problem, and introduce a novel Complement Super-Resolution and Identity (CSRI) joint deep learning method with a unified end-to-end network architecture. We further construct a new large-scale dataset, TinyFace, of native unconstrained low-resolution face images from selected public datasets, because no benchmark of this nature exists in the literature. With extensive experiments we show that there is a significant gap between the reported FR performances on popular benchmarks and the results on TinyFace, and demonstrate the advantages of the proposed CSRI over a variety of state-of-the-art FR and super-resolution deep models on this largely ignored FR scenario. The TinyFace dataset is released publicly at: this https URL.
|
Existing LRFR methods can be summarised into two approaches: (1) image super-resolution @cite_15 @cite_41 @cite_42 @cite_10 @cite_2 , and (2) resolution-invariant learning @cite_30 @cite_27 @cite_5 @cite_17 @cite_36 @cite_31 . The first approach exploits two optimisation criteria in model formulation: pixel-level visual fidelity and face-identity discrimination @cite_41 @cite_10 @cite_2 @cite_38 . The second approach instead aims to learn resolution-invariant features @cite_30 @cite_5 @cite_36 or a cross-resolution structure transformation @cite_17 @cite_31 @cite_47 @cite_13 . All the existing LRFR methods share a number of limitations: (a) considering only small gallery search pools (small scale) and/or artificially down-sampled LR face images; (b) mostly relying on hand-crafted features, or on deep models without end-to-end optimisation; (c) assuming the availability of labelled LR/HR image pairs for model training, which are unavailable in practice with native LR face imagery.
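To make the first approach concrete, a minimal sketch of such a joint objective is given below in PyTorch-style Python. This is an illustrative reconstruction, not the CSRI model or any specific cited method: sr_net, id_net and lambda_id are hypothetical placeholders, and the sketch assumes the labelled LR/HR training pairs noted in limitation (c).

import torch
import torch.nn as nn

class JointSRIdentityLoss(nn.Module):
    """Hypothetical joint objective: pixel-level SR fidelity plus identity discrimination."""
    def __init__(self, sr_net: nn.Module, id_net: nn.Module, lambda_id: float = 1.0):
        super().__init__()
        self.sr_net = sr_net                    # maps an LR face to a super-resolved face
        self.id_net = id_net                    # maps a face image to identity logits
        self.pixel_loss = nn.MSELoss()          # visual-fidelity criterion
        self.id_loss = nn.CrossEntropyLoss()    # identity-discrimination criterion
        self.lambda_id = lambda_id

    def forward(self, lr_faces, hr_faces, identity_labels):
        sr_faces = self.sr_net(lr_faces)
        # Pixel-level fidelity: the super-resolved output should match the HR target.
        l_pix = self.pixel_loss(sr_faces, hr_faces)
        # Identity discrimination: the super-resolved face should remain recognisable.
        l_id = self.id_loss(self.id_net(sr_faces), identity_labels)
        return l_pix + self.lambda_id * l_id

Resolution-invariant methods differ mainly in replacing the pixel-level term with a feature- or structure-level alignment between LR and HR representations.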
|
{
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_31",
"@cite_47",
"@cite_13",
"@cite_41",
"@cite_36",
"@cite_42",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_17"
],
"mid": [
"2167260591",
"1563139347",
"2109002610",
"1998594584",
"2071399526",
"2126653386",
"1964246740",
"2114380981",
"2082714615",
"2054515210",
"2112724657",
"1988974846",
"2963102887",
""
],
"abstract": [
"In this paper, recognition of blurred faces using the recently introduced Local Phase Quantization (LPQ) operator is proposed. LPQ is based on quantizing the Fourier transform phase in local neighborhoods. The phase can be shown to be a blur invariant property under certain commonly fulfilled conditions. In face image analysis, histograms of LPQ labels computed within local regions are used as a face descriptor similarly to the widely used Local Binary Pattern (LBP) methodology for face image description. The experimental results on CMU PIE and FRGC 1.0.4 datasets show that the LPQ descriptor is highly tolerant to blur but still very descriptive outperforming LBP both with blurred and sharp images.",
"In video surveillance, the faces of interest are often of small size. Image resolution is an important factor affecting face recognition by human and computer. In this paper, we study the face recognition performance using different image resolutions. For automatic face recognition, a low resolution bound is found through experiments. We use an eigentransformation based hallucination method to improve the image resolution. The hallucinated face images are not only much helpful for recognition by human, but also make the automatic recognition procedure easier, since they emphasize the face difference by adding some high frequency details.",
"Recognition of low resolution face images is a challenging problem in many practical face recognition systems. Methods have been proposed in the face recognition literature for the problem when the probe is of low resolution, and a high resolution gallery is available for recognition. These methods modify the probe image such that the resultant image provides better discrimination. We formulate the problem differently by leveraging the information available in the high resolution gallery image and propose a generative approach for classifying the probe image. An important feature of our algorithm is that it can handle resolution changes along with illumination variations. The effectiveness of the proposed method is demonstrated using standard datasets and a challenging outdoor face dataset. It is shown that our method is efficient and can perform significantly better than many competitive low resolution face recognition algorithms.",
"Practical video scene and face recognition systems are sometimes confronted with low-resolution (LR) images. The faces may be very small even if the video is clear, thus it is difficult to directly measure the similarity between the faces and the high-resolution (HR) training samples. Face recognition based on traditional super-resolution (SR) methods usually have limited performance because the target of SR may not be consistent with that of classification, and time-consuming SR algorithms are not suitable for real-time applications. In this paper, a new feature extraction method called coupled kernel embedding (CKE) is proposed for LR face recognition without any SR preprocessing. In this method, the final kernel matrix is constructed by concatenating two individual kernel matrices in the diagonal direction, and the (semi)positively definite properties are preserved for optimization. CKE addresses the problem of comparing multimodal data that are difficult for conventional methods in practice due to the lack of an efficient similarity measure. Particularly, different kernel types (e.g., linear, Gaussian, polynomial) can be integrated into a uniform optimization objective, which cannot be achieved by simple linear methods. CKE solves this problem by minimizing the dissimilarities captured by their kernel Gram matrices in the LR and HR spaces. In the implementation, the nonlinear objective function is minimized by a generalized eigenvalue decomposition. Experiments on benchmark and real databases show that our CKE method indeed improves the recognition performance.",
"While existing face recognition systems based on local features are robust to issues such as misalignment, they can exhibit accuracy degradation when comparing images of differing resolutions. This is common in surveillance environments where a gallery of high resolution mugshots is compared to low resolution CCTV probe images, or where the size of a given image is not a reliable indicator of the underlying resolution (e.g. poor optics). To alleviate this degradation, we propose a compensation framework which dynamically chooses the most appropriate face recognition system for a given pair of image resolutions. This framework applies a novel resolution detection method which does not rely on the size of the input images, but instead exploits the sensitivity of local features to resolution using a probabilistic multi-region histogram approach. Experiments on a resolution-modified version of the \"Labeled Faces in the Wild\" dataset show that the proposed resolution detector frontend obtains a 99 average accuracy in selecting the most appropriate face recognition system, resulting in higher overall face discrimination accuracy (across several resolutions) compared to the individual baseline face recognition systems.",
"Face images that are captured by surveillance cameras usually have a very low resolution, which significantly limits the performance of face recognition systems. In the past, super-resolution techniques have been proposed to increase the resolution by combining information from multiple images. These techniques use super-resolution as a preprocessing step to obtain a high-resolution image that is later passed to a face recognition system. Considering that most state-of-the-art face recognition systems use an initial dimensionality reduction method, we propose to transfer the super-resolution reconstruction from pixel domain to a lower dimensional face space. Such an approach has the advantage of a significant decrease in the computational complexity of the super-resolution reconstruction. The reconstruction algorithm no longer tries to obtain a visually improved high-quality image, but instead constructs the information required by the recognition system directly in the low dimensional domain without any unnecessary overhead. In addition, we show that face-space super-resolution is more robust to registration errors and noise than pixel-domain super-resolution because of the addition of model-based constraints.",
"Face recognition from low-resolution images is a common yet challenging case in real applications. Since the high-frequency information is lost in low-resolution images, it is necessary to explore robust information in the low frequency domain. In this paper, we propose an effective local frequency descriptor (LFD) for low resolution face recognition, by building upon the ideas behind local phase quantization (LPQ) and exploring both blur-invariant magnitude and phase information in the low frequency domain. The proposed descriptor is more descriptive than LPQ with more comprehensive information. In addition, a statistical uniform pattern definition method is introduced to improve the efficiency of the proposed descriptor. Experimental results on FERET and a real video database show that LFD is effective and robust for low-resolution face recognition.",
"Face recognition degrades when faces are of very low resolution since many details about the difference between one person and another can only be captured in images of sufficient resolution. In this work, we propose a new procedure for recognition of low-resolution faces, when there is a high-resolution training set available. Most previous super-resolution approaches are aimed at reconstruction, with recognition only as an after-thought. In contrast, in the proposed method, face features, as they would be extracted for a face recognition algorithm (e.g., eigenfaces, Fisher-faces, etc.), are included in a super-resolution method as prior information. This approach simultaneously provides measures of fit of the super-resolution result, from both reconstruction and recognition perspectives. This is different from the conventional paradigms of matching in a low-resolution domain, or, alternatively, applying a super-resolution algorithm to a low-resolution face and then classifying the super-resolution result. We show, for example, that recognition of faces of as low as 6 times 6 pixel size is considerably improved compared to matching using a super-resolution reconstruction followed by classification, and to matching with a low-resolution training set.",
"Face recognition performance degrades considerably when the input images are of poor resolution as is often the case for images taken by surveillance cameras or from a large distance. In this paper, we propose a novel approach for the recognition of low resolution images using multidimensional scaling. From a resolution point of view, the scenario yielding the best performance is when both the probe and gallery images are of high enough resolution to discriminate across different subjects. The proposed method embeds the low resolution images in an Euclidean space such that the distances between them in the transformed space approximates the best distances had both the images been of high resolution. The mapping is learned from high resolution training images and their corresponding low resolution images using iterative majorization algorithm. Extensive evaluation of the proposed approach on different datasets like PIE and FRGC with resolution as low as 7 × 6 pixels illustrates the usefulness of the method. We show that the proposed approach significantly improves the matching performance as compared to performing standard matching in the low-resolution domain. Performance comparison with different super-resolution techniques which obtains higher-resolution images prior to recognition further signifies the effectiveness of our approach.",
"This paper addresses the very low resolution (VLR) problem in face recognition in which the resolution of the face image to be recognized is lower than 16 × 16. With the increasing demand of surveillance camera-based applications, the VLR problem happens in many face application systems. Existing face recognition algorithms are not able to give satisfactory performance on the VLR face image. While face super-resolution (SR) methods can be employed to enhance the resolution of the images, the existing learning-based face SR methods do not perform well on such a VLR face image. To overcome this problem, this paper proposes a novel approach to learn the relationship between the high-resolution image space and the VLR image space for face SR. Based on this new approach, two constraints, namely, new data and discriminative constraints, are designed for good visuality and face recognition applications under the VLR problem, respectively. Experimental results show that the proposed SR algorithm based on relationship learning outperforms the existing algorithms in public face databases.",
"In many current face-recognition (FR) applications, such as video surveillance security and content annotation in a Web environment, low-resolution faces are commonly encountered and negatively impact on reliable recognition performance. In particular, the recognition accuracy of current intensity-based FR systems can significantly drop off if the resolution of facial images is smaller than a certain level (e.g., less than 20 times 20 pixels). To cope with low-resolution faces, we demonstrate that facial color cue can significantly improve recognition performance compared with intensity-based features. The contribution of this paper is twofold. First, a new metric called ldquovariation ratio gainrdquo (VRG) is proposed to prove theoretically the significance of color effect on low-resolution faces within well-known subspace FR frameworks; VRG quantitatively characterizes how color features affect the recognition performance with respect to changes in face resolution. Second, we conduct extensive performance evaluation studies to show the effectiveness of color on low-resolution faces. In particular, more than 3000 color facial images of 341 subjects, which are collected from three standard face databases, are used to perform the comparative studies of color effect on face resolutions to be possibly confronted in real-world FR systems. The effectiveness of color on low-resolution faces has successfully been tested on three representative subspace FR methods, including the eigenfaces, the fisherfaces, and the Bayesian. Experimental results show that color features decrease the recognition error rate by at least an order of magnitude over intensity-driven features when low-resolution faces (25 times 25 pixels or less) are applied to three FR methods.",
"While researchers strive to improve automatic face recognition performance, the relationship between image resolution and face recognition performance has not received much attention. This relationship is examined systematically and a framework is developed such that results from super-resolution techniques can be compared. Three super-resolution techniques are compared with the Eigenface and Elastic Bunch Graph Matching face recognition engines. Parameter ranges over which these techniques provide better recognition performance than interpolated images is determined.",
"Visual recognition research often assumes a sufficient resolution of the region of interest (ROI). That is usually violated in practice, inspiring us to explore the Very Low Resolution Recognition (VLRR) problem. Typically, the ROI in a VLRR problem can be smaller than 16 16 pixels, and is challenging to be recognized even by human experts. We attempt to solve the VLRR problem using deep learning methods. Taking advantage of techniques primarily in super resolution, domain adaptation and robust regression, we formulate a dedicated deep learning method and demonstrate how these techniques are incorporated step by step. Any extra complexity, when introduced, is fully justified by both analysis and simulation results. The resulting Robust Partially Coupled Networks achieves feature enhancement and recognition simultaneously. It allows for both the flexibility to combat the LR-HR domain mismatch, and the robustness to outliers. Finally, the effectiveness of the proposed models is evaluated on three different VLRR tasks, including face identification, digit recognition and font recognition, all of which obtain very impressive performances.",
""
]
}
|
1811.08965
|
2901139370
|
Whilst recent face-recognition (FR) techniques have made significant progress on recognising constrained high-resolution web images, the same cannot be said of natively unconstrained low-resolution images at large scales. In this work, we systematically examine this under-studied FR problem, and introduce a novel Complement Super-Resolution and Identity (CSRI) joint deep learning method with a unified end-to-end network architecture. We further construct a new large-scale dataset, TinyFace, of native unconstrained low-resolution face images from selected public datasets, because no benchmark of this nature exists in the literature. With extensive experiments we show that there is a significant gap between the reported FR performances on popular benchmarks and the results on TinyFace, and demonstrate the advantages of the proposed CSRI over a variety of state-of-the-art FR and super-resolution deep models on this largely ignored FR scenario. The TinyFace dataset is released publicly at: this https URL.
|
In terms of LRFR deployment, two typical settings exist. One is LR-to-HR, which matches LR probe faces against HR gallery images such as passport photos @cite_27 @cite_31 @cite_11 @cite_47 . The other is LR-to-LR, where both probe and gallery are LR facial images @cite_38 @cite_41 @cite_15 @cite_2 @cite_10 . Generally, LR-to-LR is a less stringent deployment scenario. This is because real-world imagery often contains a very large number of "joe public" individuals with no HR gallery images enrolled in the FR system. Moreover, the two settings share a key challenge in solving the LRFR problem: how to synthesise discriminative facial appearance features missing from the original LR input data. The introduced TinyFace benchmark adopts the more general LR-to-LR setting.
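As an illustration of the LR-to-LR protocol adopted by TinyFace, a minimal identification step can be sketched as follows; embed_model is a hypothetical placeholder for any face-feature extractor, and the code is not part of the benchmark itself.

import numpy as np

def rank_gallery(probe_img, gallery_imgs, gallery_ids, embed_model, top_k=20):
    """Rank LR gallery faces against an LR probe by cosine similarity of embeddings."""
    probe_feat = embed_model(probe_img)                                 # shape (d,)
    gallery_feats = np.stack([embed_model(g) for g in gallery_imgs])    # shape (N, d)

    # L2-normalise so that the dot product equals cosine similarity.
    probe_feat = probe_feat / np.linalg.norm(probe_feat)
    gallery_feats = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = gallery_feats @ probe_feat                                    # shape (N,)

    order = np.argsort(-sims)[:top_k]
    return [(gallery_ids[i], float(sims[i])) for i in order]

The hard part, as noted above, is not the matching step itself but obtaining embeddings that remain discriminative when facial detail is natively missing.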
|
{
"cite_N": [
"@cite_38",
"@cite_31",
"@cite_15",
"@cite_41",
"@cite_27",
"@cite_2",
"@cite_47",
"@cite_10",
"@cite_11"
],
"mid": [
"1563139347",
"2109002610",
"1988974846",
"2126653386",
"2082714615",
"2054515210",
"1998594584",
"2963102887",
"2055444136"
],
"abstract": [
"In video surveillance, the faces of interest are often of small size. Image resolution is an important factor affecting face recognition by human and computer. In this paper, we study the face recognition performance using different image resolutions. For automatic face recognition, a low resolution bound is found through experiments. We use an eigentransformation based hallucination method to improve the image resolution. The hallucinated face images are not only much helpful for recognition by human, but also make the automatic recognition procedure easier, since they emphasize the face difference by adding some high frequency details.",
"Recognition of low resolution face images is a challenging problem in many practical face recognition systems. Methods have been proposed in the face recognition literature for the problem when the probe is of low resolution, and a high resolution gallery is available for recognition. These methods modify the probe image such that the resultant image provides better discrimination. We formulate the problem differently by leveraging the information available in the high resolution gallery image and propose a generative approach for classifying the probe image. An important feature of our algorithm is that it can handle resolution changes along with illumination variations. The effectiveness of the proposed method is demonstrated using standard datasets and a challenging outdoor face dataset. It is shown that our method is efficient and can perform significantly better than many competitive low resolution face recognition algorithms.",
"While researchers strive to improve automatic face recognition performance, the relationship between image resolution and face recognition performance has not received much attention. This relationship is examined systematically and a framework is developed such that results from super-resolution techniques can be compared. Three super-resolution techniques are compared with the Eigenface and Elastic Bunch Graph Matching face recognition engines. Parameter ranges over which these techniques provide better recognition performance than interpolated images is determined.",
"Face images that are captured by surveillance cameras usually have a very low resolution, which significantly limits the performance of face recognition systems. In the past, super-resolution techniques have been proposed to increase the resolution by combining information from multiple images. These techniques use super-resolution as a preprocessing step to obtain a high-resolution image that is later passed to a face recognition system. Considering that most state-of-the-art face recognition systems use an initial dimensionality reduction method, we propose to transfer the super-resolution reconstruction from pixel domain to a lower dimensional face space. Such an approach has the advantage of a significant decrease in the computational complexity of the super-resolution reconstruction. The reconstruction algorithm no longer tries to obtain a visually improved high-quality image, but instead constructs the information required by the recognition system directly in the low dimensional domain without any unnecessary overhead. In addition, we show that face-space super-resolution is more robust to registration errors and noise than pixel-domain super-resolution because of the addition of model-based constraints.",
"Face recognition performance degrades considerably when the input images are of poor resolution as is often the case for images taken by surveillance cameras or from a large distance. In this paper, we propose a novel approach for the recognition of low resolution images using multidimensional scaling. From a resolution point of view, the scenario yielding the best performance is when both the probe and gallery images are of high enough resolution to discriminate across different subjects. The proposed method embeds the low resolution images in an Euclidean space such that the distances between them in the transformed space approximates the best distances had both the images been of high resolution. The mapping is learned from high resolution training images and their corresponding low resolution images using iterative majorization algorithm. Extensive evaluation of the proposed approach on different datasets like PIE and FRGC with resolution as low as 7 × 6 pixels illustrates the usefulness of the method. We show that the proposed approach significantly improves the matching performance as compared to performing standard matching in the low-resolution domain. Performance comparison with different super-resolution techniques which obtains higher-resolution images prior to recognition further signifies the effectiveness of our approach.",
"This paper addresses the very low resolution (VLR) problem in face recognition in which the resolution of the face image to be recognized is lower than 16 × 16. With the increasing demand of surveillance camera-based applications, the VLR problem happens in many face application systems. Existing face recognition algorithms are not able to give satisfactory performance on the VLR face image. While face super-resolution (SR) methods can be employed to enhance the resolution of the images, the existing learning-based face SR methods do not perform well on such a VLR face image. To overcome this problem, this paper proposes a novel approach to learn the relationship between the high-resolution image space and the VLR image space for face SR. Based on this new approach, two constraints, namely, new data and discriminative constraints, are designed for good visuality and face recognition applications under the VLR problem, respectively. Experimental results show that the proposed SR algorithm based on relationship learning outperforms the existing algorithms in public face databases.",
"Practical video scene and face recognition systems are sometimes confronted with low-resolution (LR) images. The faces may be very small even if the video is clear, thus it is difficult to directly measure the similarity between the faces and the high-resolution (HR) training samples. Face recognition based on traditional super-resolution (SR) methods usually have limited performance because the target of SR may not be consistent with that of classification, and time-consuming SR algorithms are not suitable for real-time applications. In this paper, a new feature extraction method called coupled kernel embedding (CKE) is proposed for LR face recognition without any SR preprocessing. In this method, the final kernel matrix is constructed by concatenating two individual kernel matrices in the diagonal direction, and the (semi)positively definite properties are preserved for optimization. CKE addresses the problem of comparing multimodal data that are difficult for conventional methods in practice due to the lack of an efficient similarity measure. Particularly, different kernel types (e.g., linear, Gaussian, polynomial) can be integrated into a uniform optimization objective, which cannot be achieved by simple linear methods. CKE solves this problem by minimizing the dissimilarities captured by their kernel Gram matrices in the LR and HR spaces. In the implementation, the nonlinear objective function is minimized by a generalized eigenvalue decomposition. Experiments on benchmark and real databases show that our CKE method indeed improves the recognition performance.",
"Visual recognition research often assumes a sufficient resolution of the region of interest (ROI). That is usually violated in practice, inspiring us to explore the Very Low Resolution Recognition (VLRR) problem. Typically, the ROI in a VLRR problem can be smaller than 16 16 pixels, and is challenging to be recognized even by human experts. We attempt to solve the VLRR problem using deep learning methods. Taking advantage of techniques primarily in super resolution, domain adaptation and robust regression, we formulate a dedicated deep learning method and demonstrate how these techniques are incorporated step by step. Any extra complexity, when introduced, is fully justified by both analysis and simulation results. The resulting Robust Partially Coupled Networks achieves feature enhancement and recognition simultaneously. It allows for both the flexibility to combat the LR-HR domain mismatch, and the robustness to outliers. Finally, the effectiveness of the proposed models is evaluated on three different VLRR tasks, including face identification, digit recognition and font recognition, all of which obtain very impressive performances.",
"Face recognition performance degrades considerably when the input images are of Low Resolution (LR), as is often the case for images taken by surveillance cameras or from a large distance. In this paper, we propose a novel approach for matching low-resolution probe images with higher resolution gallery images, which are often available during enrollment, using Multidimensional Scaling (MDS). The ideal scenario is when both the probe and gallery images are of high enough resolution to discriminate across different subjects. The proposed method simultaneously embeds the low-resolution probe images and the high-resolution gallery images in a common space such that the distance between them in the transformed space approximates the distance had both the images been of high resolution. The two mappings are learned simultaneously from high-resolution training images using an iterative majorization algorithm. Extensive evaluation of the proposed approach on the Multi-PIE data set with probe image resolution as low as 8 × 6 pixels illustrates the usefulness of the method. We show that the proposed approach improves the matching performance significantly as compared to performing matching in the low-resolution domain or using super-resolution techniques to obtain a higher resolution test image prior to recognition. Experiments on low-resolution surveillance images from the Surveillance Cameras Face Database further highlight the effectiveness of the approach."
]
}
|
1811.08965
|
2901139370
|
Whilst recent face-recognition (FR) techniques have made significant progress on recognising constrained high-resolution web images, the same cannot be said of natively unconstrained low-resolution images at large scales. In this work, we systematically examine this under-studied FR problem, and introduce a novel Complement Super-Resolution and Identity (CSRI) joint deep learning method with a unified end-to-end network architecture. We further construct a new large-scale dataset, TinyFace, of native unconstrained low-resolution face images from selected public datasets, because no benchmark of this nature exists in the literature. With extensive experiments we show that there is a significant gap between the reported FR performances on popular benchmarks and the results on TinyFace, and demonstrate the advantages of the proposed CSRI over a variety of state-of-the-art FR and super-resolution deep models on this largely ignored FR scenario. The TinyFace dataset is released publicly at: this https URL.
|
Besides, image super-resolution (SR) deep learning techniques @cite_1 @cite_29 @cite_28 have advanced significantly and may be beneficial for LRFR. By and large, FR and SR studies have advanced independently. We discovered through our experiments that contemporary SR deep learning models bring only a marginal FR performance benefit on native unconstrained LR images, even after being trained on large HR web face imagery. To address this problem, we design a novel deep neural network, CSRI, to improve the FR performance on unconstrained native LR face images.
|
{
"cite_N": [
"@cite_28",
"@cite_29",
"@cite_1"
],
"mid": [
"2747898905",
"54257720",
"2242218935"
],
"abstract": [
"Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https: github.com tyshiwo DRRN_CVPR17.",
"We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.",
"We present a highly accurate single-image superresolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (104 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable."
]
}
|
1811.09026
|
2901467249
|
We study the effect of impairment on stochastic multi-armed bandits and develop new ways to mitigate it. The impairment effect is the phenomenon where an agent only accrues reward for an action if it has been played at least a few times in the recent past. It is practically motivated by repetition and recency effects in domains such as advertising (here consumer behavior may require repeat actions by advertisers) and vocational training (here actions are complex skills that can only be mastered with repetition to get a payoff). Impairment can be naturally modelled as a temporal constraint on the strategy space, and we provide two novel algorithms that achieve sublinear regret, each working with different assumptions on the impairment effect. We introduce a new notion called bucketing in our algorithm design, and show how it can effectively address impairment as well as a broader class of temporal constraints. Our regret bounds explicitly capture the cost of impairment and show that it scales (sub-)linearly with the degree of impairment. Our work complements recent work on modeling delays and corruptions, and we provide experimental evidence supporting our claims.
|
@cite_1 study bandit problems under a different behavioral effect rooted in microeconomic theory, namely self-reinforcement. In their setting, the algorithm interacts with a sequence of incoming users, who develop a preference for the arms that have been played in the past. This positive externality affects the performance of algorithms: the best arm's performance may get overshadowed by that of a suboptimal arm due to reinforcement. In our case, the frequency of arm plays in the recent past influences the rewards accrued for that arm, which is a different type of externality.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2786485478"
],
"abstract": [
"Many platforms are characterized by the fact that future user arrivals are likely to have preferences similar to users who were satisfied in the past. In other words, arrivals exhibit positive externalities . We study multiarmed bandit (MAB) problems with positive externalities. Our model has a finite number of arms and users are distinguished by the arm(s) they prefer. We model positive externalities by assuming that the preferred arms of future arrivals are self-reinforcing based on the experiences of past users. We show that classical algorithms such as UCB which are optimal in the classical MAB setting may even exhibit linear regret in the context of positive externalities. We provide an algorithm which achieves optimal regret and show that such optimal regret exhibits substantially different structure from that observed in the standard MAB setting."
]
}
|
1811.09026
|
2901467249
|
We study the effect of impairment on stochastic multi-armed bandits and develop new ways to mitigate it. The impairment effect is the phenomenon where an agent only accrues reward for an action if it has been played at least a few times in the recent past. It is practically motivated by repetition and recency effects in domains such as advertising (here consumer behavior may require repeat actions by advertisers) and vocational training (here actions are complex skills that can only be mastered with repetition to get a payoff). Impairment can be naturally modelled as a temporal constraint on the strategy space, and we provide two novel algorithms that achieve sublinear regret, each working with different assumptions on the impairment effect. We introduce a new notion called bucketing in our algorithm design, and show how it can effectively address impairment as well as a broader class of temporal constraints. Our regret bounds explicitly capture the cost of impairment and show that it scales (sub-)linearly with the degree of impairment. Our work complements recent work on modeling delays and corruptions, and we provide experimental evidence supporting our claims.
|
@cite_18 consider the setting where the algorithm has access to previous plays of each arm and focus on warm-starting the exploration-exploitation process. This could be useful in our setting, where one can partition the initial set of arms into buckets such that impairment is mitigated, and learn the local best arm in each bucket. Next, these best arms can be combined (hierarchically agglomerated) and their historical plays used to warm-start a final run, as sketched below. A rigorous analysis of this heuristic is left as possible future work. In a similar vein of learning with previously collected observational data, @cite_20 focus on counterfactual risk minimization. While these works rely on a one-time use of historical data to improve future payoffs, in our setting we must work around the continual impact of historical actions on future payoffs. @cite_23 consider the problem of history-dependent arm selection against possibly adaptive adversaries, which is a much weaker setting compared to ours.
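A rough sketch of this heuristic is given below. This is hypothetical Python, not an algorithm from the cited works or from this paper; the impairment-aware play order within each bucket is elided, and pull, bucket_size and the round budgets are illustrative names.

import numpy as np

def ucb_run(pull, arms, rounds, counts=None, sums=None):
    """Standard UCB1 over `arms`, optionally warm-started with historical play statistics."""
    counts = dict(counts) if counts else {a: 0 for a in arms}
    sums = dict(sums) if sums else {a: 0.0 for a in arms}
    for _ in range(rounds):
        untried = [a for a in arms if counts[a] == 0]
        if untried:
            arm = untried[0]
        else:
            total = sum(counts[a] for a in arms)
            arm = max(arms, key=lambda a: sums[a] / counts[a]
                      + np.sqrt(2.0 * np.log(total) / counts[a]))
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    best = max(arms, key=lambda a: sums[a] / max(counts[a], 1))
    return best, counts, sums

def bucketed_warm_start(pull, arms, bucket_size, rounds_per_bucket, final_rounds):
    """Learn a local best arm per bucket, then warm-start a final run over the winners."""
    buckets = [arms[i:i + bucket_size] for i in range(0, len(arms), bucket_size)]
    winners, hist_counts, hist_sums = [], {}, {}
    for bucket in buckets:
        best, counts, sums = ucb_run(pull, bucket, rounds_per_bucket)
        winners.append(best)
        hist_counts[best], hist_sums[best] = counts[best], sums[best]
    # The local winners enter the final run with non-zero counts and empirical means,
    # so exploration over them is shortened by the historical plays.
    final_best, _, _ = ucb_run(pull, winners, final_rounds,
                               counts=hist_counts, sums=hist_sums)
    return final_best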
|
{
"cite_N": [
"@cite_18",
"@cite_23",
"@cite_20"
],
"mid": [
"2187360726",
"2033530627",
"2950382198"
],
"abstract": [
"In this paper we consider the stochastic multi-armed bandit problem. However, unlike in the conventional version of this problem, we do not assume that the algorithm starts from scratch. Many applications offer observations of (some of) the arms even before the algorithm starts. We propose three novel multi-armed bandit algorithms that can exploit this data. An upper bound on the regret is derived in each case. The results show that a logarithmic amount of historic data can reduce regret from logarithmic to constant. The eectiveness of the proposed algorithms are demonstrated on a large-scale malicious URL detection problem.",
"Abstract The effects of emotional feelings during advertisement exposure and the effects of attitude toward the advertisement (Aad) are considered in an experiment which used both familiar and unfamiliar brands. The findings illustrate that brand familiarity moderates the relationships between Aad and brand attitude after advertisement exposure. In addition, the research provides evidence that the direct-affect-transfer hypothesis may be an adequate explanation for the effects of emotional feelings and Aad on brand attitude in some situations.",
"We develop a learning principle and an efficient algorithm for batch learning from logged bandit feedback. This learning setting is ubiquitous in online systems (e.g., ad placement, web search, recommendation), where an algorithm makes a prediction (e.g., ad ranking) for a given input (e.g., query) and observes bandit feedback (e.g., user clicks on presented ads). We first address the counterfactual nature of the learning problem through propensity scoring. Next, we prove generalization error bounds that account for the variance of the propensity-weighted empirical risk estimator. These constructive bounds give rise to the Counterfactual Risk Minimization (CRM) principle. We show how CRM can be used to derive a new learning method -- called Policy Optimizer for Exponential Models (POEM) -- for learning stochastic linear rules for structured output prediction. We present a decomposition of the POEM objective that enables efficient stochastic gradient optimization. POEM is evaluated on several multi-label classification problems showing substantially improved robustness and generalization performance compared to the state-of-the-art."
]
}
|
1811.09026
|
2901467249
|
We study the effect of impairment on stochastic multi-armed bandits and develop new ways to mitigate it. Impairment effect is the phenomena where an agent only accrues reward for an action if they have played it at least a few times in the recent past. It is practically motivated by repetition and recency effects in domains such as advertising (here consumer behavior may require repeat actions by advertisers) and vocational training (here actions are complex skills that can only be mastered with repetition to get a payoff). Impairment can be naturally modelled as a temporal constraint on the strategy space, and we provide two novel algorithms that achieve sublinear regret, each working with different assumptions on the impairment effect. We introduce a new notion called bucketing in our algorithm design, and show how it can effectively address impairment as well as a broader class of temporal constraints. Our regret bounds explicitly capture the cost of impairment and show that it scales (sub-)linearly with the degree of impairment. Our work complements recent work on modeling delays and corruptions, and we provide experimental evidence supporting our claims.
|
The notion of bucketing incentivises exploration and exploitation within each phase (see ). This is an important attribute when the environment and specifications are changing. For instance, @cite_21 introduce the sleeping bandits problem, in which the set of available arms changes with time and the algorithm needs to balance exploration and exploitation as new actions become available. Similar models have also been considered in . In , the authors assume piece-wise stationarity of the reward distributions (i.e., the reward distributions change abruptly at certain breakpoints in the time horizon). Under this model, they propose algorithms that estimate the mean rewards of arms using only recent history (e.g., using a sliding window or by discounting). However, none of these settings overlaps with or subsumes the impairment effect considered in this work.
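For illustration, here is a minimal Python sketch of the sliding-window estimate mentioned above: only the most recent observations of each arm contribute to its mean estimate, so abrupt changes are eventually forgotten. The window length and the `(arm, reward)` stream interface are illustrative assumptions, not the algorithms of the cited works.

```python
from collections import deque

import numpy as np

def sliding_window_means(observations, n_arms, window=200):
    """Per-arm mean rewards computed from each arm's last `window` samples.

    A sketch of sliding-window estimation for piece-wise stationary rewards;
    `observations` is an iterable of (arm, reward) pairs in time order.
    """
    recent = [deque(maxlen=window) for _ in range(n_arms)]
    for arm, reward in observations:
        recent[arm].append(reward)
    return [float(np.mean(d)) if d else float("nan") for d in recent]

# Example: a single arm whose mean jumps from 0.2 to 0.8 halfway through.
rng = np.random.default_rng(1)
stream = [(0, float(rng.random() < (0.2 if t < 500 else 0.8))) for t in range(1000)]
print(sliding_window_means(stream, n_arms=1, window=200))  # close to [0.8]
```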
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2217850561"
],
"abstract": [
"We study the online decision problem in which the set of available actions varies over time, also called the sleeping experts problem. We consider the setting in which the performance comparison is made with respect to the best ordering of actions in hindsight. In this article, both the payoff function and the availability of actions are adversarial. [2010] gave a computationally efficient no-regret algorithm in the setting in which payoffs are stochastic. [2009] gave an efficient no-regret algorithm in the setting in which action availability is stochastic. However, the question of whether there exists a computationally efficient no-regret algorithm in the adversarial setting was posed as an open problem by [2010]. We show that such an algorithm would imply an algorithm for PAC learning DNF, a long-standing important open problem. We also consider the setting in which the number of available actions is restricted and study its relation to agnostic-learning monotone disjunctions over examples with bounded Hamming weight."
]
}
|
1811.09083
|
2900926445
|
In hierarchical reinforcement learning a major challenge is determining appropriate low-level policies. We propose an unsupervised learning scheme, based on asymmetric self-play from (2018), that automatically learns a good representation of sub-goals in the environment and a low-level policy that can execute them. A high-level policy can then direct the lower one by generating a sequence of continuous sub-goal vectors. We evaluate our model using Mazebase and Mujoco environments, including the challenging AntGather task. Visualizations of the sub-goal embeddings reveal a logical decomposition of tasks within the environment. Quantitatively, our approach obtains compelling performance gains over non-hierarchical approaches.
|
Many recent works have considered options discovery via parameterized modules operating at different timescales, where an "actor" operates at a finer timescale than a "manager" that outputs a goal or target for the actor. For example, Vezhnevets et al. @cite_0 train the actor and manager together end-to-end via reward from the environment.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2949267040"
],
"abstract": [
"We introduce FeUdal Networks (FuNs): a novel architecture for hierarchical reinforcement learning. Our approach is inspired by the feudal reinforcement learning proposal of Dayan and Hinton, and gains power and efficacy by decoupling end-to-end learning across multiple levels -- allowing it to utilise different resolutions of time. Our framework employs a Manager module and a Worker module. The Manager operates at a lower temporal resolution and sets abstract goals which are conveyed to and enacted by the Worker. The Worker generates primitive actions at every tick of the environment. The decoupled structure of FuN conveys several benefits -- in addition to facilitating very long timescale credit assignment it also encourages the emergence of sub-policies associated with different goals set by the Manager. These properties allow FuN to dramatically outperform a strong baseline agent on tasks that involve long-term credit assignment or memorisation. We demonstrate the performance of our proposed system on a range of tasks from the ATARI suite and also from a 3D DeepMind Lab environment."
]
}
|
1811.09083
|
2900926445
|
In hierarchical reinforcement learning a major challenge is determining appropriate low-level policies. We propose an unsupervised learning scheme, based on asymmetric self-play from (2018), that automatically learns a good representation of sub-goals in the environment and a low-level policy that can execute them. A high-level policy can then direct the lower one by generating a sequence of continuous sub-goal vectors. We evaluate our model using Mazebase and Mujoco environments, including the challenging AntGather task. Visualizations of the sub-goal embeddings reveal a logical decomposition of tasks within the environment. Quantitatively, our approach obtains compelling performance gains over non-hierarchical approaches.
|
A line of work @cite_13 @cite_9 @cite_18 @cite_12 @cite_11 takes this approach in the context of intrinsic motivation. In Mohamed et al. @cite_13, a variational inference approach is used to make exploration via empowerment @cite_15 tractable. Continuing along this path, @cite_1 @cite_18 @cite_12 @cite_11 use an actor parameterized by the state and a latent vector, such that the latent vector is predictable from a final state or a sequence of states the actor visits, but the actions otherwise have high entropy. After pre-training in this way, a "manager" can learn to issue commands via the latent vector. In Haarnoja et al. @cite_10, a similar construction is used to train an agent end to end. Our work also uses this construction, but the unsupervised pre-training of the actor is done via asymmetric self-play, as in @cite_2.
|
{
"cite_N": [
"@cite_18",
"@cite_10",
"@cite_9",
"@cite_1",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"2796206912",
"",
"2556477470",
"2603088459",
"1591713425",
"2962730405",
"2785342287",
""
],
"abstract": [
"",
"We address the problem of learning hierarchical deep neural network policies for reinforcement learning. In contrast to methods that explicitly restrict or cripple lower layers of a hierarchy to force them to use higher-level modulating signals, each layer in our framework is trained to directly solve the task, but acquires a range of diverse strategies via a maximum entropy reinforcement learning objective. Each layer is also augmented with latent random variables, which are sampled from a prior distribution during the training of that layer. The maximum entropy objective causes these latent variables to be incorporated into the layer's policy, and the higher level layer can directly control the behavior of the lower layer through this latent space. Furthermore, by constraining the mapping from latent variables to actions to be invertible, higher layers retain full expressivity: neither the higher layers nor the lower layers are constrained in their behavior. Our experimental evaluation demonstrates that we can improve on the performance of single-layer policies on standard benchmark tasks simply by adding additional layers, and that our method can solve more complex sparse-reward tasks by learning higher-level policies on top of high-entropy skills optimized for simple low-level objectives.",
"",
"In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.",
"We describe a simple scheme that allows an agent to learn about its environment in an unsupervised manner. Our scheme pits two versions of the same agent, Alice and Bob, against one another. Alice proposes a task for Bob to complete; and then Bob attempts to complete the task. In this work we will focus on two kinds of environments: (nearly) reversible environments and environments that can be reset. Alice will \"propose\" the task by doing a sequence of actions and then Bob must undo or repeat them, respectively. Via an appropriate reward structure, Alice and Bob automatically generate a curriculum of exploration, enabling unsupervised training of the agent. When Bob is deployed on an RL task within the environment, this unsupervised training reduces the number of supervised episodes needed to learn, and in some cases converges to a higher reward.",
"The classical approach to using utility functions suffers from the drawback of having to design and tweak the functions on a case by case basis. Inspired by examples from the animal kingdom, social sciences and games we propose empowerment, a rather universal function, defined as the information-theoretic capacity of an agent's actuation channel. The concept applies to any sensorimotor apparatus. Empowerment as a measure reflects the properties of the apparatus as long as they are observable due to the coupling of sensors and actuators via the environment. Using two simple experiments we also demonstrate how empowerment influences sensor-actuator evolution",
"The mutual information is a core statistical quantity that has applications in all areas of machine learning, whether this is in training of density models over multiple data modalities, in maximising the efficiency of noisy transmission channels, or when learning behaviour policies for exploration by artificial agents. Most learning algorithms that involve optimisation of the mutual information rely on the Blahut-Arimoto algorithm — an enumerative algorithm with exponential complexity that is not suitable for modern machine learning applications. This paper provides a new approach for scalable optimisation of the mutual information by merging techniques from variational inference and deep learning. We develop our approach by focusing on the problem of intrinsically-motivated learning, where the mutual information forms the definition of a well-known internal drive known as empowerment. Using a variational lower bound on the mutual information, combined with convolutional networks for handling visual input streams, we develop a stochastic optimisation algorithm that allows for scalable information maximisation and empowerment-based reasoning directly from pixels to actions.",
"We present a method for reinforcement learning of closely related skills that are parameterized via a skill embedding space. We learn such skills by taking advantage of latent variables and exploiting a connection between reinforcement learning and variational inference. The main contribution of our work is an entropy-regularized policy gradient formulation for hierarchical policies, and an associated, data-efficient and robust off-policy gradient algorithm based on stochastic value gradients. We demonstrate the effectiveness of our method on several simulated robotic manipulation tasks. We find that our method allows for discovery of multiple solutions and is capable of learning the minimum number of distinct skills that are necessary to solve a given set of tasks. In addition, our results indicate that the hereby proposed technique can interpolate and or sequence previously learned skills in order to accomplish more complex tasks, even in the presence of sparse rewards.",
""
]
}
|
1811.09083
|
2900926445
|
In hierarchical reinforcement learning a major challenge is determining appropriate low-level policies. We propose an unsupervised learning scheme, based on asymmetric self-play from (2018), that automatically learns a good representation of sub-goals in the environment and a low-level policy that can execute them. A high-level policy can then direct the lower one by generating a sequence of continuous sub-goal vectors. We evaluate our model using Mazebase and Mujoco environments, including the challenging AntGather task. Visualizations of the sub-goal embeddings reveal a logical decomposition of tasks within the environment. Quantitatively, our approach obtains compelling performance gains over non-hierarchical approaches.
|
There is a large literature on goal discovery and intrinsic motivation, both independent of RL @cite_3 @cite_21 and framed in terms of RL @cite_16. Recently, Péré et al. @cite_8 used a construction in which the goal space is learned first, by training an auto-encoder on states from the environment, and a goal discovery algorithm is then run on top of the learned representation. In this work, we use an intrinsic motivation approach to learn both a low-level actor and the representation of the state space. In future work, we intend to follow @cite_8 and consider goal discovery at the level of the manager as well.
|
{
"cite_N": [
"@cite_8",
"@cite_21",
"@cite_3",
"@cite_16"
],
"mid": [
"2963973554",
"2000514530",
"1863227302",
"2139612737"
],
"abstract": [
"Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments. These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state and action spaces. However, they have so far assumed that self-generated goals are sampled in a specifically engineered feature space, limiting their autonomy. In this work, we propose an approach using deep representation learning algorithms to learn an adequate goal space. This is a developmental 2-stage approach: first, in a perceptual learning stage, deep learning algorithms use passive raw sensor observations of world changes to learn a corresponding latent space; then goal exploration happens in a second stage by sampling goals in this latent space. We present experiments with a simulated robot arm interacting with an object, and we show that exploration algorithms using such learned representations can closely match, and even sometimes improve, the performance obtained using engineered representations.",
"Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics.",
"A novel curious model-building control system is described which actively tries to provoke situations for which it learned to expect to learn something about the environment. Such a system has been implemented as a four-network system based on Watkins' Q-learning algorithm which can be used to maximize the expectation of the temporal derivative of the adaptive assumed reliability of future predictions. An experiment with an artificial nondeterministic environment demonstrates that the system can be superior to previous model-building control systems, which do not address the problem of modeling the reliability of the world model's predictions in uncertain environments and use ad-hoc methods (like random search) to train the world model. >",
"Psychologists call behavior intrinsically motivated when it is engaged in for its own sake rather than as a step toward solving a specific problem of clear practical value. But what we learn during intrinsically motivated behavior is essential for our development as competent autonomous entities able to efficiently solve a wide range of practical problems as they arise. In this paper we present initial results from a computational study of intrinsically motivated reinforcement learning aimed at allowing artificial agents to construct and extend hierarchies of reusable skills that are needed for competent autonomy."
]
}
|
1811.09030
|
2900595477
|
Deep convolutional neural networks (CNNs) have achieved remarkable results in image processing tasks. However, their high expression ability risks overfitting. Consequently, data augmentation techniques have been proposed to prevent overfitting while enriching datasets. Recent CNN architectures with more parameters are rendering traditional data augmentation techniques insufficient. In this study, we propose a new data augmentation technique called random image cropping and patching (RICAP) which randomly crops four images and patches them to create a new training image. Moreover, RICAP mixes the class labels of the four images, resulting in an advantage similar to label smoothing. We evaluated RICAP with current state-of-the-art CNNs (e.g., the shake-shake regularization model) by comparison with competitive data augmentation techniques such as cutout and mixup. RICAP achieves a new state-of-the-art test error of @math on CIFAR-10. We also confirmed that deep CNNs with RICAP achieve better results on classification tasks using CIFAR-100 and ImageNet and an image-caption retrieval task using Microsoft COCO.
|
Dropout @cite_3 is a data augmentation technique that disturbs and masks the original information of the given data by dropping pixels. Pixel dropping functions as an injection of noise into an image @cite_21. It makes the CNN robust to noisy images and contributes to generalization rather than to enriching the dataset.
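As a simple illustration of pixel dropping as noise injection, here is a minimal NumPy sketch; applying dropout at the input rather than at hidden features, and the drop probability used, are illustrative choices rather than the exact setup of the cited works.

```python
import numpy as np

def pixel_dropout(image, drop_prob=0.2, rng=None):
    """Zero out each pixel of an HxWxC image independently with probability drop_prob.

    A sketch of dropout viewed as input-noise injection: surviving pixels are
    kept unchanged (no rescaling), which is enough to illustrate the idea.
    """
    rng = np.random.default_rng() if rng is None else rng
    keep = (rng.random(image.shape[:2]) >= drop_prob).astype(image.dtype)
    return image * keep[..., None]

# Example on a random 32x32 RGB image.
img = np.random.rand(32, 32, 3).astype(np.float32)
noisy = pixel_dropout(img, drop_prob=0.2)
```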
|
{
"cite_N": [
"@cite_21",
"@cite_3"
],
"mid": [
"2070665556",
"1904365287"
],
"abstract": [
"Abstract We develop a technique to test the hypothesis that multilayered, feed-forward networks with few units on the first hidden layer generalize better than networks with many units in the first layer. Large networks are trained to perform a classification task and the redundant units are removed (“pruning”) to produce the smallest network capable of performing the task. A technique for inserting layers where pruning has introduced linear inseparability is also described. Two tests of ability to generalize are used—the ability to classify training inputs corrupted by noise and the ability to classify new patterns from each class. The hypothesis is found to be false for networks trained with noisy inputs. Pruning to the minimum number of units in the first layer produces networks which correctly classify the training set but generalize poorly compared with larger networks.",
"When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This \"overfitting\" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random \"dropout\" gives big improvements on many benchmark tasks and sets new records for speech and object recognition."
]
}
|
1811.09030
|
2900595477
|
Deep convolutional neural networks (CNNs) have achieved remarkable results in image processing tasks. However, their high expression ability risks overfitting. Consequently, data augmentation techniques have been proposed to prevent overfitting while enriching datasets. Recent CNN architectures with more parameters are rendering traditional data augmentation techniques insufficient. In this study, we propose a new data augmentation technique called random image cropping and patching (RICAP) which randomly crops four images and patches them to create a new training image. Moreover, RICAP mixes the class labels of the four images, resulting in an advantage similar to label smoothing. We evaluated RICAP with current state-of-the-art CNNs (e.g., the shake-shake regularization model) by comparison with competitive data augmentation techniques such as cutout and mixup. RICAP achieves a new state-of-the-art test error of @math on CIFAR-10. We also confirmed that deep CNNs with RICAP achieve better results on classification tasks using CIFAR-100 and ImageNet and an image-caption retrieval task using Microsoft COCO.
|
Cutout randomly masks a square region in an image at every training step @cite_33. It is an extension of dropout, where the masking of regions behaves like injected noise and makes CNNs robust to noisy images. In addition, cutout can mask the entire main part of an object in an image, such as the face of a cat; in this case, the CNN has to learn from parts that are usually ignored, such as the tail of the cat. This prevents deep CNNs from overfitting to features of the main part of an object. A similar method, random erasing, has been proposed @cite_23. It also masks a region of an image but differs in that it randomly determines whether to mask a region at all, as well as the size and aspect ratio of the masked region.
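The masking operation described above can be sketched in a few lines of NumPy; the fixed mask size and zero fill value are illustrative simplifications (random erasing additionally samples the size and aspect ratio, and whether to mask at all).

```python
import numpy as np

def cutout(image, mask_size=8, fill=0.0, rng=None):
    """Mask a randomly positioned square region of an HxWxC image.

    A simplified sketch of cutout with a fixed mask size.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))
    y1, y2 = max(0, cy - mask_size // 2), min(h, cy + mask_size // 2)
    x1, x2 = max(0, cx - mask_size // 2), min(w, cx + mask_size // 2)
    out = image.copy()
    out[y1:y2, x1:x2] = fill  # the mask may be clipped at the image border
    return out

# Example: mask an 8x8 square in a random 32x32 RGB image.
img = np.random.rand(32, 32, 3).astype(np.float32)
masked = cutout(img, mask_size=8)
```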
|
{
"cite_N": [
"@cite_33",
"@cite_23"
],
"mid": [
"2746314669",
"2747685395"
],
"abstract": [
"Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR-10, CIFAR-100, and SVHN datasets, yielding new state-of-the-art results of 2.56 , 15.20 , and 1.30 test error respectively. Code is available at this https URL",
"In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-based recognition models. Albeit simple, Random Erasing is complementary to commonly used data augmentation techniques such as random cropping and flipping, and yields consistent improvement over strong baselines in image classification, object detection and person re-identification. Code is available at: this https URL"
]
}
|
1811.09030
|
2900595477
|
Deep convolutional neural networks (CNNs) have achieved remarkable results in image processing tasks. However, their high expression ability risks overfitting. Consequently, data augmentation techniques have been proposed to prevent overfitting while enriching datasets. Recent CNN architectures with more parameters are rendering traditional data augmentation techniques insufficient. In this study, we propose a new data augmentation technique called random image cropping and patching (RICAP) which randomly crops four images and patches them to create a new training image. Moreover, RICAP mixes the class labels of the four images, resulting in an advantage similar to label smoothing. We evaluated RICAP with current state-of-the-art CNNs (e.g., the shake-shake regularization model) by comparison with competitive data augmentation techniques such as cutout and mixup. RICAP achieves a new state-of-the-art test error of @math on CIFAR-10. We also confirmed that deep CNNs with RICAP achieve better results on classification tasks using CIFAR-100 and ImageNet and an image-caption retrieval task using Microsoft COCO.
|
Mixup alpha-blends two images to construct a new training image @cite_1. Mixup trains deep CNNs on convex combinations of pairs of training samples and their labels, which encourages the network to behave linearly in between training samples. This behavior makes the prediction confidence transition linearly from one class to another, providing smoother estimation and margin maximization. Alpha-blending not only increases the variety of training images but also works like an adversarial perturbation @cite_36. Thereby, mixup makes deep CNNs robust to adversarial examples and stabilizes the training of generative adversarial networks. In addition, it behaves similarly to class label smoothing, as it mixes class labels with the ratio @math @cite_34. We explain label smoothing in detail below.
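A minimal sketch of the alpha-blending step, assuming one-hot labels and a Beta-distributed mixing ratio; the value of alpha is an illustrative choice.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Alpha-blend two training examples and their one-hot labels.

    A sketch of mixup: lambda is drawn from Beta(alpha, alpha), and both
    images and labels are combined with the same ratio.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2  # soft label, similar to label smoothing
    return x, y

# Example with 10-class one-hot labels.
x1, x2 = np.random.rand(32, 32, 3), np.random.rand(32, 32, 3)
y1, y2 = np.eye(10)[3], np.eye(10)[7]
x_mix, y_mix = mixup(x1, y1, x2, y2, alpha=0.2)
```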
|
{
"cite_N": [
"@cite_36",
"@cite_34",
"@cite_1"
],
"mid": [
"2963207607",
"2183341477",
"2963399829"
],
"abstract": [
"Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.",
"Convolutional networks are at the core of most state of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21:2 top-1 and 5:6 top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3:5 top-5 error and 17:3 top-1 error on the validation set and 3:6 top-5 error on the official test set.",
"Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks."
]
}
|
1811.09030
|
2900595477
|
Deep convolutional neural networks (CNNs) have achieved remarkable results in image processing tasks. However, their high expression ability risks overfitting. Consequently, data augmentation techniques have been proposed to prevent overfitting while enriching datasets. Recent CNN architectures with more parameters are rendering traditional data augmentation techniques insufficient. In this study, we propose a new data augmentation technique called random image cropping and patching (RICAP) which randomly crops four images and patches them to create a new training image. Moreover, RICAP mixes the class labels of the four images, resulting in an advantage similar to label smoothing. We evaluated RICAP with current state-of-the-art CNNs (e.g., the shake-shake regularization model) by comparison with competitive data augmentation techniques such as cutout and mixup. RICAP achieves a new state-of-the-art test error of @math on CIFAR-10. We also confirmed that deep CNNs with RICAP achieve better results on classification tasks using CIFAR-100 and ImageNet and an image-caption retrieval task using Microsoft COCO.
|
AutoAugment @cite_26 is a framework that searches for the best hyperparameters of existing data augmentation techniques using reinforcement learning @cite_39. It achieved significant results on CIFAR-10 classification and demonstrated the importance of data augmentation for training deep CNNs.
|
{
"cite_N": [
"@cite_26",
"@cite_39"
],
"mid": [
"2804047946",
"2963374479"
],
"abstract": [
"Data augmentation is an effective technique for improving the accuracy of modern image classifiers. However, current data augmentation implementations are manually designed. In this paper, we describe a simple procedure called AutoAugment to automatically search for improved data augmentation policies. In our implementation, we have designed a search space where a policy consists of many sub-policies, one of which is randomly chosen for each image in each mini-batch. A sub-policy consists of two operations, each operation being an image processing function such as translation, rotation, or shearing, and the probabilities and magnitudes with which the functions are applied. We use a search algorithm to find the best policy such that the neural network yields the highest validation accuracy on a target dataset. Our method achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data). On ImageNet, we attain a Top-1 accuracy of 83.5 which is 0.4 better than the previous record of 83.1 . On CIFAR-10, we achieve an error rate of 1.5 , which is 0.6 better than the previous state-of-the-art. Augmentation policies we find are transferable between datasets. The policy learned on ImageNet transfers well to achieve significant improvements on other datasets, such as Oxford Flowers, Caltech-101, Oxford-IIT Pets, FGVC Aircraft, and Stanford Cars.",
"Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214."
]
}
|
1811.09030
|
2900595477
|
Deep convolutional neural networks (CNNs) have achieved remarkable results in image processing tasks. However, their high expression ability risks overfitting. Consequently, data augmentation techniques have been proposed to prevent overfitting while enriching datasets. Recent CNN architectures with more parameters are rendering traditional data augmentation techniques insufficient. In this study, we propose a new data augmentation technique called random image cropping and patching (RICAP) which randomly crops four images and patches them to create a new training image. Moreover, RICAP mixes the class labels of the four images, resulting in an advantage similar to label smoothing. We evaluated RICAP with current state-of-the-art CNNs (e.g., the shake-shake regularization model) by comparison with competitive data augmentation techniques such as cutout and mixup. RICAP achieves a new state-of-the-art test error of @math on CIFAR-10. We also confirmed that deep CNNs with RICAP achieve better results on classification tasks using CIFAR-100 and ImageNet and an image-caption retrieval task using Microsoft COCO.
|
In classification tasks, class labels are often expressed as probabilities of @math and @math. Deep CNNs commonly employ the softmax function, which never predicts an exact probability of @math or @math. Thus, deep CNNs keep learning ever larger weight parameters and become unjustifiably confident. Label smoothing sets the class probabilities to intermediate values, such as @math and @math. It prevents the endless pursuit of hard @math and @math probabilities for the estimated classes and enables the weight parameters to converge to certain values without discouraging correct classification @cite_34. Mixup mixes the class labels of the blended images with the ratio @math and thus makes a contribution similar to label smoothing @cite_1.
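For concreteness, a minimal sketch of uniform label smoothing over K classes; the smoothing strength eps is an illustrative choice.

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Replace hard 0/1 targets with intermediate probabilities.

    A sketch of uniform label smoothing: the true class receives 1 - eps
    plus its share of eps, and the remaining mass is spread over all classes.
    """
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

# Example: a 5-class hard label becomes [0.02, 0.02, 0.92, 0.02, 0.02].
print(smooth_labels(np.eye(5)[2], eps=0.1))
```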
|
{
"cite_N": [
"@cite_34",
"@cite_1"
],
"mid": [
"2183341477",
"2963399829"
],
"abstract": [
"Convolutional networks are at the core of most state of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21:2 top-1 and 5:6 top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3:5 top-5 error and 17:3 top-1 error on the validation set and 3:6 top-5 error on the official test set.",
"Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks."
]
}
|
1811.08481
|
2900706690
|
Methods for teaching machines to answer visual questions have made significant progress in the last few years, but although demonstrating impressive results on particular datasets, these methods lack some important human capabilities, including integrating new visual classes and concepts in a modular manner, providing explanations for the answer and handling new domains without new examples. In this paper we present a system that achieves state-of-the-art results on the CLEVR dataset without any questions-answers training, utilizes real visual estimators and explains the answer. The system includes a question representation stage followed by an answering procedure, which invokes an extendable set of visual estimators. It can explain the answer, including its failures, and provide alternatives to negative answers. The scheme builds upon a framework proposed recently, with extensions allowing the system to deal with novel domains without relying on training examples.
|
Much work has been done on visual question answering in recent years @cite_28 @cite_20 @cite_17 @cite_22, developing several methods applied to a number of datasets @cite_44 @cite_43 @cite_6 @cite_24 @cite_41 @cite_4. With some variations, most of this work shares the approach of treating the problem as multi-class classification, selecting answers from the training set. A fusion of image features (based on convolutional neural networks) and question features (mostly based on recurrent neural networks) is used to predict the answer. This approach can yield strong results on a target dataset without the need to "understand" the question explicitly. However, it lacks desirable human skills such as dealing with novel domains without question-answering training, or providing explanations and alternative suggestions when answers are not found.
|
{
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_28",
"@cite_41",
"@cite_6",
"@cite_44",
"@cite_43",
"@cite_24",
"@cite_20",
"@cite_17"
],
"mid": [
"2438044634",
"2529436507",
"2745132836",
"2788643321",
"2561715562",
"2952228917",
"2952228917",
"2949474740",
"2756766706",
"2496096353"
],
"abstract": [
"Visual Question Answering (VQA) has attracted a lot of attention in both Computer Vision and Natural Language Processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone. The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense, or basic factual knowledge to answer, for example. Here we introduce FVQA, a VQA dataset which requires, and supports, much deeper reasoning. FVQA only contains questions which require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answerg triplets, through additional image-question-answer-supporting fact tuples. The supporting fact is represented as a structural triplet, such as . We evaluate several baseline models on the FVQA dataset, and describe a novel model which is capable of reasoning about an image on the basis of supporting facts.",
"Abstract Visual Question Answering (VQA) is a recent problem in computer vision and natural language processing that has garnered a large amount of interest from the deep learning, computer vision, and natural language processing communities. In VQA, an algorithm needs to answer text-based questions about images. Since the release of the first VQA dataset in 2014, additional datasets have been released and many algorithms have been proposed. In this review, we critically examine the current state of VQA in terms of problem formulation, existing datasets, evaluation metrics, and algorithms. In particular, we discuss the limitations of current datasets with regard to their ability to properly train and assess VQA algorithms. We then exhaustively review existing algorithms for VQA. Finally, we discuss possible future directions for VQA and image understanding research.",
"This paper presents a state-of-the-art model for visual question answering (VQA), which won the first place in the 2017 VQA Challenge. VQA is a task of significant importance for research in artificial intelligence, given its multimodal nature, clear evaluation protocol, and potential real-world applications. The performance of deep neural networks for VQA is very dependent on choices of architectures and hyperparameters. To help further research in the area, we describe in detail our high-performing, though relatively simple model. Through a massive exploration of architectures and hyperparameters representing more than 3,000 GPU-hours, we identified tips and tricks that lead to its success, namely: sigmoid outputs, soft training targets, image features from bottom-up attention, gated tanh activations, output embeddings initialized using GloVe and Google Images, large mini-batches, and smart shuffling of training data. We provide a detailed analysis of their impact on performance to assist others in making an appropriate selection.",
"The study of algorithms to automatically answer visual questions currently is motivated by visual question answering (VQA) datasets constructed in artificial VQA settings. We propose VizWiz, the first goal-oriented VQA dataset arising from a natural VQA setting. VizWiz consists of over 31,000 visual questions originating from blind people who each took a picture using a mobile phone and recorded a spoken question about it, together with 10 crowdsourced answers per visual question. VizWiz differs from the many existing VQA datasets because (1) images are captured by blind photographers and so are often poor quality, (2) questions are spoken and so are more conversational, and (3) often visual questions cannot be answered. Evaluation of modern algorithms for answering visual questions and deciding if a visual question is answerable reveals that VizWiz is a challenging dataset. We introduce this dataset to encourage a larger community to develop more generalized algorithms that can assist blind people.",
"When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover short-comings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.",
"Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at this http URL as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation. Specifically, it identifies an image that is similar to the original image, but it believes has a different answer to the same question. This can help in building trust for machines among their users.",
"Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at this http URL as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation. Specifically, it identifies an image that is similar to the original image, but it believes has a different answer to the same question. This can help in building trust for machines among their users.",
"Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked \"What vehicle is the person riding?\", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) in order to answer correctly that \"the person is riding a horse-drawn carriage\". In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 100K images where each image has an average of 21 objects, 18 attributes, and 18 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answers.",
"Visual Question Answering (VQA) presents a unique challenge as it requires the ability to understand and encode the multi-modal inputs - in terms of image processing and natural language processing. The algorithm further needs to learn how to perform reasoning over this multi-modal representation so it can answer the questions correctly. This paper presents a survey of different approaches proposed to solve the problem of Visual Question Answering. We also describe the current state of the art model in later part of paper. In particular, the paper describes the approaches taken by various algorithms to extract image features, text features and the way these are employed to predict answers. We also briefly discuss the experiments performed to evaluate the VQA models and report their performances on diverse datasets including newly released VQA2.0[8].",
"Visual Question Answering (VQA) is a challenging task that has received increasing attention from both the computer vision and the natural language processing communities. Given an image and a question in natural language, it requires reasoning over visual elements of the image and general knowledge to infer the correct answer. In the first part of this survey, we examine the state of the art by comparing modern approaches to the problem. We classify methods by their mechanism to connect the visual and textual modalities. In particular, we examine the common approach of combining convolutional and recurrent neural networks to map images and questions to a common feature space. We also discuss memory-augmented and modular architectures that interface with structured knowledge bases. In the second part of this survey, we review the datasets available for training and evaluating VQA systems. The various datatsets contain questions at different levels of complexity, which require different capabilities and types of reasoning. We examine in depth the question answer pairs from the Visual Genome project, and evaluate the relevance of the structured annotations of images with scene graphs for VQA. Finally, we discuss promising future directions for the field, in particular the connection to structured knowledge bases and the use of natural language processing models."
]
}
|
1811.08481
|
2900706690
|
Methods for teaching machines to answer visual questions have made significant progress in the last few years, but although demonstrating impressive results on particular datasets, these methods lack some important human capabilities, including integrating new visual classes and concepts in a modular manner, providing explanations for the answer and handling new domains without new examples. In this paper we present a system that achieves state-of-the-art results on the CLEVR dataset without any questions-answers training, utilizes real visual estimators and explains the answer. The system includes a question representation stage followed by an answering procedure, which invokes an extendable set of visual estimators. It can explain the answer, including its failures, and provide alternatives to negative answers. The scheme builds upon a framework proposed recently, with extensions allowing the system to deal with novel domains without relying on training examples.
|
As end-to-end training dominates current answering schemes, many works have focused on improving the fused image-question features @cite_19 @cite_18 @cite_1, on various attention mechanisms for selecting important features @cite_42 @cite_11 @cite_5 @cite_38 @cite_36, and on incorporating the outputs of other visual tasks @cite_25 @cite_14 @cite_2 @cite_35.
|
{
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_14",
"@cite_35",
"@cite_36",
"@cite_42",
"@cite_1",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_25",
"@cite_11"
],
"mid": [
"2770883544",
"2616125804",
"2771951981",
"2522258376",
"2951590222",
"2171810632",
"2744822616",
"2963383024",
"2786686366",
"2796001012",
"",
"2340874616"
],
"abstract": [
"Recently, the Visual Question Answering (VQA) task has gained increasing attention in artificial intelligence. Existing VQA methods mainly adopt the visual attention mechanism to associate the input question with corresponding image regions for effective question answering. The free-form region based and the detection-based visual attention mechanisms are mostly investigated, with the former ones attending free-form image regions and the latter ones attending pre-specified detection-box regions. We argue that the two attention mechanisms are able to provide complementary information and should be effectively integrated to better solve the VQA problem. In this paper, we propose a novel deep neural network for VQA that integrates both attention mechanisms. Our proposed framework effectively fuses features from free-form image regions, detection boxes, and question representations via a multi-modal multiplicative feature embedding scheme to jointly attend question-related free-form image regions and detection boxes for more accurate question answering. The proposed method is extensively evaluated on two publicly available datasets, COCO-QA and VQA, and outperforms state-of-the-art approaches. Source code is available at this https URL",
"Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high level associations between question meaning and visual concepts in the image, but they suffer from huge dimensionality issues. We introduce MUTAN, a multimodal tensor-based Tucker decomposition to efficiently parametrize bilinear interactions between visual and textual representations. Additionally to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. With MUTAN, we control the complexity of the merging scheme while keeping nice interpretable fusion relations. We show how our MUTAN model generalizes some of the latest VQA architectures, providing state-of-the-art results.",
"A number of studies have found that today's Visual Question Answering (VQA) models are heavily driven by superficial correlations in the training data and lack sufficient image grounding. To encourage development of models geared towards the latter, we propose a new setting for VQA where for every question type, train and test sets have different prior distributions of answers. Specifically, we present new splits of the VQA v1 and VQA v2 datasets, which we call Visual Question Answering under Changing Priors (VQA-CP v1 and VQA-CP v2 respectively). First, we evaluate several existing VQA models under this new setting and show that their performance degrades significantly compared to the original VQA setting. Second, we propose a novel Grounded Visual Question Answering model (GVQA) that contains inductive biases and restrictions in the architecture specifically designed to prevent the model from 'cheating' by primarily relying on priors in the training data. Specifically, GVQA explicitly disentangles the recognition of visual concepts present in the image from the identification of plausible answer space for a given question, enabling the model to more robustly generalize across different distributions of answers. GVQA is built off an existing VQA model -- Stacked Attention Networks (SAN). Our experiments demonstrate that GVQA significantly outperforms SAN on both VQA-CP v1 and VQA-CP v2 datasets. Interestingly, it also outperforms more powerful VQA models such as Multimodal Compact Bilinear Pooling (MCB) in several cases. GVQA offers strengths complementary to SAN when trained and evaluated on the original VQA v1 and VQA v2 datasets. Finally, GVQA is more transparent and interpretable than existing VQA models.",
"This paper proposes to improve visual question answering (VQA) with structured representations of both scene contents and questions. A key challenge in VQA is to require joint reasoning over the visual and text domains. The predominant CNN LSTM-based approach to VQA is limited by monolithic vector representations that largely ignore structure in the scene and in the form of the question. CNN feature vectors cannot effectively capture situations as simple as multiple object instances, and LSTMs process questions as series of words, which does not reflect the true complexity of language structure. We instead propose to build graphs over the scene objects and over the question words, and we describe a deep neural network that exploits the structure in these representations. This shows significant benefit over the sequential processing of LSTMs. The overall efficacy of our approach is demonstrated by significant improvements over the state-of-the-art, from 71.2 to 74.4 in accuracy on the \"abstract scenes\" multiple-choice benchmark, and from 34.7 to 39.1 in accuracy over pairs of \"balanced\" scenes, i.e. images with fine-grained differences and opposite yes no answers to a same question.",
"Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr SPICE BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.",
"This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.",
"Visual question answering (VQA) is challenging because it requires a simultaneous understanding of both the visual content of images and the textual content of questions. The approaches used to represent the images and questions in a fine-grained manner and questions and to fuse these multi-modal features play key roles in performance. Bilinear pooling based models have been shown to outperform traditional linear models for VQA, but their high-dimensional representations and high computational complexity may seriously limit their applicability in practice. For multi-modal feature fusion, here we develop a Multi-modal Factorized Bilinear (MFB) pooling approach to efficiently and effectively combine multi-modal features, which results in superior performance for VQA compared with other bilinear pooling approaches. For fine-grained image and question representation, we develop a co-attention mechanism using an end-to-end deep network architecture to jointly learn both the image and question attentions. Combining the proposed MFB approach with co-attention learning in a new network architecture provides a unified model for VQA. Our experimental results demonstrate that the single MFB with co-attention model achieves new state-of-the-art performance on the real-world VQA dataset. Code available at this https URL.",
"",
"Visual Question Answering (VQA) is a novel problem domain where multi-modal inputs must be processed in order to solve the task given in the form of a natural language. As the solutions inherently require to combine visual and natural language processing with abstract reasoning, the problem is considered as AI-complete. Recent advances indicate that using high-level, abstract facts extracted from the inputs might facilitate reasoning. Following that direction we decided to develop a solution combining state-of-the-art object detection and reasoning modules. The results, achieved on the well-balanced CLEVR dataset, confirm the promises and show significant, few percent improvements of accuracy on the complex \"counting\" task.",
"A key solution to visual question answering (VQA) exists in how to fuse visual and language features extracted from an input image and question. We show that an attention mechanism that enables dense, bi-directional interactions between the two modalities contributes to boost accuracy of prediction of answers. Specifically, we present a simple architecture that is fully symmetric between visual and language representations, in which each question word attends on image regions and each image region attends on question words. It can be stacked to form a hierarchy for multi-step interactions between an image-question pair. We show through experiments that the proposed architecture achieves a new state-of-the-art on VQA and VQA 2.0 despite its small size. We also present qualitative evaluation, demonstrating how the proposed attention mechanism can generate reasonable attention maps on images and questions, which leads to the correct answer prediction.",
"",
"Visual Question and Answering (VQA) problems are attracting increasing interest from multiple research disciplines. Solving VQA problems requires techniques from both computer vision for understanding the visual contents of a presented image or video, as well as the ones from natural language processing for understanding semantics of the question and generating the answers. Regarding visual content modeling, most of existing VQA methods adopt the strategy of extracting global features from the image or video, which inevitably fails in capturing fine-grained information such as spatial configuration of multiple objects. Extracting features from auto-generated regions -- as some region-based image recognition methods do -- cannot essentially address this problem and may introduce some overwhelming irrelevant features with the question. In this work, we propose a novel Focused Dynamic Attention (FDA) model to provide better aligned image content representation with proposed questions. Being aware of the key words in the question, FDA employs off-the-shelf object detector to identify important regions and fuse the information from the regions and global features via an LSTM unit. Such question-driven representations are then combined with question representation and fed into a reasoning unit for generating the answers. Extensive evaluation on a large-scale benchmark dataset, VQA, clearly demonstrate the superior performance of FDA over well-established baselines."
]
}
|
1811.08481
|
2900706690
|
Methods for teaching machines to answer visual questions have made significant progress in the last few years, but although demonstrating impressive results on particular datasets, these methods lack some important human capabilities, including integrating new visual classes and concepts in a modular manner, providing explanations for the answer and handling new domains without new examples. In this paper we present a system that achieves state-of-the-art results on the CLEVR dataset without any questions-answers training, utilizes real visual estimators and explains the answer. The system includes a question representation stage followed by an answering procedure, which invokes an extendable set of visual estimators. It can explain the answer, including its failures, and provide alternatives to negative answers. The scheme builds upon a framework proposed recently, with extensions allowing the system to deal with novel domains without relying on training examples.
|
Some methods provide reasoning by using "facts" extraction (e.g. scene type) @cite_26 or image caption results @cite_3 @cite_29 @cite_12 . Other methods have focused on integrating external prior knowledge, mostly by producing a query to a knowledge database from the question and the image @cite_4 . Extracted external knowledge has also been fused with question and image representations @cite_30 @cite_23 .
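As a rough illustration of this knowledge-integration idea (building a query from question and image concepts, then fusing the retrieved facts with the other representations), the toy sketch below uses a small Python dictionary as a stand-in knowledge base; the entries, the embed helper, and the fusion by concatenation are our assumptions and do not reproduce any cited system.

```python
# Toy sketch: query an external knowledge base with question/image concepts and
# fuse the retrieved fact with question and image representations.
import numpy as np

knowledge_base = {                      # hypothetical stand-in for an external KB
    ("umbrella", "used_for"): "protection from rain",
    ("dog", "is_a"): "domesticated animal",
}

def embed(text, dim=64):
    """Toy hash-seeded text embedding, used only to make the sketch runnable."""
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    return rng.standard_normal(dim)

def answer_with_knowledge(question, detected_objects):
    # Retrieve facts about the detected concepts from the toy knowledge base.
    facts = [knowledge_base[(obj, rel)]
             for obj in detected_objects
             for rel in ("used_for", "is_a")
             if (obj, rel) in knowledge_base]
    q_vec = embed(question)
    img_vec = np.mean([embed(o) for o in detected_objects], axis=0)
    kb_vec = np.mean([embed(f) for f in facts], axis=0) if facts else np.zeros_like(q_vec)
    fused = np.concatenate([q_vec, img_vec, kb_vec])   # would feed an answer predictor
    return facts, fused.shape

print(answer_with_knowledge("What is the umbrella used for?", ["umbrella", "person"]))
```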
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_4",
"@cite_29",
"@cite_3",
"@cite_23",
"@cite_12"
],
"mid": [
"2176212817",
"2584723080",
"2438044634",
"2963565420",
"2790777763",
"2775221064",
"2787553714"
],
"abstract": [
"We propose a method for visual question answering which combines an internal representation of the content of an image with information extracted from a general knowledge base to answer a broad range of image-based questions. This allows more complex questions to be answered using the predominant neural network-based approach than has previously been possible. It particularly allows questions to be asked about the contents of an image, even when the image itself does not contain the whole answer. The method constructs a textual representation of the semantic content of an image, and merges it with textual information sourced from a knowledge base, to develop a deeper understanding of the scene viewed. Priming a recurrent neural network with this combined information, and the submitted question, leads to a very flexible visual question answering approach. We are specifically able to answer questions posed in natural language, that refer to information not contained in the image. We demonstrate the effectiveness of our model on two publicly available datasets, Toronto COCO-QA and MS COCO-VQA and show that it produces the best reported results in both cases.",
"One of the most intriguing features of the Visual Question Answering (VQA) challenge is the unpredictability of the questions. Extracting the information required to answer them demands a variety of image operations from detection and counting, to segmentation and reconstruction. To train a method to perform even one of these operations accurately from image, question, answer tuples would be challenging, but to aim to achieve them all with a limited set of such training data seems ambitious at best. Our method thus learns how to exploit a set of external off-the-shelf algorithms to achieve its goal, an approach that has something in common with the Neural Turing Machine [10]. The core of our proposed method is a new co-attention model. In addition, the proposed approach generates human-readable reasons for its decision, and can still be trained end-to-end without ground truth reasons being given. We demonstrate the effectiveness on two publicly available datasets, Visual Genome and VQA, and show that it produces the state-of-the-art results in both cases.",
"Visual Question Answering (VQA) has attracted a lot of attention in both Computer Vision and Natural Language Processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone. The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense, or basic factual knowledge to answer, for example. Here we introduce FVQA, a VQA dataset which requires, and supports, much deeper reasoning. FVQA only contains questions which require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answerg triplets, through additional image-question-answer-supporting fact tuples. The supporting fact is represented as a structural triplet, such as . We evaluate several baseline models on the FVQA dataset, and describe a novel model which is capable of reasoning about an image on the basis of supporting facts.",
"",
"Most existing works in visual question answering (VQA) are dedicated to improving the accuracy of predicted answers, while disregarding the explanations. We argue that the explanation for an answer is of the same or even more importance compared with the answer itself, since it makes the question and answering process more understandable and traceable. To this end, we propose a new task of VQA-E (VQA with Explanation), where the computational models are required to generate an explanation with the predicted answer. We first construct a new dataset, and then frame the VQA-E problem in a multi-task learning architecture. Our VQA-E dataset is automatically derived from the VQA v2 dataset by intelligently exploiting the available captions. We have conducted a user study to validate the quality of explanations synthesized by our method. We quantitatively show that the additional supervision from explanations can not only produce insightful textual sentences to justify the answers, but also improve the performance of answer prediction. Our model outperforms the state-of-the-art methods by a clear margin on the VQA v2 dataset.",
"Visual Question Answering (VQA) has attracted much attention since it offers insight into the relationships between the multi-modal analysis of images and natural language. Most of the current algorithms are incapable of answering open-domain questions that require to perform reasoning beyond the image contents. To address this issue, we propose a novel framework which endows the model capabilities in answering more complex questions by leveraging massive external knowledge with dynamic memory networks. Specifically, the questions along with the corresponding images trigger a process to retrieve the relevant information in external knowledge bases, which are embedded into a continuous vector space by preserving the entity-relation structures. Afterwards, we employ dynamic memory networks to attend to the large body of facts in the knowledge graph and images, and then perform reasoning over these facts to generate corresponding answers. Extensive experiments demonstrate that our model not only achieves the state-of-the-art performance in the visual question answering task, but can also answer open-domain questions effectively by leveraging the external knowledge.",
"Visual Question Answering (VQA) has attracted attention from both computer vision and natural language processing communities. Most existing approaches adopt the pipeline of representing an image via pre-trained CNNs, and then using the uninterpretable CNN features in conjunction with the question to predict the answer. Although such end-to-end models might report promising performance, they rarely provide any insight, apart from the answer, into the VQA process. In this work, we propose to break up the end-to-end VQA into two steps: explaining and reasoning, in an attempt towards a more explainable VQA by shedding light on the intermediate results between these two steps. To that end, we first extract attributes and generate descriptions as explanations for an image using pre-trained attribute detectors and image captioning models, respectively. Next, a reasoning module utilizes these explanations in place of the image to infer an answer to the question. The advantages of such a breakdown include: (1) the attributes and captions can reflect what the system extracts from the image, thus can provide some explanations for the predicted answer; (2) these intermediate results can help us identify the inabilities of both the image understanding part and the answer inference part when the predicted answer is wrong. We conduct extensive experiments on a popular VQA dataset and dissect all results according to several measurements of the explanation quality. Our system achieves comparable performance with the state-of-the-art, yet with added benefits of explainability and the inherent ability to further improve with higher quality explanations."
]
}
|
1811.08481
|
2900706690
|
Methods for teaching machines to answer visual questions have made significant progress in the last few years, but although demonstrating impressive results on particular datasets, these methods lack some important human capabilities, including integrating new visual classes and concepts in a modular manner, providing explanations for the answer and handling new domains without new examples. In this paper we present a system that achieves state-of-the-art results on the CLEVR dataset without any questions-answers training, utilizes real visual estimators and explains the answer. The system includes a question representation stage followed by an answering procedure, which invokes an extendable set of visual estimators. It can explain the answer, including its failures, and provide alternatives to negative answers. The scheme builds upon a framework proposed recently, with extensions allowing the system to deal with novel domains without relying on training examples.
|
A compositional approach that builds a dynamic network out of trained modules is proposed by the Neural Module Network (NMN) works. The module structure was originally based on the dependency parsing of the question @cite_39 @cite_32 . The following versions included supervised learning of the module arrangement @cite_10 @cite_7 according to annotations of the answering programs, which are available for the CLEVR dataset @cite_6 . While the modules are assigned according to a meaningful learned program, they are trained only as components of an answering network for a specific dataset. They do not function as independent visual estimators and hence cannot be modified or replaced by existing methods; consequently, modular addition of independent modules is not possible. As with other methods, a large number of question-answer examples is required for training the system, in addition to question-program training. The answers are selected by classification, with no means for providing explanations or proposing alternatives. Our approach, in contrast, allows flexible integration of additional visual capabilities (e.g. a novel object class), provides elaborated answers, proposes alternatives, and uses external knowledge. This is obtained without any question-answer examples.
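The compositional idea can be illustrated with a minimal sketch that assembles shared neural modules according to a question-specific layout; the module set (FindModule, AndModule, CountModule) and the hand-written layout below are our assumptions for illustration, whereas in the cited works the layout comes from a parser or a learned program predictor and the modules are trained end to end.

```python
# Minimal sketch of assembling a question-specific answering network from
# reusable neural modules, in the spirit of module networks (assumed example).
import torch
import torch.nn as nn

class FindModule(nn.Module):
    """Produces a soft attention map for one concept over a feature map."""
    def __init__(self, feat_dim):
        super().__init__()
        self.conv = nn.Conv2d(feat_dim, 1, kernel_size=1)
    def forward(self, feat):
        return torch.sigmoid(self.conv(feat))               # (B, 1, H, W)

class AndModule(nn.Module):
    """Intersects two attention maps."""
    def forward(self, att_a, att_b):
        return torch.minimum(att_a, att_b)

class CountModule(nn.Module):
    """Maps an attention map to a count estimate."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(1, 1)
    def forward(self, att):
        return self.fc(att.sum(dim=(2, 3)))                 # (B, 1)

# Shared, reusable modules; the layout below (for "How many red balls?") is a
# hand-written stand-in for a parsed or predicted program.
feat = torch.randn(1, 256, 14, 14)
find_red, find_ball = FindModule(256), FindModule(256)
layout = lambda f: CountModule()(AndModule()(find_red(f), find_ball(f)))
print(layout(feat).shape)   # torch.Size([1, 1])
```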
|
{
"cite_N": [
"@cite_7",
"@cite_32",
"@cite_6",
"@cite_39",
"@cite_10"
],
"mid": [
"2963224792",
"2230472587",
"2561715562",
"2416885651",
"2613404084"
],
"abstract": [
"Natural language questions are inherently compositional, and many are most easily answered by reasoning about their decomposition into modular sub-problems. For example, to answer “is there an equal number of balls and boxes?” we can look for balls, look for boxes, count them, and compare the results. The recently proposed Neural Module Network (NMN) architecture [3, 2] implements this approach to question answering by parsing questions into linguistic substructures and assembling question-specific deep networks from smaller modules that each solve one subtask. However, existing NMN implementations rely on brittle off-the-shelf parsers, and are restricted to the module configurations proposed by these parsers rather than learning them from data. In this paper, we propose End-to-End Module Networks (N2NMNs), which learn to reason by directly predicting instance-specific network layouts without the aid of a parser. Our model learns to generate network structures (by imitating expert demonstrations) while simultaneously learning network parameters (using the downstream task loss). Experimental results on the new CLEVR dataset targeted at compositional question answering show that N2NMNs achieve an error reduction of nearly 50 relative to state-of-theart attentional approaches, while discovering interpretable network architectures specialized for each question.",
"We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning, with only (world, question, answer) triples as supervision. Our approach, which we term a dynamic neural model network, achieves state-of-the-art results on benchmark datasets in both visual and structured domains.",
"When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover short-comings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.",
"Visual question answering is fundamentally compositional in nature---a question like \"where is the dog?\" shares substructure with questions like \"what color is the dog?\" and \"where is the cat?\" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural \"modules\" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.",
"Existing methods for visual reasoning attempt to directly map inputs to outputs using black-box architectures without explicitly modeling the underlying reasoning processes. As a result, these black-box models often learn to exploit biases in the data rather than learning to perform visual reasoning. Inspired by module networks, this paper proposes a model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer. Both the program generator and the execution engine are implemented by neural networks, and are trained using a combination of backpropagation and REINFORCE. Using the CLEVR benchmark for visual reasoning, we show that our model significantly outperforms strong baselines and generalizes better in a variety of settings."
]
}
|
1811.08481
|
2900706690
|
Methods for teaching machines to answer visual questions have made significant progress in the last few years, but although demonstrating impressive results on particular datasets, these methods lack some important human capabilities, including integrating new visual classes and concepts in a modular manner, providing explanations for the answer and handling new domains without new examples. In this paper we present a system that achieves state-of-the-art results on the CLEVR dataset without any questions-answers training, utilizes real visual estimators and explains the answer. The system includes a question representation stage followed by an answering procedure, which invokes an extendable set of visual estimators. It can explain the answer, including its failures, and provide alternatives to negative answers. The scheme builds upon a framework proposed recently, with extensions allowing the system to deal with novel domains without relying on training examples.
|
In parallel to our work, a method was proposed that learns to generate a program and carry it out on a full scene analysis (object detection and property classification) @cite_27 . This method uses question-answer training to learn the programs. It performs a full scene analysis, which may become infeasible for datasets that are less restricted than CLEVR. In our method, the answering process is guided by the question and does not perform a full scene analysis in order to produce the answer.
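A toy sketch of this "analyze the whole scene, then execute the program" strategy is given below: a scene table, which in the cited work would come from an object detector and attribute classifiers, is processed symbolically by a predicted program. The operation names and the example program are our assumptions, not the cited implementation.

```python
# Toy sketch: symbolic execution of a question program over a full scene table
# produced by detection and attribute classification (assumed example).
scene = [   # would come from an object detector + attribute classifiers
    {"id": 0, "shape": "cube",   "color": "red",  "size": "large"},
    {"id": 1, "shape": "sphere", "color": "blue", "size": "small"},
    {"id": 2, "shape": "cube",   "color": "blue", "size": "small"},
]

def run_program(program, scene):
    """Execute a linear sequence of (operation, argument) steps on the scene."""
    objs = scene
    for op, arg in program:
        if op == "filter":
            key, val = arg
            objs = [o for o in objs if o[key] == val]
        elif op == "count":
            return len(objs)
        elif op == "query":
            return objs[0][arg] if objs else None
    return objs

# "What color is the small cube?"
program = [("filter", ("size", "small")), ("filter", ("shape", "cube")), ("query", "color")]
print(run_program(program, scene))   # -> "blue"
```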
|
{
"cite_N": [
"@cite_27"
],
"mid": [
"2891021031"
],
"abstract": [
"We marry two powerful ideas: deep representation learning for visual recognition and language understanding, and symbolic program execution for reasoning. Our neural-symbolic visual question answering (NS-VQA) system first recovers a structural scene representation from the image and a program trace from the question. It then executes the program on the scene representation to obtain an answer. Incorporating symbolic structure as prior knowledge offers three unique advantages. First, executing programs on a symbolic space is more robust to long program traces; our model can solve complex reasoning tasks better, achieving an accuracy of 99.8 on the CLEVR dataset. Second, the model is more data- and memory-efficient: it performs well after learning on a small number of training data; it can also encode an image into a compact representation, requiring less storage than existing methods for offline question answering. Third, symbolic program execution offers full transparency to the reasoning process; we are thus able to interpret and diagnose each execution step."
]
}
|
1811.08481
|
2900706690
|
Methods for teaching machines to answer visual questions have made significant progress in the last few years, but although demonstrating impressive results on particular datasets, these methods lack some important human capabilities, including integrating new visual classes and concepts in a modular manner, providing explanations for the answer and handling new domains without new examples. In this paper we present a system that achieves state-of-the-art results on the CLEVR dataset without any questions-answers training, utilizes real visual estimators and explains the answer. The system includes a question representation stage followed by an answering procedure, which invokes an extendable set of visual estimators. It can explain the answer, including its failures, and provide alternatives to negative answers. The scheme builds upon a framework proposed recently, with extensions allowing the system to deal with novel domains without relying on training examples.
|
The framework that we follow @cite_37 splits the answering task into question-to-graph mapping, followed by a recursive answering procedure. The mapping to a graph representation utilizes the START parser @cite_31 @cite_16 to obtain a representation where the nodes represent objects and their required information (e.g. properties and quantifiers), and the edges represent relations between objects. The answering procedure utilizes several visual estimators for detecting objects and classifying their properties and the relations between them. An external knowledge database @cite_33 was used to extract information on question concepts that relates them to recognizable classes (e.g. finding synonyms). In our work, we train novel question-to-graph mappers and apply them to different sizes and types of vocabulary. We extend the scope of the graph representation to support additional types of questions, and train and generate new visual estimators. This results in a system that achieves state-of-the-art results on the CLEVR dataset @cite_6 with no question-answering training, and provides models that both perform well on CLEVR and can represent questions from different domains.
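The following minimal sketch illustrates, under our own simplifying assumptions, how such a graph-guided answering procedure can invoke visual estimators node by node: the detect, has_property and relation_holds functions are stubs standing in for real detectors and classifiers, and the graph format is hypothetical rather than the one used by the cited framework.

```python
# Minimal sketch of answering over a question graph: each node names an object
# class and required properties, each edge a relation; stubbed visual
# estimators are invoked node by node (assumed example).
def detect(obj_class, image):                  # stub for an object detector
    return [{"class": obj_class, "box": (0, 0, 10, 10)}]

def has_property(det, prop, image):            # stub for a property classifier
    return True

def relation_holds(det_a, det_b, rel, image):  # stub for a relation classifier
    return True

def satisfying_detections(node_id, graph, image):
    """Detections of this node's class that satisfy its properties and its
    relations to other nodes (checked recursively)."""
    node = graph[node_id]
    results = []
    for det in detect(node["class"], image):
        if not all(has_property(det, p, image) for p in node["properties"]):
            continue
        ok = True
        for rel, other_id in node.get("relations", []):
            others = satisfying_detections(other_id, graph, image)
            if not any(relation_holds(det, d2, rel, image) for d2 in others):
                ok = False
                break
        if ok:
            results.append(det)
    return results

# "Is there a red ball to the left of a box?"
graph = {
    "n0": {"class": "ball", "properties": ["red"], "relations": [("left_of", "n1")]},
    "n1": {"class": "box",  "properties": [],      "relations": []},
}
print(bool(satisfying_detections("n0", graph, image=None)))   # -> True with the stubs
```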
|
{
"cite_N": [
"@cite_37",
"@cite_33",
"@cite_6",
"@cite_31",
"@cite_16"
],
"mid": [
"2898446106",
"13682356",
"2561715562",
"1519791307",
"164524185"
],
"abstract": [
"An image related question defines a specific visual task that is required in order to produce an appropriate answer. The answer may depend on a minor detail in the image and require complex reasoning and use of prior knowledge. When humans perform this task, they are able to do it in a flexible and robust manner, integrating modularly any novel visual capability with diverse options for various elaborations of the task. In contrast, current approaches to solve this problem by a machine are based on casting the problem as an end-to-end learning problem, which lacks such abilities. We present a different approach, inspired by the aforementioned human capabilities. The approach is based on the compositional structure of the question. The underlying idea is that a question has an abstract representation based on its structure, which is compositional in nature. The question can consequently be answered by a composition of procedures corresponding to its substructures. The basic elements of the representation are logical patterns, which are put together to represent the question. These patterns include a parametric representation for object classes, properties and relations. Each basic pattern is mapped into a basic procedure that includes meaningful visual tasks, and the patterns are composed to produce the overall answering procedure. The UnCoRd (Understand Compose and Respond) system, based on this approach, integrates existing detection and classification schemes for a set of object classes, properties and relations. These schemes are incorporated in a modular manner, providing elaborated answers and corrections for negative answers. In addition, an external knowledge base is queried for required common-knowledge. We performed a qualitative analysis of the system, which demonstrates its representation capabilities and provide suggestions for future developments.",
"ConceptNet is a knowledge representation project, providing a large semantic graph that describes general human knowledge and how it is expressed in natural language. Here we present the latest iteration, ConceptNet 5, with a focus on its fundamental design decisions and ways to interoperate with it.",
"When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover short-comings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.",
"This paper describes a natural language system, START. The system analyzes English text and automatically transforms it into an appropriate representation, the knowledge base , which incorporates the information found in the text. The user gains access to information stored in the knowledge base by querying it in English. The system analyzes the query and decides through a matching process what information in the knowledge base is relevant to the question. Then it retrieves this information and formulates its response in English.",
"This paper describes the START Information Server built at the MIT Artificial Intelligence Laboratory. Available on the World Wide Web since December 1993, the START Server provides users with access to multi-media information in response to questions formulated in English. Over the last 3 years, the START Server answered hundreds of thousands of questions from users all over the world. The START Server is built on two foundations: the sentence-level Natural Language processing capability provided by the START Natural Language system and the idea of natural language annotations for multi-media information segments. This paper starts with an overview of sentence-level processing in the START system and then explains how annotating information segments with collections of English sentences makes it possible to use the power of sentence-level natural language processing in the service of multi-media information access. The paper ends with a proposal to annotate the World Wide Web."
]
}
|
1811.08481
|
2900706690
|
Methods for teaching machines to answer visual questions have made significant progress in the last few years, but although demonstrating impressive results on particular datasets, these methods lack some important human capabilities, including integrating new visual classes and concepts in a modular manner, providing explanations for the answer and handling new domains without new examples. In this paper we present a system that achieves state-of-the-art results on the CLEVR dataset without any questions-answers training, utilizes real visual estimators and explains the answer. The system includes a question representation stage followed by an answering procedure, which invokes an extendable set of visual estimators. It can explain the answer, including its failures, and provide alternatives to negative answers. The scheme builds upon a framework proposed recently, with extensions allowing the system to deal with novel domains without relying on training examples.
|
Current methods fit models to particular datasets and exploit their inherent biases, which can lead to ignoring parts of the question and the image, and to failures on novel domains @cite_9 . In contrast to the modular approach we pursue, each modification or upgrade of these methods requires a full retraining.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2463267937"
],
"abstract": [
"Recently, a number of deep-learning based models have been proposed for the task of Visual Question Answering (VQA). The performance of most models is clustered around 60-70 . In this paper we propose systematic methods to analyze the behavior of these models as a first step towards recognizing their strengths and weaknesses, and identifying the most fruitful directions for progress. We analyze two models, one each from two major classes of VQA models -- with-attention and without-attention and show the similarities and differences in the behavior of these models. We also analyze the winning entry of the VQA Challenge 2016. Our behavior analysis reveals that despite recent progress, today's VQA models are \"myopic\" (tend to fail on sufficiently novel instances), often \"jump to conclusions\" (converge on a predicted answer after 'listening' to just half the question), and are \"stubborn\" (do not change their answers across images)."
]
}
|
1811.08201
|
2901189993
|
The demand for applying semantic segmentation models on mobile devices has been increasing rapidly. Current state-of-the-art networks have an enormous amount of parameters, hence unsuitable for mobile devices, while other small memory footprint models ignore the inherent characteristic of semantic segmentation. To tackle this problem, we propose a novel Context Guided Network (CGNet), which is a light-weight network for semantic segmentation on mobile devices. We first propose the Context Guided (CG) block, which learns the joint feature of both local feature and surrounding context, and further improves the joint feature with the global context. Based on the CG block, we develop Context Guided Network (CGNet), which captures contextual information in all stages of the network and is specially tailored for increasing segmentation accuracy. CGNet is also elaborately designed to reduce the number of parameters and save memory footprint. Under an equivalent number of parameters, the proposed CGNet significantly outperforms existing segmentation networks. Extensive experiments on Cityscapes and CamVid datasets verify the effectiveness of the proposed approach. Specifically, without any post-processing, CGNet achieves 64.8 mean IoU on Cityscapes with less than 0.5 M parameters, and has a frame-rate of 50 fps on one NVIDIA Tesla K80 card for 2048 @math 1024 high-resolution images. The source code for the complete system is publicly available.
|
Small semantic segmentation models require a good trade-off between accuracy and the number of parameters or the memory footprint. ENet @cite_43 proposes to discard the last stage of the model and shows that semantic segmentation is feasible on embedded devices. ICNet @cite_33 proposes a compressed-PSPNet-based image cascade network to speed up semantic segmentation. The more recent ESPNet @cite_13 introduces a fast and efficient convolutional network for semantic segmentation of high-resolution images under resource constraints. Most of these models follow the design principles of image classification, which limits their segmentation accuracy.
|
{
"cite_N": [
"@cite_43",
"@cite_13",
"@cite_33"
],
"mid": [
"2419448466",
"2902446117",
"2611259176"
],
"abstract": [
"The ability to perform pixel-wise semantic segmentation in real-time is of paramount importance in practical mobile applications. Recent deep neural networks aimed at this task have the disadvantage of requiring a large number of floating point operations and have long run-times that hinder their usability. In this paper, we propose a novel deep neural network architecture named ENet (efficient neural network), created specifically for tasks requiring low latency operation. ENet is up to 18x faster, requires 75x less FLOPs, has 79x less parameters, and provides similar or better accuracy to existing models. We have tested it on CamVid, Cityscapes and SUN datasets and report on comparisons with existing state-of-the-art methods, and the trade-offs between accuracy and processing time of a network. We present performance measurements of the proposed architecture on embedded systems and suggest possible software improvements that could make ENet even faster.",
"In this paper we present a novel framework for geolocalizing Unmanned Aerial Vehicles (UAVs) using only their onboard camera. The framework exploits the abundance of satellite imagery, along with established computer vision and deep learning methods, to locate the UAV in a satellite imagery map. It utilizes the contextual information extracted from the scene to attain increased geolocalization accuracy and enable navigation without the use of a Global Positioning System (GPS), which is advantageous in GPS-denied environments and provides additional enhancement to existing GPS-based systems. The framework inputs two images at a time, one captured using a UAV-mounted downlooking camera, and the other synthetically generated from the satellite map based on the UAV location within the map. Local features are extracted and used to register both images, a process that is performed recurrently to relate UAV motion to its actual map position, hence performing preliminary localization. A semantic shape matching algorithm is subsequently applied to extract and match meaningful shape information from both images, and use this information to improve localization accuracy. The framework is evaluated on two different datasets representing different geographical regions. Obtained results demonstrate the viability of proposed method and that the utilization of visual information can offer a promising approach for unconstrained UAV navigation and enable the aerial platform to be self-aware of its surroundings thus opening up new application domains or enhancing existing ones.",
"We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve high-quality segmentation. Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff."
]
}
|
1811.08495
|
2900627348
|
Recently, several techniques have been explored to detect unusual behaviour in surveillance videos. Nevertheless, few studies leverage features from pre-trained CNNs and none of them presents a comparison of features generated by different models. Motivated by this gap, we compare features extracted by four state-of-the-art image classification networks as a way of describing patches from security video frames. We carry out experiments on the Ped1 and Ped2 datasets and analyze the usage of different feature normalization techniques. Our results indicate that choosing the appropriate normalization is crucial to improve the anomaly detection performance when working with CNN features. Also, in the Ped2 dataset our approach was able to obtain results comparable to the ones of several state-of-the-art methods. Lastly, as our method only considers the appearance of each frame, we believe that it can be combined with approaches that focus on motion patterns to further improve performance.
|
Recently, the usage of CNN features has been explored to detect unusual activities in surveillance footage. In @cite_0 , the authors describe image regions using AlexNet @cite_11 and then track variations in this description over time in order to detect anomalies. With this tracking, they are able to use an image-based CNN to find both motion and appearance anomalies.
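A minimal sketch of this kind of patch-level CNN description with temporal change tracking is shown below; the backbone choice, patch size, and the simple thresholding rule are our own assumptions for illustration, not the cited method.

```python
# Minimal sketch: describe frame patches with a pre-trained CNN and flag patches
# whose description changes abruptly between consecutive frames (assumed example).
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.alexnet(weights=None)          # use AlexNet_Weights.DEFAULT for real features
backbone.classifier = backbone.classifier[:-1]   # drop the class layer -> 4096-d descriptors
backbone.eval()

def patch_features(frame, patch_size=64):
    """frame: (3, H, W) tensor; returns (num_patches, 4096) descriptors."""
    patches = frame.unfold(1, patch_size, patch_size).unfold(2, patch_size, patch_size)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, 3, patch_size, patch_size)
    patches = F.interpolate(patches, size=(224, 224))   # AlexNet input size
    with torch.no_grad():
        return backbone(patches)

prev, curr = torch.rand(3, 128, 192), torch.rand(3, 128, 192)
change = torch.norm(patch_features(curr) - patch_features(prev), dim=1)
# Flag patches whose descriptor changed much more than the average patch.
anomalous_patches = (change > change.mean() + 2 * change.std()).nonzero().flatten()
print(anomalous_patches)
```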
|
{
"cite_N": [
"@cite_0",
"@cite_11"
],
"mid": [
"2963111876",
"2788311693"
],
"abstract": [
"Most of the crowd abnormal event detection methods rely on complex hand-crafted features to represent the crowd motion and appearance. Convolutional Neural Networks (CNN) have shown to be a powerful instrument with excellent representational capacities, which can leverage the need for hand-crafted features. In this paper, we show that keeping track of the changes in the CNN feature across time can be used to effectively detect local anomalies. Specifically, we propose to measure local abnormality by combining semantic information (inherited from existing CNN models) with low-level optical-flow. One of the advantages of this method is that it can be used without the fine-tuning phase. The proposed method is validated on challenging abnormality detection datasets and the results show the superiority of our approach compared with the state-of-theart methods.",
"The presence of noise represent a relevant issue in image feature extraction and classification. In deep learning, representation is learned directly from the data and, therefore, the classification model is influenced by the quality of the input. However, the ability of deep convolutional neural networks to deal with images that have a different quality when compare to those used to train the network is still to be fully understood. In this paper, we evaluate the generalization of models learned by different networks using noisy images. Our results show that noise cause the classification problem to become harder. However, when image quality is prone to variations after deployment, it might be advantageous to employ models learned using noisy data."
]
}
|
1811.08495
|
2900627348
|
Recently, several techniques have been explored to detect unusual behaviour in surveillance videos. Nevertheless, few studies leverage features from pre-trained CNNs and none of them presents a comparison of features generated by different models. Motivated by this gap, we compare features extracted by four state-of-the-art image classification networks as a way of describing patches from security video frames. We carry out experiments on the Ped1 and Ped2 datasets and analyze the usage of different feature normalization techniques. Our results indicate that choosing the appropriate normalization is crucial to improve the anomaly detection performance when working with CNN features. Also, in the Ped2 dataset our approach was able to obtain results comparable to the ones of several state-of-the-art methods. Lastly, as our method only considers the appearance of each frame, we believe that it can be combined with approaches that focus on motion patterns to further improve performance.
|
In @cite_39 , a pre-trained C3D model @cite_34 was employed for feature extraction. This model was originally designed for action recognition in videos, so it learns spatio-temporal representations, which are very useful for dealing with both motion and appearance abnormalities. Nevertheless, in @cite_39 the pre-trained model is used in a classification setup, and therefore it has access to instances of both normal and anomalous behaviour during training.
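For illustration, the sketch below pairs spatio-temporal clip features with a small binary classifier trained on both normal and anomalous clips, mirroring the supervised setup described above; torchvision's r3d_18 is used only as a convenient stand-in for a C3D backbone, and the shapes and labels are hypothetical.

```python
# Illustrative sketch: 3D-CNN clip features feed a binary classifier that sees
# both normal and anomalous clips during training (assumed example).
import torch
import torch.nn as nn
import torchvision.models.video as video_models

backbone = video_models.r3d_18(weights=None)     # pre-trained weights in real use
backbone.fc = nn.Identity()                      # keep the 512-d clip features
classifier = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 1))

clips = torch.rand(4, 3, 16, 112, 112)           # (batch, channels, frames, H, W)
labels = torch.tensor([0., 0., 1., 1.])          # normal vs. anomalous clips
with torch.no_grad():
    feats = backbone(clips)                      # frozen spatio-temporal features
loss = nn.BCEWithLogitsLoss()(classifier(feats).squeeze(1), labels)
loss.backward()                                  # trains only the small classifier head
```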
|
{
"cite_N": [
"@cite_34",
"@cite_39"
],
"mid": [
"2952633803",
"2783882151"
],
"abstract": [
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"Surveillance videos are able to capture a variety of realistic anomalies. In this paper, we propose to learn anomalies by exploiting both normal and anomalous videos. To avoid annotating the anomalous segments or clips in training videos, which is very time consuming, we propose to learn anomaly through the deep multiple instance ranking framework by leveraging weakly labeled training videos, i.e. the training labels (anomalous or normal) are at video-level instead of clip-level. In our approach, we consider normal and anomalous videos as bags and video segments as instances in multiple instance learning (MIL), and automatically learn a deep anomaly ranking model that predicts high anomaly scores for anomalous video segments. Furthermore, we introduce sparsity and temporal smoothness constraints in the ranking loss function to better localize anomaly during training. We also introduce a new large-scale first of its kind dataset of 128 hours of videos. It consists of 1900 long and untrimmed real-world surveillance videos, with 13 realistic anomalies such as fighting, road accident, burglary, robbery, etc. as well as normal activities. This dataset can be used for two tasks. First, general anomaly detection considering all anomalies in one group and all normal activities in another group. Second, for recognizing each of 13 anomalous activities. Our experimental results show that our MIL method for anomaly detection achieves significant improvement on anomaly detection performance as compared to the state-of-the-art approaches. We provide the results of several recent deep learning baselines on anomalous activity recognition. The low recognition performance of these baselines reveals that our dataset is very challenging and opens more opportunities for future work."
]
}
|
1811.08264
|
2962202279
|
Human-Object Interaction (HOI) Detection is an important problem to understand how humans interact with objects. In this paper, we explore Interactiveness Knowledge which indicates whether human and object interact with each other or not. We found that interactiveness knowledge can be learned across HOI datasets, regardless of HOI category settings. Our core idea is to exploit an Interactiveness Network to learn the general interactiveness knowledge from multiple HOI datasets and perform Non-Interaction Suppression before HOI classification in inference. On account of the generalization of interactiveness, interactiveness network is a transferable knowledge learner and can be cooperated with any HOI detection models to achieve desirable results. We extensively evaluate the proposed method on HICO-DET and V-COCO datasets. Our framework outperforms state-of-the-art HOI detection results by a great margin, verifying its efficacy and flexibility. Code is available at this https URL
|
Visual Relationship Detection. Visual relationship detection @cite_31 @cite_22 @cite_13 @cite_0 aims to detect the objects and classify their relationships simultaneously. In @cite_22 , Lu et al. proposed the relationship dataset VRD and an approach combined with language priors. Predicates within a relationship triplet @math include actions, verbs, spatial and preposition vocabularies. Such a vocabulary setting and the severe long-tail issue within the dataset make this task quite difficult. The large-scale dataset Visual Genome @cite_13 was then proposed to promote studies of this problem. Recent works @cite_25 @cite_5 @cite_30 @cite_18 @cite_21 focus on more effective visual feature extraction and try to exploit semantic information to refine relationship detection.
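A toy sketch of combining a visual predicate score with a language prior over class pairs is given below; the predicate list, the prior values, and the spatial heuristic in visual_score are all hypothetical and serve only to illustrate the triplet-scoring idea behind language-prior approaches.

```python
# Toy sketch: score <subject, predicate, object> triplets by combining a visual
# score with a language prior over predicates for the class pair (assumed example).
import numpy as np

PREDICATES = ["on", "ride", "push", "next_to"]

language_prior = {               # hypothetical co-occurrence statistics
    ("person", "bicycle"): np.array([0.05, 0.70, 0.15, 0.10]),
    ("cat", "sofa"):       np.array([0.80, 0.01, 0.01, 0.18]),
}

def visual_score(subj_box, obj_box):
    """Stand-in for a CNN scoring predicates from the union region; a simple
    vertical-offset cue is used here just to make the sketch concrete."""
    above = float(subj_box[1] < obj_box[1])
    return np.array([0.25 + 0.25 * above, 0.25, 0.25, 0.25 - 0.25 * above + 1e-6])

def detect_relationship(subj, subj_box, obj, obj_box):
    scores = visual_score(subj_box, obj_box) * language_prior[(subj, obj)]
    return PREDICATES[int(np.argmax(scores))]

print(detect_relationship("person", (50, 20, 90, 80), "bicycle", (40, 60, 100, 120)))
# -> "ride" (the language prior dominates for this class pair)
```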
|
{
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_22",
"@cite_21",
"@cite_0",
"@cite_5",
"@cite_31",
"@cite_13",
"@cite_25"
],
"mid": [
"2883879068",
"2896603934",
"2479423890",
"2950531422",
"2423576022",
"2591644541",
"2049705550",
"2277195237",
"2579549467"
],
"abstract": [
"Detecting the relations among objects, such as \"cat on sofa\" and \"person ride horse\", is a crucial task in image understanding, and beneficial to bridging the semantic gap between images and natural language. Despite the remarkable progress of deep learning in detection and recognition of individual objects, it is still a challenging task to localize and recognize the relations between objects due to the complex combinatorial nature of various kinds of object relations. Inspired by the recent advances in one-shot learning, we propose a simple yet effective Semantics Induced Learner (SIL) model for solving this challenging task. Learning in one-shot manner can enable a detection model to adapt to a huge number of object relations with diverse appearance effectively and robustly. In addition, the SIL combines bottom-up and top-down attention mech- anisms, therefore enabling attention at the level of vision and semantics favorably. Within our proposed model, the bottom-up mechanism, which is based on Faster R-CNN, proposes objects regions, and the top-down mechanism selects and integrates visual features according to semantic information. Experiments demonstrate the effectiveness of our framework over other state-of-the-art methods on two large-scale data sets for object relation detection.",
"Visual relationship detection, which aims to predict a triplet with the detected objects, has attracted increasing attention in the scene understanding study. During tackling this problem, dealing with varying scales of the subjects and objects is of great importance, which has been less studied. To overcome this challenge, we propose a novel Vision Spatial Attention Network (VSA-Net), which employs a two-dimensional normal distribution attention scheme to effectively model small objects. In addition, we design a Subject-Object-layer (SO-layer) to distinguish between the subject and object to attain more precise results. To the best of our knowledge, VSA-Net is the first end-to-end attention mechanism based visual relationship detection model. Extensive experiments on the benchmark datasets (VRD and VG) show that, by using pure vision information, our VSA-Net achieves state-of-the-art performance for predicate detection, phrase detection, and relationship detection.",
"Visual relationships capture a wide variety of interactions between pairs of objects in images (e.g. “man riding bicycle” and “man pushing bicycle”). Consequently, the set of possible relationships is extremely large and it is difficult to obtain sufficient training examples for all possible relationships. Because of this limitation, previous work on visual relationship detection has concentrated on predicting only a handful of relationships. Though most relationships are infrequent, their objects (e.g. “man” and “bicycle”) and predicates (e.g. “riding” and “pushing”) independently occur more frequently. We propose a model that uses this insight to train visual models for objects and predicates individually and later combines them together to predict multiple relationships per image. We improve on prior work by leveraging language priors from semantic word embeddings to finetune the likelihood of a predicted relationship. Our model can scale to predict thousands of types of relationships from a few examples. Additionally, we localize the objects in the predicted relationships as bounding boxes in the image. We further demonstrate that understanding relationships can improve content based image retrieval.",
"Recognizing visual relationships among any pair of localized objects is pivotal for image understanding. Previous studies have shown remarkable progress in exploiting linguistic priors or external textual information to improve the performance. In this work, we investigate an orthogonal perspective based on feature interactions. We show that by encouraging deep message propagation and interactions between local object features and global predicate features, one can achieve compelling performance in recognizing complex relationships without using any linguistic priors. To this end, we present two new pooling cells to encourage feature interactions: (i) Contrastive ROI Pooling Cell, which has a unique deROI pooling that inversely pools local object features to the corresponding area of global predicate features. (ii) Pyramid ROI Pooling Cell, which broadcasts global predicate features to reinforce local object features.The two cells constitute a Spatiality-Context-Appearance Module (SCA-M), which can be further stacked consecutively to form our final Zoom-Net.We further shed light on how one could resolve ambiguous and noisy object and predicate annotations by Intra-Hierarchical trees (IH-tree). Extensive experiments conducted on Visual Genome dataset demonstrate the effectiveness of our feature-oriented approach compared to state-of-the-art methods (Acc@1 11.42 from 8.16 ) that depend on explicit modeling of linguistic interactions. We further show that SCA-M can be incorporated seamlessly into existing approaches to improve the performance by a large margin. The source code will be released on this https URL.",
"This paper introduces situation recognition, the problem of producing a concise summary of the situation an image depicts including: (1) the main activity (e.g., clipping), (2) the participating actors, objects, substances, and locations (e.g., man, shears, sheep, wool, and field) and most importantly (3) the roles these participants play in the activity (e.g., the man is clipping, the shears are his tool, the wool is being clipped from the sheep, and the clipping is in a field). We use FrameNet, a verb and role lexicon developed by linguists, to define a large space of possible situations and collect a large-scale dataset containing over 500 activities, 1,700 roles, 11,000 objects, 125,000 images, and 200,000 unique situations. We also introduce structured prediction baselines and show that, in activity-centric images, situation-driven prediction of objects and activities outperforms independent object and activity recognition.",
"Visual relations, such as person ride bike and bike next to car, offer a comprehensive scene understanding of an image, and have already shown their great utility in connecting computer vision and natural language. However, due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks, we propose a Visual Translation Embedding network (VTransE) for visual relation detection. VTransE places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object. We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward backward pass. To the best of our knowledge, VTransE is the first end-toend relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on two large-scale datasets: Visual Relationship and Visual Genome. Note that even though VTransE is a purely visual model, it is still competitive to the Lu’s multi-modal model with language priors [27].",
"In this paper we introduce visual phrases, complex visual composites like “a person riding a horse”. Visual phrases often display significantly reduced visual complexity compared to their component objects, because the appearance of those objects can change profoundly when they participate in relations. We introduce a dataset suitable for phrasal recognition that uses familiar PASCAL object categories, and demonstrate significant experimental gains resulting from exploiting visual phrases. We show that a visual phrase detector significantly outperforms a baseline which detects component objects and reasons about relations, even though visual phrase training sets tend to be smaller than those for objects. We argue that any multi-class detection system must decode detector outputs to produce final results; this is usually done with non-maximum suppression. We describe a novel decoding procedure that can account accurately for local context without solving difficult inference problems. We show this decoding procedure outperforms the state of the art. Finally, we show that decoding a combination of phrasal and object detectors produces real improvements in detector results.",
"Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked \"What vehicle is the person riding?\", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that \"the person is riding a horse-drawn carriage.\" In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images where each image has an average of @math 35 objects, @math 26 attributes, and @math 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs.",
"Understanding a visual scene goes beyond recognizing individual objects in isolation. Relationships between objects also constitute rich semantic information about the scene. In this work, we explicitly model the objects and their relationships using scene graphs, a visually-grounded graphical structure of an image. We propose a novel end-to-end model that generates such structured scene representation from an input image. Our key insight is that the graph generation problem can be formulated as message passing between the primal node graph and its dual edge graph. Our joint inference model can take advantage of contextual cues to make better predictions on objects and their relationships. The experiments show that our model significantly outperforms previous methods on the Visual Genome dataset as well as support relation inference in NYU Depth V2 dataset."
]
}
|
1811.08185
|
2954159299
|
Partial set cover problem and set multi-cover problem are two generalizations of the set cover problem. In this paper, we consider the partial set multi-cover problem which is a combination of them: given an element set E, a collection of sets \(\mathcal{S}\subseteq 2^E\), a total covering ratio q, each set \(S\in\mathcal{S}\) is associated with a cost \(c_S\), each element \(e\in E\) is associated with a covering requirement \(r_e\), the goal is to find a minimum cost sub-collection \(\mathcal{S}'\subseteq\mathcal{S}\) to fully cover at least q|E| elements, where element e is fully covered if it belongs to at least \(r_e\) sets of \(\mathcal{S}'\). Denote by \(r_{\max}=\max\{r_e:e\in E\}\) the maximum covering requirement. We present an \((O(r_{\max}^2 n(1+\ln\frac{1}{\varepsilon}+\frac{1-q}{q})),1-\varepsilon)\)-bicriteria approximation algorithm, that is, the output of our algorithm has cost \(O(r_{\max}^2 n(1+\ln\frac{1}{\varepsilon}+\frac{1-q}{q}))\) times of the optimal value while the number of fully covered elements is at least \((1-\varepsilon)q|E|\).
|
The set cover problem (SC) was one of the first 21 problems proved to be NP-hard in Karp's seminal paper @cite_11 . In fact, Feige @cite_19 proved that it cannot be approximated within factor @math unless @math , where @math is the number of elements. Dinur and Steurer @cite_9 proved the same lower bound under the assumption that @math . Khot and Regev @cite_1 showed that it cannot be approximated within factor @math for any constant @math assuming that the unique games conjecture is true, where @math is the maximum number of sets containing a common element. On the other hand, the greedy strategy achieves performance ratio @math @cite_13 @cite_4 @cite_22 , where @math is the maximum cardinality of a set and @math is the Harmonic number, and an @math -approximation exists via either the LP rounding method @cite_7 or the local ratio method @cite_25 .
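To make the greedy bound above concrete, here is a minimal sketch of the classical greedy set cover rule (always pick the set with the smallest cost per newly covered element); the toy instance, names and Python phrasing are ours for illustration and are not taken from the cited papers:

```python
def greedy_set_cover(universe, sets, cost):
    """Classical greedy: repeatedly pick the set minimizing cost per
    newly covered element; the known ratio is H(delta), where delta is
    the size of the largest set."""
    uncovered = set(universe)
    picked = []
    while uncovered:
        # candidates are sets that still cover at least one uncovered element
        best = min(
            (name for name in sets if uncovered & sets[name]),
            key=lambda name: cost[name] / len(uncovered & sets[name]),
        )
        picked.append(best)
        uncovered -= sets[best]
    return picked

# toy instance (hypothetical)
sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
cost = {"A": 3.0, "B": 1.0, "C": 2.5}
print(greedy_set_cover({1, 2, 3, 4, 5, 6}, sets, cost))  # ['B', 'C', 'A']
```

On this toy instance the greedy solution costs 6.5 while the optimum ({A, C}) costs 5.5, illustrating that the rule is simple but not exact.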
|
{
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_19",
"@cite_13",
"@cite_25",
"@cite_11"
],
"mid": [
"2040924621",
"2163322487",
"2088188524",
"2044471216",
"2794992319",
"2143996311",
"2157054705",
"1572977974",
"1975442866"
],
"abstract": [
"Simple, polynomial-time, heuristic algorithms for finding approximate solutions to various polynomial complete optimization problems are analyzed with respect to their worst case behavior, measured by the ratio of the worst solution value that can be chosen by the algorithm to the optimal value. For certain problems, such as a simple form of the kanpsack problem and an optimization problem based on satisfiability testing, there are algorithms for which this ratio is bounded by a constant, independent of the problem size. For a number of set covering problems, simple algorithms yield worst case ratios which can grow with the log of the problem size. And for the problem of finding the maximum clique in a graph, no algorithm has been found for which the ratio does not grow at least as fast as n^@e, where n is the problem size and @e>0 depends on the algorithm.",
"It is shown that the ratio of optimal integral and fractional covers of a hypergraph does not exceed 1 + log d, where d is the maximum degree. This theorem may replace probabilistic methods in certain circumstances. Several applications are shown.",
"We propose a heuristic that delivers in @math steps a solution for the set covering problem the value of which does not exceed the maximum number of sets covering an element times the optimal value.",
"We propose an analytical framework for studying parallel repetition, a basic product operation for one-round twoplayer games. In this framework, we consider a relaxation of the value of projection games. We show that this relaxation is multiplicative with respect to parallel repetition and that it provides a good approximation to the game value. Based on this relaxation, we prove the following improved parallel repetition bound: For every projection game G with value at most ρ, the k-fold parallel repetition G⊗k has value at most [EQUATION] This statement implies a parallel repetition bound for projection games with low value ρ. Previously, it was not known whether parallel repetition decreases the value of such games. This result allows us to show that approximating set cover to within factor (1 --- e) ln n is NP-hard for every e > 0, strengthening Feige's quasi-NP-hardness and also building on previous work by Moshkovitz and Raz. In this framework, we also show improved bounds for few parallel repetitions of projection games, showing that Raz's counterexample to strong parallel repetition is tight even for a small number of repetitions. Finally, we also give a short proof for the NP-hardness of label cover(1, Δ) for all Δ > 0, starting from the basic PCP theorem.",
"",
"Given a collection F of subsets of S = 1,…, n , setcover is the problem of selecting as few as possiblesubsets from F such that their union covers S, , and maxk-cover is the problem of selecting k subsets from F such that their union has maximum cardinality. Both these problems areNP-hard. We prove that (1 - o (1)) ln n is a threshold below which setcover cannot be approximated efficiently, unless NP has slightlysuperpolynomial time algorithms. This closes the gap (up to low-orderterms) between the ratio of approximation achievable by the greedyalogorithm (which is (1 - o (1)) lnn), and provious results of Lund and Yanakakis, that showed hardness ofapproximation within a ratio of log 2 n 2s0.72 ln n . For max k -cover, we show an approximationthreshold of (1 - 1 e )(up tolow-order terms), under assumption that P≠NP .",
"Let A be a binary matrix of size m × n, let cT be a positive row vector of length n and let e be the column vector, all of whose m components are ones. The set-covering problem is to minimize cTx subject to Ax ≥ e and x binary. We compare the value of the objective function at a feasible solution found by a simple greedy heuristic to the true optimum. It turns out that the ratio between the two grows at most logarithmically in the largest column sum of A. When all the components of cT are the same, our result reduces to a theorem established previously by Johnson and Lovasz.",
"A local-ratio theorem for approximating the weighted vertex cover problem is presented. It consists of reducing the weights of vertices in certain subgraphs and has the effect of local-approximation. Putting together the Nemhauser-Trotter local optimization algorithm and the local-ratio theorem yields several new approximation techniques which improve known results from time complexity, simplicity and performance-ratio point of view. The main approximation algorithm guarantees a ratio of where K is the smallest integer s.t. † This is an improvement over the currently known ratios, especially for a “practical” number of vertices (e.g. for graphs which have less than 2400, 60000, 10 12 vertices the ratio is bounded by 1.75, 1.8, 1.9 respectively).",
"Throughout the 1960s I worked on combinatorial optimization problems including logic circuit design with Paul Roth and assembly line balancing and the traveling salesman problem with Mike Held. These experiences made me aware that seemingly simple discrete optimization problems could hold the seeds of combinatorial explosions. The work of Dantzig, Fulkerson, Hoffman, Edmonds, Lawler and other pioneers on network flows, matching and matroids acquainted me with the elegant and efficient algorithms that were sometimes possible. Jack Edmonds’ papers and a few key discussions with him drew my attention to the crucial distinction between polynomial-time and superpolynomial-time solvability. I was also influenced by Jack’s emphasis on min-max theorems as a tool for fast verification of optimal solutions, which foreshadowed Steve Cook’s definition of the complexity class NP. Another influence was George Dantzig’s suggestion that integer programming could serve as a universal format for combinatorial optimization problems."
]
}
|
1811.08185
|
2954159299
|
Partial set cover problem and set multi-cover problem are two generalizations of the set cover problem. In this paper, we consider the partial set multi-cover problem which is a combination of them: given an element set E, a collection of sets \(\mathcal{S}\subseteq 2^E\), a total covering ratio q, each set \(S\in\mathcal{S}\) is associated with a cost \(c_S\), each element \(e\in E\) is associated with a covering requirement \(r_e\), the goal is to find a minimum cost sub-collection \(\mathcal{S}'\subseteq\mathcal{S}\) to fully cover at least q|E| elements, where element e is fully covered if it belongs to at least \(r_e\) sets of \(\mathcal{S}'\). Denote by \(r_{\max}=\max\{r_e:e\in E\}\) the maximum covering requirement. We present an \((O(r_{\max}^2 n(1+\ln\frac{1}{\varepsilon}+\frac{1-q}{q})),1-\varepsilon)\)-bicriteria approximation algorithm, that is, the output of our algorithm has cost \(O(r_{\max}^2 n(1+\ln\frac{1}{\varepsilon}+\frac{1-q}{q}))\) times of the optimal value while the number of fully covered elements is at least \((1-\varepsilon)q|E|\).
|
In @cite_15 , Dobson first gave an @math -approximation algorithm for the multi-set multi-cover problem (MSMC), where @math is the maximum size of a multi-set. Rajagopalan and Vazirani @cite_0 gave a greedy algorithm achieving the same performance ratio using dual-fitting analysis, which implies that the integrality gap of the classic linear program for MSMC is at most @math .
|
{
"cite_N": [
"@cite_0",
"@cite_15"
],
"mid": [
"1988837529",
"2073127061"
],
"abstract": [
"We build on the classical greedy sequential set cover algorithm, in the spirit of the primal-dual schema, to obtain simple parallel approximation algorithms for the set cover problem and its generalizations. Our algorithms use randomization, and our randomized voting lemmas may be of independent interest. Fast parallel approximation algorithms were known before for set cover, though not for the generalizations considered in this paper.",
"We give a worst-case analysis for two greedy heuristics for the integer programming problem minimize cx , Ax (ge) b , 0 (le) x (le) u , x integer, where the entries in A, b , and c are all nonnegative. The first heuristic is for the case where the entries in A and b are integral, the second only assumes the rows are scaled so that the smallest nonzero entry is at least 1. In both cases we compare the ratio of the value of the greedy solution to that of the integer optimal. The error bound grows logarithmically in the maximum column sum of A for both heuristics."
]
}
|
1811.08185
|
2954159299
|
Partial set cover problem and set multi-cover problem are two generalizations of the set cover problem. In this paper, we consider the partial set multi-cover problem which is a combination of them: given an element set E, a collection of sets \(\mathcal{S}\subseteq 2^E\), a total covering ratio q, each set \(S\in\mathcal{S}\) is associated with a cost \(c_S\), each element \(e\in E\) is associated with a covering requirement \(r_e\), the goal is to find a minimum cost sub-collection \(\mathcal{S}'\subseteq\mathcal{S}\) to fully cover at least q|E| elements, where element e is fully covered if it belongs to at least \(r_e\) sets of \(\mathcal{S}'\). Denote by \(r_{\max}=\max\{r_e:e\in E\}\) the maximum covering requirement. We present an \((O(r_{\max}^2 n(1+\ln\frac{1}{\varepsilon}+\frac{1-q}{q})),1-\varepsilon)\)-bicriteria approximation algorithm, that is, the output of our algorithm has cost \(O(r_{\max}^2 n(1+\ln\frac{1}{\varepsilon}+\frac{1-q}{q}))\) times of the optimal value while the number of fully covered elements is at least \((1-\varepsilon)q|E|\).
|
For the partial set cover problem, Kearns @cite_12 gave a greedy algorithm achieving performance ratio @math . By slightly modifying the greedy algorithm, Slavik @cite_17 improved the performance ratio to @math , where @math is the percentage of elements required to be covered. Gandhi @cite_23 proposed a primal-dual algorithm achieving performance ratio @math . Bar-Yehuda @cite_5 studied a generalized version in which each element has a profit and the total profit of covered elements should exceed a threshold; using the local ratio method, he also obtained performance ratio @math . Könemann @cite_18 presented a Lagrangian relaxation framework and obtained performance ratio @math for the generalized partial set cover problem.
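As a rough sketch of how the partial variant changes the greedy rule (an illustration of the general idea only, not the exact algorithms of the cited papers): the gain credited to a set is capped by the residual covering requirement, and the loop stops once the required fraction p of elements is covered.

```python
import math

def greedy_partial_set_cover(universe, sets, cost, p):
    """Greedy sketch for partial set cover: cover at least
    ceil(p * |universe|) elements; a set's gain is capped by the
    number of elements still needed toward that target."""
    need = math.ceil(p * len(universe))
    uncovered, covered, picked = set(universe), 0, []
    while covered < need:
        best = min(
            (name for name in sets if uncovered & sets[name]),
            key=lambda name: cost[name]
            / min(len(uncovered & sets[name]), need - covered),
        )
        newly = uncovered & sets[best]
        covered += len(newly)
        uncovered -= newly
        picked.append(best)
    return picked
```

With p = 1 this reduces to the full greedy set cover sketched earlier.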
|
{
"cite_N": [
"@cite_18",
"@cite_23",
"@cite_5",
"@cite_12",
"@cite_17"
],
"mid": [
"2175530190",
"2015706819",
"2070814595",
"1506934790",
"2155148197"
],
"abstract": [
"An instance of the generalized partial cover problem consists of a ground set U and a family of subsets @math . Each element e∈U is associated with a profit p(e), whereas each subset @math has a cost c(S). The objective is to find a minimum cost subcollection @math such that the combined profit of the elements covered by @math is at least P, a specified profit bound. In the prize-collecting version of this problem, there is no strict requirement to cover any element; however, if the subsets we pick leave an element e∈U uncovered, we incur a penalty of π(e). The goal is to identify a subcollection @math that minimizes the cost of @math plus the penalties of uncovered elements. Although problem-specific connections between the partial cover and the prize-collecting variants of a given covering problem have been explored and exploited, a more general connection remained open. The main contribution of this paper is to establish a formal relationship between these two variants. As a result, we present a unified framework for approximating problems that can be formulated or interpreted as special cases of generalized partial cover. We demonstrate the applicability of our method on a diverse collection of covering problems, for some of which we obtain the first non-trivial approximability results.",
"We study a generalization of covering problems called partial covering. Here we wish to cover only a desired number of elements, rather than covering all elements as in standard covering problems. For example, in k-partial set cover, we wish to choose a minimum number of sets to cover at least k elements. For k-partial set cover, if each element occurs in at most f sets, then we derive a primal-dual f-approximation algorithm (thus implying a 2-approximation for k-partial vertex cover) in polynomial time. Without making any assumption about the number of sets an element is in, for instances where each set has cardinality at most three, we obtain an approximation of 4 3. We also present better-than-2-approximation algorithms for k-partial vertex cover on bounded degree graphs, and for vertex cover on expanders of bounded average degree. We obtain a polynomial-time approximation scheme for k-partial vertex cover on planar graphs, and for covering k points in Rd by disks.",
"In this paper we consider the natural generalizations of two fundamental problems, the Set-Cover problem and the Min-Knapsack problem. We are given a hypergraph, each vertex of which has a nonnegative weight, and each edge of which has a nonnegative length. For a given threshold ??, our objective is to find a subset of the vertices with minimum total cost, such that at least a length of ?? of the edges is covered. This problem is called the partial set cover problem. We present an O(|V|2+|H|)-time, ?E-approximation algorithm for this problem, where ?E?2 is an upper bound on the edge cardinality of the hypergraph and |H| is the size of the hypergraph (i.e., the sum of all its edges cardinalities). The special case where ?E=2 is called the partial vertex cover problem. For this problem a 2-approximation was previously known, however, the time complexity of our solution, i.e., O(|V|2), is a dramatic improvement.We show that if the weights are homogeneous (i.e., proportional to the potential coverage of the sets) then any minimal cover is a good approximation. Now, using the local-ratio technique, it is sufficient to repeatedly subtract a homogeneous weight function from the given weight function.",
"This thesis is a study of the computational complexity of machine learning from examples in the distribution-free model introduced by L. G. Valiant (V84). In the distribution-free model, a learning algorithm receives positive and negative examples of an unknown target set (or concept) that is chosen from some known class of sets (or concept class). These examples are generated randomly according to a fixed but unknown probability distribution representing Nature, and the goal of the learning algorithm is to infer an hypothesis concept that closely approximates the target concept with respect to the unknown distribution. This thesis is concerned with proving theorems about learning in this formal mathematical model. We are interested in the phenomenon of efficient learning in the distribution-free model, in the standard polynomial-time sense. Our results include general tools for determining the polynomial-time learnability of a concept class, an extensive study of efficient learning when errors are present in the examples, and lower bounds on the number of examples required for learning in our model. A centerpiece of the thesis is a series of results demonstrating the computational difficulty of learning a number of well-studied concept classes. These results are obtained by reducing some apparently hard number-theoretic problems from cryptography to the learning problems. The hard-to-learn concept classes include the sets represented by Boolean formulae, deterministic finite automata and a simplified form of neural networks. We also give algorithms for learning powerful concept classes under the uniform distribution, and give equivalences between natural models of efficient learnability. This thesis also includes detailed definitions and motivation for the distribution-free model, a chapter discussing past research in this model and related models, and a short list of important open problems.",
"We prove that the classical bounds on the performance of the greedy algorithm for approximating MINIMUM COVER with costs are valid for PARTIAL COVER as well, thus lowering, by more than a factor of two, the previously known estimate. In order to do so, we introduce a new simple technique that might be useful for attacking other similar problems."
]
}
|
1811.08185
|
2954159299
|
Partial set cover problem and set multi-cover problem are two generalizations of the set cover problem. In this paper, we consider the partial set multi-cover problem which is a combination of them: given an element set E, a collection of sets \(\mathcal{S}\subseteq 2^E\), a total covering ratio q, each set \(S\in\mathcal{S}\) is associated with a cost \(c_S\), each element \(e\in E\) is associated with a covering requirement \(r_e\), the goal is to find a minimum cost sub-collection \(\mathcal{S}'\subseteq\mathcal{S}\) to fully cover at least q|E| elements, where element e is fully covered if it belongs to at least \(r_e\) sets of \(\mathcal{S}'\). Denote by \(r_{\max}=\max\{r_e:e\in E\}\) the maximum covering requirement. We present an \((O(r_{\max}^2 n(1+\ln\frac{1}{\varepsilon}+\frac{1-q}{q})),1-\varepsilon)\)-bicriteria approximation algorithm, that is, the output of our algorithm has cost \(O(r_{\max}^2 n(1+\ln\frac{1}{\varepsilon}+\frac{1-q}{q}))\) times of the optimal value while the number of fully covered elements is at least \((1-\varepsilon)q|E|\).
|
From the above related work, it can be seen that both PSC and SMC admit performance ratios matching the best ratios for the classic set cover problem. However, combining partial set cover with set multi-cover enormously increases the difficulty of the problem. Ran et al. @cite_6 were the first to study approximation algorithms for PSMC, using a greedy strategy and dual-fitting analysis. However, their ratio is meaningful only when the covering percentage @math is very close to @math . In @cite_20 , the authors presented a simple greedy algorithm achieving performance ratio @math . They also presented a local ratio algorithm, which reveals what they called a "shock wave" phenomenon: the performance ratio is @math for both PSC and SMC; however, when @math is smaller than 1 by a very small constant, the ratio jumps abruptly to @math . In our recent paper @cite_21 , we proved, by a reduction from the well-known densest @math -subgraph problem, that PSMC cannot have a performance ratio better than polynomial.
|
{
"cite_N": [
"@cite_20",
"@cite_21",
"@cite_6"
],
"mid": [
"2509865736",
"2901780026",
"2291861235"
],
"abstract": [
"In this paper, we study the minimum partial set multi-cover problem (PSMC). Given an element set E, a collection of subsets ( S 2^E ), a cost (c_S ) on each set (S S ), a covering requirement (r_e ) for each element (e E ), and an integer k, the PSMC problem is to find a sub-collection ( F S ) to fully cover at least k elements such that the cost of ( F ) is as small as possible, where element e is fully covered by ( F ) if it belongs to at least (r_e ) sets of ( F ). This paper presents an approximation algorithm using local ratio method achieving performance ratio ( k ( 1 f-r_ + r_ r_ ) , 1 + f r_ + 1 r_ - 1 r_ -1, 1 ), where ( ) is the size of a maximum set, f is the maximum number of sets containing a common element, ( ) is the minimum percentage of elements required to be fully covered during iterations of the algorithm, and (r_ ) and (r_ ) are the maximum and the minimum covering requirement, respectively. In particular, when (r_ ) is a constant, the first term can be omitted. Notice that our ratio coincides with the classic ratio f for both the set multi-cover problem (in which case (k=|E| )) and the partial set single-cover problem (in which case (r_ =1 )). However, when (k 1 ), the ratio might be as large as ( (n) ). This result shows an interesting “shock wave like” feature of approximating PSMC. The purpose of this paper is trying to arouse some interest in such a feature and attract more work on this challenging problem.",
"In a minimum partial set multi-cover problem (MinPSMC), given an element set E, a collection of subsets ( S 2^E ), a cost (w_S ) on each set (S S ), a covering requirement (r_e ) for each element (e E ), and an integer k, the goal is to find a sub-collection ( F S ) to fully cover at least k elements such that the cost of ( F ) is as small as possible, where element e is fully covered by ( F ) if it belongs to at least (r_e ) sets of ( F ). On the application side, the problem has its background in the seed selection problem in a social network. On the theoretical side, it is a natural combination of the minimum partial (single) set cover problem (MinPSC) and the minimum set multi-cover problem (MinSMC). Although both MinPSC and MinSMC admit good approximations whose performance ratios match those lower bounds for the classic set cover problem, previous studies show that theoretical study on MinPSMC is quite challenging. In this paper, we prove that MinPSMC cannot be approximated within factor (O(n^ 1 2( n)^c ) ) under the ETH assumption. A primal dual algorithm for MinPSMC is presented with a guaranteed performance ratio (O( n ) ) when (r_ ) and f are constants, where (r_ = _ e E r_e ) is the maximum covering requirement and f is the maximum frequency of elements (that is the maximum number of sets containing a common element). We also improve the ratio for a restricted version of MinPSMC which possesses a graph-type structure.",
"Influence problem is one of the central problems in the study of online social networks, the goal of which is to influence all nodes with the minimum number of seeds. However, in the real world, it might be too expensive to influence all nodes. In many cases, it is satisfactory to influence nodes only up to some percent p. In this paper, we study the minimum partial positive influence dominating set (MPPIDS) problem. In fact, we presented an approximation algorithm for a more general problem called minimum partial set multicover problem. As a consequence, the MPPIDS problem admits an approximation with performance ratio @math źH(Δ), where @math H(·) is the Harmonic number, @math ź=1 (1-(1-p)ź),źźΔ2 ź, and @math Δ,ź are the maximum degree and the minimum degree of the graph, respectively. For power-law graphs, we show that our algorithm has a constant performance ratio."
]
}
|
1811.08291
|
2964277583
|
Abstract We propose a setting for two-phase opinion dynamics in social networks, where a node’s final opinion in the first phase acts as its initial biased opinion in the second phase. In this setting, we study the problem of two camps aiming to maximize adoption of their respective opinions, by strategically investing on nodes in the two phases. A node’s initial opinion in the second phase naturally plays a key role in determining the final opinion of that node, and hence also of other nodes in the network due to its influence on them. However, more importantly, this bias also determines the effectiveness of a camp’s investment on that node in the second phase. In order to formalize this two-phase investment setting, we propose an extension of Friedkin–Johnsen model, and hence formulate the utility functions of the camps. We arrive at a decision parameter which can be interpreted as two-phase Katz centrality. There is a natural tradeoff while splitting the available budget between the two phases. A lower investment in the first phase results in worse initial biases in the network for the second phase. On the other hand, a higher investment in the first phase spares a lower available budget for the second phase, resulting in an inability to fully harness the influenced biases. We first analyze the non-competitive case where only one camp invests, for which we present a polynomial time algorithm for determining an optimal way to split the camp’s budget between the two phases. We then analyze the case of competing camps, where we show the existence of Nash equilibrium and that it can be computed in polynomial time under reasonable assumptions. We conclude our study with simulations on real-world network datasets, in order to quantify the effects of the initial biases and the weightage attributed by nodes to their initial biases, as well as that of a camp deviating from its equilibrium strategy. Our main conclusion is that, if nodes attribute high weightage to their initial biases, it is advantageous to have a high investment in the first phase, so as to effectively influence the biases to be harnessed in the second phase.
|
The topic of opinion dynamics has received significant attention in the social networks community. Xia, Wang, and Xuan @cite_27 give a multidisciplinary review of the field of opinion dynamics as a combination of social processes and analytical and computational tools. A line of work deals with opinion diffusion in social networks under popular models such as independent cascade and linear threshold @cite_2 @cite_39 @cite_46 . Lorenz @cite_25 surveys several modeling frameworks concerning continuous opinion dynamics. Acemoglu and Ozdaglar @cite_42 review several other models of opinion dynamics, some noteworthy ones being DeGroot @cite_34 , Voter @cite_38 , Friedkin-Johnsen @cite_40 @cite_8 , bounded confidence @cite_9 , etc. In the Friedkin-Johnsen model, each node updates its opinion using a weighted combination of its initial bias and its neighbors' opinions. In this paper, we generalize this model to multiple phases, while also incorporating the camps' investments.
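Since the Friedkin-Johnsen update just described is the model this paper builds on, a small numerical sketch may help; the weights, bias weightages and toy network below are illustrative only (not taken from the cited works), and the paper's two-phase extension with camp investments is not shown here.

```python
import numpy as np

def friedkin_johnsen(W, lam, x0, tol=1e-9, max_iter=10000):
    """Iterate x <- lam*x0 + (1-lam)*(W @ x): each node mixes its
    initial bias x0 with the weighted average of its neighbours'
    current opinions. W is assumed row-stochastic; lam[i] is the
    weightage node i attributes to its own initial bias."""
    x = x0.astype(float)
    for _ in range(max_iter):
        x_next = lam * x0 + (1.0 - lam) * (W @ x)
        if np.max(np.abs(x_next - x)) < tol:
            return x_next
        x = x_next
    return x

# toy 3-node network (hypothetical)
W = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
lam = np.array([0.3, 0.6, 0.1])   # weightage on initial bias
x0 = np.array([1.0, -1.0, 0.2])   # initial biased opinions
print(friedkin_johnsen(W, lam, x0))
```

In the two-phase setting studied in this paper, the converged opinions of the first phase would play the role of x0 (the initial biases) in the second phase.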
|
{
"cite_N": [
"@cite_38",
"@cite_8",
"@cite_9",
"@cite_42",
"@cite_39",
"@cite_27",
"@cite_40",
"@cite_2",
"@cite_46",
"@cite_34",
"@cite_25"
],
"mid": [
"1971842701",
"1972981640",
"37686529",
"2046058958",
"",
"2017997982",
"2027514326",
"2232943036",
"",
"1998692453",
"2159918425"
],
"abstract": [
"",
"In this article we derive implications about social positions from a formal theory of social influence. The formal theory describes how, in a group of actors with heterogeneous initial opinions, a network of interpersonal influences enters into the formation of actors' settled opinions. We derive the following conclusions about a special form of structural equivalence. If actors are structurally equivalent in the network of interpersonal influences, then any dissimilarity of their initial opinions is reduced by the social influence process. If the social positions of actors are identical, i.e. if they have identical initial opinions and are structurally equivalent in the influence network, then they have identical opinions at equilibrium. If actors are not structurally equivalent in the network of interpersonal influences, then the social influence process does not necessarily reduce dissimilarities of initial opinions. We extend our analysis to consider automorphic equivalence.",
"Consensus formation among n experts is modeled as a positive discrete dynamical system in n dimensions. The well–known linear but non–autonomous model is extended to a nonlinear one admitting also various kinds of averaging beside the weighted arithmetic mean. For this model a sufficient condition for reaching a consensus is presented. As a special case consensus formation under bounded confidence is analyzed.",
"We provide an overview of recent research on belief and opinion dynamics in social networks. We discuss both Bayesian and non-Bayesian models of social learning and focus on the implications of the form of learning (e.g., Bayesian vs. non-Bayesian), the sources of information (e.g., observation vs. communication), and the structure of social networks in which individuals are situated on three key questions: (1) whether social learning will lead to consensus, i.e., to agreement among individuals starting with different views; (2) whether social learning will effectively aggregate dispersed information and thus weed out incorrect beliefs; (3) whether media sources, prominent agents, politicians and the state will be able to manipulate beliefs and spread misinformation in a society.",
"",
"As a key sub-field of social dynamics and sociophysics, opinion dynamics utilizes mathematical and physical models and the agent-based computational modeling tools, to investigate the spreading of opinions in a collection of human beings. This research field stems from various disciplines in social sciences, especially the social influence models developed in social psychology and sociology. A multidisciplinary review is given in this paper, attempting to keep track of the historical development of the field and to shed light on its future directions. In the review, the authors discuss the disciplinary origins of opinion dynamics, showing that the combination of the social processes, which are conventionally studied in social sciences, and the analytical and computational tools, which are developed in mathematics, physics and complex system studies, gives birth to the interdisciplinary field of opinion dynamics. The current state of the art of opinion dynamics is then overviewed, with the research progresses on the typical models like the voter model, the Sznajd model, the culture dissemination model, and the bounded confidence model being highlighted. Correspondingly, the future directions of this academic field are envisioned, with an advocation for closer synthesis of the related disciplines.",
"In this paper we describe an approach to the relationship between a network of interpersonal influences and the content of individuals’ opinions. Our work starts with the specification of social process rather than social equilibrium. Several models of social influence that have appeared in the literature are derived as special cases of the approach. Some implications for theories on social conflict and conformity also are developed in this paper.",
"Are all film stars linked to Kevin Bacon? Why do the stock markets rise and fall sharply on the strength of a vague rumour? How does gossip spread so quickly? Are we all related through six degrees of separation? There is a growing awareness of the complex networks that pervade modern society. We see them in the rapid growth of the internet, the ease of global communication, the swift spread of news and information, and in the way epidemics and financial crises develop with startling speed and intensity. This introductory book on the new science of networks takes an interdisciplinary approach, using economics, sociology, computing, information science and applied mathematics to address fundamental questions about the links that connect us, and the ways that our decisions can have consequences for others.",
"",
"Abstract Consider a group of individuals who must act together as a team or committee, and suppose that each individual in the group has his own subjective probability distribution for the unknown value of some parameter. A model is presented which describes how the group might reach agreement on a common subjective probability distribution for the parameter by pooling their individual opinions. The process leading to the consensus is explicitly described and the common distribution that is reached is explicitly determined. The model can also be applied to problems of reaching a consensus when the opinion of each member of the group is represented simply as a point estimate of the parameter rather than as a probability distribution.",
"Models of continuous opinion dynamics under bounded confidence have been presented independently by Krause and Hegselmann and by in 2000. They have raised a fair amount of attention in the communities of social simulation, sociophysics and complexity science. The researchers working on it come from disciplines such as physics, mathematics, computer science, social psychology and philosophy. In these models agents hold continuous opinions which they can gradually adjust if they hear the opinions of others. The idea of bounded confidence is that agents only interact if they are close in opinion to each other. Usually, the models are analyzed with agent-based simulations in a Monte Carlo style, but they can also be reformulated on the agent's density in the opinion space in a master equation style. The contribution of this survey is fourfold. First, it will present the agent-based and density-based modeling frameworks including the cases of multidimensional opinions and heterogeneous bounds o..."
]
}
|
1811.08291
|
2964277583
|
Abstract We propose a setting for two-phase opinion dynamics in social networks, where a node’s final opinion in the first phase acts as its initial biased opinion in the second phase. In this setting, we study the problem of two camps aiming to maximize adoption of their respective opinions, by strategically investing on nodes in the two phases. A node’s initial opinion in the second phase naturally plays a key role in determining the final opinion of that node, and hence also of other nodes in the network due to its influence on them. However, more importantly, this bias also determines the effectiveness of a camp’s investment on that node in the second phase. In order to formalize this two-phase investment setting, we propose an extension of Friedkin–Johnsen model, and hence formulate the utility functions of the camps. We arrive at a decision parameter which can be interpreted as two-phase Katz centrality. There is a natural tradeoff while splitting the available budget between the two phases. A lower investment in the first phase results in worse initial biases in the network for the second phase. On the other hand, a higher investment in the first phase spares a lower available budget for the second phase, resulting in an inability to fully harness the influenced biases. We first analyze the non-competitive case where only one camp invests, for which we present a polynomial time algorithm for determining an optimal way to split the camp’s budget between the two phases. We then analyze the case of competing camps, where we show the existence of Nash equilibrium and that it can be computed in polynomial time under reasonable assumptions. We conclude our study with simulations on real-world network datasets, in order to quantify the effects of the initial biases and the weightage attributed by nodes to their initial biases, as well as that of a camp deviating from its equilibrium strategy. Our main conclusion is that, if nodes attribute high weightage to their initial biases, it is advantageous to have a high investment in the first phase, so as to effectively influence the biases to be harnessed in the second phase.
|
Specific to analytically tractable models such as DeGroot, there have been studies in the competitive setting to identify influential nodes and the amounts to be invested on them @cite_30 @cite_37 @cite_14 . @cite_7 study a broader framework with respect to one such model (the Friedkin-Johnsen model), while considering a number of practically motivated settings, such as those accounting for diminishing marginal returns on investment, adversarial behavior of the competitor, uncertainty regarding system parameters, and a bound on the combined investment by the camps on each node. Our work extends these studies to two phases, by identifying influential nodes in the two phases and determining how much should be invested on them in each phase.
|
{
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_14",
"@cite_7"
],
"mid": [
"419599686",
"1599913669",
"2265908166",
"2788068499"
],
"abstract": [
"We consider a model of influence with a set of non-strategic agents and two strategic agents. The non-strategic agents have initial opinions and are linked through a simply connected network. They update their opinions as in the DeGroot model. The two strategic agents have fixed opinions, 1 and 0 respectively, and are characterized by the magnitude of the impact they can exert on non-strategic agents. Each strategic agent forms a link with one non-strategic agent in order to alter the average opinion that eventually emerges in the network. This procedure defines a zero-sum game whose players are the two strategic agents and whose strategy set is the set of non-strategic agents. We focus on the existence and the characterization of equilibria in pure strategy in this setting. Simple examples show that the existence of a pure strategy equilibrium does depend on the structure of the network. The characterization of equilibrium we obtain emphasizes on the one hand the influenceability of target agents and on the other hand their centrality whose natural measure in our context defines a new concept, related to betweenness centrality, that we call intermediacy. We also show that in the case where the two strategic agents have the same impact, symmetric equilibria emerge as natural solutions whereas in the case where the impacts are uneven, the strategic players generally have differentiated equilibrium targets, the high-impacts agent focusing on centrality and the low-impact agent on influenceability.",
"There are many situations in which a customer's proclivity to buy the product of any firm depends not only on the classical attributes of the product such as its price and quality, but also on who else is buying the same product. We model these situations as games in which firms compete for customers located in a “social network”. Nash Equilibrium (NE) in pure strategies exist and are unique. Indeed there are closed-form formulae for the NE in terms of the exogenous parameters of the model, which enables us to compute NE in polynomial time. An important structural feature of NE is that, if there are no a priori biases between customers and firms, then there is a cut-off level above which high cost firms are blockaded at an NE, while the rest compete uniformly throughout the network. We finally explore the relation between the connectivity of a customer and the money firms spend on him. This relation becomes particularly transparent when externalities are dominant: NE can be characterized in terms of the invariant measures on the recurrent classes of the Markov chain underlying the social network.",
"Recent advances in information technology have allowed firms to gather vast amounts of data regarding consumers’ preferences and the structure and intensity of their social interactions. This paper examines a game-theoretic model of competition between firms that can target their marketing budgets to individuals embedded in a social network. We provide a sharp characterization of the optimal targeted advertising strategies and highlight their dependence on the underlying social network structure. Furthermore, we provide conditions under which it is optimal for the firms to asymmetrically target a subset of the individuals and establish a lower bound on the ratio of their payoffs in these asymmetric equilibria. Finally, we find that at equilibrium firms invest inefficiently high in targeted advertising and the extent of the inefficiency is increasing in the centralities of the agents they target. Taken together, these findings shed light on the effect of the network structure on the outcome of marketing competition between the firms.",
"We study the problem of optimally investing in nodes of a social network in a competitive setting, wherein two camps aim to drive the average opinion of the population in their own favor. Using a well-established model of opinion dynamics, we formulate the problem as a zero-sum game with its players being the two camps. We derive optimal investment strategies for both camps, and show that a random investment strategy is optimal when the underlying network follows a popular class of weight distributions. We study a broad framework, where we consider various well-motivated settings of the problem, namely, when the influence of a camp on a node is a concave function of its investment on that node, when a camp aims at maximizing competitor's investment or deviation from its desired investment, and when one of the camps has uncertain information about the values of the model parameters. We also study a Stackelberg variant of this game under common coupled constraints on the combined investments by the camps and derive their equilibrium strategies, and hence quantify the first-mover advantage. For a quantitative and illustrative study, we conduct simulations on real-world datasets and provide results and insights."
]
}
|
1811.08291
|
2964277583
|
Abstract We propose a setting for two-phase opinion dynamics in social networks, where a node’s final opinion in the first phase acts as its initial biased opinion in the second phase. In this setting, we study the problem of two camps aiming to maximize adoption of their respective opinions, by strategically investing on nodes in the two phases. A node’s initial opinion in the second phase naturally plays a key role in determining the final opinion of that node, and hence also of other nodes in the network due to its influence on them. However, more importantly, this bias also determines the effectiveness of a camp’s investment on that node in the second phase. In order to formalize this two-phase investment setting, we propose an extension of Friedkin–Johnsen model, and hence formulate the utility functions of the camps. We arrive at a decision parameter which can be interpreted as two-phase Katz centrality. There is a natural tradeoff while splitting the available budget between the two phases. A lower investment in the first phase results in worse initial biases in the network for the second phase. On the other hand, a higher investment in the first phase spares a lower available budget for the second phase, resulting in an inability to fully harness the influenced biases. We first analyze the non-competitive case where only one camp invests, for which we present a polynomial time algorithm for determining an optimal way to split the camp’s budget between the two phases. We then analyze the case of competing camps, where we show the existence of Nash equilibrium and that it can be computed in polynomial time under reasonable assumptions. We conclude our study with simulations on real-world network datasets, in order to quantify the effects of the initial biases and the weightage attributed by nodes to their initial biases, as well as that of a camp deviating from its equilibrium strategy. Our main conclusion is that, if nodes attribute high weightage to their initial biases, it is advantageous to have a high investment in the first phase, so as to effectively influence the biases to be harnessed in the second phase.
|
To the best of our knowledge, there has not been an analytical study on a rich model such as Friedkin-Johnsen, for opinion dynamics in two phases (not even for single camp). The most relevant to this study is our earlier work @cite_0 where, however, a camp's influence on a node is assumed to be independent of the node's bias. In this paper, we consider a more realistic setting by relaxing this assumption. An interesting outcome of relaxing this assumption is that, while the camps' optimal strategies turn out to be mutually independent in @cite_0 , these strategies get coupled in our setting. In other words, the setting in @cite_0 results in a competition , while the one in this paper results in a game .
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2963780059"
],
"abstract": [
"We study the problem of two competing camps aiming to maximize the adoption of their respective opinions, by optimally investing in nodes of a social network in multiple phases. The final opinion of a node in a phase acts as its biased opinion in the following phase. Using an extension of Friedkin-Johnsen model, we formulate the camps' utility functions, which we show to involve what can be interpreted as multiphase Katz centrality. We hence present optimal investment strategies of the camps, and the loss incurred if myopic strategy is employed. Simulations affirm that nodes attributing higher weightage to bias necessitate higher investment in initial phase. The extended version of this paper analyzes a setting where a camp's influence on a node depends on the node's bias; we show existence and polynomial time computability of Nash equilibrium."
]
}
|
1811.08164
|
2901642255
|
Detecting acoustic shadows in ultrasound images is important in many clinical and engineering applications. Real-time feedback of acoustic shadows can guide sonographers to a standardized diagnostic viewing plane with minimal artifacts and can provide additional information for other automatic image analysis algorithms. However, automatically detecting shadow regions is challenging because pixel-wise annotation of acoustic shadows is subjective and time consuming. In this paper we propose a weakly supervised method for automatic confidence estimation of acoustic shadow regions, which is able to generate a dense shadow-focused confidence map. During training, a multi-task module for shadow segmentation is built to learn general shadow features based on image-level annotations as well as a small number of coarse pixel-wise shadow annotations. A transfer function is then established to extend the binary shadow segmentation to a reference confidence map. In addition, a confidence estimation network is proposed to learn the mapping between input images and the reference confidence maps. This confidence estimation network is able to predict shadow confidence maps directly from input images during inference. We evaluate DICE, soft DICE, recall, precision, mean squared error and inter-class correlation to verify the effectiveness of our method. Our method outperforms the state-of-the-art qualitatively and quantitatively. We further demonstrate the applicability of our method by integrating shadow confidence maps into tasks such as ultrasound image classification, multi-view image fusion and automated biometric measurements.
|
* Automatic US shadow detection. Acoustic shadows have a significant impact on US image quality, and thus a serious effect on the robustness and accuracy of image processing methods. In the clinical literature, US artifacts including shadows have been well studied and reviewed @cite_17 @cite_24 @cite_33 . However, the shadow problem is not well covered in the automated US image analysis literature. Anatomically estimating acoustic shadows has rarely been the focus within the medical image analysis community, as it is a challenging task.
|
{
"cite_N": [
"@cite_24",
"@cite_33",
"@cite_17"
],
"mid": [
"2109478667",
"2103464444",
"1753280301"
],
"abstract": [
"Lung ultrasound can be routinely performed at the bedside by intensive care unit physicians and may provide accurate information on lung status with diagnostic and therapeutic relevance. This article reviews the performance of bedside lung ultrasound for diagnosing pleural effusion, pneumothorax, alveolar-interstitial syndrome, lung consolidation, pulmonary abscess and lung recruitment derecruitment in critically ill patients with acute lung injury.",
"AbstractUltrasound image segmentation deals with delineating the boundaries of structures, as a step towards semi-automated or fully automated measurement of dimensions or for characterizing tissue regions. Ultrasound tissue characterization (UTC) is driven by knowledge of the physics of ultrasound and its interactions with biological tissue, and has traditionally used signal modelling and analysis to characterize and differentiate between healthy and diseased tissue. Thus, both aim to enhance the capabilities of ultrasound as a quantitative tool in clinical medicine, and the two end goals can be the same, namely to characterize the health of tissue. This article reviews both research topics, and finds that the two fields are becoming more tightly coupled, even though there are key challenges to overcome in each area, influenced by factors such as more open software-based ultrasound system architectures, increased computational power, and advances in imaging transducer design.",
""
]
}
|
1811.08164
|
2901642255
|
Detecting acoustic shadows in ultrasound images is important in many clinical and engineering applications. Real-time feedback of acoustic shadows can guide sonographers to a standardized diagnostic viewing plane with minimal artifacts and can provide additional information for other automatic image analysis algorithms. However, automatically detecting shadow regions is challenging because pixel-wise annotation of acoustic shadows is subjective and time consuming. In this paper we propose a weakly supervised method for automatic confidence estimation of acoustic shadow regions, which is able to generate a dense shadow-focused confidence map. During training, a multi-task module for shadow segmentation is built to learn general shadow features based on image-level annotations as well as a small number of coarse pixel-wise shadow annotations. A transfer function is then established to extend the binary shadow segmentation to a reference confidence map. In addition, a confidence estimation network is proposed to learn the mapping between input images and the reference confidence maps. This confidence estimation network is able to predict shadow confidence maps directly from input images during inference. We evaluate DICE, soft DICE, recall, precision, mean squared error and inter-class correlation to verify the effectiveness of our method. Our method outperforms the state-of-the-art qualitatively and quantitatively. We further demonstrate the applicability of our method by integrating shadow confidence maps into tasks such as ultrasound image classification, multi-view image fusion and automated biometric measurements.
|
Identifying shadow regions in US images has been utilized as a preprocessing step for extracting valid image content and improving image analysis accuracy in some applications. @cite_21 identified shadow regions by thresholding the accumulated intensity along each scanning beam line. Afterwards, these shadow regions were masked out from US images for an US to magnetic resonance (MR) hepatic image registration. Instead of excluding shadow regions, @cite_2 focused on accurate attenuation estimation, and aimed to use attenuation properties for the determination of anatomical properties which can help diagnose diseases. @cite_2 proposed a hybrid attenuation estimation method that combines spectral difference and spectral shift methods to reduce the influence of local spectral noise and backscatter variations in Radio Frequency (RF) US data. To detect shadow regions in B-Mode scans directly and automatically, @cite_0 introduced the probe's geometric information and statistically modelled the US B-Mode cone. Compared with previous statistical shadow detection methods such as @cite_21 , this method can automatically estimate the probe's geometry as well as other hyperparameters, and has shown improvements in 3D reconstruction, registration and tracking. However, this method can only detect a subset of 'deep' acoustic shadows because of the probe geometry-dependent sampling strategy.
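To make the beam-line thresholding idea concrete, the minimal sketch below flags a pixel as shadow when almost no echo intensity remains beneath it along the same scan line. It is only an illustration in the spirit of intensity-accumulation thresholding such as @cite_21 , not a reproduction of the cited algorithm; the threshold value and the column-wise beam approximation are assumptions.

```python
import numpy as np

def detect_shadow_by_beamline(bmode, intensity_thresh=0.1):
    """Flag shadow pixels in a normalized B-mode image (H x W, values in [0, 1]).

    For every scan line (treated here as an image column), a pixel is marked
    as shadow when the mean intensity of everything below it on the same line
    falls under `intensity_thresh`, i.e. almost no echo returns from deeper
    tissue. The threshold is an illustrative choice only.
    """
    h, _ = bmode.shape
    # cumulative sum from the bottom of the image up, flipped back so that
    # tail_sum[d, c] is the summed intensity from depth d down to the deepest sample
    tail_sum = np.cumsum(bmode[::-1, :], axis=0)[::-1, :]
    tail_len = np.arange(h, 0, -1, dtype=float)[:, None]
    tail_mean = tail_sum / tail_len
    return tail_mean < intensity_thresh
```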
|
{
"cite_N": [
"@cite_0",
"@cite_21",
"@cite_2"
],
"mid": [
"2087829477",
"2171157356",
"1989042134"
],
"abstract": [
"In ultrasound images, acoustic shadows appear as regions of low signal intensity linked to boundaries with very high acoustic impedance differences. Acoustic shadows can be viewed either as informative features to detect lesions or calcifications, or as damageable artifacts for image processing tasks such as segmentation, registration or 3D reconstruction. In both cases, the detection of these acoustic shadows is useful. This paper proposes a new method to detect these shadows that combines a geometrical approach to estimate the B-scans shape, followed by a statistical test based on a dedicated modeling of ultrasound image statistics. Results demonstrate that the combined geometrical-statistical technique is more robust and yields better results than the previous statistical technique. Integration of regularization over time further improves robustness. Application of the procedure results in (1) improved 3D reconstructions with fewer artifacts, and (2) reduced mean registration error of tracked intraoperative brain ultrasound images.",
"Abstract We present a method to register a preoperative MR volume to a sparse set of intraoperative ultrasound slices. Our aim is to allow the transfer of information from preoperative modalities to intraoperative ultrasound images to aid needle placement during thermal ablation of liver metastases. The spatial relationship between ultrasound slices is obtained by tracking the probe using a Polaris optical tracking system. Images are acquired at maximum exhalation and we assume the validity of the rigid body transformation. An initial registration is carried out by picking a single corresponding point in both modalities. Our strategy is to interpret both sets of images in an automated pre-processing step to produce evidence or probabilities of corresponding structure as a pixel or voxel map. The registration algorithm converts the intensity values of the MR and ultrasound images into vessel probability values. The registration is then carried out between the vessel probability images. Results are compared to a “bronze standard” registration which is calculated using a manual point line picking algorithm and verified using visual inspection. Results show that our starting estimate is within a root mean square target registration error (calculated over the whole liver) of 15.4 mm to the “bronze standard” and this is improved to 3.6 mm after running the intensity-based algorithm.",
"Abtract—Attenuation estimation methods for medical ultrasound are important because attenuation properties of soft tissue can be used to distinguish between benign and malignant tumors and to detect diffuse disease. The classical spectral shift method and the spectral difference method are the most commonly used methods for the estimation of the attenuation; however, they both have specific limitations. Classical spectral shift approaches for estimating ultrasonic attenuation are more sensitive to local spectral noise artifacts and have difficulty in compensating for diffraction effects because of beam focusing. Spectral difference approaches, on the other hand, fail to accurately estimate attenuation coefficient values at tissue boundaries that also possess variations in the backscatter. In this paper, we propose a hybrid attenuation estimation method that combines the advantages of the spectral difference and spectral shift methods to overcome their specific limitations. The proposed hybrid method initially uses the spectral difference approach to reduce the impact of system-dependent parameters including diffraction effects. The normalized power spectrum that includes variations because of backscatter changes is then filtered using a Gaussian filter centered at the transmit center frequency of the system. A spectral shift method, namely the spectral cross-correlation algorithm is then used to compute spectral shifts from these filtered power spectra to estimate the attenuation coefficient. Ultrasound simulation results demonstrate that the estimation accuracy of the hybrid method is better than the centroid downshift method (spectral shift method), in uniformly attenuating regions. In addition, this method is also stable at boundaries with variations in the backscatter when compared with the reference phantom method (spectral difference method). Experimental results using tissue-mimicking phantom also illustrate that the hybrid method is more robust and provides accurate attenuation estimates in both uniformly attenuating regions and across boundaries with backscatter variations. The proposed hybrid method preserves the advantages of both the spectral shift and spectral difference approaches while eliminating the disadvantages associated with each of these methods, thereby improving the accuracy and robustness of the attenuation estimation. (E-mail: hyungsukkim@wisc.edu) © 2008 World Federation for Ultrasound in Medicine & Biology."
]
}
|
1811.08164
|
2901642255
|
Detecting acoustic shadows in ultrasound images is important in many clinical and engineering applications. Real-time feedback of acoustic shadows can guide sonographers to a standardized diagnostic viewing plane with minimal artifacts and can provide additional information for other automatic image analysis algorithms. However, automatically detecting shadow regions is challenging because pixel-wise annotation of acoustic shadows is subjective and time consuming. In this paper we propose a weakly supervised method for automatic confidence estimation of acoustic shadow regions, which is able to generate a dense shadow-focused confidence map. During training, a multi-task module for shadow segmentation is built to learn general shadow features based on image-level annotations as well as a small number of coarse pixel-wise shadow annotations. A transfer function is then established to extend the binary shadow segmentation to a reference confidence map. In addition, a confidence estimation network is proposed to learn the mapping between input images and the reference confidence maps. This confidence estimation network is able to predict shadow confidence maps directly from input images during inference. We evaluate DICE, soft DICE, recall, precision, mean squared error and inter-class correlation to verify the effectiveness of our method. Our method outperforms the state-of-the-art qualitatively and quantitatively. We further demonstrate the applicability of our method by integrating shadow confidence maps into tasks such as ultrasound image classification, multi-view image fusion and automated biometric measurements.
|
Some studies have utilized acoustic shadow detection as additional information in their pipeline for other US image processing tasks. @cite_11 used acoustic shadow detection for the characterization of dense calcium tissue in intravascular US virtual histology, and @cite_8 automatically and simultaneously segmented vertebrae, the spinous process and acoustic shadow in US images for a better assessment of scoliosis progression. In these applications, acoustic shadow detection is task-specific, and is mainly based on heuristic image intensity features as well as special anatomical constraints.
|
{
"cite_N": [
"@cite_8",
"@cite_11"
],
"mid": [
"2305452864",
"2205506937"
],
"abstract": [
"Spinal ultrasound imaging is emerging as a low-cost, radiation-free alternative to conventional X-ray imaging for the clinical follow-up of patients with scoliosis. Currently, deformity measurement relies almost entirely on manual identification of key vertebral landmarks. However, the interpretation of vertebral ultrasound images is challenging, primarily because acoustic waves are entirely reflected by bone. To alleviate this problem, we propose an algorithm to segment these images into three regions: the spinous process, its acoustic shadow and other tissues. This method consists, first, in the extraction of several image features and the selection of the most relevant ones for the discrimination of the three regions. Then, using this set of features and linear discriminant analysis, each pixel of the image is classified as belonging to one of the three regions. Finally, the image is segmented by regularizing the pixel-wise classification results to account for some geometrical properties of vertebrae. The feature set was first validated by analyzing the classification results across a learning database. The database contained 107 vertebral ultrasound images acquired with convex and linear probes. Classification rates of 84 , 92 and 91 were achieved for the spinous process, the acoustic shadow and other tissues, respectively. Dice similarity coefficients of 0.72 and 0.88 were obtained respectively for the spinous process and acoustic shadow, confirming that the proposed method accurately segments the spinous process and its acoustic shadow in vertebral ultrasound images. Furthermore, the centroid of the automatically segmented spinous process was located at an average distance of 0.38 mm from that of the manually labeled spinous process, which is on the order of image resolution. This suggests that the proposed method is a promising tool for the measurement of the Spinous Process Angle and, more generally, for assisting ultrasound-based assessment of scoliosis progression. Graphical abstractDisplay Omitted HighlightsTexture descriptors and state-of-the art features allowed accurate segmentation.The features were optimized for vertebral region discrimination in ultrasound.Regularization accounts for geometrical properties of vertebral ultrasound images.",
"We enhance intravascular ultrasound virtual histology (VH) tissue characterization by fully automatic quantification of the acoustic shadow behind calcified plaque. VH is unable to characterize atherosclerosis located behind calcifications. In this study, the quantified acoustic shadows are considered calcified to approximate the real dense calcium (DC) plaque volume. In total, 57 patients with 108 coronary lesions were included. A novel post-processing step is applied on the VH images to quantify the acoustic shadow and enhance the VH results. The VH and enhanced VH results are compared to quantitative computed tomography angiography (QTA) plaque characterization as reference standard. The correlation of the plaque types between enhanced VH and QTA differs significantly from the correlation with unenhanced VH. For DC, the correlation improved from 0.733 to 0.818. Instead of an underestimation of DC in VH with a bias of 8.5 mm3, there was a smaller overestimation of 1.1 mm3 in the enhanced VH. Although tissue characterization within the acoustic shadow in VH is difficult, the novel algorithm improved the DC tissue characterization. This algorithm contributes to accurate assessment of calcium on VH and could be applied in clinical studies."
]
}
|
1811.08164
|
2901642255
|
Detecting acoustic shadows in ultrasound images is important in many clinical and engineering applications. Real-time feedback of acoustic shadows can guide sonographers to a standardized diagnostic viewing plane with minimal artifacts and can provide additional information for other automatic image analysis algorithms. However, automatically detecting shadow regions is challenging because pixel-wise annotation of acoustic shadows is subjective and time consuming. In this paper we propose a weakly supervised method for automatic confidence estimation of acoustic shadow regions, which is able to generate a dense shadow-focused confidence map. During training, a multi-task module for shadow segmentation is built to learn general shadow features based on image-level annotations as well as a small number of coarse pixel-wise shadow annotations. A transfer function is then established to extend the binary shadow segmentation to a reference confidence map. In addition, a confidence estimation network is proposed to learn the mapping between input images and the reference confidence maps. This confidence estimation network is able to predict shadow confidence maps directly from input images during inference. We evaluate DICE, soft DICE, recall, precision, mean squared error and inter-class correlation to verify the effectiveness of our method. Our method outperforms the state-of-the-art qualitatively and quantitatively. We further demonstrate the applicability of our method by integrating shadow confidence maps into tasks such as ultrasound image classification, multi-view image fusion and automated biometric measurements.
|
* Weakly supervised image segmentation Weakly supervised automatic detection of class differences has been explored in other imaging domains (e.g. MRI). For example, @cite_22 proposed to use a generative adversarial network (GAN) to highlight class differences only from image-level labels (Alzheimer's disease or healthy). We used a similar idea in @cite_26 and initialized potential shadow areas based on saliency maps @cite_14 from a classification task between images containing shadow and those without shadow. Inspired by recent weakly supervised deep learning methods that have drastically improved semantic image analysis @cite_9 @cite_5 @cite_28 and to overcome the limitations of @cite_26 , we develop a confidence estimation algorithm that takes advantage of both types of weak labels, namely image-level labels and a sparse set of coarse pixel-wise labels. Our method is able to predict dense, shadow-focused confidence maps directly from input US images in real-time.
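To illustrate how a binary shadow segmentation can be extended to a dense confidence map, the sketch below assigns full confidence inside the predicted shadow mask and lets confidence decay with the distance to the nearest shadow pixel. The exponential decay and its rate are illustrative assumptions, not the transfer function established in our method.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def shadow_confidence_from_mask(shadow_mask, decay=0.1):
    """Convert a binary shadow mask (H x W, bool) into a soft confidence map.

    Pixels inside the mask get confidence 1.0; outside, confidence decays
    exponentially with the Euclidean distance to the nearest shadow pixel.
    `decay` controls how fast confidence drops and is purely illustrative.
    """
    # distance (in pixels) from every non-shadow pixel to the closest shadow pixel
    dist_outside = distance_transform_edt(~shadow_mask)
    confidence = np.exp(-decay * dist_outside)
    confidence[shadow_mask] = 1.0
    return confidence
```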
|
{
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_22",
"@cite_28",
"@cite_9",
"@cite_5"
],
"mid": [
"2123045220",
"2891299533",
"2963635991",
"2396622801",
"",
"2950328304"
],
"abstract": [
"Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the \"deconvolution approach\" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.",
"Automatically detecting acoustic shadows is of great importance for automatic 2D ultrasound analysis ranging from anatomy segmentation to landmark detection. However, variation in shape and similarity in intensity to other structures make shadow detection a very challenging task. In this paper, we propose an automatic shadow detection method to generate a pixel-wise, shadow-focused confidence map from weakly labelled, anatomically-focused images. Our method: (1) initializes potential shadow areas based on a classification task. (2) extends potential shadow areas using a GAN model. (3) adds intensity information to generate the final confidence map using a distance matrix. The proposed method accurately highlights the shadow areas in 2D ultrasound datasets comprising standard view planes as acquired during fet al screening. Moreover, the proposed method outperforms the state-of-the-art quantitatively and improves failure cases for automatic biometric measurement.",
"Attributing the pixels of an input image to a certain category is an important and well-studied problem in computer vision, with applications ranging from weakly supervised localisation to understanding hidden effects in the data. In recent years, approaches based on interpreting a previously trained neural network classifier have become the de facto state-of-the-art and are commonly used on medical as well as natural image datasets. In this paper, we discuss a limitation of these approaches which may lead to only a subset of the category specific features being detected. To address this problem we develop a novel feature attribution technique based on Wasserstein Generative Adversarial Networks (WGAN), which does not suffer from this limitation. We show that our proposed method performs substantially better than the state-of-the-art for visual attribution on a synthetic dataset and on real 3D neuroimaging data from patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD). For AD patients the method produces compellingly realistic disease effect maps which are very close to the observed effects.",
"In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled weak annotations, in our case bounding boxes. It extends the approach of the well-known GrabCut[1] method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare those to a naive approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fet al magnetic resonance dataset and obtain encouraging results in terms of accuracy.",
"",
"In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2 top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them"
]
}
|
1811.08321
|
2901674361
|
Convolutional neural networks (CNN) have achieved impressive performance on a wide variety of tasks (classification, detection, etc.) across multiple domains at the cost of high computational and memory requirements. Thus, leveraging CNNs for real-time applications necessitates model compression approaches that not only reduce the total number of parameters but reduce the overall computation as well. In this work, we present a stability-based approach for filter-level pruning of CNNs. We evaluate our proposed approach on different architectures (LeNet, VGG-16, ResNet, and Faster RCNN) and datasets and demonstrate its generalizability through extensive experiments. Moreover, our compressed models can be used at run-time without requiring any special libraries or hardware. Our model compression method reduces the number of FLOPS by an impressive factor of 6.03X and GPU memory footprint by more than 17X, significantly outperforming other state-of-the-art filter pruning methods.
|
In connection pruning, sparsity is introduced in the model by removing unimportant connections (parameters). Many heuristics have been proposed to identify the unimportant parameters. The earliest works include Optimal Brain Damage @cite_30 and Optimal Brain Surgeon @cite_3 , which used a Taylor expansion to estimate parameter significance. Later, @cite_32 proposed an iterative method in which weights whose absolute values fall below a certain threshold are pruned, and the model is fine-tuned to recover the drop in accuracy. This type of pruning is called unstructured pruning, as the pruned connections follow no specific pattern. The approach is useful when most of the parameters lie in the FC (fully connected) layers. However, specialized libraries and hardware are often required to leverage the induced sparsity for savings in computation and memory, and it does not typically result in any significant reduction in CNN computations (FLOPS-based SpeedUp) as most of the calculations are performed in CONV (convolutional) layers. For example, in VGG-16, roughly 90% of the parameters lie in the FC layers, yet the CONV layers dominate the computation. Other works include @cite_29 , where a hashing technique is proposed to randomly group connection weights into buckets, with all weights in the same bucket sharing a single value, after which the model is fine-tuned to recover from the accuracy loss.
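As a concrete illustration of the magnitude-based unstructured pruning described above (in the spirit of @cite_32 , not an exact reproduction), the minimal PyTorch sketch below zeroes out the weights whose absolute values fall below a per-layer percentile and returns binary masks that can be re-applied during fine-tuning to keep the pruned connections at zero. The per-layer percentile threshold and the one-shot (rather than iterative) setup are simplifying assumptions.

```python
import torch

def magnitude_prune(model, sparsity=0.8):
    """Zero out the smallest-magnitude weights of every weight matrix/tensor.

    Returns a dict of binary masks; re-applying the masks after each
    optimizer step keeps the pruned connections at zero during fine-tuning.
    """
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:          # skip biases and normalization parameters
            continue
        threshold = torch.quantile(param.detach().abs().flatten(), sparsity)
        mask = (param.detach().abs() > threshold).float()
        param.data.mul_(mask)        # remove the unimportant connections
        masks[name] = mask
    return masks

# after every optimizer.step() during fine-tuning:
# for name, param in model.named_parameters():
#     if name in masks:
#         param.data.mul_(masks[name])
```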
|
{
"cite_N": [
"@cite_30",
"@cite_29",
"@cite_32",
"@cite_3"
],
"mid": [
"2114766824",
"2952432176",
"2964299589",
"2125389748"
],
"abstract": [
"We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.",
"As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.",
"Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.",
"We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and in some cases enable rule extraction. Our method, Optimal Brain Surgeon (OBS), is Significantly better than magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Solla, 1990], which often remove the wrong weights. OBS permits the pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H-1 from training data and structural information of the net. OBS permits a 90 , a 76 , and a 62 reduction in weights over backpropagation with weight decay on three benchmark MONK's problems [, 1991]. Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case. Finally, whereas Sejnowski and Rosenberg [1987] used 18,000 weights in their NETtalk network, we used OBS to prune a network to just 1560 weights, yielding better generalization."
]
}
|
1811.08321
|
2901674361
|
Convolutional neural networks (CNN) have achieved impressive performance on a wide variety of tasks (classification, detection, etc.) across multiple domains at the cost of high computational and memory requirements. Thus, leveraging CNNs for real-time applications necessitates model compression approaches that not only reduce the total number of parameters but reduce the overall computation as well. In this work, we present a stability-based approach for filter-level pruning of CNNs. We evaluate our proposed approach on different architectures (LeNet, VGG-16, ResNet, and Faster RCNN) and datasets and demonstrate its generalizability through extensive experiments. Moreover, our compressed models can be used at run-time without requiring any special libraries or hardware. Our model compression method reduces the number of FLOPS by an impressive factor of 6.03X and GPU memory footprint by more than 17X, significantly outperforming other state-of-the-art filter pruning methods.
|
In our work, we focus on filter-level pruning. Most works in this category evaluate the importance of an entire filter, prune filters according to some criterion, and then re-train to recover the accuracy drop. @cite_18 calculates filter importance by measuring the change in accuracy after removing the filter from the model. @cite_8 used the @math norm to calculate filter importance. @cite_27 calculates filter importance on a subset of the training data using the activations of the output feature map. These approaches are largely based on hand-crafted heuristics. Parallel to these works, data-driven approaches for ranking filters have been proposed. @cite_26 performed channel-level pruning by attaching a learnable scaling factor to each channel and enforcing an @math norm penalty on those parameters during training. Recently, group sparsity has also been explored for filter-level pruning: @cite_6 @cite_25 @cite_4 @cite_14 explored filter pruning using the group lasso. However, these methods at times require specialized hardware for efficient SpeedUp during inference.
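To make the norm-based ranking heuristic concrete, the sketch below scores each filter of a convolutional layer by the sum of the absolute values of its weights and returns the indices of the lowest-scoring filters. This is a minimal illustration of norm-based criteria such as @cite_8 , not our stability-based criterion, and the pruning ratio is an arbitrary illustrative choice.

```python
import torch
import torch.nn as nn

def rank_filters_by_l1(conv: nn.Conv2d, prune_ratio=0.3):
    """Return indices of the filters with the smallest L1 norms.

    conv.weight has shape (out_channels, in_channels, kH, kW); each output
    filter is scored by the sum of absolute weights. In a real pipeline the
    selected filters (and the matching input channels of the next layer)
    would be physically removed and the network fine-tuned afterwards.
    """
    l1_scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    n_prune = int(prune_ratio * conv.out_channels)
    prune_idx = torch.argsort(l1_scores)[:n_prune]
    return prune_idx

# example: score the first conv layer of a VGG-like model
# prune_idx = rank_filters_by_l1(model.features[0], prune_ratio=0.3)
```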
|
{
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_14",
"@cite_8",
"@cite_6",
"@cite_27",
"@cite_25"
],
"mid": [
"2619444510",
"",
"2513419314",
"2520760693",
"",
"2951134251",
"2495425901",
""
],
"abstract": [
"Convolutional neural networks (CNNs) have state-of-the-art performance on many problems in machine vision. However, networks with superior performance often have millions of weights so that it is difficult or impossible to use CNNs on computationally limited devices or to humanly interpret them. A myriad of CNN compression approaches have been proposed and they involve pruning and compressing the weights and filters. In this article, we introduce a greedy structural compression scheme that prunes filters in a trained CNN. We define a filter importance index equal to the classification accuracy reduction (CAR) of the network after pruning that filter (similarly defined as RAR for regression). We then iteratively prune filters based on the CAR index. This algorithm achieves substantially higher classification accuracy in AlexNet compared to other structural compression schemes that prune filters. Pruning half of the filters in the first or second layer of AlexNet, our CAR algorithm achieves 26 and 20 higher classification accuracies respectively, compared to the best benchmark filter pruning scheme. Our CAR algorithm, combined with further weight pruning and compressing, reduces the size of first or second convolutional layer in AlexNet by a factor of 42, while achieving close to original classification accuracy through retraining (or fine-tuning) network. Finally, we demonstrate the interpretability of CAR-compressed CNNs by showing that our algorithm prunes filters with visually redundant functionalities. In fact, out of top 20 CAR-pruned filters in AlexNet, 17 of them in the first layer and 14 of them in the second layer are color-selective filters as opposed to shape-selective filters. To our knowledge, this is the first reported result on the connection between compression and interpretability of CNNs.",
"",
"High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNNs evaluation. Experimental results show that SSL achieves on average 5.1x and 3.1x speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25 to 92.60 , which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by around 1 . Open source code is in this https URL",
"To attain a favorable performance on large-scale datasets, convolutional neural networks (CNNs) are usually designed to have very high capacity involving millions of parameters. In this work, we aim at optimizing the number of neurons in a network, thus the number of parameters. We show that, by incorporating sparse constraints into the objective function, it is possible to decimate the number of neurons during the training stage. As a result, the number of parameters and the memory footprint of the neural network are also reduced, which is also desirable at the test time. We evaluated our method on several well-known CNN structures including AlexNet, and VGG over different datasets including ImageNet. Extensive experimental results demonstrate that our method leads to compact networks. Taking first fully connected layer as an example, our compact CNN contains only (30 , ) of the original neurons without any degradation of the top-1 classification accuracy.",
"",
"Nowadays, the number of layers and of neurons in each layer of a deep network are typically set manually. While very deep and wide networks have proven effective in general, they come at a high memory and computation cost, thus making them impractical for constrained platforms. These networks, however, are known to have many redundant parameters, and could thus, in principle, be replaced by more compact architectures. In this paper, we introduce an approach to automatically determining the number of neurons in each layer of a deep network during learning. To this end, we propose to make use of structured sparsity during learning. More precisely, we use a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron. Starting from an overcomplete network, we show that our approach can reduce the number of parameters by up to 80 while retaining or even improving the network accuracy.",
"State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.",
""
]
}
|
1811.08321
|
2901674361
|
Convolutional neural networks (CNN) have achieved impressive performance on a wide variety of tasks (classification, detection, etc.) across multiple domains at the cost of high computational and memory requirements. Thus, leveraging CNNs for real-time applications necessitates model compression approaches that not only reduce the total number of parameters but reduce the overall computation as well. In this work, we present a stability-based approach for filter-level pruning of CNNs. We evaluate our proposed approach on different architectures (LeNet, VGG-16, ResNet, and Faster RCNN) and datasets and demonstrate its generalizability through extensive experiments. Moreover, our compressed models can be used at run-time without requiring any special libraries or hardware. Our model compression method reduces the number of FLOPS by an impressive factor of 6.03X and GPU memory footprint by more than 17X, significantly outperforming other state-of-the-art filter pruning methods.
|
At times, these quantization methods require specialized library or hardware support to reach the desired compression rates. Other notable works that take a different approach from quantization include @cite_24 @cite_7 and @cite_21 , which use low-rank approximation to decompose tensors and reduce computation. Our method performs filter pruning using data-driven filter rankings. To the best of our knowledge, our work is an early effort to relate filter importance to its stability, and it does not require any special hardware or software such as cuSPARSE (the NVIDIA CUDA Sparse Matrix library).
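As an illustration of the low-rank approximation idea, the sketch below factorizes a fully connected weight matrix with a truncated SVD into two thinner factors, trading a small reconstruction error for fewer multiply-accumulate operations. Applying the idea to an FC layer and the chosen rank are simplifying assumptions; the cited works decompose convolutional filters.

```python
import numpy as np

def low_rank_factorize(weight, rank):
    """Approximate an (out, in) weight matrix by two factors of rank `rank`.

    y = W x costs out*in multiply-adds; y = A (B x) costs rank*(out + in),
    a saving whenever rank is much smaller than out*in / (out + in).
    """
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    a = u[:, :rank] * s[:rank]        # shape (out, rank)
    b = vt[:rank, :]                  # shape (rank, in)
    return a, b

# sanity check on a random layer: relative reconstruction error of the factorization
w = np.random.randn(512, 1024)
a, b = low_rank_factorize(w, rank=64)
rel_err = np.linalg.norm(w - a @ b) / np.linalg.norm(w)
```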
|
{
"cite_N": [
"@cite_24",
"@cite_21",
"@cite_7"
],
"mid": [
"2167215970",
"2950967261",
"1902041153"
],
"abstract": [
"We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2 x, while keeping the accuracy within 1 of the original model.",
"The focus of this paper is speeding up the evaluation of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition, showing a possible 2.5x speedup with no loss in accuracy, and 4.5x speedup with less than 1 drop in accuracy, still achieving state-of-the-art on standard benchmarks.",
"This paper aims to accelerate the test-time computation of deep convolutional neural networks (CNNs). Unlike existing methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. We minimize the reconstruction error of the nonlinear responses, subject to a low-rank constraint which helps to reduce the complexity of filters. We develop an effective solution to this constrained nonlinear optimization problem. An algorithm is also presented for reducing the accumulated error when multiple layers are approximated. A whole-model speedup ratio of 4× is demonstrated on a large network trained for ImageNet, while the top-5 error rate is only increased by 0.9 . Our accelerated model has a comparably fast speed as the “AlexNet” [11], but is 4.7 more accurate."
]
}
|
1811.08362
|
2900782452
|
We discuss the robustness and generalization ability in the realm of action recognition, showing that the mainstream neural networks are not robust to disordered frames and diverse video environments. There are two possible reasons: First, existing models lack an appropriate method to overcome the inevitable decision discrepancy between multiple streams with different input modalities. Second, by doing cross-dataset experiments, we find that the optical flow features are hard to transfer, which affects the generalization ability of the two-stream neural networks. For robust action recognition, we present the Reversed Two-Stream Networks (Rev2Net) which has three properties: (1) It could learn more transferable, robust video features by reversing the multi-modality inputs as training supervisions. It outperforms all other compared models in challenging frames shuffle experiments and cross-dataset experiments. (2) It is highlighted by an adaptive, collaborative multi-task learning approach that is applied between decoders to penalize their disagreement in the deep feature space. We name it the decoding discrepancy penalty (DDP). (3) As the decoder streams will be removed at test time, Rev2Net makes recognition decisions purely based on raw video frames. Rev2Net achieves the best results in the cross-dataset settings and competitive results on classic action recognition tasks: 94.6% for UCF-101, 71.1% for HMDB-51 and 73.3% for Kinetics. It performs even better than most methods that take extra inputs beyond raw RGB frames.
|
Ever since CNNs made a great impact on image classification, many researchers have tried to reuse CNNs for video action recognition. According to an early neurological study @cite_35 , the motion cue is the primary reason humans can recognize a range of actions. To capture the motion patterns in spatiotemporal data, researchers have explored various deep network architectures, including different connectivity methods in 2D CNNs @cite_29 , non-local networks @cite_28 , 3D CNNs that extend convolutional filters into the time domain @cite_14 @cite_9 @cite_3 , temporally recurrent layers that aggregate features across longer video inputs @cite_19 @cite_0 , as well as two-stream ensemble networks in which a second stream of CNNs is fed with pre-computed optical flow frames @cite_24 . Among all these architectures, the two-stream networks with optical flow inputs and the 3D or "pseudo" 3D CNNs have been most widely explored.
|
{
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_28",
"@cite_29",
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_24"
],
"mid": [
"2099634219",
"1586730761",
"",
"2308045930",
"1983364832",
"2952633803",
"",
"2951183276",
"2952186347"
],
"abstract": [
"This paper reports the first phase of a research program on visual perception of motion patterns characteristic of living organisms in locomotion. Such motion patterns in animals and men are termed here as biological motion. They are characterized by a far higher degree of complexity than the patterns of simple mechanical motions usually studied in our laboratories. In everyday perceptions, the visual information from biological motion and from the corresponding figurative contour patterns (the shape of the body) are intermingled. A method for studying information from the motion pattern per se without interference with the form aspect was devised. In short, the motion of the living body was represented by a few bright spots describing the motions of the main joints. It is found that 10–12 such elements in adequate motion combinations in proximal stimulus evoke a compelling impression of human walking, running, dancing, etc. The kinetic-geometric model for visual vector analysis originally developed in the study of perception of motion combinations of the mechanical type was applied to these biological motion patterns. The validity of this model in the present context was experimentally tested and the results turned out to be highly positive.",
"We address the problem of learning good features for understanding video data. We introduce a model that learns latent representations of image sequences from pairs of successive images. The convolutional architecture of our model allows it to scale to realistic image sizes whilst using a compact parametrization. In experiments on the NORB dataset, we show our model extracts latent \"flow fields\" which correspond to the transformation between the pair of input frames. We also use our model to extract low-level motion features in a multi-stage architecture for action recognition, demonstrating competitive performance on both the KTH and Hollywood2 datasets.",
"",
"",
"We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification."
]
}
|
1811.08362
|
2900782452
|
We discuss the robustness and generalization ability in the realm of action recognition, showing that the mainstream neural networks are not robust to disordered frames and diverse video environments. There are two possible reasons: First, existing models lack an appropriate method to overcome the inevitable decision discrepancy between multiple streams with different input modalities. Second, by doing cross-dataset experiments, we find that the optical flow features are hard to transfer, which affects the generalization ability of the two-stream neural networks. For robust action recognition, we present the Reversed Two-Stream Networks (Rev2Net) which has three properties: (1) It could learn more transferable, robust video features by reversing the multi-modality inputs as training supervisions. It outperforms all other compared models in challenging frames shuffle experiments and cross-dataset experiments. (2) It is highlighted by an adaptive, collaborative multi-task learning approach that is applied between decoders to penalize their disagreement in the deep feature space. We name it the decoding discrepancy penalty (DDP). (3) As the decoder streams will be removed at test time, Rev2Net makes recognition decisions purely based on raw video frames. Rev2Net achieves the best results in the cross-dataset settings and competitive results on classic action recognition tasks: 94.6% for UCF-101, 71.1% for HMDB-51 and 73.3% for Kinetics. It performs even better than most methods that take extra inputs beyond raw RGB frames.
|
Two-stream networks were first introduced to deep video classification by Simonyan et al. @cite_24 , who model short temporal snapshots of videos by averaging the predictions from a single RGB frame and a stack of computed optical flow frames. Noticing that RGB images alone cannot fully exploit temporal cues, they extracted the optical flow from consecutive video frames and took it as an additional network input. The optical flow stream brought a significant performance gain. Since then, two-stream networks have been widely employed by many video action recognition models @cite_11 @cite_0 @cite_18 @cite_23 @cite_25 , including some 3D CNN models @cite_31 @cite_4 @cite_6 .
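The late-fusion scheme described above can be written in a few lines: class scores from an appearance stream (single RGB frame) and a motion stream (stacked optical flow) are combined before the final decision. The softmax-score averaging and the equal weighting below are assumptions for illustration rather than the exact fusion used in the cited work.

```python
import torch
import torch.nn.functional as F

def two_stream_predict(rgb_net, flow_net, rgb_frame, flow_stack, flow_weight=0.5):
    """Fuse per-class scores of the spatial and temporal streams.

    rgb_frame:  (B, 3, H, W) single RGB frame
    flow_stack: (B, 2*L, H, W) stacked horizontal/vertical flow for L frames
    """
    with torch.no_grad():
        p_rgb = F.softmax(rgb_net(rgb_frame), dim=1)
        p_flow = F.softmax(flow_net(flow_stack), dim=1)
    # weighted average of the two streams' class probabilities
    fused = (1.0 - flow_weight) * p_rgb + flow_weight * p_flow
    return fused.argmax(dim=1)
```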
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_6",
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_31",
"@cite_25",
"@cite_11"
],
"mid": [
"2342662179",
"2761659801",
"2773514261",
"2952186347",
"",
"2950971447",
"2619082050",
"2770465006",
"1744759976"
],
"abstract": [
"Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.",
"Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating @math convolutions with @math convolutional filters on spatial domain (equivalent to 2D CNN) plus @math convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3 and 1.8 , respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques.",
"In this paper we study 3D convolutional networks for video understanding tasks. Our starting point is the state-of-the-art I3D model, which \"inflates\" all the 2D filters of the Inception architecture to 3D. We first consider \"deflating\" the I3D model at various levels to understand the role of 3D convolutions. Interestingly, we found that 3D convolutions at the top layers of the network contribute more than 3D convolutions at the bottom layers, while also being computationally more efficient. This indicates that I3D is better at capturing high-level temporal patterns than low-level motion signals. We also consider replacing 3D convolutions with spatiotemporal-separable 3D convolutions (i.e., replacing convolution using a k * k * k filter with 1 * k * k followed by k * 1 * 1 filters); we show that such a model, which we call S3D, is 1.5x more computationally efficient (in terms of FLOPS) than I3D, and achieves better accuracy. Finally, we explore spatiotemporal feature gating on top of S3D. The resulting model, which we call S3D-G, outperforms the state-of-the-art I3D model by 3.5 accuracy on Kinetics and reduces the FLOPS by 34 . It also achieves a new state-of-the-art performance when transferred to other action classification (UCF-101 and HMDB-51) and detection (UCF-101 and JHMDB) datasets.",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"",
"Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition. which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-the-of-art performance on the datasets of HMDB51 ( @math ) and UCF101 ( @math ). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices.",
"The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9 on HMDB-51 and 98.0 on UCF-101.",
"Spatiotemporal feature learning in videos is a fundamental problem in computer vision. This paper presents a new architecture, termed as Appearance-and-Relation Network (ARTNet), to learn video representation in an end-to-end manner. ARTNets are constructed by stacking multiple generic building blocks, called as SMART, whose goal is to simultaneously model appearance and relation from RGB input in a separate and explicit manner. Specifically, SMART blocks decouple the spatiotemporal learning module into an appearance branch for spatial modeling and a relation branch for temporal modeling. The appearance branch is implemented based on the linear combination of pixels or filter responses in each frame, while the relation branch is designed based on the multiplicative interactions between pixels or filter responses across multiple frames. We perform experiments on three action recognition benchmarks: Kinetics, UCF101, and HMDB51, demonstrating that SMART blocks obtain an evident improvement over 3D convolutions for spatiotemporal feature learning. Under the same training setting, ARTNets achieve superior performance on these three datasets to the existing state-of-the-art methods.",
"This work targets human action recognition in video. While recent methods typically represent actions by statistics of local video features, here we argue for the importance of a representation derived from human pose. To this end we propose a new Pose-based Convolutional Neural Network descriptor (P-CNN) for action recognition. The descriptor aggregates motion and appearance information along tracks of human body parts. We investigate different schemes of temporal aggregation and experiment with P-CNN features obtained both for automatically estimated and manually annotated human poses. We evaluate our method on the recent and challenging JHMDB and MPII Cooking datasets. For both datasets our method shows consistent improvement over the state of the art."
]
}
|
1811.08362
|
2900782452
|
We discuss the robustness and generalization ability in the realm of action recognition, showing that mainstream neural networks are not robust to disordered frames and diverse video environments. There are two possible reasons: First, existing models lack an appropriate method to overcome the inevitable decision discrepancy between multiple streams with different input modalities. Second, by doing cross-dataset experiments, we find that the optical flow features are hard to transfer, which affects the generalization ability of two-stream neural networks. For robust action recognition, we present the Reversed Two-Stream Networks (Rev2Net), which has three properties: (1) It could learn more transferable, robust video features by reversing the multi-modality inputs as training supervisions. It outperforms all other compared models in challenging frame-shuffle experiments and cross-dataset experiments. (2) It is highlighted by an adaptive, collaborative multi-task learning approach that is applied between decoders to penalize their disagreement in the deep feature space. We name it the decoding discrepancy penalty (DDP). (3) As the decoder streams are removed at test time, Rev2Net makes recognition decisions purely based on raw video frames. Rev2Net achieves the best results in the cross-dataset settings and competitive results on classic action recognition tasks: 94.6% for UCF-101, 71.1% for HMDB-51 and 73.3% for Kinetics. It performs even better than most methods that take extra inputs beyond raw RGB frames.
|
3D convolutions have been explored more than once @cite_14 @cite_9 @cite_3 . A recent boom started with C3D @cite_3 , a 3D version of VGGNet @cite_8 that applied 3D convolutions and 3D pooling over space and time simultaneously. Intuitively, C3D realizes a unified modeling of spatiotemporal features, but the 3D convolutions bring an inevitable increase in the number of network parameters, making C3D hard to train. To alleviate this problem, Carreira @cite_31 proposed the Inflated 3D ConvNets (I3D). As the name suggests, I3D inflates 2D convolutional filters into 3D, so that the 3D model can be implicitly pre-trained on ImageNet. More recent designs focus on reducing the memory footprint and easing the training of 3D CNNs. P3D @cite_4 and S3D @cite_6 both replace the @math convolutional filter with a @math spatial convolution followed by a @math temporal convolution. T3D @cite_7 uses a 3D DenseNet stream with a multi-depth temporal pooling layer to achieve a lower model footprint than I3D. These 3D CNN models achieve state-of-the-art performance, especially when incorporated into the two-stream architecture.
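The factorization mentioned above can be sketched as follows. This is a minimal illustration in the spirit of P3D/S3D under assumed kernel sizes and channel counts (placeholders, not the published architectures): a full k x k x k filter is replaced by a 1 x k x k spatial convolution followed by a k x 1 x 1 temporal convolution, which cuts the weight count roughly from k^3 to k^2 + k per channel pair.

```python
# Hedged sketch of a factorized ("pseudo-3D") convolution block. Layer sizes
# are illustrative assumptions, not taken from the cited papers.
import torch
import torch.nn as nn


class FactorizedConv3d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        p = k // 2
        # 1 x k x k convolution over (H, W) only
        self.spatial = nn.Conv3d(in_ch, out_ch, (1, k, k), padding=(0, p, p))
        # k x 1 x 1 convolution over time only
        self.temporal = nn.Conv3d(out_ch, out_ch, (k, 1, 1), padding=(p, 0, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, height, width)
        return self.relu(self.temporal(self.relu(self.spatial(x))))


clip = torch.randn(2, 3, 16, 112, 112)    # a 16-frame RGB clip
out = FactorizedConv3d(3, 64)(clip)
print(out.shape)                          # torch.Size([2, 64, 16, 112, 112])
```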
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_31"
],
"mid": [
"1586730761",
"2761659801",
"",
"1686810756",
"1983364832",
"2952633803",
"2773514261",
"2619082050"
],
"abstract": [
"We address the problem of learning good features for understanding video data. We introduce a model that learns latent representations of image sequences from pairs of successive images. The convolutional architecture of our model allows it to scale to realistic image sizes whilst using a compact parametrization. In experiments on the NORB dataset, we show our model extracts latent \"flow fields\" which correspond to the transformation between the pair of input frames. We also use our model to extract low-level motion features in a multi-stage architecture for action recognition, demonstrating competitive performance on both the KTH and Hollywood2 datasets.",
"Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating @math convolutions with @math convolutional filters on spatial domain (equivalent to 2D CNN) plus @math convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3 and 1.8 , respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques.",
"",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"In this paper we study 3D convolutional networks for video understanding tasks. Our starting point is the state-of-the-art I3D model, which \"inflates\" all the 2D filters of the Inception architecture to 3D. We first consider \"deflating\" the I3D model at various levels to understand the role of 3D convolutions. Interestingly, we found that 3D convolutions at the top layers of the network contribute more than 3D convolutions at the bottom layers, while also being computationally more efficient. This indicates that I3D is better at capturing high-level temporal patterns than low-level motion signals. We also consider replacing 3D convolutions with spatiotemporal-separable 3D convolutions (i.e., replacing convolution using a k * k * k filter with 1 * k * k followed by k * 1 * 1 filters); we show that such a model, which we call S3D, is 1.5x more computationally efficient (in terms of FLOPS) than I3D, and achieves better accuracy. Finally, we explore spatiotemporal feature gating on top of S3D. The resulting model, which we call S3D-G, outperforms the state-of-the-art I3D model by 3.5 accuracy on Kinetics and reduces the FLOPS by 34 . It also achieves a new state-of-the-art performance when transferred to other action classification (UCF-101 and HMDB-51) and detection (UCF-101 and JHMDB) datasets.",
"The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9 on HMDB-51 and 98.0 on UCF-101."
]
}
|
1811.08362
|
2900782452
|
We discuss the robustness and generalization ability in the realm of action recognition, showing that mainstream neural networks are not robust to disordered frames and diverse video environments. There are two possible reasons: First, existing models lack an appropriate method to overcome the inevitable decision discrepancy between multiple streams with different input modalities. Second, by doing cross-dataset experiments, we find that the optical flow features are hard to transfer, which affects the generalization ability of two-stream neural networks. For robust action recognition, we present the Reversed Two-Stream Networks (Rev2Net), which has three properties: (1) It could learn more transferable, robust video features by reversing the multi-modality inputs as training supervisions. It outperforms all other compared models in challenging frame-shuffle experiments and cross-dataset experiments. (2) It is highlighted by an adaptive, collaborative multi-task learning approach that is applied between decoders to penalize their disagreement in the deep feature space. We name it the decoding discrepancy penalty (DDP). (3) As the decoder streams are removed at test time, Rev2Net makes recognition decisions purely based on raw video frames. Rev2Net achieves the best results in the cross-dataset settings and competitive results on classic action recognition tasks: 94.6% for UCF-101, 71.1% for HMDB-51 and 73.3% for Kinetics. It performs even better than most methods that take extra inputs beyond raw RGB frames.
|
Optical flow is crucial to the performance of two-stream networks. To make better use of optical flow frames, some recent methods @cite_34 @cite_32 @cite_26 went beyond taking them as network inputs. They showed that learning to generate optical flow with deep networks, e.g. FlowNet @cite_15 and SpyNet @cite_21 , could improve recognition performance. Sevilla-Lara @cite_34 also tried to interpret the correlation between optical flow and action recognition results, regarding the CNN model as a "black box". They conducted an interesting experiment by randomly shuffling the flow frames before feeding them into a 2D CNN model, TSN @cite_23 . They argued that the power of optical flow mainly comes from its invariance to frame appearance rather than its ability to model long-term motion cues. This is plausibly true for these 2D CNNs, since most of them are ensembles over predictions at different sampled time stamps.
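The last point can be made concrete with a small sketch: a segment-based 2D CNN that forms its video-level prediction by averaging per-segment scores is, by construction, insensitive to the order of the sampled segments. The toy network below is an illustrative stand-in, not the actual TSN model.

```python
# Illustrative sketch (not the original TSN code): averaging per-segment class
# scores gives the same video-level prediction regardless of segment order,
# which is one reason frame shuffling barely hurts such 2D-CNN ensembles.
import torch
import torch.nn as nn

segment_net = nn.Sequential(           # toy per-segment classifier
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 51),
)

segments = torch.randn(7, 3, 224, 224)           # 7 sampled frames/segments
video_score = segment_net(segments).mean(dim=0)  # average consensus

perm = torch.randperm(7)                          # shuffle the segments
shuffled_score = segment_net(segments[perm]).mean(dim=0)

print(torch.allclose(video_score, shuffled_score))  # True: order does not matter
```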
|
{
"cite_N": [
"@cite_26",
"@cite_21",
"@cite_32",
"@cite_23",
"@cite_15",
"@cite_34"
],
"mid": [
"2774324727",
"2548527721",
"2604445072",
"2950971447",
"",
"2778191445"
],
"abstract": [
"Motion representation plays a vital role in human action recognition in videos. In this study, we introduce a novel compact motion representation for video action recognition, named Optical Flow guided Feature (OFF), which enables the network to distill temporal information through a fast and robust approach. The OFF is derived from the definition of optical flow and is orthogonal to the optical flow. The derivation also provides theoretical support for using the difference between two frames. By directly calculating pixel-wise spatiotemporal gradients of the deep feature maps, the OFF could be embedded in any existing CNN based video action recognition framework with only a slight additional cost. It enables the CNN to extract spatiotemporal information, especially the temporal information between frames simultaneously. This simple but powerful idea is validated by experimental results. The network with OFF fed only by RGB inputs achieves a competitive accuracy of 93.3 on UCF-101, which is comparable with the result obtained by two streams (RGB and optical flow), but is 15 times faster in speed. Experimental results also show that OFF is complementary to other motion modalities such as optical flow. When the proposed method is plugged into the state-of-the-art video action recognition framework, it has 96:0 and 74:2 accuracy on UCF-101 and HMDB-51 respectively. The code for this project is available at this https URL.",
"We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions, these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96 smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small (",
"Extending state-of-the-art object detectors from image to video is challenging. The accuracy of detection suffers from degenerated object appearances in videos, e.g., motion blur, video defocus, rare poses, etc. Existing work attempts to exploit temporal information on box level, but such methods are not trained end-to-end. We present flow-guided feature aggregation, an accurate and end-to-end learning framework for video object detection. It leverages temporal coherence on feature level instead. It improves the per-frame features by aggregation of nearby features along the motion paths, and thus improves the video recognition accuracy. Our method significantly improves upon strong single-frame baselines in ImageNet VID, especially for more challenging fast moving objects. Our framework is principled, and on par with the best engineered systems winning the ImageNet VID challenges 2016, without additional bells-and-whistles. The proposed method, together with Deep Feature Flow, powered the winning entry of ImageNet VID challenges 2017. The code is available at this https URL.",
"Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition. which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-the-of-art performance on the datasets of HMDB51 ( @math ) and UCF101 ( @math ). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices.",
"",
"Most of the top performing action recognition methods use optical flow as a \"black box\" input. Here we take a deeper look at the combination of flow and action recognition, and investigate why optical flow is helpful, what makes a flow method good for action recognition, and how we can make it better. In particular, we investigate the impact of different flow algorithms and input transformations to better understand how these affect a state-of-the-art action recognition method. Furthermore, we fine tune two neural-network flow methods end-to-end on the most widely used action recognition dataset (UCF101). Based on these experiments, we make the following five observations: 1) optical flow is useful for action recognition because it is invariant to appearance, 2) optical flow methods are optimized to minimize end-point-error (EPE), but the EPE of current methods is not well correlated with action recognition performance, 3) for the flow methods tested, accuracy at boundaries and at small displacements is most correlated with action recognition performance, 4) training optical flow to minimize classification error instead of minimizing EPE improves recognition performance, and 5) optical flow learned for the task of action recognition differs from traditional optical flow especially inside the human body and at the boundary of the body. These observations may encourage optical flow researchers to look beyond EPE as a goal and guide action recognition researchers to seek better motion cues, leading to a tighter integration of the optical flow and action recognition communities."
]
}
|
1811.08015
|
2901108807
|
This paper introduces the problem of automatic font pairing. Font pairing is an important design task that is difficult for novices. Given a font selection for one part of a document (e.g., header), our goal is to recommend a font to be used in another part (e.g., body) such that the two fonts used together look visually pleasing. There are three main challenges in font pairing. First, this is a fine-grained problem, in which the subtle distinctions between fonts may be important. Second, rules and conventions of font pairing given by human experts are difficult to formalize. Third, font pairing is an asymmetric problem in that the roles played by header and body fonts are not interchangeable. To address these challenges, we propose automatic font pairing through learning visual relationships from large-scale human-generated font pairs. We introduce a new database for font pairing constructed from millions of PDF documents available on the Internet. We propose two font pairing algorithms: dual-space k-NN and asymmetric similarity metric learning (ASML). These two methods automatically learn fine-grained relationships from large-scale data. We also investigate several baseline methods based on the rules from professional designers. Experiments and user studies demonstrate the effectiveness of our proposed dataset and methods.
|
In the multimedia and computer vision fields, several methods have been proposed for font recognition @cite_28 @cite_8 @cite_0 @cite_7 and font prediction @cite_13 based on large datasets of fonts and their images. @cite_0 @cite_7 train deep neural networks for font recognition. @cite_13 propose multi-task deep neural networks to jointly predict the font face, color and size of each text element on a web design, by considering multi-scale visual features and semantic tags of the design. Our work is also related to systems that learn to parse web pages, such as Webzeitgeist @cite_26 . The work most relevant to ours is by O'Donovan @cite_9 , who present interfaces for finding fonts based on learned models of font style. However, their work focuses on single fonts in isolation, whereas we consider how two fonts pair with each other. Font pairing is also related to visual document analysis (e.g., @cite_14 @cite_12 ) and automatic generation of visual-textual presentation layouts @cite_5 .
|
{
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_0",
"@cite_5",
"@cite_13",
"@cite_12"
],
"mid": [
"2007644286",
"",
"2077532029",
"2963895734",
"2079479194",
"1981219830",
"2052378720",
"2271551547",
"",
""
],
"abstract": [
"Advances in data mining and knowledge discovery have transformed the way Web sites are designed. However, while visual presentation is an intrinsic part of the Web, traditional data mining techniques ignore render-time page structures and their attributes. This paper introduces design mining for the Web: using knowledge discovery techniques to understand design demographics, automate design curation, and support data-driven design tools. This idea is manifest in Webzeitgeist, a platform for large-scale design mining comprising a repository of over 100,000 Web pages and 100 million design elements. This paper describes the principles driving design mining, the implementation of the Webzeitgeist architecture, and the new class of data-driven design applications it enables.",
"",
"As font is one of the core design concepts, automatic font identification and similar font suggestion from an image or photo has been on the wish list of many designers. We study the Visual Font Recognition (VFR) problem [4] LFE, and advance the state-of-the-art remarkably by developing the DeepFont system. First of all, we build up the first available large-scale VFR dataset, named AdobeVFR, consisting of both labeled synthetic data and partially labeled real-world data. Next, to combat the domain mismatch between available training and testing data, we introduce a Convolutional Neural Network (CNN) decomposition approach, using a domain adaptation technique based on a Stacked Convolutional Auto-Encoder (SCAE) that exploits a large corpus of unlabeled real-world text images combined with synthetic data preprocessed in a specific way. Moreover, we study a novel learning-based model compression approach, in order to reduce the DeepFont model size without sacrificing its performance. The DeepFont system achieves an accuracy of higher than 80 (top-5) on our collected dataset, and also produces a good font similarity measure for font selection and suggestion. We also achieve around 6 times compression of the model without any visible loss of recognition accuracy.",
"We address a challenging fine-grain classification problem: recognizing a font style from an image of text. In this task, it is very easy to generate lots of rendered font examples but very hard to obtain real-world labeled images. This realto-synthetic domain gap caused poor generalization to new real data in previous methods ( (2014)). In this paper, we refer to Convolutional Neural Networks, and use an adaptation technique based on a Stacked Convolutional AutoEncoder that exploits unlabeled real-world images combined with synthetic data. The proposed method achieves an accuracy of higher than 80 (top-5) on a realworld dataset.",
"This paper addresses the large-scale visual font recogni- tion (VFR) problem, which aims at automatic identification of the typeface, weight, and slope of the text in an image or photo without any knowledge of content. Although vi- sual font recognition has many practical applications, it has largely been neglected by the vision community. To address the VFR problem, we construct a large-scale dataset con- taining 2, 420 font classes, which easily exceeds the scale of most image categorization datasets in computer vision. As font recognition is inherently dynamic and open-ended, i.e., new classes and data for existing categories are constantly added to the database over time, we propose a scalable so- lution based on the nearest class mean classifier (NCM). The core algorithm is built on local feature embedding, lo- cal feature metric learning and max-margin template se- lection, which is naturally amenable to NCM and thus to such open-ended classification problems. The new algo- rithm can generalize to new classes and new data at lit- tle added cost. Extensive experiments demonstrate that our approach is very effective on our synthetic test images, and achieves promising results on real world test images.",
"This paper presents interfaces for exploring large collections of fonts for design tasks. Existing interfaces typically list fonts in a long, alphabetically-sorted menu that can be challenging and frustrating to explore. We instead propose three interfaces for font selection. First, we organize fonts using high-level descriptive attributes, such as \"dramatic\" or \"legible.\" Second, we organize fonts in a tree-based hierarchical menu based on perceptual similarity. Third, we display fonts that are most similar to a user's currently-selected font. These tools are complementary; a user may search for \"graceful\" fonts, select a reasonable one, and then refine the results from a list of fonts similar to the selection. To enable these tools, we use crowdsourcing to gather font attribute data, and then train models to predict attribute values for new fonts. We use attributes to help learn a font similarity metric using crowdsourced comparisons. We evaluate the interfaces against a conventional list interface and find that our interfaces are preferred to the baseline. Our interfaces also produce better results in two real-world tasks: finding the nearest match to a target font, and font selection for graphic designs.",
"We develop the DeepFont system, a large-scale learning-based solution for automatic font identification, organization and selection. In this proposed technical demonstration, we will give our audience a tour to the DeepFont system, with the focus on its impacts on real consumer products, including but not limited to: 1) a cloud-based iOS App for font recognition; 2) a web-based tool for font similarity evaluation and discovery.",
"Visual-textual presentation layout (e.g., digital magazine cover, poster, Power Point slides, and any other rich media), which combines beautiful image and overlaid readable texts, can result in an eye candy touch to attract users’ attention. The designing of visual-textual presentation layout is therefore becoming ubiquitous in both commercially printed publications and online digital magazines. However, handcrafting aesthetically compelling layouts still remains challenging for many small businesses and amateur users. This article presents a system to automatically generate visual-textual presentation layouts by investigating a set of aesthetic design principles, through which an average user can easily create visually appealing layouts. The system is attributed with a set of topic-dependent layout templates and a computational framework integrating high-level aesthetic principles (in a top-down manner) and low-level image features (in a bottom-up manner). The layout templates, designed with prior knowledge from domain experts, define spatial layouts, semantic colors, harmonic color models, and font emotion and size constraints. We formulate the typography as an energy optimization problem by minimizing the cost of text intrusion, the utility of visual space, and the mismatch of information importance in perception and semantics, constrained by the automatically selected template and further preserving color harmonization. We demonstrate that our designs achieve the best reading experience compared with the reimplementation of parts of existing state-of-the-art designs through a series of user studies.",
"",
""
]
}
|
1811.08015
|
2901108807
|
This paper introduces the problem of automatic font pairing. Font pairing is an important design task that is difficult for novices. Given a font selection for one part of a document (e.g., header), our goal is to recommend a font to be used in another part (e.g., body) such that the two fonts used together look visually pleasing. There are three main challenges in font pairing. First, this is a fine-grained problem, in which the subtle distinctions between fonts may be important. Second, rules and conventions of font pairing given by human experts are difficult to formalize. Third, font pairing is an asymmetric problem in that the roles played by header and body fonts are not interchangeable. To address these challenges, we propose automatic font pairing through learning visual relationships from large-scale human-generated font pairs. We introduce a new database for font pairing constructed from millions of PDF documents available on the Internet. We propose two font pairing algorithms: dual-space k-NN and asymmetric similarity metric learning (ASML). These two methods automatically learn fine-grained relationships from large-scale data. We also investigate several baseline methods based on the rules from professional designers. Experiments and user studies demonstrate the effectiveness of our proposed dataset and methods.
|
In terms of methodology, our work is highly related to other visual pairing tasks, particularly pairing clothing @cite_1 @cite_15 @cite_21 @cite_23 @cite_3 @cite_22 , furniture @cite_19 , and food @cite_4 . Here we address font pairing, which entails particular difficulties, including the lack of an appropriate data source, the fine-grained differences between font types, and the need to pair entities of the same category rather than of different categories.
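As a hedged illustration of the nearest-neighbour flavour of pairing recommenders (in the spirit of the dual-space k-NN mentioned in the abstract above), the sketch below retrieves the body fonts historically paired with the header fonts most similar to a query. The embeddings and data are synthetic placeholders, not the paper's features.

```python
# Hedged sketch of a nearest-neighbour pairing recommender. Embeddings, data
# and the distance choice are assumptions for illustration only.
import numpy as np


def recommend_body_fonts(query_header, header_vecs, body_vecs, k=5):
    """Given a header-font embedding, find the k most similar observed header
    fonts and return the body-font embeddings they were paired with."""
    dists = np.linalg.norm(header_vecs - query_header, axis=1)
    nearest = np.argsort(dists)[:k]           # indices of similar header fonts
    return body_vecs[nearest]                 # their paired body fonts


rng = np.random.default_rng(0)
header_vecs = rng.normal(size=(1000, 64))     # embeddings of observed header fonts
body_vecs = rng.normal(size=(1000, 64))       # embeddings of the paired body fonts

query = rng.normal(size=64)                   # embedding of the user's header font
candidates = recommend_body_fonts(query, header_vecs, body_vecs, k=5)
print(candidates.shape)                       # (5, 64)
```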
|
{
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_23",
"@cite_15"
],
"mid": [
"2082396928",
"",
"2949988325",
"2027731328",
"2136340547",
"2090621143",
"2146851707",
"2221507685"
],
"abstract": [
"The cultural diversity of culinary practice, as illustrated by the variety of regional cuisines, raises the question of whether there are any general patterns that determine the ingredient combinations used in food today or principles that transcend individual tastes and recipes. We introduce a flavor network that captures the flavor compounds shared by culinary ingredients. Western cuisines show a tendency to use ingredient pairs that share many flavor compounds, supporting the so-called food pairing hypothesis. By contrast, East Asian cuisines tend to avoid compound sharing ingredients. Given the increasing availability of information on food preparation, our data-driven investigation opens new avenues towards a systematic understanding of culinary practice.",
"",
"In a modern recommender system, it is important to understand how products relate to each other. For example, while a user is looking for mobile phones, it might make sense to recommend other phones, but once they buy a phone, we might instead want to recommend batteries, cases, or chargers. These two types of recommendations are referred to as substitutes and complements: substitutes are products that can be purchased instead of each other, while complements are products that can be purchased in addition to each other. Here we develop a method to infer networks of substitutable and complementary products. We formulate this as a supervised link prediction task, where we learn the semantics of substitutes and complements from data associated with products. The primary source of data we use is the text of product reviews, though our method also makes use of features such as ratings, specifications, prices, and brands. Methodologically, we build topic models that are trained to automatically discover topics from text that are successful at predicting and explaining such relationships. Experimentally, we evaluate our system on the Amazon product catalog, a large dataset consisting of 9 million products, 237 million links, and 144 million reviews.",
"Humans inevitably develop a sense of the relationships between objects, some of which are based on their appearance. Some pairs of objects might be seen as being alternatives to each other (such as two pairs of jeans), while others may be seen as being complementary (such as a pair of jeans and a matching shirt). This information guides many of the choices that people make, from buying clothes to their interactions with each other. We seek here to model this human sense of the relationships between objects based on their appearance. Our approach is not based on fine-grained modeling of user annotations but rather on capturing the largest dataset possible and developing a scalable method for uncovering human notions of the visual relationships within. We cast this as a network inference problem defined on graphs of related images, and provide a large-scale dataset for the training and evaluation of the same. The system we develop is capable of recommending which clothes and accessories will go well together (and which will not), amongst a host of other applications.",
"In this paper, we aim at a practical system, magic closet, for automatic occasion-oriented clothing recommendation. Given a user-input occasion, e.g., wedding, shopping or dating, magic closet intelligently suggests the most suitable clothing from the user's own clothing photo album, or automatically pairs the user-specified reference clothing (upper-body or lower-body) with the most suitable one from online shops. Two key criteria are explicitly considered for the magic closet system. One criterion is to wear properly, e.g., compared to suit pants, it is more decent to wear a cocktail dress for a banquet occasion. The other criterion is to wear aesthetically, e.g., a red T-shirt matches better white pants than green pants. To narrow the semantic gap between the low-level features of clothing and the high-level occasion categories, we adopt middle-level clothing attributes (e.g., clothing category, color, pattern) as a bridge. More specifically, the clothing attributes are treated as latent variables in our proposed latent Support Vector Machine (SVM) based recommendation model. The wearing properly criterion is described in the model through a feature-occasion potential and an attribute-occasion potential, while the wearing aesthetically criterion is expressed by an attribute-attribute potential. To learn a generalize-well model and comprehensively evaluate it, we collect a large clothing What-to-Wear (WoW) dataset, and thoroughly annotate the whole dataset with 7 multi-value clothing attributes and 10 occasion categories via Amazon Mechanic Turk. Extensive experiments on the WoW dataset demonstrate the effectiveness of the magic closet system for both occasion-oriented clothing recommendation and pairing.",
"This paper presents a method for learning to predict the stylistic compatibility between 3D furniture models from different object classes: e.g., how well does this chair go with that table? To do this, we collect relative assessments of style compatibility using crowdsourcing. We then compute geometric features for each 3D model and learn a mapping of them into a space where Euclidean distances represent style incompatibility. Motivated by the geometric subtleties of style, we introduce part-aware geometric feature vectors that characterize the shapes of different parts of an object separately. Motivated by the need to compute style compatibility between different object classes, we introduce a method to learn object class-specific mappings from geometric features to a shared feature space. During experiments with these methods, we find that they are effective at predicting style compatibility agreed upon by people. We find in user studies that the learned compatibility metric is useful for novel interactive tools that: 1) retrieve stylistically compatible models for a query, 2) suggest a piece of furniture for an existing scene, and 3) help guide an interactive 3D modeler towards scenes with compatible furniture.",
"We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose two classes of data driven models in the Deterministic Fashion Recommenders (DFR) and Stochastic Fashion Recommenders (SFR) for solving this problem. We analyze relative merits and pitfalls of these algorithms through extensive experimentation on a large-scale data set and baseline them against existing ideas from color science. We also illustrate key fashion insights learned through these experiments and show how they can be employed to design better recommendation systems. The industrial applicability of proposed models is in the context of mobile fashion shopping. Finally, we also outline a large-scale annotated data set of fashion images Fashion-136K) that can be exploited for future research in data driven visual fashion.",
"With the rapid proliferation of smart mobile devices, users now take millions of photos every day. These include large numbers of clothing and accessory images. We would like to answer questions like 'What outfit goes well with this pair of shoes?' To answer these types of questions, one has to go beyond learning visual similarity and learn a visual notion of compatibility across categories. In this paper, we propose a novel learning framework to help answer these types of questions. The main idea of this framework is to learn a feature transformation from images of items into a latent space that expresses compatibility. For the feature transformation, we use a Siamese Convolutional Neural Network (CNN) architecture, where training examples are pairs of items that are either compatible or incompatible. We model compatibility based on co-occurrence in large-scale user behavior data, in particular co-purchase data from Amazon.com. To learn cross-category fit, we introduce a strategic method to sample training data, where pairs of items are heterogeneous dyads, i.e., the two elements of a pair belong to different high-level categories. While this approach is applicable to a wide variety of settings, we focus on the representative problem of learning compatible clothing style. Our results indicate that the proposed framework is capable of learning semantic information about visual style and is able to generate outfits of clothes, with items from different categories, that go well together."
]
}
|
1811.07984
|
2901241112
|
We propose an emission-oriented charging scheme to evaluate the emissions of electric vehicle (EV) charging from the electricity sector in the region of the Electric Reliability Council of Texas (ERCOT). We investigate both day- and night-charging scenarios combined with realistic system load demand under the emission-oriented vs. direct charging schemes. Our emission-oriented charging scheme reduces daytime carbon emissions by 13.8% on average. We also find that emission-oriented charging results in a significant CO2 reduction on 30% of the days in a year compared with direct charging. Apart from offering a flat rebate to EV owners, our analysis reveals that certain policy incentives (e.g. pricing) regarding EV charging should be taken into account in order to reflect the benefits of emissions reduction that have not been incorporated in the current market of electricity transactions.
|
Myriad studies discuss the environmental impacts of EV charging, but most neglect the fact that power plants are heterogeneous, with different emission rates. Several reports that analyze the benefits of EV adoption assume that the electricity used to recharge the vehicles is generated from a particular type of power plant @cite_3 . These studies conclude, as expected, that cleaner power plants yield greater environmental benefits. Such analyses are not useful for understanding the sensitivity of emissions to changes in EV charging demand at specific grid locations and times of day. A typical study @cite_4 uses a regression method to estimate marginal emissions across different U.S. regions and times of day. Although this improves the quantification of GHG emissions, it has drawbacks: for example, the marginal emission rate estimated from the regression does not vary with the level of electricity demand. Another relevant work is @cite_2 , which evaluates the emissions of standard, delayed, off-peak, and continuous charging for Xcel in Colorado, an early attempt to understand the emission outcomes of various charging schemes.
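To make the criticism of regression-based marginal emission estimates concrete, the following sketch regresses hour-to-hour changes in CO2 emissions on changes in system load; by construction it returns a single marginal emission factor that cannot vary with the demand level. The data are synthetic and the functional form is an assumption for illustration.

```python
# Illustrative regression-style marginal emission estimate (synthetic data).
# The single fitted slope is the marginal emission factor and, by construction,
# does not depend on the demand level, which is the drawback noted above.
import numpy as np

rng = np.random.default_rng(1)
load = 40_000 + 10_000 * np.sin(np.linspace(0, 20 * np.pi, 8760))  # MW, hourly
emissions = 0.45 * load + rng.normal(0, 500, load.size)            # tonnes CO2/h

d_load = np.diff(load)          # hour-to-hour change in demand (MWh)
d_co2 = np.diff(emissions)      # hour-to-hour change in emissions (tonnes)

# Least-squares slope = estimated marginal emission factor (tonnes CO2 per MWh)
marginal_rate = np.sum(d_load * d_co2) / np.sum(d_load ** 2)
print(f"marginal emission factor: {marginal_rate:.3f} t CO2/MWh")
```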
|
{
"cite_N": [
"@cite_4",
"@cite_3",
"@cite_2"
],
"mid": [
"2311068044",
"",
"1560793205"
],
"abstract": [
"Conventional generation units encounter a changing role in modern societies’ energy supply. With increased need for flexible operation, engineers and project managers have to evaluate the benefits of technical improvements. For this purpose, a valuation tool has been developed, comparing economical cornerstones and technical constraints of generation units to European Energy Exchange prices for PHELIX 2014. It enables the user to relate a change in technical parameters to an economic effect and possible revenues. Four different types of conventional power plants are investigated in scenarios with increasing CO2 and fuel prices to determine the impact of different flexibility options. Results show that an increased ramp rate has not the same magnitude of positive economic impact as reduced minimum operation load, based on an observation on a price signal with resolution of fifteen minutes.",
"",
"The combination of high oil costs, concerns about oil security and availability, and air quality issues related to vehicle emissions are driving interest in plug-in hybrid electric vehicles (PHEVs). PHEVs are similar to conventional hybrid electric vehicles, but feature a larger battery and plug-in charger that allows electricity from the grid to replace a portion of the petroleum-fueled drive energy. PHEVs may derive a substantial fraction of their miles from grid-derived electricity, but without the range restrictions of pure battery electric vehicles. As of early 2007, production of PHEVs is essentially limited to demonstration vehicles and prototypes. However, the technology has received considerable attention from the media, national security interests, environmental organizations, and the electric power industry. The use of PHEVs would represent a significant potential shift in the use of electricity and the operation of electric power systems. Electrification of the transportation sector could increase generation capacity and transmission and distribution (T&D) requirements, especially if vehicles are charged during periods of high demand. This study is designed to evaluate several of these PHEV-charging impacts on utility system operations within the Xcel Energy Colorado service territory."
]
}
|
1811.08055
|
2900906990
|
Nowadays, multivariate time series data are increasingly collected in various real world systems, e.g., power plants, wearable devices, etc. Anomaly detection and diagnosis in multivariate time series refer to identifying abnormal status in certain time steps and pinpointing the root causes. Building such a system, however, is challenging since it not only requires capturing the temporal dependency in each time series, but also needs to encode the inter-correlations between different pairs of time series. In addition, the system should be robust to noise and provide operators with different levels of anomaly scores based upon the severity of different incidents. Despite the fact that a number of unsupervised anomaly detection algorithms have been developed, few of them can jointly address these challenges. In this paper, we propose a Multi-Scale Convolutional Recurrent Encoder-Decoder (MSCRED) to perform anomaly detection and diagnosis in multivariate time series data. Specifically, MSCRED first constructs multi-scale (resolution) signature matrices to characterize multiple levels of the system statuses in different time steps. Subsequently, given the signature matrices, a convolutional encoder is employed to encode the inter-sensor (time series) correlations and an attention based Convolutional Long-Short Term Memory (ConvLSTM) network is developed to capture the temporal patterns. Finally, based upon the feature maps which encode the inter-sensor correlations and temporal information, a convolutional decoder is used to reconstruct the input signature matrices and the residual signature matrices are further utilized to detect and diagnose anomalies. Extensive empirical studies based on a synthetic dataset and a real power plant dataset demonstrate that MSCRED can outperform state-of-the-art baseline methods.
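A minimal sketch of the signature-matrix construction described above, under the assumption that the (i, j) entry at scale w is the scaled inner product of sensors i and j over the last w time steps (window lengths and sensor counts are placeholders, not the paper's settings):

```python
# Hedged sketch of multi-scale signature matrices (assumptions, not the
# authors' code): one pairwise inner-product matrix per window length.
import numpy as np


def signature_matrices(x: np.ndarray, t: int, windows=(10, 30, 60)) -> np.ndarray:
    """x: (n_sensors, T) multivariate time series; returns (len(windows), n, n)."""
    mats = []
    for w in windows:
        seg = x[:, t - w:t]                 # last w observations of every sensor
        mats.append(seg @ seg.T / w)        # pairwise inter-sensor correlations
    return np.stack(mats)


series = np.random.default_rng(2).normal(size=(30, 500))   # 30 sensors, 500 steps
sig = signature_matrices(series, t=500)
print(sig.shape)    # (3, 30, 30): one 30x30 signature matrix per scale
```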
|
One traditional type is distance-based methods @cite_26 @cite_8 . For instance, the @math -Nearest Neighbor (kNN) algorithm @cite_26 computes the anomaly score of each data sample as the average distance to its @math nearest neighbors. Similarly, clustering models @cite_2 @cite_27 cluster the data samples and find anomalies via a predefined outlierness score. In addition, classification methods, e.g., One-Class SVM @cite_29 , model the density distribution of the training data and classify new data as normal or abnormal. Although these methods have demonstrated their effectiveness in various applications, they may not work well on multivariate time series since they cannot capture temporal dependencies appropriately. To address this issue, temporal prediction methods, e.g., Autoregressive Moving Average (ARMA) @cite_5 and its variants @cite_31 , have been used to model temporal dependency and perform anomaly detection. However, these models are sensitive to noise and may therefore produce more false positives when noise is severe. Other traditional methods include correlation methods @cite_20 , ensemble methods @cite_15 ,
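For concreteness, the kNN anomaly score described above can be sketched in a few lines with scikit-learn: the score of a test sample is its average distance to its k nearest neighbours in the (assumed normal) training data. The data and the choice of k are illustrative.

```python
# Sketch of the kNN anomaly score: average distance to the k nearest
# neighbours in the training data (higher score = more anomalous).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
train = rng.normal(size=(500, 8))                      # normal operating data
test = np.vstack([rng.normal(size=(5, 8)),             # normal test points
                  rng.normal(loc=6.0, size=(5, 8))])   # obvious anomalies

k = 5
nn = NearestNeighbors(n_neighbors=k).fit(train)
dists, _ = nn.kneighbors(test)                         # distances to k nearest neighbours
scores = dists.mean(axis=1)
print(np.round(scores, 2))                             # anomalies get clearly larger scores
```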
|
{
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_15",
"@cite_29",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_31",
"@cite_20"
],
"mid": [
"1988833430",
"2135039490",
"2056081083",
"",
"2180566385",
"2019014808",
"2104353081",
"",
"2045765911"
],
"abstract": [
"We present an outlier detection using indegree number (ODIN) algorithm that utilizes k-nearest neighbour graph. Improvements to existing kNN distance-based method are also proposed. We compare the methods with real and synthetic datasets. The results show that the proposed method achieves reasonable results with synthetic data and outperforms compared methods with real data sets with small number of observations.",
"This paper addresses the task of change analysis of correlated multi-sensor systems. The goal of change analysis is to compute the anomaly score of each sensor when we know that the system has some potential difference from a reference state. Examples include validating the proper performance of various car sensors in the automobile industry. We solve this problem based on a neighborhood preservation principle - If the system is working normally, the neighborhood graph of each sensor is almost invariant against the fluctuations of experimental conditions. Here a neighborhood graph is defined based on the correlation between sensor signals. With the notion of stochastic neighborhood, our method is capable of robustly computing the anomaly score of each sensor under conditions that are hard to be detected by other naive methods.",
"Outlier detection has recently become an important problem in many industrial and financial applications. In this paper, a novel feature bagging approach for detecting outliers in very large, high dimensional and noisy databases is proposed. It combines results from multiple outlier detection algorithms that are applied using different set of features. Every outlier detection algorithm uses a small subset of features that are randomly selected from the original feature set. As a result, each outlier detector identifies different outliers, and thus assigns to all data records outlier scores that correspond to their probability of being outliers. The outlier scores computed by the individual outlier detection algorithms are then combined in order to find the better quality outliers. Experiments performed on several synthetic and real life data sets show that the proposed methods for combining outputs from multiple outlier detection algorithms provide non-trivial improvements over the base algorithm.",
"",
"An integrated framework for density-based cluster analysis, outlier detection, and data visualization is introduced in this article. The main module consists of an algorithm to compute hierarchical estimates of the level sets of a density, following Hartigan’s classic model of density-contour clusters and trees. Such an algorithm generalizes and improves existing density-based clustering techniques with respect to different aspects. It provides as a result a complete clustering hierarchy composed of all possible density-based clusters following the nonparametric model adopted, for an infinite range of density thresholds. The resulting hierarchy can be easily processed so as to provide multiple ways for data visualization and exploration. It can also be further postprocessed so that: (i) a normalized score of “outlierness” can be assigned to each data object, which unifies both the global and local perspectives of outliers into a single definition; and (ii) a “flat” (i.e., nonhierarchical) clustering solution composed of clusters extracted from local cuts through the cluster tree (possibly corresponding to different density thresholds) can be obtained, either in an unsupervised or in a semisupervised way. In the unsupervised scenario, the algorithm corresponding to this postprocessing module provides a global, optimal solution to the formal problem of maximizing the overall stability of the extracted clusters. If partially labeled objects or instance-level constraints are provided by the user, the algorithm can solve the problem by considering both constraints violations satisfactions and cluster stability criteria. An asymptotic complexity analysis, both in terms of running time and memory space, is described. Experiments are reported that involve a variety of synthetic and real datasets, including comparisons with state-of-the-art, density-based clustering and (global and local) outlier detection methods.",
"In this paper, we present a new definition for outlier: cluster-based local outlier, which is meaningful and provides importance to the local data behavior. A measure for identifying the physical significance of an outlier is designed, which is called cluster-based local outlier factor (CBLOF). We also propose the FindCBLOF algorithm for discovering outliers. The experimental results show that our approach outperformed the existing methods on identifying meaningful and interesting outliers.",
"Preface1Difference Equations12Lag Operators253Stationary ARMA Processes434Forecasting725Maximum Likelihood Estimation1176Spectral Analysis1527Asymptotic Distribution Theory1808Linear Regression Models2009Linear Systems of Simultaneous Equations23310Covariance-Stationary Vector Processes25711Vector Autoregressions29112Bayesian Analysis35113The Kalman Filter37214Generalized Method of Moments40915Models of Nonstationary Time Series43516Processes with Deterministic Time Trends45417Univariate Processes with Unit Roots47518Unit Roots in Multivariate Time Series54419Cointegration57120Full-Information Maximum Likelihood Analysis of Cointegrated Systems63021Time Series Models of Heteroskedasticity65722Modeling Time Series with Changes in Regime677A Mathematical Review704B Statistical Tables751C Answers to Selected Exercises769D Greek Letters and Mathematical Symbols Used in the Text786Author Index789Subject Index792",
"",
"In this paper, we propose a novel outlier detection model to find outliers that deviate from the generating mechanisms of normal instances by considering combinations of different subsets of attributes, as they occur when there are local correlations in the data set. Our model enables to search for outliers in arbitrarily oriented subspaces of the original feature space. We show how in addition to an outlier score, our model also derives an explanation of the outlierness that is useful in investigating the results. Our experiments suggest that our novel method can find different outliers than existing work and can be seen as a complement of those approaches."
]
}
|
1811.08055
|
2900906990
|
Nowadays, multivariate time series data are increasingly collected in various real world systems, e.g., power plants, wearable devices, etc. Anomaly detection and diagnosis in multivariate time series refer to identifying abnormal status in certain time steps and pinpointing the root causes. Building such a system, however, is challenging since it not only requires to capture the temporal dependency in each time series, but also need encode the inter-correlations between different pairs of time series. In addition, the system should be robust to noise and provide operators with different levels of anomaly scores based upon the severity of different incidents. Despite the fact that a number of unsupervised anomaly detection algorithms have been developed, few of them can jointly address these challenges. In this paper, we propose a Multi-Scale Convolutional Recurrent Encoder-Decoder (MSCRED), to perform anomaly detection and diagnosis in multivariate time series data. Specifically, MSCRED first constructs multi-scale (resolution) signature matrices to characterize multiple levels of the system statuses in different time steps. Subsequently, given the signature matrices, a convolutional encoder is employed to encode the inter-sensor (time series) correlations and an attention based Convolutional Long-Short Term Memory (ConvLSTM) network is developed to capture the temporal patterns. Finally, based upon the feature maps which encode the inter-sensor correlations and temporal information, a convolutional decoder is used to reconstruct the input signature matrices and the residual signature matrices are further utilized to detect and diagnose anomalies. Extensive empirical studies based on a synthetic dataset and a real power plant dataset demonstrate that MSCRED can outperform state-of-the-art baseline methods.
|
Besides traditional methods, deep-learning-based unsupervised anomaly detection algorithms @cite_24 @cite_7 @cite_14 @cite_13 have gained a lot of attention recently. For instance, the Deep Autoencoding Gaussian Mixture Model (DAGMM) @cite_13 jointly considers a deep autoencoder and a Gaussian mixture model to model the density distribution of multi-dimensional data. The LSTM encoder-decoder @cite_24 @cite_23 models the temporal dependency of time series with LSTM networks and achieves better generalization capability than traditional methods. Despite their effectiveness, these methods cannot jointly consider temporal dependency, noise resistance, and the interpretation of anomaly severity.
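The reconstruction idea behind the LSTM encoder-decoder baseline can be sketched as follows; this is an illustrative PyTorch toy under assumed window length, hidden size, and feature count, not the cited architectures:

```python
# Illustrative toy (not the cited architectures): an LSTM encoder-decoder trained to
# reconstruct windows of a multivariate series; high reconstruction error flags anomalies.
import torch
import torch.nn as nn

class LSTMEncDec(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                       # x: (batch, time, features)
        _, (h, _) = self.encoder(x)             # h: (1, batch, hidden)
        z = h.transpose(0, 1).repeat(1, x.size(1), 1)  # broadcast latent state over time
        dec, _ = self.decoder(z)
        return self.out(dec)                    # reconstruction of x

model = LSTMEncDec(n_features=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 30, 4)                       # toy batch of "normal" windows
for _ in range(5):                              # brief training loop, for illustration only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()
score = (model(x) - x).pow(2).mean(dim=(1, 2))  # per-window anomaly score
print(score.shape)                              # torch.Size([8])
```

Windows whose reconstruction error exceeds a threshold calibrated on normal data would be reported as anomalous.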
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_24",
"@cite_23",
"@cite_13"
],
"mid": [
"2743138268",
"2398119937",
"",
"2952042565",
"2786088545"
],
"abstract": [
"Deep autoencoders, and other deep neural networks, have demonstrated their effectiveness in discovering non-linear features across many problem domains. However, in many real-world problems, large outliers and pervasive noise are commonplace, and one may not have access to clean training data as required by standard deep denoising autoencoders. Herein, we demonstrate novel extensions to deep autoencoders which not only maintain a deep autoencoders' ability to discover high quality, non-linear features but can also eliminate outliers and noise without access to any clean training data. Our model is inspired by Robust Principal Component Analysis, and we split the input data X into two parts, @math , where @math can be effectively reconstructed by a deep autoencoder and @math contains the outliers and noise in the original data X. Since such splitting increases the robustness of standard deep autoencoders, we name our model a \"Robust Deep Autoencoder (RDA)\". Further, we present generalizations of our results to grouped sparsity norms which allow one to distinguish random anomalies from other types of structured corruptions, such as a collection of features being corrupted across many instances or a collection of instances having more corruptions than their fellows. Such \"Group Robust Deep Autoencoders (GRDA)\" give rise to novel anomaly detection approaches whose superior performance we demonstrate on a selection of benchmark problems.",
"In this paper, we attack the anomaly detection problem by directly modeling the data distribution with deep architectures. We propose deep structured energy based models (DSEBMs), where the energy function is the output of a deterministic deep neural network with structure. We develop novel model architectures to integrate EBMs with different types of data such as static data, sequential data, and spatial data, and apply appropriate model architectures to adapt to the data structure. Our training algorithm is built upon the recent development of score matching sm , which connects an EBM with a regularized autoencoder, eliminating the need for complicated sampling method. Statistically sound decision criterion can be derived for anomaly detection purpose from the perspective of the energy landscape of the data distribution. We investigate two decision criteria for performing anomaly detection: the energy score and the reconstruction error. Extensive empirical studies on benchmark tasks demonstrate that our proposed model consistently matches or outperforms all the competing methods.",
"",
"The Nonlinear autoregressive exogenous (NARX) model, which predicts the current value of a time series based upon its previous values as well as the current and past values of multiple driving (exogenous) series, has been studied for decades. Despite the fact that various NARX models have been developed, few of them can capture the long-term temporal dependencies appropriately and select the relevant driving series to make predictions. In this paper, we propose a dual-stage attention-based recurrent neural network (DA-RNN) to address these two issues. In the first stage, we introduce an input attention mechanism to adaptively extract relevant driving series (a.k.a., input features) at each time step by referring to the previous encoder hidden state. In the second stage, we use a temporal attention mechanism to select relevant encoder hidden states across all time steps. With this dual-stage attention scheme, our model can not only make predictions effectively, but can also be easily interpreted. Thorough empirical studies based upon the SML 2010 dataset and the NASDAQ 100 Stock dataset demonstrate that the DA-RNN can outperform state-of-the-art methods for time series prediction.",
"Unsupervised anomaly detection on multi- or high-dimensional data is of great importance in both fundamental machine learning research and industrial applications, for which density estimation lies at the core. Although previous approaches based on dimensionality reduction followed by density estimation have made fruitful progress, they mainly suffer from decoupled model learning with inconsistent optimization goals and incapability of preserving essential information in the low-dimensional space. In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection. Our model utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction error for each input data point, which is further fed into a Gaussian Mixture Model (GMM). Instead of using decoupled two-stage training and the standard Expectation-Maximization (EM) algorithm, DAGMM jointly optimizes the parameters of the deep autoencoder and the mixture model simultaneously in an end-to-end fashion, leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The joint optimization, which well balances autoencoding reconstruction, density estimation of latent representation, and regularization, helps the autoencoder escape from less attractive local optima and further reduce reconstruction errors, avoiding the need of pre-training. Experimental results on several public benchmark datasets show that, DAGMM significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14 improvement based on the standard F1 score."
]
}
|
1811.08055
|
2900906990
|
Nowadays, multivariate time series data are increasingly collected in various real world systems, e.g., power plants, wearable devices, etc. Anomaly detection and diagnosis in multivariate time series refer to identifying abnormal status in certain time steps and pinpointing the root causes. Building such a system, however, is challenging since it not only requires to capture the temporal dependency in each time series, but also need encode the inter-correlations between different pairs of time series. In addition, the system should be robust to noise and provide operators with different levels of anomaly scores based upon the severity of different incidents. Despite the fact that a number of unsupervised anomaly detection algorithms have been developed, few of them can jointly address these challenges. In this paper, we propose a Multi-Scale Convolutional Recurrent Encoder-Decoder (MSCRED), to perform anomaly detection and diagnosis in multivariate time series data. Specifically, MSCRED first constructs multi-scale (resolution) signature matrices to characterize multiple levels of the system statuses in different time steps. Subsequently, given the signature matrices, a convolutional encoder is employed to encode the inter-sensor (time series) correlations and an attention based Convolutional Long-Short Term Memory (ConvLSTM) network is developed to capture the temporal patterns. Finally, based upon the feature maps which encode the inter-sensor correlations and temporal information, a convolutional decoder is used to reconstruct the input signature matrices and the residual signature matrices are further utilized to detect and diagnose anomalies. Extensive empirical studies based on a synthetic dataset and a real power plant dataset demonstrate that MSCRED can outperform state-of-the-art baseline methods.
|
In addition, our model design is inspired by fully convolutional neural networks @cite_30 , convolutional LSTM networks @cite_22 , and the attention mechanism @cite_6 @cite_33 . This paper is also related to other time series applications such as clustering and classification @cite_25 @cite_1 @cite_11 , segmentation @cite_17 @cite_19 , and so on.
|
{
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_22",
"@cite_1",
"@cite_17",
"@cite_6",
"@cite_19",
"@cite_25",
"@cite_11"
],
"mid": [
"2952632681",
"",
"1485009520",
"2952668258",
"2107633943",
"2133564696",
"1801768344",
"2181643798",
""
],
"abstract": [
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"",
"The goal of precipitation nowcasting is to predict the future rainfall intensity in a local region over a relatively short period of time. Very few previous studies have examined this crucial and challenging weather forecasting problem from the machine learning perspective. In this paper, we formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem in which both the input and the prediction target are spatiotemporal sequences. By extending the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions, we propose the convolutional LSTM (ConvLSTM) and use it to build an end-to-end trainable model for the precipitation nowcasting problem. Experiments show that our ConvLSTM network captures spatiotemporal correlations better and consistently outperforms FC-LSTM and the state-of-the-art operational ROVER algorithm for precipitation nowcasting.",
"Subsequence clustering of multivariate time series is a useful tool for discovering repeated patterns in temporal data. Once these patterns have been discovered, seemingly complicated datasets can be interpreted as a temporal sequence of only a small number of states, or clusters. For example, raw sensor data from a fitness-tracking application can be expressed as a timeline of a select few actions (i.e., walking, sitting, running). However, discovering these patterns is challenging because it requires simultaneous segmentation and clustering of the time series. Furthermore, interpreting the resulting clusters is difficult, especially when the data is high-dimensional. Here we propose a new method of model-based clustering, which we call Toeplitz Inverse Covariance-based Clustering (TICC). Each cluster in the TICC method is defined by a correlation network, or Markov random field (MRF), characterizing the interdependencies between different observations in a typical subsequence of that cluster. Based on this graphical representation, TICC simultaneously segments and clusters the time series data. We solve the TICC problem through alternating minimization, using a variation of the expectation maximization (EM) algorithm. We derive closed-form solutions to efficiently solve the two resulting subproblems in a scalable way, through dynamic programming and the alternating direction method of multipliers (ADMM), respectively. We validate our approach by comparing TICC to several state-of-the-art baselines in a series of synthetic experiments, and we then demonstrate on an automobile sensor dataset how TICC can be used to learn interpretable clusters in real-world scenarios.",
"In recent years, there has been an explosion of interest in mining time-series databases. As with most computer science problems, representation of the data is the key to efficient and effective solutions. One of the most commonly used representations is piecewise linear approximation. This representation has been used by various researchers to support clustering, classification, indexing and association rule mining of time-series data. A variety of algorithms have been proposed to obtain this representation, with several algorithms having been independently rediscovered several times. In this paper, we undertake the first extensive review and empirical comparison of all proposed techniques. We show that all these algorithms have fatal flaws from a data-mining perspective. We introduce a novel algorithm that we empirically show to be superior to all others in the literature.",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"Time series are difficult to monitor, summarize and predict. Segmentation organizes time series into few intervals having uniform characteristics (flatness, linearity, modality, monotonicity and so on). For scalability, we require fast linear time algorithms. The popular piecewise linear model can determine where the data goes up or down and at what rate. Unfortunately, when the data does not follow a linear model, the computation of the local slope creates overfitting. We propose an adaptive time series model where the polynomial degree of each interval vary (constant, linear and so on). Given a number of regressors, the cost of each interval is its polynomial degree: constant intervals cost 1 regressor, linear intervals cost 2 regressors, and so on. Our goal is to minimize the Euclidean (l_2) error for a given model complexity. Experimentally, we investigate the model where intervals can be either constant or linear. Over synthetic random walks, historical stock market prices, and electrocardiograms, the adaptive model provides a more accurate segmentation than the piecewise linear model without increasing the cross-validation error or the running time, while providing a richer vocabulary to applications. Implementation issues, such as numerical stability and real-world performance, are discussed.",
"Given a motion capture sequence, how to identify the category of the motion? Classifying human motions is a critical task in motion editing and synthesizing, for which manual labeling is clearly inefficient for large databases. Here we study the general problem of time series clustering. We propose a novel method of clustering time series that can (a) learn joint temporal dynamics in the data; (b) handle time lags; and (c) produce interpretable features. We achieve this by developing complex-valued linear dynamical systems (CLDS), which include real-valued Kalman filters as a special case; our advantage is that the transition matrix is simpler (just diagonal), and the transmission one easier to interpret. We then present Complex-Fit, a novel EM algorithm to learn the parameters for the general model and its special case for clustering. Our approach produces significant improvement in clustering quality, 1.5 to 5 times better than well-known competitors on real motion capture sequences.",
""
]
}
|
1811.08075
|
2901338647
|
Despite the great success object detection and segmentation models have achieved in recognizing individual objects in images, performance on cognitive tasks such as image caption, semantic image retrieval, and visual QA is far from satisfactory. To achieve better performance on these cognitive tasks, merely recognizing individual object instances is insufficient. Instead, the interactions between object instances need to be captured in order to facilitate reasoning and understanding of the visual scenes in an image. Scene graph, a graph representation of images that captures object instances and their relationships, offers a comprehensive understanding of an image. However, existing techniques on scene graph generation fail to distinguish subjects and objects in the visual scenes of images and thus do not perform well with real-world datasets where exist ambiguous object instances. In this work, we propose a novel scene graph generation model for predicting object instances and its corresponding relationships in an image. Our model, SG-CRF, learns the sequential order of subject and object in a relationship triplet, and the semantic compatibility of object instance nodes and relationship nodes in a scene graph efficiently. Experiments empirically show that SG-CRF outperforms the state-of-the-art methods, on three different datasets, i.e., CLEVR, VRD, and Visual Genome, raising the Recall@100 from 24.99 to 49.95 , from 41.92 to 50.47 , and from 54.69 to 54.77 , respectively.
|
Lu et al. @cite_5 attempt to independently predict object and relationship categories using a visual module, and fine-tune the likelihood of relationship prediction by leveraging language priors from word embeddings @cite_23 . However, @cite_5 ignores the surrounding context and infers the individual components of a scene graph in isolation, even though individual predictions of object instances and relationships can benefit greatly from that context.
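The general re-ranking idea, combining a visual score with a language prior derived from word-embedding similarity, can be illustrated with a toy sketch; the embeddings and scores below are made-up assumptions and do not reproduce the cited model:

```python
# Toy illustration with made-up numbers: re-weight a visual relationship score with a
# language prior derived from word-embedding similarity to a frequently seen triplet.
import numpy as np

emb = {                                  # hypothetical word vectors (real systems use word2vec)
    "man":     np.array([0.90, 0.10, 0.00]),
    "person":  np.array([0.85, 0.15, 0.05]),
    "horse":   np.array([0.20, 0.80, 0.30]),
    "bicycle": np.array([0.10, 0.90, 0.20]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def language_prior(subj, obj, seen_subj, seen_obj):
    # High when (subj, obj) is close in embedding space to a pair that co-occurred
    # often with the predicate during training, e.g. ("person", "bicycle") for "riding".
    return 0.5 * (cosine(emb[subj], emb[seen_subj]) + cosine(emb[obj], emb[seen_obj]))

visual_score = 0.6                       # made-up output of a visual module for "man riding horse"
final_score = visual_score * language_prior("man", "horse", "person", "bicycle")
print(round(final_score, 3))
```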
|
{
"cite_N": [
"@cite_5",
"@cite_23"
],
"mid": [
"2479423890",
"2950133940"
],
"abstract": [
"Visual relationships capture a wide variety of interactions between pairs of objects in images (e.g. “man riding bicycle” and “man pushing bicycle”). Consequently, the set of possible relationships is extremely large and it is difficult to obtain sufficient training examples for all possible relationships. Because of this limitation, previous work on visual relationship detection has concentrated on predicting only a handful of relationships. Though most relationships are infrequent, their objects (e.g. “man” and “bicycle”) and predicates (e.g. “riding” and “pushing”) independently occur more frequently. We propose a model that uses this insight to train visual models for objects and predicates individually and later combines them together to predict multiple relationships per image. We improve on prior work by leveraging language priors from semantic word embeddings to finetune the likelihood of a predicted relationship. Our model can scale to predict thousands of types of relationships from a few examples. Additionally, we localize the objects in the predicted relationships as bounding boxes in the image. We further demonstrate that understanding relationships can improve content based image retrieval.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible."
]
}
|
1811.08075
|
2901338647
|
Despite the great success object detection and segmentation models have achieved in recognizing individual objects in images, performance on cognitive tasks such as image caption, semantic image retrieval, and visual QA is far from satisfactory. To achieve better performance on these cognitive tasks, merely recognizing individual object instances is insufficient. Instead, the interactions between object instances need to be captured in order to facilitate reasoning and understanding of the visual scenes in an image. Scene graph, a graph representation of images that captures object instances and their relationships, offers a comprehensive understanding of an image. However, existing techniques on scene graph generation fail to distinguish subjects and objects in the visual scenes of images and thus do not perform well with real-world datasets where exist ambiguous object instances. In this work, we propose a novel scene graph generation model for predicting object instances and its corresponding relationships in an image. Our model, SG-CRF, learns the sequential order of subject and object in a relationship triplet, and the semantic compatibility of object instance nodes and relationship nodes in a scene graph efficiently. Experiments empirically show that SG-CRF outperforms the state-of-the-art methods, on three different datasets, i.e., CLEVR, VRD, and Visual Genome, raising the Recall@100 from 24.99 to 49.95 , from 41.92 to 50.47 , and from 54.69 to 54.77 , respectively.
|
Instead of independently predicting object instance and relationship categories, Xu et al. @cite_35 investigate the problem of relationship reasoning by jointly inferring relationships with their surrounding context according to the topological structure of the scene graph, i.e., fine-tuning the visual features of each node in the scene graph by leveraging information from its surrounding context. Although the performance of scene graph generation is improved compared with @cite_5 , our observation suggests that @cite_35 is likely to confuse subjects with objects, and its performance decreases dramatically when facing real-world images with complex relationships and many ambiguous entities.
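A toy sketch of context aggregation on a small graph is given below; it uses a fixed mean-of-neighbours update as a simplifying assumption, whereas the cited model learns GRU-based updates between the node graph and its dual edge graph:

```python
# Toy sketch of context aggregation on a small scene graph: each node feature is
# repeatedly mixed with the mean of its neighbours' features. Illustrative only.
import numpy as np

def refine(node_feats, adj, n_iters=2, alpha=0.5):
    # node_feats: (n_nodes, d) features; adj: (n_nodes, n_nodes) 0/1 adjacency matrix
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    h = node_feats.copy()
    for _ in range(n_iters):
        context = adj @ h / deg                 # mean of each node's neighbours
        h = (1 - alpha) * h + alpha * context   # mix own feature with its context
    return h

feats = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
print(refine(feats, adj).round(2))
```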
|
{
"cite_N": [
"@cite_35",
"@cite_5"
],
"mid": [
"2579549467",
"2479423890"
],
"abstract": [
"Understanding a visual scene goes beyond recognizing individual objects in isolation. Relationships between objects also constitute rich semantic information about the scene. In this work, we explicitly model the objects and their relationships using scene graphs, a visually-grounded graphical structure of an image. We propose a novel end-to-end model that generates such structured scene representation from an input image. Our key insight is that the graph generation problem can be formulated as message passing between the primal node graph and its dual edge graph. Our joint inference model can take advantage of contextual cues to make better predictions on objects and their relationships. The experiments show that our model significantly outperforms previous methods on the Visual Genome dataset as well as support relation inference in NYU Depth V2 dataset.",
"Visual relationships capture a wide variety of interactions between pairs of objects in images (e.g. “man riding bicycle” and “man pushing bicycle”). Consequently, the set of possible relationships is extremely large and it is difficult to obtain sufficient training examples for all possible relationships. Because of this limitation, previous work on visual relationship detection has concentrated on predicting only a handful of relationships. Though most relationships are infrequent, their objects (e.g. “man” and “bicycle”) and predicates (e.g. “riding” and “pushing”) independently occur more frequently. We propose a model that uses this insight to train visual models for objects and predicates individually and later combines them together to predict multiple relationships per image. We improve on prior work by leveraging language priors from semantic word embeddings to finetune the likelihood of a predicted relationship. Our model can scale to predict thousands of types of relationships from a few examples. Additionally, we localize the objects in the predicted relationships as bounding boxes in the image. We further demonstrate that understanding relationships can improve content based image retrieval."
]
}
|
1811.08075
|
2901338647
|
Despite the great success object detection and segmentation models have achieved in recognizing individual objects in images, performance on cognitive tasks such as image caption, semantic image retrieval, and visual QA is far from satisfactory. To achieve better performance on these cognitive tasks, merely recognizing individual object instances is insufficient. Instead, the interactions between object instances need to be captured in order to facilitate reasoning and understanding of the visual scenes in an image. Scene graph, a graph representation of images that captures object instances and their relationships, offers a comprehensive understanding of an image. However, existing techniques on scene graph generation fail to distinguish subjects and objects in the visual scenes of images and thus do not perform well with real-world datasets where exist ambiguous object instances. In this work, we propose a novel scene graph generation model for predicting object instances and its corresponding relationships in an image. Our model, SG-CRF, learns the sequential order of subject and object in a relationship triplet, and the semantic compatibility of object instance nodes and relationship nodes in a scene graph efficiently. Experiments empirically show that SG-CRF outperforms the state-of-the-art methods, on three different datasets, i.e., CLEVR, VRD, and Visual Genome, raising the Recall@100 from 24.99 to 49.95 , from 41.92 to 50.47 , and from 54.69 to 54.77 , respectively.
|
More existing works build upon @cite_35 . Li et al. @cite_28 jointly train the scene graph generation model of @cite_35 with an image captioning model to capture the semantic-level mutual connections between the scene graph generation and image captioning tasks. Li et al. @cite_13 further propose to enable message passing within the convolutional layers of @cite_35 to capture lower-level visual features for relationship prediction.
|
{
"cite_N": [
"@cite_28",
"@cite_35",
"@cite_13"
],
"mid": [
"2963649796",
"2579549467",
"2605736949"
],
"abstract": [
"Object detection, scene graph generation and region captioning, which are three scene understanding tasks at different semantic levels, are tied together: scene graphs are generated on top of objects detected in an image with their pairwise relationship predicted, while region captioning gives a language description of the objects, their attributes, relations and other context information. In this work, to leverage the mutual connections across semantic levels, we propose a novel neural network model, termed as Multi-level Scene Description Network (denoted as MSDN), to solve the three vision tasks jointly in an end-to-end manner. Object, phrase, and caption regions are first aligned with a dynamic graph based on their spatial and semantic connections. Then a feature refining structure is used to pass messages across the three levels of semantic tasks through the graph. We benchmark the learned model on three tasks, and show the joint learning across three tasks with our proposed method can bring mutual improvements over previous models. Particularly, on the scene graph generation task, our proposed method outperforms the stateof- art method with more than 3 margin. Code has been made publicly available.",
"Understanding a visual scene goes beyond recognizing individual objects in isolation. Relationships between objects also constitute rich semantic information about the scene. In this work, we explicitly model the objects and their relationships using scene graphs, a visually-grounded graphical structure of an image. We propose a novel end-to-end model that generates such structured scene representation from an input image. Our key insight is that the graph generation problem can be formulated as message passing between the primal node graph and its dual edge graph. Our joint inference model can take advantage of contextual cues to make better predictions on objects and their relationships. The experiments show that our model significantly outperforms previous methods on the Visual Genome dataset as well as support relation inference in NYU Depth V2 dataset.",
"As the intermediate level task connecting image captioning and object detection, visual relationship detection started to catch researchers attention because of its descriptive power and clear structure. It detects the objects and captures their pair-wise interactions with a subject-predicate-object triplet, e.g. person-ride-horse. In this paper, each visual relationship is considered as a phrase with three components. We formulate the visual relationship detection as three inter-connected recognition problems and propose a Visual Phrase guided Convolutional Neural Network (ViP-CNN) to address them simultaneously. In ViP-CNN, we present a Phrase-guided Message Passing Structure (PMPS) to establish the connection among relationship components and help the model consider the three problems jointly. Corresponding non-maximum suppression method and model training strategy are also proposed. Experimental results show that our ViP-CNN outperforms the state-of-art method both in speed and accuracy. We further pretrain ViP-CNN on our cleansed Visual Genome Relationship dataset, which is found to perform better than the pretraining on the ImageNet for this task."
]
}
|
1811.08075
|
2901338647
|
Despite the great success object detection and segmentation models have achieved in recognizing individual objects in images, performance on cognitive tasks such as image caption, semantic image retrieval, and visual QA is far from satisfactory. To achieve better performance on these cognitive tasks, merely recognizing individual object instances is insufficient. Instead, the interactions between object instances need to be captured in order to facilitate reasoning and understanding of the visual scenes in an image. Scene graph, a graph representation of images that captures object instances and their relationships, offers a comprehensive understanding of an image. However, existing techniques on scene graph generation fail to distinguish subjects and objects in the visual scenes of images and thus do not perform well with real-world datasets where exist ambiguous object instances. In this work, we propose a novel scene graph generation model for predicting object instances and its corresponding relationships in an image. Our model, SG-CRF, learns the sequential order of subject and object in a relationship triplet, and the semantic compatibility of object instance nodes and relationship nodes in a scene graph efficiently. Experiments empirically show that SG-CRF outperforms the state-of-the-art methods, on three different datasets, i.e., CLEVR, VRD, and Visual Genome, raising the Recall@100 from 24.99 to 49.95 , from 41.92 to 50.47 , and from 54.69 to 54.77 , respectively.
|
Conditional Random Fields (CRFs), a classical tool for modeling complex structures consisting of a large number of interrelated parts, have been used extensively in graph inference. The key idea of using CRFs for graph inference is to incorporate dependencies between the vertices of a graph. Much effort has been expended on image segmentation @cite_4 @cite_24 @cite_21 , named-entity recognition @cite_1 @cite_3 , and image retrieval @cite_0 using CRFs.
|
{
"cite_N": [
"@cite_4",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_0"
],
"mid": [
"2161236525",
"2951729963",
"2141099517",
"2949240516",
"",
"2077069816"
],
"abstract": [
"Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.",
"By taking the semantic object parsing task as an exemplar application scenario, we propose the Graph Long Short-Term Memory (Graph LSTM) network, which is the generalization of LSTM from sequential data or multi-dimensional data to general graph-structured data. Particularly, instead of evenly and fixedly dividing an image to pixels or patches in existing multi-dimensional LSTM structures (e.g., Row, Grid and Diagonal LSTMs), we take each arbitrary-shaped superpixel as a semantically consistent node, and adaptively construct an undirected graph for each image, where the spatial relations of the superpixels are naturally used as edges. Constructed on such an adaptive graph topology, the Graph LSTM is more naturally aligned with the visual patterns in the image (e.g., object boundaries or appearance similarities) and provides a more economical information propagation route. Furthermore, for each optimization step over Graph LSTM, we propose to use a confidence-driven scheme to update the hidden and memory states of nodes progressively till all nodes are updated. In addition, for each node, the forgets gates are adaptively learned to capture different degrees of semantic correlation with neighboring nodes. Comprehensive evaluations on four diverse semantic object parsing datasets well demonstrate the significant superiority of our Graph LSTM over other state-of-the-art solutions.",
"Models for many natural language tasks benefit from the flexibility to use overlapping, non-independent features. For example, the need for labeled data can be drastically reduced by taking advantage of domain knowledge in the form of word lists, part-of-speech tags, character n-grams, and capitalization patterns. While it is difficult to capture such inter-dependent features with a generative probabilistic model, conditionally-trained models, such as conditional maximum entropy models, handle them well. There has been significant work with such models for greedy sequence modeling in NLP (Ratnaparkhi, 1996; , 1998).",
"State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures---one based on bidirectional LSTMs and conditional random fields, and the other that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art performance in NER in four languages without resorting to any language-specific knowledge or resources such as gazetteers.",
"",
"This paper develops a novel framework for semantic image retrieval based on the notion of a scene graph. Our scene graphs represent objects (“man”, “boat”), attributes of objects (“boat is white”) and relationships between objects (“man standing on boat”). We use these scene graphs as queries to retrieve semantically related images. To this end, we design a conditional random field model that reasons about possible groundings of scene graphs to test images. The likelihoods of these groundings are used as ranking scores for retrieval. We introduce a novel dataset of 5,000 human-generated scene graphs grounded to images and use this dataset to evaluate our method for image retrieval. In particular, we evaluate retrieval using full scene graphs and small scene subgraphs, and show that our method outperforms retrieval methods that use only objects or low-level image features. In addition, we show that our full model can be used to improve object localization compared to baseline methods."
]
}
|
1811.08075
|
2901338647
|
Despite the great success object detection and segmentation models have achieved in recognizing individual objects in images, performance on cognitive tasks such as image caption, semantic image retrieval, and visual QA is far from satisfactory. To achieve better performance on these cognitive tasks, merely recognizing individual object instances is insufficient. Instead, the interactions between object instances need to be captured in order to facilitate reasoning and understanding of the visual scenes in an image. Scene graph, a graph representation of images that captures object instances and their relationships, offers a comprehensive understanding of an image. However, existing techniques on scene graph generation fail to distinguish subjects and objects in the visual scenes of images and thus do not perform well with real-world datasets where exist ambiguous object instances. In this work, we propose a novel scene graph generation model for predicting object instances and its corresponding relationships in an image. Our model, SG-CRF, learns the sequential order of subject and object in a relationship triplet, and the semantic compatibility of object instance nodes and relationship nodes in a scene graph efficiently. Experiments empirically show that SG-CRF outperforms the state-of-the-art methods, on three different datasets, i.e., CLEVR, VRD, and Visual Genome, raising the Recall@100 from 24.99 to 49.95 , from 41.92 to 50.47 , and from 54.69 to 54.77 , respectively.
|
Kr "a henb "u hl al @cite_4 propose an efficient CRFs mean-field approximate inference algorithm for image segmentation. They model each image as a fully connected grid graph and use CRFs to refine segmentation results obtained from Fully Convolutional Network @cite_33 . Zheng al @cite_24 @cite_9 combines the strengths of CNNs with CRFs , and formulate mean-field inference as Recurrent Neural Networks. In the mean time, CRFs reasoning is widely used to classify named object instances @cite_1 @cite_3 in text into pre-defined categories. Inspired by the great success of CRFs in image segmentation and named-entity recognition, Johnson al @cite_0 design a CRFs model that reasons about the connections between an image and its ground-truth scene graph, and use these scene graphs as queries to retrieve images with similar semantic meanings.
|
{
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_0"
],
"mid": [
"2161236525",
"2952632681",
"",
"2141099517",
"2949240516",
"",
"2077069816"
],
"abstract": [
"Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"",
"Models for many natural language tasks benefit from the flexibility to use overlapping, non-independent features. For example, the need for labeled data can be drastically reduced by taking advantage of domain knowledge in the form of word lists, part-of-speech tags, character n-grams, and capitalization patterns. While it is difficult to capture such inter-dependent features with a generative probabilistic model, conditionally-trained models, such as conditional maximum entropy models, handle them well. There has been significant work with such models for greedy sequence modeling in NLP (Ratnaparkhi, 1996; , 1998).",
"State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures---one based on bidirectional LSTMs and conditional random fields, and the other that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art performance in NER in four languages without resorting to any language-specific knowledge or resources such as gazetteers.",
"",
"This paper develops a novel framework for semantic image retrieval based on the notion of a scene graph. Our scene graphs represent objects (“man”, “boat”), attributes of objects (“boat is white”) and relationships between objects (“man standing on boat”). We use these scene graphs as queries to retrieve semantically related images. To this end, we design a conditional random field model that reasons about possible groundings of scene graphs to test images. The likelihoods of these groundings are used as ranking scores for retrieval. We introduce a novel dataset of 5,000 human-generated scene graphs grounded to images and use this dataset to evaluate our method for image retrieval. In particular, we evaluate retrieval using full scene graphs and small scene subgraphs, and show that our method outperforms retrieval methods that use only objects or low-level image features. In addition, we show that our full model can be used to improve object localization compared to baseline methods."
]
}
|
1811.08075
|
2901338647
|
Despite the great success object detection and segmentation models have achieved in recognizing individual objects in images, performance on cognitive tasks such as image caption, semantic image retrieval, and visual QA is far from satisfactory. To achieve better performance on these cognitive tasks, merely recognizing individual object instances is insufficient. Instead, the interactions between object instances need to be captured in order to facilitate reasoning and understanding of the visual scenes in an image. Scene graph, a graph representation of images that captures object instances and their relationships, offers a comprehensive understanding of an image. However, existing techniques on scene graph generation fail to distinguish subjects and objects in the visual scenes of images and thus do not perform well with real-world datasets where exist ambiguous object instances. In this work, we propose a novel scene graph generation model for predicting object instances and its corresponding relationships in an image. Our model, SG-CRF, learns the sequential order of subject and object in a relationship triplet, and the semantic compatibility of object instance nodes and relationship nodes in a scene graph efficiently. Experiments empirically show that SG-CRF outperforms the state-of-the-art methods, on three different datasets, i.e., CLEVR, VRD, and Visual Genome, raising the Recall@100 from 24.99 to 49.95 , from 41.92 to 50.47 , and from 54.69 to 54.77 , respectively.
|
Our work is related to @cite_35 in that we also employ message passing to generate scene graphs. The critical difference is how message passing is used. @cite_35 use message passing to iteratively fine-tune the visual features of each node in the scene graph via a Recurrent Neural Network. The performance decreases greatly after two iterations because noise is aggregated in the visual features as the number of iterations increases. Instead, we use message passing to capture semantic compatibility at the word-semantic level. The performance of our model improves monotonically and converges to the optimum in an average of @math iterations on real-world datasets @cite_12 .
|
{
"cite_N": [
"@cite_35",
"@cite_12"
],
"mid": [
"2579549467",
"2277195237"
],
"abstract": [
"Understanding a visual scene goes beyond recognizing individual objects in isolation. Relationships between objects also constitute rich semantic information about the scene. In this work, we explicitly model the objects and their relationships using scene graphs, a visually-grounded graphical structure of an image. We propose a novel end-to-end model that generates such structured scene representation from an input image. Our key insight is that the graph generation problem can be formulated as message passing between the primal node graph and its dual edge graph. Our joint inference model can take advantage of contextual cues to make better predictions on objects and their relationships. The experiments show that our model significantly outperforms previous methods on the Visual Genome dataset as well as support relation inference in NYU Depth V2 dataset.",
"Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked \"What vehicle is the person riding?\", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that \"the person is riding a horse-drawn carriage.\" In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images where each image has an average of @math 35 objects, @math 26 attributes, and @math 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs."
]
}
|
1811.07993
|
2901271689
|
We propose a novel Generalized Zero-Shot learning (GZSL) method that is agnostic to both unseen images and unseen semantic vectors during training. Prior works in this context propose to map high-dimensional visual features to the semantic domain, we believe contributes to the semantic gap. To bridge the gap, we propose a novel low-dimensional embedding of visual instances that is "visually semantic." Analogous to semantic data that quantifies the existence of an attribute in the presented instance, components of our visual embedding quantifies existence of a prototypical part-type in the presented instance. In parallel, as a thought experiment, we quantify the impact of noisy semantic data by utilizing a novel visual oracle to visually supervise a learner. These factors, namely semantic noise, visual-semantic gap and label noise lead us to propose a new graphical model for inference with pairwise interactions between label, semantic data, and inputs. We tabulate results on a number of benchmark datasets demonstrating significant improvement in accuracy over state-of-the-art under both semantic and visual supervision.
|
In this context, our approach bears some similarities to @cite_14 and @cite_10 . In particular, @cite_14 propose zoom-net as a means to filter out redundant visual features, such as removing background, and to focus attention on important locations of an object. @cite_10 further extend this insight, proposing a visual part detector (VPDE-Net) and utilizing high-dimensional part feature vectors as input for semantic transfer. @cite_10's proposal is to incorporate the resulting reduced representation as a means to synthesize unseen examples, leveraging knowledge of unseen class attributes. Different from these works, we develop methods to learn a statistical representation of the mixture proportions of latent parts. Apart from being low-dimensional, the mixture proportions intuitively capture the underlying similarity of a part-type to other part-types found in other classes. The focus of semantic mapping is then to transfer knowledge between the mixture proportions of part-types and semantic similarity.
|
{
"cite_N": [
"@cite_14",
"@cite_10"
],
"mid": [
"2962716320",
"2799215068"
],
"abstract": [
"Zero-shot learning (ZSL) aims to recognize unseen image categories by learning an embedding space between image and semantic representations. For years, among existing works, it has been the center task to learn the proper mapping matrices aligning the visual and semantic space, whilst the importance to learn discriminative representations for ZSL is ignored. In this work, we retrospect existing methods and demonstrate the necessity to learn discriminative representations for both visual and semantic instances of ZSL. We propose an end-to-end network that is capable of 1) automatically discovering discriminative regions by a zoom network; and 2) learning discriminative semantic representations in an augmented space introduced for both user-defined and latent attributes. Our proposed method is tested extensively on two challenging ZSL datasets, and the experiment results show that the proposed method significantly outperforms state-of-the-art methods.",
"Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks(GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g. Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning."
]
}
|
1811.08106
|
2900877758
|
In this paper, we investigate the Chinese font synthesis problem and propose a Pyramid Embedded Generative Adversarial Network (PEGAN) to automatically generate Chinese character images. The PEGAN consists of one generator and one discriminator. The generator is built on an encoder-decoder structure with cascaded refinement connections and mirror skip connections. The cascaded refinement connections embed a multiscale pyramid of the downsampled original input into the encoder feature maps of different layers, and multi-scale feature maps from the encoder are connected to the corresponding feature maps in the decoder to form the mirror skip connections. By combining the generative adversarial loss, pixel-wise loss, category loss and perceptual loss, the generator and discriminator can be trained alternately to synthesize character images. In order to verify the effectiveness of our proposed PEGAN, we first build an evaluation set, in which the characters are selected according to their stroke number and frequency of use, and then use both qualitative and quantitative metrics to measure the performance of our model compared with the baseline method. The experimental results demonstrate the effectiveness of our proposed model and show its potential to automatically extend small font banks into complete ones.
|
Strokes are the basic units of Chinese characters, and stroke-extraction-based methods were the dominant way to generate fonts in the early stage. In @cite_7 , the strokes of both the source font and the target font are first extracted. Then an autoencoder and self-organizing maps are adopted to cluster the extracted strokes into 100 different groups. Finally, new characters of the target font are generated by stroke replacement. In @cite_8 , an automatic extrapolation method for small fonts is proposed. The strokes of characters from a small subset are first extracted to form a stroke pool. In addition, a transformation matrix between the source font and the target font is learnt from the correspondence between their skeletons. The missing target characters are then generated based on the strokes and the transformed skeletons. These methods rely mainly on stroke extraction; moreover, stroke extraction for calligraphic fonts is still a challenging problem.
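As a rough, hedged illustration of the stroke clustering and replacement idea summarized above, the following Python sketch clusters stroke descriptors of a source font into 100 groups (matching the description above), assigns each available target-font stroke to a cluster, and assembles a new character by swapping every source stroke for a same-cluster target stroke. The stroke descriptors, the nearest-neighbour replacement rule, and all array shapes are illustrative assumptions, not details taken from the cited works.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder stroke descriptors (e.g., flattened skeleton/shape features), one row per stroke.
source_strokes = np.random.rand(5000, 64)   # strokes extracted from the source font
target_strokes = np.random.rand(800, 64)    # strokes extracted from the small target-font subset

# Cluster the source strokes into 100 groups, as in the description above.
kmeans = KMeans(n_clusters=100, n_init=10).fit(source_strokes)

# Record which target strokes fall into each cluster.
target_labels = kmeans.predict(target_strokes)
cluster_to_target = {c: np.where(target_labels == c)[0] for c in range(100)}

def replace_strokes(char_stroke_ids):
    """Map a character's source-stroke indices to target-font stroke indices, cluster by cluster."""
    out = []
    for sid in char_stroke_ids:
        c = kmeans.labels_[sid]
        candidates = cluster_to_target.get(c, np.array([], dtype=int))
        if len(candidates) == 0:
            out.append(None)  # no target stroke available for this cluster
            continue
        # Choose the target stroke in the same cluster that is closest to the source stroke.
        d = np.linalg.norm(target_strokes[candidates] - source_strokes[sid], axis=1)
        out.append(int(candidates[d.argmin()]))
    return out

print(replace_strokes([0, 1, 2]))  # target-stroke indices used to assemble one character
```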
|
{
"cite_N": [
"@cite_7",
"@cite_8"
],
"mid": [
"2144016354",
"2582695981"
],
"abstract": [
"Given the large number and complexity of Chinese characters, pattern matching based on structural decomposition and analysis is believed to be necessary and essential to off-line character recognition. The paper proposes a model of stroke extraction for Chinese characters. One problem for stroke extraction is how to extract primary strokes. Another major problem is to solve the segmentation ambiguities at intersection points. We use the degree information and the stroke continuation property to tackle these two problems. The proposed model can be used to extract strokes from both printed and handwritten character images.",
"This paper addresses the automatic generation of a typographic font from a subset of characters. Specifically, we use a subset of a typographic font to extrapolate additional characters. Consequently, we obtain a complete font containing a number of characters sufficient for daily use. The automated generation of Japanese fonts is in high demand because a Japanese font requires over 1,000 characters. Unfortunately, professional typographers create most fonts, resulting in significant financial and time investments for font generation. The proposed method can be a great aid for font creation because designers do not need to create the majority of the characters for a new font. The proposed method uses strokes from given samples for font generation. The strokes, from which we construct characters, are extracted by exploiting a character skeleton dataset. This study makes three main contributions: a novel method of extracting strokes from characters, which is applicable to both standard fonts and their variations; a fully automated approach for constructing characters; and a selection method for sample characters. We demonstrate our proposed method by generating 2,965 characters in 47 fonts. Objective and subjective evaluations verify that the generated characters are similar to handmade characters."
]
}
|
1811.08040
|
2901391039
|
Extractive summarization is very useful for physicians to better manage and digest Electronic Health Records (EHRs). However, the training of a supervised model requires disease-specific medical background and is thus very expensive. We studied how to utilize the intrinsic correlation between multiple EHRs to generate pseudo-labels and train a supervised model with no external annotation. Experiments on real-patient data validate that our model is effective in summarizing crucial disease-specific information for patients.
|
Researchers have also put considerable effort into better managing and utilizing information from EHRs. Information Extraction: @cite_4 used machine learning and keyword-search methods to extract information from text to improve case detection. @cite_6 utilized critical care flow sheet data to develop a regression model for predicting unplanned extubation. @cite_8 proposed to automatically generate records conditioned on varied data sources such as demographics and lab test results.
|
{
"cite_N": [
"@cite_4",
"@cite_6",
"@cite_8"
],
"mid": [
"2283041611",
"2885568805",
""
],
"abstract": [
"Background: Electronic medical records (EMRs) are revolutionizing health-related research. One key issue for study quality is the accurate identification of patients with the condition of interest. Information in EMRs can be entered as structured codes or unstructured free text. The majority of research studies have used only coded parts of EMRs for case-detection, which may bias findings, miss cases, and reduce study quality. This review examines whether incorporating information from text into case-detection algorithms can improve research quality. Methods: A systematic search returned 9659 papers, 67 of which reported on the extraction of information from free text of EMRs with the stated purpose of detecting cases of a named clinical condition. Methods for extracting information from text and the technical accuracy of case-detection algorithms were reviewed. Results: Studies mainly used US hospital-based EMRs, and extracted information from text for 41 conditions using keyword searches, rule-based algorithms, and machine learning methods. There was no clear difference in case-detection algorithm accuracy between rule-based and machine learning methods of extraction. Inclusion of information from text resulted in a significant improvement in algorithm sensitivity and area under the receiver operating characteristic in comparison to codes alone (median sensitivity 78 (codes + text) vs 62 (codes), P = .03; median area under the receiver operating characteristic 95 (codes + text) vs 88 (codes), P = .025). Conclusions: Text in EMRs is accessible, especially with open source information extraction algorithms, and significantly improves case detection when combined with codes. More harmonization of reporting within EMR studies is needed, particularly standardized reporting of algorithm accuracy metrics like positive predictive value (precision) and sensitivity (recall).",
"Clinicians spend a significant amount of time inputting free-form textual notes into Electronic Health Records (EHR) systems. Much of this documentation work is seen as a burden, reducing time spent with patients and contributing to clinician burnout. With the aspiration of AI-assisted note-writing, we propose a new language modeling task predicting the content of notes conditioned on past data from a patient's medical record, including patient demographics, labs, medications, and past notes. We train generative models using the public, de-identified MIMIC-III dataset and compare generated notes with those in the dataset on multiple measures. We find that much of the content can be predicted, and that many common templates found in notes can be learned. We discuss how such models can be useful in supporting assistive note-writing features such as error-detection and auto-complete.",
""
]
}
|
1811.08073
|
2901385052
|
Person re-identification (ReID) is aimed at identifying the same person across videos captured from different cameras. Since networks that extract global features with ordinary architectures struggle to capture local features due to their weak attention mechanisms, researchers have proposed many elaborately designed ReID networks; while these greatly improve accuracy, the model size and the feature extraction latency also soar. We argue that a relatively compact ordinary network extracting globally pooled features can still extract discriminative local features and achieve state-of-the-art precision, provided the model's parameters are properly learnt. In order to reduce the difficulty of learning hard identity labels, we propose a novel knowledge distillation method: Factorized Distillation, which factorizes both the feature maps and the retrieval features of a holistic ReID network to mimic the representations of multiple partial ReID models, thus transferring knowledge from the partial ReID models to the holistic network. Experiments show that a model trained with the proposed method can outperform the state of the art with relatively few network parameters.
|
Face verification is a field closely related to ReID. One of @cite_28 's experiments employed concatenated features, extracted by an ensemble of regional face models, as targets to train the student with a regression loss. The concatenated features can be shortened thanks to neuron selection, which makes use of a face-attributes dataset. Although neuron selection is very effective, the student's feature dimension inevitably grows after adding a new teacher to the ensemble, limiting the scalability of the training system. Different from @cite_28 , we utilize feature enhancement to reduce intra-identity variance, and use feature factorization so that the student's feature dimension does not grow with the number of teachers.
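To make the distillation recipe described above more concrete, here is a minimal PyTorch sketch in which a student regresses onto the concatenation of an ensemble of teachers' features; because the targets are concatenated, the student's output dimension grows with the number of teachers, which is exactly the scalability issue noted above. The network sizes, the simple linear backbones, and the optimizer settings are illustrative assumptions rather than the configuration of the cited work.

```python
import torch
import torch.nn as nn

# Two frozen "teacher" embeddings (e.g., regional face models), each producing 128-D features.
teachers = [nn.Sequential(nn.Linear(512, 128), nn.ReLU()) for _ in range(2)]
for t in teachers:
    t.requires_grad_(False)

# The student regresses onto the concatenated teacher features (here 256-D),
# so its output dimension grows whenever a new teacher is added.
student = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 256))
opt = torch.optim.SGD(student.parameters(), lr=0.01)
mse = nn.MSELoss()

x = torch.randn(32, 512)                      # a batch of backbone features (placeholder data)
with torch.no_grad():
    target = torch.cat([t(x) for t in teachers], dim=1)

pred = student(x)
loss = mse(pred, target)                      # regression loss toward the teacher ensemble
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```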
|
{
"cite_N": [
"@cite_28"
],
"mid": [
"2543539599"
],
"abstract": [
"The recent advanced face recognition systems were built on large Deep Neural Networks (DNNs) or their ensembles, which have millions of parameters. However, the expensive computation of DNNs make their deployment difficult on mobile and embedded devices. This work addresses model compression for face recognition, where the learned knowledge of a large teacher network or its ensemble is utilized as supervision to train a compact student network. Unlike previous works that represent the knowledge by the soften label probabilities, which are difficult to fit, we represent the knowledge by using the neurons at the higher hidden layer, which preserve as much information as the label probabilities, but are more compact. By leveraging the essential characteristics (domain knowledge) of the learned face representation, a neuron selection method is proposed to choose neurons that are most relevant to face recognition. Using the selected neurons as supervision to mimic the single networks of DeepID2+ and DeepID3, which are the state-of-the-art face recognition systems, a compact student with simple network structure achieves better verification accuracy on LFW than its teachers, respectively. When using an ensemble of DeepID2+ as teacher, a mimicked student is able to outperform it and achieves 51.6× compression ratio and 90× speed-up in inference, making this cumbersome model applicable on portable devices."
]
}
|
1811.08048
|
2901386711
|
Many natural language questions require recognizing and reasoning with qualitative relationships (e.g., in science, economics, and medicine), but are challenging to answer with corpus-based methods. Qualitative modeling provides tools that support such reasoning, but the semantic parsing task of mapping questions into those models has formidable challenges. We present QuaRel, a dataset of diverse story questions involving qualitative relationships that characterize these challenges, and techniques that begin to address them. The dataset has 2771 questions relating 19 different types of quantities. For example, "Jenny observes that the robot vacuum cleaner moves slower on the living room carpet than on the bedroom carpet. Which carpet has more friction?" We contribute (1) a simple and flexible conceptual framework for representing these kinds of questions; (2) the QuaRel dataset, including logical forms, exemplifying the parsing challenges; and (3) two novel models for this task, built as extensions of type-constrained semantic parsing. The first of these models (called QuaSP+) significantly outperforms off-the-shelf tools on QuaRel. The second (QuaSP+Zero) demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships without requiring additional training data, something not possible with previous models. This work thus makes inroads into answering complex, qualitative questions that require reasoning, and scaling to new relationships at low cost. The dataset and models are available at this http URL
|
There has been rapid progress in question-answering (QA), spanning a wide variety of tasks and phenomena, including factoid QA @cite_4 , entailment @cite_7 , sentiment @cite_25 , and ellipsis and coreference @cite_17 . Our contribution here is the first dataset specifically targeted at qualitative relationships, an important category of language that has been less explored. While questions requiring reasoning about qualitative relations sometimes appear in other datasets, e.g., @cite_6 , our dataset specifically focuses on them so their challenges can be studied.
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_6",
"@cite_25",
"@cite_17"
],
"mid": [
"2427527485",
"2953084091",
"2794325560",
"2113459411",
""
],
"abstract": [
"We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0 , a significant improvement over a simple baseline (20 ). However, human performance (86.8 ) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at this https URL",
"Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.",
"We present a new question set, text corpus, and baselines assembled to encourage AI research in advanced question answering. Together, these constitute the AI2 Reasoning Challenge (ARC), which requires far more powerful knowledge and reasoning than previous challenges such as SQuAD or SNLI. The ARC question set is partitioned into a Challenge Set and an Easy Set, where the Challenge Set contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurence algorithm. The dataset contains only natural, grade-school science questions (authored for human tests), and is the largest public-domain set of this kind (7,787 questions). We test several baselines on the Challenge Set, including leading neural models from the SQuAD and SNLI tasks, and find that none are able to significantly outperform a random baseline, reflecting the difficult nature of this task. We are also releasing the ARC Corpus, a corpus of 14M science sentences relevant to the task, and implementations of the three neural baseline models tested. Can your model perform better? We pose ARC as a challenge to the community.",
"Unsupervised vector-based approaches to semantics can model rich lexical meanings, but they largely fail to capture sentiment information that is central to many word meanings and important for a wide range of NLP tasks. We present a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term--document information as well as rich sentiment content. The proposed model can leverage both continuous and multi-dimensional sentiment information as well as non-sentiment annotations. We instantiate the model to utilize the document-level sentiment polarity annotations present in many online documents (e.g. star ratings). We evaluate the model using small, widely used sentiment and subjectivity corpora and find it out-performs several previously introduced methods for sentiment classification. We also introduce a large dataset of movie reviews to serve as a more robust benchmark for work in this area.",
""
]
}
|
1811.08048
|
2901386711
|
Many natural language questions require recognizing and reasoning with qualitative relationships (e.g., in science, economics, and medicine), but are challenging to answer with corpus-based methods. Qualitative modeling provides tools that support such reasoning, but the semantic parsing task of mapping questions into those models has formidable challenges. We present QuaRel, a dataset of diverse story questions involving qualitative relationships that characterize these challenges, and techniques that begin to address them. The dataset has 2771 questions relating 19 different types of quantities. For example, "Jenny observes that the robot vacuum cleaner moves slower on the living room carpet than on the bedroom carpet. Which carpet has more friction?" We contribute (1) a simple and flexible conceptual framework for representing these kinds of questions; (2) the QuaRel dataset, including logical forms, exemplifying the parsing challenges; and (3) two novel models for this task, built as extensions of type-constrained semantic parsing. The first of these models (called QuaSP+) significantly outperforms off-the-shelf tools on QuaRel. The second (QuaSP+Zero) demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships without requiring additional training data, something not possible with previous models. This work thus makes inroads into answering complex, qualitative questions that require reasoning, and scaling to new relationships at low cost. The dataset and models are available at this http URL
|
For answering such questions, we treat the problem as mapping language to a structured formalism (semantic parsing) where simple qualitative reasoning can occur. Semantic parsing has a long history @cite_32 @cite_27 @cite_0 @cite_30 , using datasets about geography @cite_32 , travel booking @cite_3 , factoid QA over knowledge bases @cite_0 , Wikipedia tables @cite_5 , and many more. Our contributions to this line of research are: a dataset that features phenomena under-represented in prior datasets, namely (1) highly diverse language describing open-domain qualitative problems, and (2) the need to reason over entities that have no explicit formal representation; and methods for adapting existing semantic parsers to address these phenomena.
|
{
"cite_N": [
"@cite_30",
"@cite_32",
"@cite_3",
"@cite_0",
"@cite_27",
"@cite_5"
],
"mid": [
"2757361303",
"2163274265",
"2613624942",
"2252136820",
"2547185913",
"2101964891"
],
"abstract": [
"",
"This paper presents recent work using the CHILL parser acquisition system to automate the construction of a natural-language interface for database queries. CHILL treats parser acquisition as the learning of search-control rules within a logic program representing a shift-reduce parser and uses techniques from Inductive Logic Programming to learn relational control knowledge. Starting with a general framework for constructing a suitable logical form, CHILL is able to train on a corpus comprising sentences paired with database queries and induce parsers that map subsequent sentences directly into executable queries. Experimental results with a complete database-query application for U.S. geography show that CHILL is able to learn parsers that outperform a preexisting, hand-crafted counterpart. These results demonstrate the ability of a corpus-based system to produce more than purely syntactic representations. They also provide direct evidence of the utility of an empirical approach at the level of a complete natural language application.",
"",
"In this paper, we train a semantic parser that scales up to Freebase. Instead of relying on annotated logical forms, which is especially expensive to obtain at large scale, we learn from question-answer pairs. The main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question. We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus. Second, we use a bridging operation to generate additional predicates based on neighboring predicates. On the dataset of Cai and Yates (2013), despite not having annotated logical forms, our system outperforms their state-of-the-art parser. Additionally, we collected a more realistic and challenging dataset of question-answer pairs and improves over a natural baseline.",
"What capabilities are required for an AI system to pass standard 4th Grade Science Tests? Previous work has examined the use of Markov Logic Networks (MLNs) to represent the requisite background knowledge and interpret test questions, but did not improve upon an information retrieval (IR) baseline. In this paper, we describe an alternative approach that operates at three levels of representation and reasoning: information retrieval, corpus statistics, and simple inference over a semi-automatically constructed knowledge base, to achieve substantially improved results. We evaluate the methods on six years of unseen, unedited exam questions from the NY Regents Science Exam (using only non-diagram, multiple choice questions), and show that our overall system's score is 71.3 , an improvement of 23.8 (absolute) over the MLN-based method described in previous work. We conclude with a detailed analysis, illustrating the complementary strengths of each method in the ensemble. Our datasets are being released to enable further research.",
"Two important aspects of semantic parsing for question answering are the breadth of the knowledge source and the depth of logical compositionality. While existing work trades off one aspect for another, this paper simultaneously makes progress on both fronts through a new task: answering complex questions on semi-structured tables using question-answer pairs as supervision. The central challenge arises from two compounding factors: the broader domain results in an open-ended set of relations, and the deeper compositionality results in a combinatorial explosion in the space of logical forms. We propose a logical-form driven parsing algorithm guided by strong typing constraints and show that it obtains significant improvements over natural baselines. For evaluation, we created a new dataset of 22,033 complex questions on Wikipedia tables, which is made publicly available."
]
}
|
1811.08048
|
2901386711
|
Many natural language questions require recognizing and reasoning with qualitative relationships (e.g., in science, economics, and medicine), but are challenging to answer with corpus-based methods. Qualitative modeling provides tools that support such reasoning, but the semantic parsing task of mapping questions into those models has formidable challenges. We present QuaRel, a dataset of diverse story questions involving qualitative relationships that characterize these challenges, and techniques that begin to address them. The dataset has 2771 questions relating 19 different types of quantities. For example, "Jenny observes that the robot vacuum cleaner moves slower on the living room carpet than on the bedroom carpet. Which carpet has more friction?" We contribute (1) a simple and flexible conceptual framework for representing these kinds of questions; (2) the QuaRel dataset, including logical forms, exemplifying the parsing challenges; and (3) two novel models for this task, built as extensions of type-constrained semantic parsing. The first of these models (called QuaSP+) significantly outperforms off-the-shelf tools on QuaRel. The second (QuaSP+Zero) demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships without requiring additional training data, something not possible with previous models. This work thus makes inroads into answering complex, qualitative questions that require reasoning, and scaling to new relationships at low cost. The dataset and models are available at this http URL
|
There has been some work connecting language with qualitative reasoning, although mainly focused on extracting qualitative models themselves from text rather than on question interpretation, e.g., @cite_16 @cite_12 . Recent work by crouse2018learning also includes interpreting questions that require identifying qualitative processes in text, in contrast to our setting of interpreting NL story questions that involve qualitative comparisons.
|
{
"cite_N": [
"@cite_16",
"@cite_12"
],
"mid": [
"2295536541",
"2053190303"
],
"abstract": [
"The naturalness of qualitative reasoning suggests that qualitative representations might be an important component of the semantics of natural language. Prior work showed that frame-based representations of qualitative process theory constructs could indeed be extracted from natural language texts. That technique relied on the parser recognizing specific syntactic constructions, which had limited coverage. This paper describes a new approach, using narrative function to represent the higher-order relationships between the constituents of a sentence and between sentences in a discourse. We outline how narrative function combined with query-driven abduction enables the same kinds of information to be extracted from natural language texts. Moreover, we also show how the same technique can be used to extract type-level qualitative representations from text, and used to improve performance in playing a strategy game.",
"Abstract Objects move, collide, flow, bend, heat up, cool down, stretch, compress, and boil. These and other things that cause changes in objects over time are intuitively characterized as processes . To understand commonsense physical reasoning and make programs that interact with the physical world as well as people do we must understand qualitative reasoning about processes, when they will occur, their effects, and when they will stop. Qualitative process theory defines a simple notion of physical process that appears useful as a language in which to write dynamical theories. Reasoning about processes also motivates a new qualitative representation for quantity in terms of inequalities, called the quantity space . This paper describes the basic concepts of qualitative process theory, several different kinds of reasoning that can be performed with them, and discusses its implications for causal reasoning. Several extended examples illustrate the utility of the theory, including figuring out that a boiler can blow up, that an oscillator with friction will eventually stop, and how to say that you can pull with a string, but not push with it."
]
}
|
1811.08048
|
2901386711
|
Many natural language questions require recognizing and reasoning with qualitative relationships (e.g., in science, economics, and medicine), but are challenging to answer with corpus-based methods. Qualitative modeling provides tools that support such reasoning, but the semantic parsing task of mapping questions into those models has formidable challenges. We present QuaRel, a dataset of diverse story questions involving qualitative relationships that characterize these challenges, and techniques that begin to address them. The dataset has 2771 questions relating 19 different types of quantities. For example, "Jenny observes that the robot vacuum cleaner moves slower on the living room carpet than on the bedroom carpet. Which carpet has more friction?" We contribute (1) a simple and flexible conceptual framework for representing these kinds of questions; (2) the QuaRel dataset, including logical forms, exemplifying the parsing challenges; and (3) two novel models for this task, built as extensions of type-constrained semantic parsing. The first of these models (called QuaSP+) significantly outperforms off-the-shelf tools on QuaRel. The second (QuaSP+Zero) demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships without requiring additional training data, something not possible with previous models. This work thus makes inroads into answering complex, qualitative questions that require reasoning, and scaling to new relationships at low cost. The dataset and models are available at this http URL
|
Answering story problems has received attention in the domain of arithmetic, where simple algebra story questions (e.g., "Sue had 5 cookies, then gave 2 to Joe...") are mapped to a system of equations, e.g., @cite_36 @cite_26 @cite_11 @cite_33 . This task is loosely analogous to ours (we instead map to qualitative relations) except that in arithmetic the entities to relate are often identifiable (namely, the numbers). Our qualitative story questions lack this structure, adding an extra challenge.
|
{
"cite_N": [
"@cite_36",
"@cite_26",
"@cite_33",
"@cite_11"
],
"mid": [
"2613312549",
"2251349042",
"2250769864",
"2757276219"
],
"abstract": [
"Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.",
"We present an approach for automatically learning to solve algebra word problems. Our algorithm reasons across sentence boundaries to construct and solve a system of linear equations, while simultaneously recovering an alignment of the variables and numbers in these equations to the problem text. The learning algorithm uses varied supervision, including either full equations or just the final answers. We evaluate performance on a newly gathered corpus of algebra word problems, demonstrating that the system can correctly answer almost 70 of the questions in the dataset. This is, to our knowledge, the first learning result for this task.",
"This paper presents a semantic parsing and reasoning approach to automatically solving math word problems. A new meaning representation language is designed to bridge natural language text and math expressions. A CFG parser is implemented based on 9,600 semi-automatically created grammar rules. We conduct experiments on a test set of over 1,500 number word problems (i.e., verbally expressed number problems) and yield 95.4 precision and 60.2 recall.",
""
]
}
|
1811.08048
|
2901386711
|
Many natural language questions require recognizing and reasoning with qualitative relationships (e.g., in science, economics, and medicine), but are challenging to answer with corpus-based methods. Qualitative modeling provides tools that support such reasoning, but the semantic parsing task of mapping questions into those models has formidable challenges. We present QuaRel, a dataset of diverse story questions involving qualitative relationships that characterize these challenges, and techniques that begin to address them. The dataset has 2771 questions relating 19 different types of quantities. For example, "Jenny observes that the robot vacuum cleaner moves slower on the living room carpet than on the bedroom carpet. Which carpet has more friction?" We contribute (1) a simple and flexible conceptual framework for representing these kinds of questions; (2) the QuaRel dataset, including logical forms, exemplifying the parsing challenges; and (3) two novel models for this task, built as extensions of type-constrained semantic parsing. The first of these models (called QuaSP+) significantly outperforms off-the-shelf tools on QuaRel. The second (QuaSP+Zero) demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships without requiring additional training data, something not possible with previous models. This work thus makes inroads into answering complex, qualitative questions that require reasoning, and scaling to new relationships at low cost. The dataset and models are available at this http URL
|
The dataset shares some structure with the Winograd Schema Challenge @cite_1 , being 2-way multiple choice questions invoking both commonsense and coreference. However, they test different aspects of commonsense: Winograd uses coreference resolution to test commonsense understanding of scenarios, while QuaRel tests reasoning about qualitative relationships, requiring tracking of coreferent "worlds."
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"1599016936"
],
"abstract": [
"In this paper, we present an alternative to the Turing Test that has some conceptual and practical advantages. A Wino-grad schema is a pair of sentences that differ only in one or two words and that contain a referential ambiguity that is resolved in opposite directions in the two sentences. We have compiled a collection of Winograd schemas, designed so that the correct answer is obvious to the human reader, but cannot easily be found using selectional restrictions or statistical techniques over text corpora. A contestant in the Winograd Schema Challenge is presented with a collection of one sentence from each pair, and required to achieve human-level accuracy in choosing the correct disambiguation."
]
}
|
1811.08048
|
2901386711
|
Many natural language questions require recognizing and reasoning with qualitative relationships (e.g., in science, economics, and medicine), but are challenging to answer with corpus-based methods. Qualitative modeling provides tools that support such reasoning, but the semantic parsing task of mapping questions into those models has formidable challenges. We present QuaRel, a dataset of diverse story questions involving qualitative relationships that characterize these challenges, and techniques that begin to address them. The dataset has 2771 questions relating 19 different types of quantities. For example, "Jenny observes that the robot vacuum cleaner moves slower on the living room carpet than on the bedroom carpet. Which carpet has more friction?" We contribute (1) a simple and flexible conceptual framework for representing these kinds of questions; (2) the QuaRel dataset, including logical forms, exemplifying the parsing challenges; and (3) two novel models for this task, built as extensions of type-constrained semantic parsing. The first of these models (called QuaSP+) significantly outperforms off-the-shelf tools on QuaRel. The second (QuaSP+Zero) demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships without requiring additional training data, something not possible with previous models. This work thus makes inroads into answering complex, qualitative questions that require reasoning, and scaling to new relationships at low cost. The dataset and models are available at this http URL
|
Finally, crowdsourcing datasets has become a driving force in AI, producing significant progress, e.g., @cite_4 @cite_19 @cite_13 . However, for semantic parsing tasks, one obstacle has been the difficulty in crowdsourcing target logical forms for questions. Here, we show how those logical forms can be obtained indirectly from workers without training the workers in the formalism, loosely similar to @cite_22 .
|
{
"cite_N": [
"@cite_13",
"@cite_19",
"@cite_4",
"@cite_22"
],
"mid": [
"2251957808",
"2612431505",
"2427527485",
"2511149293"
],
"abstract": [
"How do we build a semantic parser in a new domain starting with zero training examples? We introduce a new methodology for this setting: First, we use a simple grammar to generate logical forms paired with canonical utterances. The logical forms are meant to cover the desired set of compositional operators, and the canonical utterances are meant to capture the meaning of the logical forms (although clumsily). We then use crowdsourcing to paraphrase these canonical utterances into natural utterances. The resulting data is used to train the semantic parser. We further study the role of compositionality in the resulting paraphrases. Finally, we test our methodology on seven domains and show that we can build an adequate semantic parser in just a few hours.",
"We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23 and 40 vs. 80 ), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- this http URL",
"We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0 , a significant improvement over a simple baseline (20 ). However, human performance (86.8 ) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at this https URL",
"We demonstrate the value of collecting semantic parse labels for knowledge base question answering. In particular, (1) unlike previous studies on small-scale datasets, we show that learning from labeled semantic parses significantly improves overall performance, resulting in absolute 5 point gain compared to learning from answers, (2) we show that with an appropriate user interface, one can obtain semantic parses with high accuracy and at a cost comparable or lower than obtaining just answers, and (3) we have created and shared the largest semantic-parse labeled dataset to date in order to advance research in question answering."
]
}
|
1811.07222
|
2901093070
|
We focus on the problem of estimating the orientation of the ground plane with respect to a mobile monocular camera platform (e.g., ground robot, wearable camera, assistive robotic platform). To address this problem, we formulate ground plane estimation as an inter-mingled multi-task prediction problem by jointly optimizing for point-wise surface normal direction, 2D ground segmentation, and depth estimates. Our proposed model -- GroundNet -- estimates the ground normal in two separate streams, and a consistency loss is then applied on top of the two streams to enforce geometric consistency. A semantic segmentation stream is used to isolate the ground regions, which are then used to selectively back-propagate parameter updates only through the ground regions of the image. Our experiments on the KITTI and ApolloScape datasets verify that GroundNet is able to predict consistent depth and normals within the ground region. It also achieves top performance on ground plane normal estimation and horizon line detection.
|
Geometry-based methods often extract the 3D scene structure (e.g., using multi-view cues, motion cues, or depth sensors) and then fit the ground plane to the 3D points using a robust model-fitting algorithm such as RANSAC. @cite_41 identifies the ground using the 3D point cloud from LIDAR. @cite_42 obtains video-frame-rate depth maps from time-of-flight (TOF) cameras and exploits 4D spatio-temporal RANSAC for ground plane estimation. @cite_11 generates the 3D point cloud under a stereo setup and then estimates the ground plane from disparity. Assuming the scene is static, simultaneous localization and mapping (SLAM) and structure from motion (SfM) approaches can also be used to extract the 3D scene structure @cite_39 @cite_29 @cite_37 @cite_28 @cite_5 @cite_13 , making ground plane estimation possible.
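As a concrete illustration of the geometry-based pipeline described above, the following is a minimal Python sketch of fitting a ground plane to a 3D point cloud with RANSAC. The point data, inlier threshold, and iteration count are illustrative assumptions, not parameters from any of the cited works.

```python
import numpy as np

def fit_ground_plane_ransac(points, n_iters=500, inlier_thresh=0.05, seed=None):
    """Fit a plane (unit normal n, offset d with n.x + d = 0) to an Nx3 point cloud via RANSAC."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(n_iters):
        # Sample three distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-8:            # degenerate (nearly collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Count points whose distance to the candidate plane is below the threshold.
        inliers = int((np.abs(points @ normal + d) < inlier_thresh).sum())
        if inliers > best_inliers:
            best_model, best_inliers = (normal, d), inliers
    return best_model, best_inliers

# Toy usage: a noisy, roughly horizontal ground plane plus some off-plane clutter.
ground = np.random.rand(500, 3); ground[:, 2] = 0.02 * np.random.randn(500)
clutter = np.random.rand(100, 3) * 2.0
(normal, d), n_inliers = fit_ground_plane_ransac(np.vstack([ground, clutter]), seed=0)
print(normal, d, n_inliers)
```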
|
{
"cite_N": [
"@cite_37",
"@cite_41",
"@cite_28",
"@cite_29",
"@cite_42",
"@cite_39",
"@cite_5",
"@cite_13",
"@cite_11"
],
"mid": [
"",
"",
"2964196637",
"",
"1972725010",
"2128086777",
"2100993594",
"2145804667",
"2170772499"
],
"abstract": [
"",
"",
"In this paper we derive and test a probability-based weighting that can balance residuals of different types in spline fitting. In contrast to previous formulations, the proposed spline error weighting scheme also incorporates a prediction of the approximation error of the spline fit. We demonstrate the effectiveness of the prediction in a synthetic experiment, and apply it to visual-inertial fusion on rolling shutter cameras. This results in a method that can estimate 3D structure with metric scale on generic first-person videos. We also propose a quality measure for spline fitting, that can be used to automatically select the knot spacing. Experiments verify that the obtained trajectory quality corresponds well with the requested quality. Finally, by linearly scaling the weights, we show that the proposed spline error weighting minimizes the estimation errors on real sequences, in terms of scale and end-point errors.",
"",
"A fundamental problem in autonomous vehicle navigation is the identification of obstacle free space in cluttered and unstructured environments. Features such as walls, people, furniture, doors and stairs, etc are potential hazards. The approach taken in this paper is motivated by the recent development on infra-red time-of-flight cameras that provide video frame rate low resolution depth maps. We propose to exploit the temporal information content provided by the high refresh rate of such cameras to overcome the limitations due to low spatial resolution and high depth uncertainty and aim to provide robust and accurate estimates of planar surfaces in the environment. These surfaces' estimates are then used to provide statistical tests to identify obstacles and dangers in the environment. Classical 3D spatial RANSAC is extended to 4D spatio-temporal RANSAC by developing spatio-temporal models of planar surfaces that incorporate a linear motion model as well as linear environment features. A 4D-vector product is used for hypotheses generation from data that is randomly sampled across both spatial and temporal variations. The algorithm is fully posed in the spatio-temporal representation and there is no need to correlate points or hypothesis between temporal images. The proposed algorithm is computationally fast and robust for estimation of planar surfaces in general and the ground plane in particular. There are potential applications in mobile robotics, autonomous vehicular navigation, and automotive safety systems. The claims of the paper are supported by experimental results obtained from real video data for a time-of-flight range sensor mounted on an automobile navigating in an undercover parking lot.",
"We present a system for monocular simultaneous localization and mapping (mono-SLAM) relying solely on video input. Our algorithm makes it possible to precisely estimate the camera trajectory without relying on any motion model. The estimation is completely incremental: at a given time frame, only the current location is estimated while the previous camera positions are never modified. In particular, we do not perform any simultaneous iterative optimization of the camera positions and estimated 3D structure (local bundle adjustment). The key aspect of the system is a fast and simple pose estimation algorithm that uses information not only from the estimated 3D map, but also from the epipolar constraint. We show that the latter leads to a much more stable estimation of the camera trajectory than the conventional approach. We perform high precision camera trajectory estimation in urban scenes with a large amount of clutter. Using an omnidirectional camera placed on a vehicle, we cover one of the longest distance ever reported, up to 2.5 kilometers.",
"We introduce a new approach to structure and motion recovery directly from one or more large planes in the scene. When such a plane exists, we demonstrate how to automatically detect and track it robustly and consistently over a long video sequence, and how to efficiently self-calibrate the camera using the homographies induced by this plane. We build a complete structure from motion system which does not use any additional off-the-plane information about the scene, and show its advantage over conventional systems in handling two important issues which often occur in real world videos, namely, the plane degeneracy and the dynamic foreground problems. Experimental results on a variety of real video sequences verify the effectiveness and efficiency of our system.",
"We present a novel method to obtain a 3D Euclidean reconstruction of both the background and moving objects in a video sequence. We assume that, multiple objects are moving rigidly on a ground plane observed by a moving camera. The video sequence is first segmented into static background and motion blobs by a homography-based motion segmentation method. Then classical \"Structure from Motion\" (SfM) techniques are applied to obtain a Euclidean reconstruction of the static background. The motion blob corresponding to each moving object is treated as if there were a static object observed by a hypothetical moving camera, called a \"virtual camera\". This virtual camera shares the same intrinsic parameters with the real camera but moves differently due to object motion. The same SfM techniques are applied to estimate the 3D shape of each moving object and the pose of the virtual camera. We show that the unknown scale of moving objects can be approximately determined by the ground plane, which is a key contribution of this paper. Another key contribution is that we prove that the 3D motion of moving objects can be solved from the virtual camera motion with a linear constraint imposed on the object translation. In our approach, a planartranslation constraint is formulated: \"the 3D instantaneous translation of moving objects must be parallel to the ground plane\". Results on real-world video sequences demonstrate the effectiveness and robustness of our approach.",
"Abstract Ground plane perception is of vital importance to human mobility. In order to develop a stereo-based mobility aid for the partially sighted, we model the ground plane based on disparity and analyze its uncertainty. Because the mobility aid is to be mounted on a person, the cameras will be moving around while the person is walking. By calibrating the ground plane at each frame, we show that a partial pose estimate can be recovered. Moreover, by keeping track of how the ground plane changes and analyzing the ground plane, we show that obstacles and curbs are detected. Detailed error analysis has been carried out as reliability is of utmost importance for human applications."
]
}
|
1811.07222
|
2901093070
|
We focus on the problem of estimating the orientation of the ground plane with respect to a mobile monocular camera platform (e.g., ground robot, wearable camera, assistive robotic platform). To address this problem, we formulate ground plane estimation as an inter-mingled multi-task prediction problem by jointly optimizing for point-wise surface normal direction, 2D ground segmentation, and depth estimates. Our proposed model -- GroundNet -- estimates the ground normal in two separate streams, and a consistency loss is then applied on top of the two streams to enforce geometric consistency. A semantic segmentation stream is used to isolate the ground regions, which are then used to selectively back-propagate parameter updates only through the ground regions of the image. Our experiments on the KITTI and ApolloScape datasets verify that GroundNet is able to predict consistent depth and normals within the ground region. It also achieves top performance on ground plane normal estimation and horizon line detection.
|
The second category of methods focuses on applying machine learning techniques to estimate the ground plane normal, either directly or implicitly through other related tasks. Only a few prior works are direct methods. @cite_8 @cite_49 first learn a classifier that classifies local planar image patches and their orientations; a Markov random field (MRF) model is then learned to segment the image into dominant plane segments. @cite_38 achieves ground plane recognition by learning lighting-invariant texture features with a regularized logistic regression model. However, these methods use shallow learning models and do not benefit from the recent significant progress in deep learning. They also do not model the geometric relationships present in the image. Our proposed method also falls into this category. To the best of our knowledge, this is the first work on direct ground plane estimation that leverages the strong capacity of deep neural networks together with geometric consistency.
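Since the direct methods above ultimately reduce to classifying local image patches (ground vs. non-ground) from texture-like features with a shallow, regularized model, the sketch below illustrates that general recipe with a regularized logistic regression over crude hand-crafted patch statistics. The feature choice and hyperparameters are assumptions made for illustration and are not the features or models used in the cited works.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def patch_features(patch):
    """Crude texture descriptor for an HxWx3 patch: mean colour plus gradient-magnitude statistics."""
    gray = patch.mean(axis=2)
    gy, gx = np.gradient(gray)
    grad_mag = np.hypot(gx, gy)
    return np.array([
        *patch.reshape(-1, 3).mean(axis=0),   # mean colour
        grad_mag.mean(), grad_mag.std(),      # texture roughness
        gray.std(),                           # intensity variation
    ])

# Placeholder training data: 32x32 patches with binary ground / non-ground labels.
patches = [np.random.rand(32, 32, 3) for _ in range(200)]
labels = np.random.randint(0, 2, size=200)

X = np.stack([patch_features(p) for p in patches])
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)  # L2-regularized classifier
clf.fit(X, labels)
print("ground probability of first patch:", clf.predict_proba(X[:1])[0, 1])
```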
|
{
"cite_N": [
"@cite_38",
"@cite_49",
"@cite_8"
],
"mid": [
"2584852980",
"2039438219",
"2081159743"
],
"abstract": [
"Augmented reality combines real footage taken of a scene with virtual elements. However, most current methods rely on camera localisation and 3D reconstruction or point cloud generation in order to integrate augmented reality to the footage. In contrast, in this work we present a novel method to augment virtual reality to the scene based on the recognition of dominant planes in interior scenes. Our method uses a rule system to select predefined decisions on a set of variables in order to infer dominant planes in the scene. For this, we propose to combine information from texture features, a measure of blurring in the dominant planes, and a segmentation of regions in a scene based on superpixels. The rule system infers the regions corresponding to the dominant planes, whilst light intensity in the scene is inferred from segmented regions. We also propose an approach to remove regions misclassified as dominant planes. Finally, the floor in the scene is recognised as the most dominant plane and then replaced with an augmented texture. We demonstrate our approach in a video sequence where our method is applied in a frame-to-frame basis, thus, for each single image in the video sequence, the floor is automatically recognised as the most dominant plane and then replaced with a virtual texture and furthermore, whose appearance is modified according to our coarse light model inferred from our approach.",
"We present a novel method to recognise planar structures in a single image and estimate their 3D orientation. This is done by exploiting the relationship between image appearance and 3D structure, using machine learning methods with supervised training data. As such, the method does not require specific features or use geometric cues, such as vanishing points. We employ general feature representations based on spatiograms of gradients and colour, coupled with relevance vector machines for classification and regression. We first show that using hand-labelled training data, we are able to classify pre-segmented regions as being planar or not, and estimate their 3D orientation. We then incorporate the method into a segmentation algorithm to detect multiple planar structures from a previously unseen image.",
"We propose an algorithm to detect planes in a single image of an outdoor urban scene, capable of identifying multiple distinct planes, and estimating their orientation. Using machine learning techniques, we learn the relationship between appearance and structure from a large set of labelled examples. Plane detection is achieved by classifying multiple overlapping image regions, in order to obtain an initial estimate of planarity for a set of points, which are segmented into planar and non-planar regions using a sequence of Markov random fields. This differs from previous methods in that it does not rely on line detection, and is able to predict an actual orientation for planes. We show that the method is able to reliably extract planes in a variety of scenes, and compares favourably with existing methods."
]
}
|
1811.07222
|
2901093070
|
We focus on the problem of estimating the orientation of the ground plane with respect to a mobile monocular camera platform (e.g., ground robot, wearable camera, assistive robotic platform). To address this problem, we formulate the ground plane estimation problem as an inter-mingled multi-task prediction problem by jointly optimizing for point-wise surface normal direction, 2D ground segmentation, and depth estimates. Our proposed model -- GroundNet -- estimates the ground normal in two streams separately, and a consistency loss is then applied on top of the two streams to enforce geometric consistency. A semantic segmentation stream is used to isolate the ground regions and to selectively back-propagate parameter updates only through the ground regions in the image. Our experiments on the KITTI and ApolloScape datasets verify that GroundNet is able to predict consistent depth and normals within the ground region. It also achieves top performance on ground plane normal estimation and horizon line detection.
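A minimal PyTorch-style sketch of the two ingredients described above (a consistency loss between the two normal-estimation streams, and back-propagation restricted to predicted ground pixels) is given below. The tensor shapes, the cosine-based penalty, and the soft masking are illustrative assumptions and do not reproduce the authors' implementation.

import torch
import torch.nn.functional as F

def ground_consistency_loss(normals_a, normals_b, ground_mask):
    """Penalize disagreement between two per-pixel normal estimates,
    restricted to the (soft) ground-segmentation mask.

    normals_a, normals_b: (B, 3, H, W) unnormalized normal predictions
    ground_mask:          (B, 1, H, W) probabilities from the segmentation stream
    """
    na = F.normalize(normals_a, dim=1)                         # unit normals, stream 1
    nb = F.normalize(normals_b, dim=1)                         # unit normals, stream 2
    disagreement = 1.0 - (na * nb).sum(dim=1, keepdim=True)    # 1 - cosine similarity per pixel
    masked = disagreement * ground_mask                        # gradients flow only through ground pixels
    return masked.sum() / ground_mask.sum().clamp(min=1.0)

# toy usage with random tensors
a = torch.randn(2, 3, 8, 8, requires_grad=True)
b = torch.randn(2, 3, 8, 8, requires_grad=True)
m = torch.rand(2, 1, 8, 8)
ground_consistency_loss(a, b, m).backward()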
|
Also, one can estimate the ground plane implicitly by solving a related task. One example of such a task is 3D surface layout recovery @cite_0 @cite_34 @cite_15 . These methods are capable of creating a simple indoor layout reconstruction from a single image, from which the ground plane normal can then be estimated. Another example is monocular surface normal estimation @cite_31 @cite_6 @cite_24 @cite_21 @cite_43 @cite_26 @cite_3 @cite_12 , which usually formulates the problem as a dense pixel-wise prediction problem and learns a feed-forward deep neural network classification model. On top of the estimated pixel-wise normals, one can group the pixels with similar normals into dominant planes and then compute the average normal for each plane. Although these methods achieve significant progress, most of them are tailored only to indoor scenes. In addition, since they do not explicitly estimate the plane normal, the fact that the ground plane is often a flat and smooth surface is ignored. In contrast, we parameterize the output of our method to be a planar surface normal explicitly and at the same time leverage the successful architecture designs from surface normal estimation methods.
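As an illustration of the grouping step mentioned above (clustering pixels with similar normals into dominant planes and averaging within each cluster), a minimal Python sketch follows; the choice of k-means on unit normals and the toy data are assumptions made only for this example.

import numpy as np
from sklearn.cluster import KMeans

def dominant_plane_normals(pixel_normals, k=3):
    """Group per-pixel unit normals into k dominant planes and return the
    re-normalized average normal of each group.

    pixel_normals: (N, 3) array of unit normals from a dense predictor.
    """
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixel_normals)
    planes = []
    for c in range(k):
        mean = pixel_normals[labels == c].mean(axis=0)
        planes.append(mean / np.linalg.norm(mean))     # average normal of this plane
    return np.stack(planes), labels

# toy example: two noisy clusters of normals, roughly "up" and "sideways"
up = np.array([0.0, 1.0, 0.0]) + 0.05 * np.random.randn(100, 3)
side = np.array([1.0, 0.0, 0.0]) + 0.05 * np.random.randn(100, 3)
normals = np.concatenate([up, side])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(dominant_plane_normals(normals, k=2)[0])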
|
{
"cite_N": [
"@cite_31",
"@cite_26",
"@cite_21",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_24",
"@cite_43",
"@cite_15",
"@cite_34",
"@cite_12"
],
"mid": [
"2768591600",
"2124907686",
"2962809185",
"",
"2952623155",
"2125310925",
"2593915460",
"2951713345",
"2794800539",
"566730006",
"2550402137"
],
"abstract": [
"In human learning, it is common to use multiple sources of information jointly. However, most existing feature learning approaches learn from only a single task. In this paper, we propose a novel multi-task deep network to learn generalizable high-level visual representations. Since multitask learning requires annotations for multiple properties of the same training instance, we look to synthetic images to train our network. To overcome the domain difference between real and synthetic data, we employ an unsupervised feature space domain adaptation method based on adversarial learning. Given an input synthetic RGB image, our network simultaneously predicts its surface normal, depth, and instance contour, while also minimizing the feature space domain differences between real and synthetic data. Through extensive experiments, we demonstrate that our network learns more transferable representations compared to single-task baselines. Our learned representation produces state-of-the-art transfer learning results on PASCAL VOC 2007 classification and 2012 detection.",
"Predicting the depth (or surface normal) of a scene from single monocular color images is a challenging task. This paper tackles this challenging and essentially underdetermined problem by regression on deep convolutional neural network (DCNN) features, combined with a post-processing refining step using conditional random fields (CRF). Our framework works at two levels, super-pixel level and pixel level. First, we design a DCNN model to learn the mapping from multi-scale image patches to depth or surface normal values at the super-pixel level. Second, the estimated super-pixel depth or surface normal is refined to the pixel level by exploiting various potentials on the depth or surface normal map, which includes a data term, a smoothness term among super-pixels and an auto-regression term characterizing the local structure of the estimation map. The inference problem can be efficiently solved because it admits a closed-form solution. Experiments on the Make3D and NYU Depth V2 datasets show competitive results compared with recent state-of-the-art methods.",
"We introduce an approach that leverages surface normal predictions, along with appearance cues, to retrieve 3D models for objects depicted in 2D still images from a large CAD object library. Critical to the success of our approach is the ability to recover accurate surface normals for objects in the depicted scene. We introduce a skip-network model built on the pre-trained Oxford VGG convolutional neural network (CNN) for surface normal prediction. Our model achieves state-of-the-art accuracy on the NYUv2 RGB-D dataset for surface normal prediction, and recovers fine object detail compared to previous methods. Furthermore, we develop a two-stream network over the input image and predicted surface normals that jointly learns pose and style for CAD model retrieval. When using the predicted surface normals, our two-stream network matches prior work using surface normals computed from RGB-D images on the task of pose prediction, and achieves state of the art when using RGB-D input. Finally, our two-stream network allows us to retrieve CAD models that better match the style and pose of a depicted object compared with baseline approaches.",
"",
"In the past few years, convolutional neural nets (CNN) have shown incredible promise for learning visual representations. In this paper, we use CNNs for the task of predicting surface normals from a single image. But what is the right architecture we should use? We propose to build upon the decades of hard work in 3D scene understanding, to design new CNN architecture for the task of surface normal estimation. We show by incorporating several constraints (man-made, manhattan world) and meaningful intermediate representations (room layout, edge labels) in the architecture leads to state of the art performance on surface normal estimation. We also show that our network is quite robust and show state of the art results on other datasets as well without any fine-tuning.",
"Humans have an amazing ability to instantly grasp the overall 3D structure of a scene--ground orientation, relative positions of major landmarks, etc.--even from a single image. This ability is completely missing in most popular recognition algorithms, which pretend that the world is flat and or view it through a patch-sized peephole. Yet it seems very likely that having a grasp of this \"surface layout\" of a scene should be of great assistance for many tasks, including recognition, navigation, and novel view synthesis. In this paper, we take the first step towards constructing the surface layout, a labeling of the image intogeometric classes. Our main insight is to learn appearance-based models of these geometric classes, which coarsely describe the 3D scene orientation of each image region. Our multiple segmentation framework provides robust spatial support, allowing a wide variety of cues (e.g., color, texture, and perspective) to contribute to the confidence in each geometric label. In experiments on a large set of outdoor images, we evaluate the impact of the individual cues and design choices in our algorithm. We further demonstrate the applicability of our method to indoor images, describe potential applications, and discuss extensions to a more complete notion of surface layout.",
"We explore design principles for general pixel-level prediction problems, from low-level edge detection to mid-level surface normal estimation to high-level semantic segmentation. Convolutional predictors, such as the fully-convolutional network (FCN), have achieved remarkable success by exploiting the spatial redundancy of neighboring pixels through convolutional processing. Though computationally efficient, we point out that such approaches are not statistically efficient during learning precisely because spatial redundancy limits the information learned from neighboring pixels. We demonstrate that stratified sampling of pixels allows one to (1) add diversity during batch updates, speeding up learning; (2) explore complex nonlinear predictors, improving accuracy; and (3) efficiently train state-of-the-art models tabula rasa (i.e., \"from scratch\") for diverse pixel-labeling tasks. Our single architecture produces state-of-the-art results for semantic segmentation on PASCAL-Context dataset, surface normal estimation on NYUDv2 depth dataset, and edge detection on BSDS.",
"In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.",
"We propose an algorithm to predict room layout from a single image that generalizes across panoramas and perspective images, cuboid layouts and more general layouts (e.g. L-shape room). Our method operates directly on the panoramic image, rather than decomposing into perspective images as do recent works. Our network architecture is similar to that of RoomNet, but we show improvements due to aligning the image based on vanishing points, predicting multiple layout elements (corners, boundaries, size and translation), and fitting a constrained Manhattan layout to the resulting predictions. Our method compares well in speed and accuracy to other existing work on panoramas, achieves among the best accuracy for perspective images, and can handle both cuboid-shaped and more general Manhattan layouts.",
"The field-of-view of standard cameras is very small, which is one of the main reasons that contextual information is not as useful as it should be for object detection. To overcome this limitation, we advocate the use of 360° full-view panoramas in scene understanding, and propose a whole-room context model in 3D. For an input panorama, our method outputs 3D bounding boxes of the room and all major objects inside, together with their semantic categories. Our method generates 3D hypotheses based on contextual constraints and ranks the hypotheses holistically, combining both bottom-up and top-down context information. To train our model, we construct an annotated panorama dataset and reconstruct the 3D model from single-view using manual annotation. Experiments show that solely based on 3D context without any image region category classifier, we can achieve a comparable performance with the state-of-the-art object detector. This demonstrates that when the FOV is large, context is as powerful as object appearance. All data and source code are available online.",
"This paper introduces an approach to regularize 2.5D surface normal and depth predictions at each pixel given a single input image. The approach infers and reasons about the underlying 3D planar surfaces depicted in the image to snap predicted normals and depths to inferred planar surfaces, all while maintaining fine detail within objects. Our approach comprises two components: (i) a four-stream convolutional neural network (CNN) where depths, surface normals, and likelihoods of planar region and planar boundary are predicted at each pixel, followed by (ii) a dense conditional random field (DCRF) that integrates the four predictions such that the normals and depths are compatible with each other and regularized by the planar region and planar boundary information. The DCRF is formulated such that gradients can be passed to the surface normal and depth CNNs via backpropagation. In addition, we propose new planar-wise metrics to evaluate geometry consistency within planar surfaces, which are more tightly related to dependent 3D editing applications. We show that our regularization yields a 30 relative improvement in planar consistency on the NYU v2 dataset [24]."
]
}
|
1811.07344
|
2963980935
|
In this project, competition-winning deep neural networks with pretrained weights are used for image-based gender recognition and age estimation. Transfer learning is explored using both VGG19 and VGGFace pretrained models by testing the effects of changes in various design schemes and training parameters in order to improve prediction accuracy. Training techniques such as input standardization, data augmentation, and label distribution age encoding are compared. Finally, a hierarchy of deep CNNs is tested that first classifies subjects by gender, and then uses separate male and female age models to predict age. A gender recognition accuracy of 98.7% and an MAE of 4.1 years are achieved. This paper shows that, with proper training techniques, good results can be obtained by retasking existing convolutional filters towards a new purpose.
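The gender-then-age hierarchy described above can be read as the following inference sketch; the model callables, the 101-bin age encoding, and the decision threshold are placeholders for illustration, not the project's code.

import numpy as np

def predict_age_hierarchical(image, gender_model, age_model_male, age_model_female,
                             threshold=0.5):
    """First predict gender, then route the image to the gender-specific age model.

    gender_model is assumed to return a probability of 'male'; each age model is
    assumed to return a probability distribution over discrete age bins.
    """
    p_male = gender_model(image)
    age_model = age_model_male if p_male >= threshold else age_model_female
    age_probs = np.asarray(age_model(image))                 # e.g. softmax over 101 bins
    expected_age = float(np.dot(np.arange(len(age_probs)), age_probs))
    return ("male" if p_male >= threshold else "female"), expected_age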
|
In a modern interconnected information society, it is critical to identify or verify individuals accurately in real time. Due to its significant role in human-computer interaction (HCI), internet access control, and security control and surveillance, face-based demographic research has attracted great attention in both research communities and industry @cite_25 . MORPH-II @cite_7 has been the subject of many studies concerning age and gender estimation. As such, it is a good way to compare the efficacy of different techniques. @cite_2 gauges human age estimation by crowd-sourcing estimates on two popular face-image databases. They found estimates on the FG-NET dataset to be off by an average of 4.7 years. They note that this number might be low because it is easy to guess the ages of babies and children without much variation in predictions. In fact, the average age error on FG-NET subjects older than 15 is 7.4 years, which is similar to the human error of 7.2 years calculated on the PCSO dataset. In the same study, the authors use a hierarchy of support vector machines (SVMs) and biologically-inspired features (BIFs) to obtain an average age estimation error of 4.2 years on the MORPH-II database.
|
{
"cite_N": [
"@cite_25",
"@cite_7",
"@cite_2"
],
"mid": [
"2105026179",
"2118664399",
"2036565334"
],
"abstract": [
"Human age, as an important personal trait, can be directly inferred by distinct patterns emerging from the facial appearance. Derived from rapid advances in computer graphics and machine vision, computer-based age synthesis and estimation via faces have become particularly prevalent topics recently because of their explosively emerging real-world applications, such as forensic art, electronic customer relationship management, security control and surveillance monitoring, biometrics, entertainment, and cosmetology. Age synthesis is defined to rerender a face image aesthetically with natural aging and rejuvenating effects on the individual face. Age estimation is defined to label a face image automatically with the exact age (year) or the age group (year range) of the individual face. Because of their particularity and complexity, both problems are attractive yet challenging to computer-based application system designers. Large efforts from both academia and industry have been devoted in the last a few decades. In this paper, we survey the complete state-of-the-art techniques in the face image-based age synthesis and estimation topics. Existing models, popular algorithms, system performances, technical difficulties, popular face aging databases, evaluation protocols, and promising future directions are also provided with systematic discussions.",
"This paper details MORPH a longitudinal face database developed for researchers investigating all facets of adult age-progression, e.g. face modeling, photo-realistic animation, face recognition, etc. This database contributes to several active research areas, most notably face recognition, by providing: the largest set of publicly available longitudinal images; longitudinal spans from a few months to over twenty years; and, the inclusion of key physical parameters that affect aging appearance. The direct contribution of this data corpus for face recognition is highlighted in the evaluation of a standard face recognition algorithm, which illustrates the impact that age-progression, has on recognition rates. Assessment of the efficacy of this algorithm is evaluated against the variables of gender and racial origin. This work further concludes that the problem of age-progression on face recognition (FR) is not unique to the algorithm used in this work.",
"There has been a growing interest in automatic age estimation from facial images due to a variety of potential applications in law enforcement, security control, and human-computer interaction. However, despite advances in automatic age estimation, it remains a challenging problem. This is because the face aging process is determined not only by intrinsic factors, e.g. genetic factors, but also by extrinsic factors, e.g. lifestyle, expression, and environment. As a result, different people with the same age can have quite different appearances due to different rates of facial aging. We propose a hierarchical approach for automatic age estimation, and provide an analysis of how aging influences individual facial components. Experimental results on the FG-NET, MORPH Album2, and PCSO databases show that eyes and nose are more informative than the other facial components in automatic age estimation. We also study the ability of humans to estimate age using data collected via crowdsourcing, and show that the cumulative score (CS) within 5-year mean absolute error (MAE) of our method is better than the age estimates provided by humans."
]
}
|
1811.07344
|
2963980935
|
In this project, competition-winning deep neural networks with pretrained weights are used for image-based gender recognition and age estimation. Transfer learning is explored using both VGG19 and VGGFace pretrained models by testing the effects of changes in various design schemes and training parameters in order to improve prediction accuracy. Training techniques such as input standardization, data augmentation, and label distribution age encoding are compared. Finally, a hierarchy of deep CNNs is tested that first classifies subjects by gender, and then uses separate male and female age models to predict age. A gender recognition accuracy of 98.7% and an MAE of 4.1 years are achieved. This paper shows that, with proper training techniques, good results can be obtained by retasking existing convolutional filters towards a new purpose.
|
In this paper, transfer learning is employed to tackle the problem of recognizing a person's age and gender from an image using deep CNNs. A variety of network designs and training techniques are explored. We consider dynamic LDAE, which outperforms the static LDAE considered in @cite_4 . A gender-specific hierarchical age model is proposed in this study. Experimental results demonstrate its effectiveness over the general age model.
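A minimal sketch of label distribution age encoding (LDAE) is given below, assuming a Gaussian encoding over discrete age bins; the bin count and the fixed sigma are illustrative choices, and a dynamic variant would adapt such parameters during training rather than keep them fixed.

import numpy as np

def label_distribution_encode(true_age, num_bins=101, sigma=2.5):
    """Encode an age as a soft label: a Gaussian over age bins centred on true_age,
    usable as a target for a cross-entropy (or KL-divergence) loss."""
    bins = np.arange(num_bins)
    weights = np.exp(-0.5 * ((bins - true_age) / sigma) ** 2)
    return weights / weights.sum()                      # sums to 1

print(np.round(label_distribution_encode(30)[27:34], 3))   # mass concentrated near age 30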
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2731793484"
],
"abstract": [
"Abstract Convolutional Neural Networks (CNNs) have been proven very effective for human demographics estimation by a number of recent studies. However, the proposed solutions significantly vary in different aspects leaving many open questions on how to choose an optimal CNN architecture and which training strategy to use. In this work, we shed light on some of these questions improving the existing CNN-based approaches for gender and age prediction and providing practical hints for future studies. In particular, we analyse four important factors of the CNN training for gender recognition and age estimation: (1) the target age encoding and loss function, (2) the CNN depth, (3) the need for pretraining, and (4) the training strategy: mono-task or multi-task. As a result, we design the state-of-the-art gender recognition and age estimation models according to three popular benchmarks: LFW, MORPH-II and FG-NET . Moreover, our best model won the ChaLearn Apparent Age Estimation Challenge 2016 significantly outperforming the solutions of other participants."
]
}
|
1811.07407
|
2901192153
|
Humans make accurate decisions by interpreting complex data from multiple sources. Medical diagnostics, in particular, often hinge on human interpretation of multi-modal information. In order for artificial intelligence to make progress in automated, objective, and accurate diagnosis and prognosis, methods to fuse information from multiple medical imaging modalities are required. However, combining information from multiple data sources presents several challenges, as current deep learning architectures lack the ability to extract useful representations from multimodal information, and often simple concatenation is used to fuse such information. In this work, we propose Multimodal DenseNet, a novel architecture for fusing multimodal data. Instead of focusing on concatenation or early and late fusion, our proposed architecture fuses information over several layers and gives the model flexibility in how it combines information from multiple sources. We apply this architecture to the challenge of polyp characterization and landmark identification in endoscopy. Features from white light images are fused with features from narrow band imaging or depth maps. This study demonstrates that Multimodal DenseNet outperforms monomodal classification as well as other multimodal fusion techniques by a significant margin on two different datasets.
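The idea of fusing two modality streams at several depths, rather than through a single concatenation, can be sketched as follows; the layer sizes and the concatenate-plus-1x1-convolution fusion operator are assumptions chosen for illustration and do not reproduce the Multimodal DenseNet design.

import torch
import torch.nn as nn

class TwoStreamMidFusion(nn.Module):
    """Two small CNN streams (e.g. white-light and NBI/depth) fused at every stage."""
    def __init__(self, channels=(16, 32, 64), num_classes=2):
        super().__init__()
        self.stage_a, self.stage_b, self.fuse = nn.ModuleList(), nn.ModuleList(), nn.ModuleList()
        in_a = in_b = 3
        for c in channels:
            self.stage_a.append(nn.Sequential(nn.Conv2d(in_a, c, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)))
            self.stage_b.append(nn.Sequential(nn.Conv2d(in_b, c, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)))
            self.fuse.append(nn.Conv2d(2 * c, c, 1))     # fuse both streams, feed back into stream a
            in_a = in_b = c
        self.head = nn.Linear(channels[-1], num_classes)

    def forward(self, xa, xb):
        for sa, sb, f in zip(self.stage_a, self.stage_b, self.fuse):
            xa, xb = sa(xa), sb(xb)
            xa = f(torch.cat([xa, xb], dim=1))           # fusion at this depth
        return self.head(xa.mean(dim=(2, 3)))            # global average pool + classifier

logits = TwoStreamMidFusion()(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))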
|
Multimodal machine learning is also used for cross-modal data synthesis. These techniques are particularly prevalent in the medical field, as there are significant cost and privacy barriers in collecting medical data. For example, Huang and Vemulapalli were able to produce synthetic T2 weighted MRI images of the brain from T1 weighted MRI images and vice versa @cite_8 .
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2797650215"
],
"abstract": [
"Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to the state-of-the-art approaches further demonstrates the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at this https URL"
]
}
|
1811.07506
|
2901901719
|
Multi-robot localization has been a critical problem for robots performing complex tasks cooperatively. In this paper, we propose a decentralized approach to localize a group of robots in a large featureless environment. The proposed approach only requires that at least one robot remains stationary as a temporary landmark during a certain period of time. The novelty of our approach is threefold: (1) developing a decentralized scheme in which each robot calculates its own state and stores only the latest one, reducing storage and computational cost, (2) developing an efficient localization algorithm through the extended Kalman filter (EKF) that uses only observations of relative pose to estimate the robot positions, (3) developing a scheme that has fewer requirements on landmarks and more robustness against insufficient observations. Various simulations and experiments using five robots equipped with relative pose-measurement sensors are performed to validate the superior performance of our approach.
|
To estimate the state of robots with high accuracy and consistency, various filter algorithms have been tested for CL. An EKF-based algorithm for heterogeneous outdoor multi-robot localization is described in @cite_15 , where each robot, equipped with encoders and a camera, maintains an estimated pose by fusing sensor data through an EKF. An improved EKF for CL is studied in @cite_4 . Additionally, a recent approach called recursive decentralized localization based on the EKF is presented in @cite_8 ; the proposed algorithm approximates the inter-robot correlations and operates with asynchronous pairwise communication. Other algorithms such as the particle filter (PF) @cite_20 and maximum a posteriori (MAP) estimation @cite_21 have also been studied extensively for CL. The main limitation of these approaches is that, in certain cases, none of the robots in the group has access to absolute state information, which reduces estimation accuracy and consistency. In our approach, the robot that remains stationary at each time step efficiently improves the accuracy through an optimized EKF algorithm.
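A minimal sketch of the kind of EKF measurement update such schemes rely on, in which a moving robot corrects its position from a relative observation of a stationary robot acting as a temporary landmark, is given below; the planar state, the linear observation model, and the noise values are illustrative assumptions.

import numpy as np

def ekf_relative_position_update(x, P, z, p_stationary, R):
    """EKF update of a moving robot's planar position from a relative-position
    measurement of a stationary robot serving as a temporary landmark.

    x: (2,) prior position estimate      P: (2, 2) prior covariance
    z: (2,) measured relative position   p_stationary: (2,) landmark position
    R: (2, 2) measurement noise covariance
    """
    H = -np.eye(2)                      # observation model: z = p_stationary - x + noise
    y = z - (p_stationary - x)          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

x, P = np.array([1.0, 2.0]), np.eye(2) * 0.5
z = np.array([3.9, 1.1])                # noisy observation of a landmark at (5, 3)
x, P = ekf_relative_position_update(x, P, z, np.array([5.0, 3.0]), np.eye(2) * 0.01)
print(x)                                # estimate moves toward the position consistent with z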
|
{
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_21",
"@cite_15",
"@cite_20"
],
"mid": [
"2138400911",
"2793866255",
"2146702612",
"",
"2170229019"
],
"abstract": [
"In this paper we consider the problem of simultaneously localizing all members of a team of robots. Each robot is equipped with proprioceptive sensors and exteroceptive sensors. The latter provide relative observations between the robots. Proprioceptive and exteroceptive data are fused with an Extended Kalman Filter. We derive the equations for this estimator for the most general relative observation between two robots. Then we consider three special cases of relative observations and we present the structure of the filter for each case. Finally, we study the performance of the approach through many accurate simulations.",
"This paper provides a fully decentralized algorithm for collaborative localization based on the extended Kalman filter. The major challenge in decentralized collaborative localization is to track i...",
"This paper presents a distributed Maximum A Posteriori (MAP) estimator for multi-robot Cooperative Localization (CL). As opposed to centralized MAP-based CL, the proposed algorithm reduces the memory and processing requirements by distributing data and computations amongst the robots. Specifically, a distributed data-allocation scheme is presented that enables robots to simultaneously process and update their local data. Additionally, a distributed Conjugate Gradient algorithm is employed that reduces the cost of computing the MAP estimates, while utilizing all available resources in the team and increasing robustness to single-point failures. Finally, a computationally efficient distributed marginalization of past robot poses is introduced for limiting the size of the optimization problem. The communication and computational complexity of the proposed algorithm is described in detail, while extensive simulation studies are presented for validating the performance of the distributed MAP estimator and comparing its accuracy to that of existing approaches.",
"",
"This paper describes an on-line algorithm for multi-robot simultaneous localization and mapping (SLAM). The starting point is the single-robot Rao-Blackwellized particle filter described by , and three key generalizations are made. First, the particle filter is extended to handle multi-robot SLAM problems in which the initial pose of the robots is known (such as occurs when all robots start from the same location). Second, an approximation is introduced to solve the more general problem in which the initial pose of robots is not known a priori (such as occurs when the robots start from widely separated locations). In this latter case, it is assumed that pairs of robots will eventually encounter one another, thereby determining their relative pose. This relative attitude is used to initialize the filter, and subsequent observations from both robots are combined into a common map. Third and finally, a method is introduced to integrate observations collected prior to the first robot encounter, using the notion of a virtual robot travelling backwards in time. This novel approach allows one to integrate all data from all robots into a single common map."
]
}
|
1811.07502
|
2901306299
|
Deep learning object detectors achieve state-of-the-art accuracy at the expense of high computational overheads, impeding their utilization on embedded systems such as drones. A primary source of these overheads is the exhaustive classification of typically 10^4-10^5 regions per image. Given that most of these regions contain uninformative background, the detector designs seem extremely superfluous and inefficient. In contrast, biological vision systems leverage selective attention for fast and efficient object detection. Recent neuroscientific findings shedding new light on the mechanism behind selective attention allowed us to formulate a new hypothesis of object detection efficiency and subsequently introduce a new object detection paradigm. To that end, we leverage this knowledge to design a novel region proposal network and empirically show that it achieves high object detection performance on the COCO dataset. Moreover, the model uses two to three orders of magnitude fewer computations than state-of-the-art models and consequently achieves inference speeds exceeding 500 frames/s, thereby making it possible to achieve object detection on embedded systems.
|
The sliding-window approach was the leading detection paradigm in classic object detection. However, with the resurgence of deep learning @cite_0 , two-stage detectors quickly came to dominate object detection. As pioneered in the Selective Search work @cite_20 , the first stage generates a sparse set of ideally object-only candidate proposals while filtering out the majority of negative locations @cite_29 ; the second stage then classifies the proposals into object-category classes. The Region Proposal Network (RPN) integrated proposal generation with the second-stage classifier into a single convolutional network, forming the Faster R-CNN framework @cite_10 , of which numerous extensions have been proposed @cite_26 @cite_2 @cite_8 @cite_18 @cite_3 . Nevertheless, while two-stage detectors achieved unprecedented accuracies, they were slow. The need to improve speed ushered in the development of one-stage detectors, such as SSD @cite_30 and YOLO @cite_6 @cite_28 .
|
{
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_26",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_10",
"@cite_20"
],
"mid": [
"",
"2341497066",
"",
"2949533892",
"2951433694",
"1958328135",
"2572745118",
"",
"",
"2949650786",
"2953106684",
"2088049833"
],
"abstract": [
"",
"The field of object detection has made significant advances riding on the wave of region-based ConvNets, but their training procedure still includes many heuristics and hyperparameters that are costly to tune. We present a simple yet surprisingly effective online hard example mining (OHEM) algorithm for training region-based ConvNet detectors. Our motivation is the same as it has always been – detection datasets contain an overwhelming number of easy examples and a small number of hard examples. Automatic selection of these hard examples can make training more effective and efficient. OHEM is a simple and intuitive algorithm that eliminates several heuristics and hyperparameters in common use. But more importantly, it yields consistent and significant boosts in detection performance on benchmarks like PASCAL VOC 2007 and 2012. Its effectiveness increases as datasets become larger and more difficult, as demonstrated by the results on the MS COCO dataset. Moreover, combined with complementary advances in the field, OHEM leads to state-of-the-art results of 78.9 and 76.3 mAP on PASCAL VOC 2007 and 2012 respectively.",
"",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that don't have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. But YOLO can detect more than just 200 classes; it predicts detections for more than 9000 different object categories. And it still runs in real-time.",
"Current top performing object detectors employ detection proposals to guide the search for objects, thereby avoiding exhaustive sliding window search across images. Despite the popularity and widespread use of detection proposals, it is unclear which trade-offs are made when using them during object detection. We provide an in-depth analysis of twelve proposal methods along with four baselines regarding proposal repeatability, ground truth annotation recall on PASCAL, ImageNet, and MS COCO, and their impact on DPM, R-CNN, and Fast R-CNN detection performance. Our analysis shows that for object detection improving proposal localisation accuracy is as important as improving recall. We introduce a novel metric, the average recall (AR), which rewards both high recall and good localisation and correlates surprisingly well with detection performance. Our findings show common strengths and weaknesses of existing methods, and provide insights and metrics for selecting and tuning proposal methods.",
"In recent years, we have seen tremendous progress in the field of object detection. Most of the recent improvements have been achieved by targeting deeper feedforward networks. However, many hard object categories such as bottle, remote, etc. require representation of fine details and not just coarse, semantic representations. But most of these fine details are lost in the early convolutional layers. What we need is a way to incorporate finer details from lower layers into the detection architecture. Skip connections have been proposed to combine high-level and low-level features, but we argue that selecting the right features from low-level requires top-down contextual information. Inspired by the human visual pathway, in this paper we propose top-down modulations as a way to incorporate fine details into the detection framework. Our approach supplements the standard bottom-up, feedforward ConvNet with a top-down modulation (TDM) network, connected using lateral connections. These connections are responsible for the modulation of lower layer filters, and the top-down network handles the selection and integration of contextual information and low-level features. The proposed TDM architecture provides a significant boost on the COCO testdev benchmark, achieving 28.6 AP for VGG16, 35.2 AP for ResNet101, and 37.3 for InceptionResNetv2 network, without any bells and whistles (e.g., multi-scale, iterative box refinement, etc.).",
"",
"",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html )."
]
}
|
1811.07502
|
2901306299
|
Deep learning object detectors achieve state-of-the-art accuracy at the expense of high computational overheads, impeding their utilization on embedded systems such as drones. A primary source of these overheads is the exhaustive classification of typically 10^4-10^5 regions per image. Given that most of these regions contain uninformative background, the detector designs seem extremely superfluous and inefficient. In contrast, biological vision systems leverage selective attention for fast and efficient object detection. Recent neuroscientific findings shedding new light on the mechanism behind selective attention allowed us to formulate a new hypothesis of object detection efficiency and subsequently introduce a new object detection paradigm. To that end, we leverage this knowledge to design a novel region proposal network and empirically show that it achieves high object detection performance on the COCO dataset. Moreover, the model uses two to three orders of magnitude fewer computations than state-of-the-art models and consequently achieves inference speeds exceeding 500 frames/s, thereby making it possible to achieve object detection on embedded systems.
|
Both one-stage and two-stage object detection methods typically evaluate @math candidate regions per image, densely covering many different spatial positions, scales, and aspect ratios. In the current state-of-the-art one-stage detector, RetinaNet @cite_33 , the evaluation (i.e., predicting the probability of object presence) of each of these regions is carried out by a classification subnet, a fully-convolutional neural network comprising five convolutional layers, each typically with 256 filters and followed by ReLU activations. Input images to these networks are typically re-scaled to @math pixels, from which @math candidate regions are individually evaluated, incurring significant computational costs. Consequently, the improved speed of one-stage detectors still comes at a substantial computational cost, which makes them impractical for embedded systems. Moreover, due to the extreme background-to-object class imbalance in typical images, the exhaustive region classification design seems extremely superfluous and inefficient.
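To make the figures above concrete, the number of candidate regions scored by a RetinaNet-style detector can be reproduced in a few lines; the input size and anchor configuration below are typical published settings, quoted here as assumptions rather than taken from the surveyed papers.

# Rough count of anchors evaluated per image by a RetinaNet-style detector,
# assuming an ~800x800 input, FPN levels P3-P7 (strides 8..128) and 9 anchors
# per spatial location; exact numbers vary between implementations.
input_size = 800
strides = [8, 16, 32, 64, 128]
anchors_per_location = 9

total = 0
for s in strides:
    fm = input_size // s                 # feature-map side length at this level
    total += fm * fm * anchors_per_location
print(total)                             # roughly 1.2 * 10^5 regions, each scored by the subnet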
|
{
"cite_N": [
"@cite_33"
],
"mid": [
"2743473392"
],
"abstract": [
"The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL"
]
}
|
1811.07502
|
2901306299
|
Deep learning object detectors achieve state-of-the-art accuracy at the expense of high computational overheads, impeding their utilization on embedded systems such as drones. A primary source of these overheads is the exhaustive classification of typically 10^4-10^5 regions per image. Given that most of these regions contain uninformative background, the detector designs seem extremely superfluous and inefficient. In contrast, biological vision systems leverage selective attention for fast and efficient object detection. Recent neuroscientific findings shedding new light on the mechanism behind selective attention allowed us to formulate a new hypothesis of object detection efficiency and subsequently introduce a new object detection paradigm. To that end, we leverage this knowledge to design a novel region proposal network and empirically show that it achieves high object detection performance on the COCO dataset. Moreover, the model uses two to three orders of magnitude fewer computations than state-of-the-art models and consequently achieves inference speeds exceeding 500 frames/s, thereby making it possible to achieve object detection on embedded systems.
|
Inspired by the promise of better region proposal efficiency in natural vision, researchers have used saliency-based models to generate object-only region proposals for object detection @cite_13 @cite_9 @cite_4 @cite_23 @cite_16 . Their main motivation was that a saliency map, generated non-exhaustively, could highlight regions containing objects, which could then be proposed to an object-category classifier, thereby ignoring background regions altogether and potentially saving thousands of unnecessary classifications. Nevertheless, a primary shortcoming of these previous attempts is that most models used high-resolution color images ( @math pixels; RGB @cite_9 ), which results in the overall detection model still being more computationally expensive and resource-demanding than state-of-the-art one- and two-stage detectors. Furthermore, other studies ( @cite_34 ) used saliency models trained on human eye fixations. A problem with this approach is that only objects that grab human attention are detected, rather than all objects of interest, which is inadequate for general object detection.
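The saliency-to-proposals pipeline described above (threshold a saliency map, keep the connected blobs, and emit their bounding boxes) can be sketched as follows; the thresholding rule and the minimum-area filter are illustrative choices rather than any specific cited method.

import numpy as np
from scipy import ndimage

def saliency_to_proposals(saliency, rel_threshold=0.5, min_area=9):
    """Turn a saliency map into box proposals: threshold, label connected
    components, and return their bounding boxes as (x0, y0, x1, y1)."""
    mask = saliency >= rel_threshold * saliency.max()
    labeled, _ = ndimage.label(mask)
    boxes = []
    for sl in ndimage.find_objects(labeled):
        h, w = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
        if h * w >= min_area:            # drop tiny, likely spurious blobs
            boxes.append((sl[1].start, sl[0].start, sl[1].stop, sl[0].stop))
    return boxes

s = np.zeros((32, 32)); s[10:20, 5:15] = 1.0   # toy saliency map with one bright blob
print(saliency_to_proposals(s))                # [(5, 10, 15, 20)]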
|
{
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_34",
"@cite_23",
"@cite_16",
"@cite_13"
],
"mid": [
"1993208111",
"2783847366",
"2796972250",
"",
"2098493135",
""
],
"abstract": [
"Selective visual attention plays an important role in human visual system. In real life, human visual system cannot handle all of the visual information captured by eyes on time. Selective visual attention filters the visual information and selects interesting one for further processing such as object detection. Inspired by this mechanism, we construct an object detection method which can speed up the object detection relative to the methods that search objects by using sliding window. This method firstly extracts saliency map from the origin image, and then gets the candidate detection area from the saliency map by adaptive thresholds. To detect object, we only need to search the candidate detection area with the deformable part model. Since the candidate detection area is much smaller than the whole image, we can speed up the object detection. We evaluate the detection performance of our approach on PASCAL 2008 dataset, INRIA person dataset and Caltech 101 dataset, and the results indicate that our method can speed up the detection without decline in detection accuracy.",
"Robust sensing of the environment is fundamental for driver assistance systems performing safe maneuvers. While approaches to object detection have experienced tremendous improvements since the introduction and combination of region proposal and convolutional neural networks in one framework, the detection of distant objects occupying just a few pixels in images can be challenging though. The convolutional and pooling layers reduce the image information to feature maps; yet, relevant information may be lost through pooling and convolution for small objects. In order to address this challenge, a new approach to proposing regions is presented that extends the architecture of a region proposal network by incorporating priors to guide the proposals towards regions containing potential target objects. Moreover, inspired by the concept of saliency, a saliency-based prior is chosen to guide the RPN towards important regions in order to make efficient use of differences between objects and background in an unsupervised fashion. This allows the network not only to consider local information provided by the convolutional layers, but also to take into account global information provided by the saliency priors. Experimental results based on a distant vehicle dataset and different configurations including three priors show that the incorporation of saliency-inspired priors into a region proposal network can improve its performance significantly.",
"In this paper we address the problem of unsupervised localization of objects in single images. Compared to previous state-of-the-art method our method is fully unsupervised in the sense that there is no prior instance level or category level information about the image. Furthermore, we treat each image individually and do not rely on any neighboring image similarity. We employ deep-learning based generation of saliency maps and region proposals to tackle this problem. First salient regions in the image are determined using an encoder decoder architecture. The resulting saliency map is matched with region proposals from a class agnostic region proposal network to roughly localize the candidate object regions. These regions are further refined based on the overlap and similarity ratios. Our experimental evaluations on a benchmark dataset show that the method gets close to current state-of-the-art methods in terms of localization accuracy even though these make use of multiple frames. Furthermore, we created a more challenging and realistic dataset with multiple object categories and varying viewpoint and illumination conditions for evaluating the method's performance in real world scenarios.",
"",
"Discovering object classes from images in a fully unsupervised way is an intrinsically ambiguous task; saliency detection approaches however ease the burden on unsupervised learning. We develop an algorithm for simultaneously localizing objects and discovering object classes via bottom-up (saliency-guided) multiple class learning (bMCL), and make the following contributions: (1) saliency detection is adopted to convert unsupervised learning into multiple instance learning, formulated as bottom-up multiple class learning (bMCL); (2) we utilize the Discriminative EM (DiscEM) to solve our bMCL problem and show DiscEM's connection to the MIL-Boost method[34]; (3) localizing objects, discovering object classes, and training object detectors are performed simultaneously in an integrated framework; (4) significant improvements over the existing methods for multi-class object discovery are observed. In addition, we show single class localization as a special case in our bMCL framework and we also demonstrate the advantage of bMCL over purely data-driven saliency methods.",
""
]
}
|
1811.07252
|
2901300216
|
We propose a new iris presentation attack detection method using three-dimensional features of an observed iris region estimated by photometric stereo. Our implementation uses a pair of iris images acquired by a common commercial iris sensor (LG 4000). No hardware modifications of any kind are required. Our approach should be applicable to any iris sensor that can illuminate the eye from two different directions. Each iris image in the pair is captured under near-infrared illumination at a different angle relative to the eye. Photometric stereo is used to estimate surface normal vectors in the non-occluded portions of the iris region. The variability of the normal vectors is used as the presentation attack detection score. This score is larger for a texture that is irregularly opaque and printed on a convex contact lens, and is smaller for an authentic iris texture. Thus the problem is formulated as binary classification into (a) an eye wearing a textured contact lens and (b) the texture of an actual iris surface (possibly seen through a clear contact lens). Experiments were carried out on a database of approx. 2,900 iris image pairs acquired from approx. 100 subjects. Our method was able to correctly classify over 95% of samples when tested on contact lens brands unseen in training, and over 98% of samples when the contact lens brand was seen during training. The source codes of the method are made available to other researchers.
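A compact sketch of the computation described in this abstract, estimating per-pixel normals by photometric stereo and scoring their variability, is given below. The plain least-squares formulation shown assumes three or more illumination directions so that it is well posed (the described sensor setting uses two, plus additional constraints); the light directions, the mask, and the exact score definition are illustrative assumptions rather than the released implementation.

import numpy as np

def photometric_stereo_normals(images, light_dirs):
    """Least-squares Lambertian photometric stereo.

    images:     (K, H, W) intensity images under K illumination directions
    light_dirs: (K, 3) unit vectors toward the light sources
    Returns (H, W, 3) unit surface normals.
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                               # (K, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)      # G = albedo * normal, shape (3, H*W)
    G = G.T.reshape(H, W, 3)
    norm = np.linalg.norm(G, axis=2, keepdims=True)
    return G / np.clip(norm, 1e-8, None)

def pad_score(normals, iris_mask):
    """Variability of normals inside the iris mask: larger for the irregular
    relief of a textured contact lens, smaller for a near-flat live iris."""
    n = normals[iris_mask]
    return float(np.linalg.norm(n.std(axis=0)))

# toy usage: random images under three known light directions
L = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.866], [0.0, 0.5, 0.866]])
imgs = np.random.rand(3, 16, 16)
n = photometric_stereo_normals(imgs, L)
print(pad_score(n, np.ones((16, 16), dtype=bool)))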
|
Iris presentation attack detection has received considerable attention, especially in recent years. A recent survey by Czajka and Bowyer provides a detailed summary of the research to date in iris PAD @cite_15 . Below we discuss the iris PAD approaches based on three-dimensional features that are most closely related to the proposed method. In each case we explain how our new approach differs from and improves on the known technique.
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2964009128"
],
"abstract": [
"Iris recognition is increasingly used in large-scale applications. As a result, presentation attack detection for iris recognition takes on fundamental importance. This survey covers the diverse research literature on this topic. Different categories of presentation attack are described and placed in an application-relevant framework, and the state of the art in detecting each category of attack is summarized. One conclusion from this is that presentation attack detection for iris recognition is not yet a solved problem. Datasets available for research are described, research directions for the near- and medium-term future are outlined, and a short list of recommended readings is suggested."
]
}
|
1811.07252
|
2901300216
|
We propose a new iris presentation attack detection method using three-dimensional features of an observed iris region estimated by photometric stereo. Our implementation uses a pair of iris images acquired by a common commercial iris sensor (LG 4000). No hardware modifications of any kind are required. Our approach should be applicable to any iris sensor that can illuminate the eye from two different directions. Each iris image in the pair is captured under near-infrared illumination at a different angle relative to the eye. Photometric stereo is used to estimate surface normal vectors in the non-occluded portions of the iris region. The variability of the normal vectors is used as the presentation attack detection score. This score is larger for a texture that is irregularly opaque and printed on a convex contact lens, and is smaller for an authentic iris texture. Thus the problem is formulated as binary classification into (a) an eye wearing a textured contact lens and (b) the texture of an actual iris surface (possibly seen through a clear contact lens). Experiments were carried out on a database of approx. 2,900 iris image pairs acquired from approx. 100 subjects. Our method was able to correctly classify over 95% of samples when tested on contact lens brands unseen in training, and over 98% of samples when the contact lens brand was seen during training. The source codes of the method are made available to other researchers.
|
Another idea employing 3D features of the eye is based on detection of the Purkinje reflections, specularities that occur at the outer and inner boundaries of the cornea, and the outer and inner boundaries of the lens. Lee et al. @cite_2 follow this idea and apply a human eye model to calculate the theoretical positions of Purkinje reflections, which are later used to verify the correctness of the observed specularities. The method proposed in this paper differs from the above method in that it does not detect or use Purkinje reflections for PAD. Also, the higher-order Purkinje reflections can be difficult to detect and may still exist for a person wearing contact lenses.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"1534776754"
],
"abstract": [
"Fake iris detection is to detect and defeat a fake (forgery) iris image input. To solve the problems of previous researches on fake iris detection, we propose the new method of detecting fake iris attack based on the Purkinje image. Especially, we calculated the theoretical positions and distances between the Purkinje images based on the human eye model and the performance of fake detection algorithm could be much enhanced by such information. Experimental results showed that the FAR (False Acceptance Rate for accepting fake iris as live one) was 0.33 and FRR(False Rejection Rate of rejecting live iris as fake one) was 0.33 ."
]
}
|
1811.07252
|
2901300216
|
We propose a new iris presentation attack detection method using three-dimensional features of an observed iris region estimated by photometric stereo. Our implementation uses a pair of iris images acquired by a common commercial iris sensor (LG 4000). No hardware modifications of any kind are required. Our approach should be applicable to any iris sensor that can illuminate the eye from two different directions. Each iris image in the pair is captured under near-infrared illumination at a different angle relative to the eye. Photometric stereo is used to estimate surface normal vectors in the non-occluded portions of the iris region. The variability of the normal vectors is used as the presentation attack detection score. This score is larger for a texture that is irregularly opaque and printed on a convex contact lens, and is smaller for an authentic iris texture. Thus the problem is formulated as binary classification into (a) an eye wearing a textured contact lens and (b) the texture of an actual iris surface (possibly seen through a clear contact lens). Experiments were carried out on a database of approx. 2,900 iris image pairs acquired from approx. 100 subjects. Our method was able to correctly classify over 95% of samples when tested on contact lens brands unseen in training, and over 98% of samples when the contact lens brand was seen during training. The source code of the method is made available to other researchers.
|
An idea based on the photometric stereo approach and illumination from different directions was proposed by Lee and Park @cite_14 . They use the fact that the surface of a live iris is not perfectly flat and so will cast shadows when illuminated from different directions. In turn, a flat iris printout does not cast shadows. Properties of the surface estimated by the photometric stereo method are then used to distinguish between a live iris and a flat printout. The method proposed in this paper is different from the above method since it is designed for the detection of textured contact lenses (not paper printouts) and it makes different assumptions about the three-dimensional properties of the observed objects. In our method, and for the image resolutions used in commercial systems, we assume that the iris is flatter than the artifacts (textured contact lenses) we want to detect. This is the opposite of the assumption made by Lee and Park, and photometric stereo is applied in different ways in the two methods.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2047042081"
],
"abstract": [
"A new fake iris detection method based on 3D feature of iris pattern is proposed. In pervious researches, they did not consider 3D structure of iris pattern, but only used 2D features of iris image. However, in our method, by using four near infra-red (NIR) illuminators attached on the left and right sides of iris camera, we could obtain the iris image in which the 3D structure of iris pattern could be shown distinctively. Based on that, we could determine the live or fake iris by wavelet analysis of the 3D feature of iris pattern. Experimental result showed that the Equal Error Rate (EER) of determining the live or fake iris was 0.33p. © 2010 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 20, 162–166, 2010"
]
}
|
1811.07426
|
2901535455
|
We demonstrate a conditional autoregressive pipeline for efficient music recomposition, based on methods presented in van den (2017). Recomposition (Casal & Casey, 2010) focuses on reworking existing musical pieces, adhering to structure at a high level while also re-imagining other aspects of the work. This can involve reuse of pre-existing themes or parts of the original piece, while also requiring the flexibility to generate new content at different levels of granularity. Applying the aforementioned modeling pipeline to recomposition, we show diverse and structured generation conditioned on chord sequence annotations.
|
Autoregressive models have proven to be powerful distribution estimators for images and sequence data, showing excellent results in generative settings. They have also performed well in related prior work for polyphonic music generation @cite_0 . Most related to the work described in this paper is CoCoNet, which also uses an autoregressive convolutional model over image-like structures for polyphonic music generation and was a direct inspiration for our approach. One key difference of our approach is the use of a two-stage pipeline (first seen in the work of ), which greatly improves training and generation speed and creates an implicit separation between local voice agreement (first stage) and global consistency across measures (second stage).
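For illustration, below is a minimal PyTorch sketch of the kind of masked autoregressive convolution (PixelCNN-style) that can be applied to an image-like pianoroll (pitch x time). It is a generic instance of the technique, not CoCoNet and not the authors' two-stage pipeline; the layer sizes and the binary pianoroll are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """PixelCNN-style masked convolution: each output position may only
    depend on already-generated positions in raster-scan order."""
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        kH, kW = self.kernel_size
        mask = torch.ones_like(self.weight)
        mask[:, :, kH // 2, kW // 2 + (mask_type == "B"):] = 0  # right of center
        mask[:, :, kH // 2 + 1:] = 0                            # rows below center
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask
        return super().forward(x)

# Tiny autoregressive model over a binary pianoroll of shape (1, pitches, timesteps).
model = nn.Sequential(
    MaskedConv2d("A", 1, 32, kernel_size=7, padding=3), nn.ReLU(),
    MaskedConv2d("B", 32, 32, kernel_size=7, padding=3), nn.ReLU(),
    MaskedConv2d("B", 32, 1, kernel_size=1),   # logits for p(note on | past)
)
pianoroll = torch.randint(0, 2, (8, 1, 88, 64)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(pianoroll), pianoroll)
```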
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2949650786"
],
"abstract": [
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."
]
}
|
1811.07483
|
2901503500
|
Recently unpaired multi-domain image-to-image translation has attracted great interest and achieved remarkable progress, where a label vector is utilized to indicate multi-domain information. In this paper, we propose SAT (Show, Attend and Translate), a unified and explainable generative adversarial network equipped with visual attention that can perform unpaired image-to-image translation for multiple domains. By introducing an action vector, we treat the original translation tasks as problems of arithmetic addition and subtraction. Visual attention is applied to guarantee that only the regions relevant to the target domains are translated. Extensive experiments on a facial attribute dataset demonstrate the superiority of our approach, and the generated attention masks better explain what SAT attends to when translating images.
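The "action vector" described above can be read as simple label arithmetic. A tiny NumPy sketch follows; the attribute names and values are illustrative assumptions, not the paper's dataset.

```python
import numpy as np

# Multi-hot attribute labels, e.g. [black_hair, blond_hair, male, young].
source_labels = np.array([1, 0, 1, 0], dtype=np.float32)   # black-haired man
target_labels = np.array([0, 1, 1, 1], dtype=np.float32)   # blond, young

# The action vector encodes what to add (+1), remove (-1), or keep (0).
action = target_labels - source_labels                      # -> [-1, +1, 0, +1]

# The identity action (no change) is the zero vector, and composing two
# consecutive edits is ordinary addition of their action vectors.
```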
|
Generative Adversarial Nets (GANs) @cite_21 are a powerful method for training generative models of complicated data and have been proven effective in a wide variety of applications, including image generation @cite_18 @cite_22 @cite_19 , image-to-image translation @cite_4 @cite_16 @cite_1 , image super-resolution @cite_17 and so on. Typically a GAN model consists of a generator ( @math ) and a discriminator ( @math ) playing a two-player game, where @math tries to synthesize fake samples from random noise following a prior distribution, while @math learns to distinguish those from real ones. The two roles combat with each other and finally reach an equilibrium, where the generator is able to produce indistinguishable fake samples of high quality.
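A condensed PyTorch sketch of this two-player game is shown below. The multilayer-perceptron networks, 784-dimensional data, and hyperparameters are placeholders chosen for brevity, not a specific published architecture.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                          # real: (batch, 784)
    z = torch.randn(real.size(0), 64)
    fake = G(z)

    # D tries to tell real samples from generated ones.
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # G tries to make D believe its samples are real.
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```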
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_21",
"@cite_1",
"@cite_19",
"@cite_16",
"@cite_17"
],
"mid": [
"2963684088",
"",
"",
"2099471712",
"2768626898",
"2962879692",
"2962793481",
"2963470893"
],
"abstract": [
"Abstract: In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"",
"",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks.",
"Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method."
]
}
|
1811.07483
|
2901503500
|
Recently unpaired multi-domain image-to-image translation has attracted great interests and obtained remarkable progress, where a label vector is utilized to indicate multi-domain information. In this paper, we propose SAT (Show, Attend and Translate), an unified and explainable generative adversarial network equipped with visual attention that can perform unpaired image-to-image translation for multiple domains. By introducing an action vector, we treat the original translation tasks as problems of arithmetic addition and subtraction. Visual attention is applied to guarantee that only the regions relevant to the target domains are translated. Extensive experiments on a facial attribute dataset demonstrate the superiority of our approach and the generated attention masks better explain what SAT attends when translating images.
|
Several works are devoted to controlling certain details of the generated images by introducing additional supervision, which can be a multi-hot label vector indicating the presence of some target attributes @cite_9 @cite_20 , or a textual sentence describing the desired content to generate @cite_22 @cite_15 @cite_5 . The Auxiliary Classifier GAN (ACGAN) @cite_20 belongs to the former group, and the label vector conveys semantic attributes such as gender and hair color in a facial synthesis task. The discriminator @math is also enhanced with an auxiliary classifier that learns to infer the most appropriate label for any real or fake sample. Based on the label vector, we further propose the action vector, which is more intuitive and explainable for image-to-image translation.
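A minimal sketch of the ACGAN-style discriminator head described here is given below: one output for real/fake and an auxiliary head predicting attribute labels. The backbone, shapes, and loss weighting are illustrative assumptions rather than the original ACGAN configuration.

```python
import torch
import torch.nn as nn

class ACDiscriminator(nn.Module):
    """Discriminator with an auxiliary attribute classifier, ACGAN-style."""
    def __init__(self, n_attrs):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(128, 1)         # real vs. fake
        self.cls_head = nn.Linear(128, n_attrs)   # auxiliary attribute classifier

    def forward(self, x):
        h = self.backbone(x)
        return self.adv_head(h), self.cls_head(h)

D = ACDiscriminator(n_attrs=5)
real_imgs = torch.randn(4, 3, 64, 64)
real_attrs = torch.randint(0, 2, (4, 5)).float()
adv_logit, cls_logit = D(real_imgs)
d_loss = nn.functional.binary_cross_entropy_with_logits(adv_logit, torch.ones(4, 1)) \
       + nn.functional.binary_cross_entropy_with_logits(cls_logit, real_attrs)
```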
|
{
"cite_N": [
"@cite_22",
"@cite_9",
"@cite_5",
"@cite_15",
"@cite_20"
],
"mid": [
"",
"2125389028",
"2771088323",
"2766091292",
"2548275288"
],
"abstract": [
"",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details at different subregions of the image by paying attentions to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14 on the CUB dataset and 170.25 on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. It for the first time shows that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image.",
"Although Generative Adversarial Networks (GANs) have shown remarkable success in various tasks, they still face challenges in generating high quality images. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) aiming at generating high-resolution photo-realistic images. First, we propose a two-stage generative adversarial network architecture, StackGAN-v1, for text-to-image synthesis. The Stage-I GAN sketches the primitive shape and colors of the object based on given text description, yielding low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. Second, an advanced multi-stage generative adversarial network architecture, StackGAN-v2, is proposed for both conditional and unconditional generative tasks. Our StackGAN-v2 consists of multiple generators and discriminators in a tree-like structure; images at multiple scales corresponding to the same scene are generated from different branches of the tree. StackGAN-v2 shows more stable training behavior than StackGAN-v1 by jointly approximating multiple distributions. Extensive experiments demonstrate that the proposed stacked generative adversarial networks significantly outperform other state-of-the-art methods in generating photo-realistic images.",
"Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7 of the classes have samples exhibiting diversity comparable to real ImageNet data."
]
}
|
1811.07483
|
2901503500
|
Recently unpaired multi-domain image-to-image translation has attracted great interests and obtained remarkable progress, where a label vector is utilized to indicate multi-domain information. In this paper, we propose SAT (Show, Attend and Translate), an unified and explainable generative adversarial network equipped with visual attention that can perform unpaired image-to-image translation for multiple domains. By introducing an action vector, we treat the original translation tasks as problems of arithmetic addition and subtraction. Visual attention is applied to guarantee that only the regions relevant to the target domains are translated. Extensive experiments on a facial attribute dataset demonstrate the superiority of our approach and the generated attention masks better explain what SAT attends when translating images.
|
There is a large body of literature dedicated to image-to-image translation, with impressive progress. For example, pix2pix @cite_4 proposes a unified architecture for paired image-to-image translation based on cGAN @cite_9 and an L1 reconstruction loss. To alleviate the cost of obtaining paired data, the problem of unpaired image-to-image translation has also been widely explored @cite_16 @cite_3 @cite_10 , which mainly focuses on translating images between two domains. For the more challenging task of multi-domain image-to-image translation, StarGAN @cite_1 combines the ideas of @cite_16 with @cite_20 and can robustly translate a given image to multiple target domains with only a single generator. In this paper, we dive deeper into this issue and integrate several novel improvements.
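The cycle-consistency idea that StarGAN borrows from unpaired translation can be sketched in a few lines: translate to the target domain, translate back with the original label, and penalize the L1 reconstruction error. The toy conditional generator below (label vector tiled and concatenated to the image) is a placeholder for illustration only.

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(G, x, src_label, tgt_label):
    """StarGAN-style reconstruction: x -> target domain -> back to source."""
    fake = G(x, tgt_label)          # translate to the target domain
    rec = G(fake, src_label)        # translate back with the original label
    return nn.functional.l1_loss(rec, x)

class ToyGenerator(nn.Module):
    """Placeholder conditional generator conditioned on a domain label vector."""
    def __init__(self, n_domains):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + n_domains, 32, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, 1, 1), nn.Tanh(),
        )
    def forward(self, x, label):
        b, _, h, w = x.shape
        lab = label.view(b, -1, 1, 1).expand(b, label.size(1), h, w)
        return self.net(torch.cat([x, lab], dim=1))

G = ToyGenerator(n_domains=5)
x = torch.randn(2, 3, 64, 64)
src = torch.tensor([[1., 0., 0., 1., 0.], [0., 1., 0., 0., 1.]])
tgt = torch.tensor([[0., 1., 0., 1., 0.], [1., 0., 0., 0., 1.]])
loss = cycle_consistency_loss(G, x, src, tgt)
```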
|
{
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_16",
"@cite_10",
"@cite_20"
],
"mid": [
"",
"2125389028",
"2768626898",
"2598581049",
"2962793481",
"",
"2548275288"
],
"abstract": [
"",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks.",
"While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations when given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"",
"Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7 of the classes have samples exhibiting diversity comparable to real ImageNet data."
]
}
|
1811.07483
|
2901503500
|
Recently unpaired multi-domain image-to-image translation has attracted great interests and obtained remarkable progress, where a label vector is utilized to indicate multi-domain information. In this paper, we propose SAT (Show, Attend and Translate), an unified and explainable generative adversarial network equipped with visual attention that can perform unpaired image-to-image translation for multiple domains. By introducing an action vector, we treat the original translation tasks as problems of arithmetic addition and subtraction. Visual attention is applied to guarantee that only the regions relevant to the target domains are translated. Extensive experiments on a facial attribute dataset demonstrate the superiority of our approach and the generated attention masks better explain what SAT attends when translating images.
|
Attention-based models have demonstrated strong performance in a wide range of applications, including neural machine translation @cite_23 , image captioning @cite_11 , image generation @cite_8 and so on. @cite_0 utilizes visual attention to localize the domain-related regions for facial attribute editing, but can only handle a single attribute by translating between two domains. In this paper, we validate the use of attention to solve the more general problem of translating images across multiple domains.
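The core of attention-gated translation in this family of models can be written as a one-line blend: the generator produces a translated image and a soft mask, and only the masked regions are changed. The sketch below, including the sparsity penalty, is illustrative and not tied to any specific published loss weighting.

```python
import torch

def attention_gated_translation(content, mask, x):
    """Keep attribute-irrelevant regions of the input image unchanged.

    content: generator's translated image, shape (B, 3, H, W)
    mask:    soft attention mask in [0, 1], shape (B, 1, H, W)
    x:       original input image,          shape (B, 3, H, W)
    """
    return mask * content + (1.0 - mask) * x

def mask_sparsity_loss(mask, weight=0.1):
    # Encourages the network to edit as little of the image as possible.
    return weight * mask.abs().mean()
```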
|
{
"cite_N": [
"@cite_0",
"@cite_8",
"@cite_23",
"@cite_11"
],
"mid": [
"2895743108",
"2950893734",
"2963403868",
"2950178297"
],
"abstract": [
"Face attribute editing aims at editing the face image with the given attribute. Most existing works employ Generative Adversarial Network (GAN) to operate face attribute editing. However, these methods inevitably change the attribute-irrelevant regions, as shown in Fig. 1. Therefore, we introduce the spatial attention mechanism into GAN framework (referred to as SaGAN), to only alter the attribute-specific region and keep the rest unchanged. Our approach SaGAN consists of a generator and a discriminator. The generator contains an attribute manipulation network (AMN) to edit the face image, and a spatial attention network (SAN) to localize the attribute-specific region which restricts the alternation of AMN within this region. The discriminator endeavors to distinguish the generated images from the real ones, and classify the face attribute. Experiments demonstrate that our approach can achieve promising visual results, and keep those attribute-irrelevant regions unchanged. Besides, our approach can benefit the face recognition by data augmentation.",
"In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.",
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO."
]
}
|
1811.07268
|
2900725142
|
In many applications of deep learning, particularly those in image restoration, it is either very difficult, prohibitively expensive, or outright impossible to obtain paired training data precisely as in the real world. In such cases, one is forced to use synthesized paired data to train the deep convolutional neural network (DCNN). However, due to the unavoidable generalization error in statistical learning, the synthetically trained DCNN often performs poorly on real world data. To overcome this problem, we propose a new general training method that can compensate for, to a large extent, the generalization errors of synthetically trained DCNNs.
|
The problem of statistical differences between synthetic and real data has long been overlooked in the literature on learning-based image restoration. Not until recently have there been a few attempts to alleviate the problem in some specific applications, such as denoising and super-resolution. Most of these studies focus on improving the truthfulness of their training data synthesizers. For instance, to generate artificial noise for multi-image denoising, @cite_1 first converts images to linear color space with a real-data-calibrated inverse gamma correction and then generates noise with a distribution estimated from real images. For single image denoising, @cite_7 employs a sophisticated noise synthesizer that takes multiple factors into consideration, such as signal-dependent noise and the camera processing pipeline. In @cite_13 , the authors proposed a GAN-based neural network to generate realistic camera noise. The network is trained with high-quality images superimposed with noise patterns extracted from smooth regions of real noisy images, assuming that the high-frequency components of these smooth regions come only from sensor noise and that the noise is independent of the signal. These assumptions, however, are impractical and restrictive, making it difficult to extend the idea to other image restoration problems.
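A small NumPy sketch of the kind of signal-dependent noise synthesis mentioned above (heteroscedastic shot-plus-read noise applied in linear color space) is shown below. The noise parameters and the simple power-law gamma model are illustrative assumptions, not values calibrated from any particular camera or used in the cited works.

```python
import numpy as np

def synthesize_noisy(clean_srgb, shot=0.01, read=0.0005, gamma=2.2, rng=None):
    """Add signal-dependent noise to a clean image.

    clean_srgb: float array in [0, 1], gamma-encoded (sRGB-like).
    shot, read: parameters of the heteroscedastic Gaussian model
                var(x) = shot * x + read, applied in linear space.
    """
    rng = rng or np.random.default_rng()
    linear = np.clip(clean_srgb, 0, 1) ** gamma            # invert gamma correction
    var = shot * linear + read                              # signal-dependent variance
    noisy_linear = linear + rng.normal(0.0, np.sqrt(var))
    return np.clip(noisy_linear, 0, 1) ** (1.0 / gamma)     # back to display space

# noisy = synthesize_noisy(clean_image)   # yields a paired (noisy, clean) sample
```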
|
{
"cite_N": [
"@cite_13",
"@cite_1",
"@cite_7"
],
"mid": [
"2798278116",
"2963200935",
"2832157980"
],
"abstract": [
"In this paper, we consider a typical image blind denoising problem, which is to remove unknown noise from noisy images. As we all know, discriminative learning based methods, such as DnCNN, can achieve state-of-the-art denoising results, but they are not applicable to this problem due to the lack of paired training data. To tackle the barrier, we propose a novel two-step framework. First, a Generative Adversarial Network (GAN) is trained to estimate the noise distribution over the input noisy images and to generate noise samples. Second, the noise patches sampled from the first step are utilized to construct a paired training dataset, which is used, in turn, to train a deep Convolutional Neural Network (CNN) for denoising. Extensive experiments have been done to demonstrate the superiority of our approach in image blind denoising.",
"We present a technique for jointly denoising bursts of images taken from a handheld camera. In particular, we propose a convolutional neural network architecture for predicting spatially varying kernels that can both align and denoise frames, a synthetic data generation approach based on a realistic noise formation model, and an optimization guided by an annealed loss function to avoid undesirable local minima. Our model matches or outperforms the state-of-the-art across a wide range of noise levels on both real and synthetic data.",
"While deep convolutional neural networks (CNNs) have achieved impressive success in image denoising with additive white Gaussian noise (AWGN), their performance remains limited on real-world noisy photographs. The main reason is that their learned models are easy to overfit on the simplified AWGN model which deviates severely from the complicated real-world noise model. In order to improve the generalization ability of deep CNN denoisers, we suggest training a convolutional blind denoising network (CBDNet) with more realistic noise model and real-world noisy-clean image pairs. On the one hand, both signal-dependent noise and in-camera signal processing pipeline is considered to synthesize realistic noisy images. On the other hand, real-world noisy photographs and their nearly noise-free counterparts are also included to train our CBDNet. To further provide an interactive strategy to rectify denoising result conveniently, a noise estimation subnetwork with asymmetric learning to suppress under-estimation of noise level is embedded into CBDNet. Extensive experimental results on three datasets of real-world noisy photographs clearly demonstrate the superior performance of CBDNet over state-of-the-arts in terms of quantitative metrics and visual quality. The code has been made available at this https URL."
]
}
|
1811.07268
|
2900725142
|
In many applications of deep learning, particularly those in image restoration, it is either very difficult, prohibitively expensive, or outright impossible to obtain paired training data precisely as in the real world. In such cases, one is forced to use synthesized paired data to train the deep convolutional neural network (DCNN). However, due to the unavoidable generalization error in statistical learning, the synthetically trained DCNN often performs poorly on real world data. To overcome this problem, we propose a new general training method that can compensate for, to a large extent, the generalization errors of synthetically trained DCNNs.
|
The problem of an inaccurate degradation model is also recognized by the authors of @cite_3 in their study of data-driven super-resolution. To alleviate the problem, they proposed a relatively shallow neural network, called ZSSR, which is trained only with patches aggregated from the input image. However, ZSSR still relies on bicubic downsampling to synthesize the corresponding low-resolution patches; the problem of the unrealistic downsampler is left unaddressed.
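A compressed PyTorch sketch of the zero-shot idea follows: build training pairs by downscaling the test image itself and fit a small image-specific CNN at test time. The network size, step count, and the use of bilinear interpolation (instead of the paper's kernel handling, random crops, and augmentations) are simplifying assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def zero_shot_sr(lr_image, scale=2, steps=200):
    """lr_image: (1, 3, H, W) tensor; returns an upscaled (1, 3, sH, sW) tensor."""
    net = nn.Sequential(
        nn.Conv2d(3, 64, 3, 1, 1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, 1, 1), nn.ReLU(),
        nn.Conv2d(64, 3, 3, 1, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        # Training pair from the input image itself:
        # "HR" target = the input, "LR" input = a further-downscaled copy.
        lr_child = F.interpolate(lr_image, scale_factor=1 / scale,
                                 mode="bilinear", align_corners=False)
        lr_child_up = F.interpolate(lr_child, size=lr_image.shape[-2:],
                                    mode="bilinear", align_corners=False)
        loss = F.l1_loss(net(lr_child_up) + lr_child_up, lr_image)  # residual learning
        opt.zero_grad(); loss.backward(); opt.step()
    # Apply the image-specific network to the original input at the target scale.
    up = F.interpolate(lr_image, scale_factor=scale, mode="bilinear",
                       align_corners=False)
    return (net(up) + up).detach()
```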
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2779113761"
],
"abstract": [
"Deep Learning has led to a dramatic leap in Super-Resolution (SR) performance in the past few years. However, being supervised, these SR methods are restricted to specific training data, where the acquisition of the low-resolution (LR) images from their high-resolution (HR) counterparts is predetermined (e.g., bicubic downscaling), without any distracting artifacts (e.g., sensor noise, image compression, non-ideal PSF, etc). Real LR images, however, rarely obey these restrictions, resulting in poor SR results by SotA (State of the Art) methods. In this paper we introduce \"Zero-Shot\" SR, which exploits the power of Deep Learning, but does not rely on prior training. We exploit the internal recurrence of information inside a single image, and train a small image-specific CNN at test time, on examples extracted solely from the input image itself. As such, it can adapt itself to different settings per image. This allows to perform SR of real old photos, noisy images, biological data, and other images where the acquisition process is unknown or non-ideal. On such images, our method outperforms SotA CNN-based SR methods, as well as previous unsupervised SR methods. To the best of our knowledge, this is the first unsupervised CNN-based SR method."
]
}
|
1811.07240
|
2963827314
|
Recent character and phoneme-based parametric TTS systems using deep learning have shown strong performance in natural speech generation. However, the choice between character or phoneme input can create serious limitations for practical deployment, as direct control of pronunciation is crucial in certain cases. We demonstrate a simple method for combining multiple types of linguistic information in a single encoder, named representation mixing, enabling flexible choice between character, phoneme, or mixed representations during inference. Experiments and user studies on a public audiobook corpus show the efficacy of our approach.
|
Representation mixing is closely related to the "mixed-character-and-phoneme" setting described in Deep Voice 3 @cite_9 , with the primary difference being our addition of the mask embedding @math . We found that utilizing a mask embedding alongside the character and phoneme embeddings further improved quality, and it was an important piece of the text portion of the network. The focused user study in this paper also highlights the advantages of this kind of mixing independently of high-level architecture choices, since our larger system is markedly different from that used in Deep Voice 3.
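As a rough illustration of what such mixing could look like, the PyTorch sketch below gives each text position either a character id or a phoneme id plus a binary flag saying which vocabulary it came from; the flag indexes a separate mask embedding that is summed with the symbol embedding. The vocabulary sizes, dimensionality, and the summation scheme are illustrative assumptions, not the authors' exact layer.

```python
import torch
import torch.nn as nn

class MixedTextEmbedding(nn.Module):
    def __init__(self, n_chars, n_phones, dim=256):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, dim)
        self.phone_emb = nn.Embedding(n_phones, dim)
        self.mask_emb = nn.Embedding(2, dim)   # 0 = character, 1 = phoneme

    def forward(self, symbol_ids, is_phoneme):
        """symbol_ids: (B, T) ids into the char or phoneme vocabulary.
        is_phoneme:  (B, T) 0/1 flags marking which vocabulary each id uses."""
        sym = torch.where(is_phoneme.bool().unsqueeze(-1),
                          self.phone_emb(symbol_ids),
                          self.char_emb(symbol_ids))
        return sym + self.mask_emb(is_phoneme)

emb = MixedTextEmbedding(n_chars=60, n_phones=70)
ids = torch.tensor([[12, 45, 3, 7]])      # mixed character / phoneme ids
flags = torch.tensor([[0, 1, 1, 0]])      # which vocabulary each id comes from
out = emb(ids, flags)                     # (1, 4, 256)
```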
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2963691546"
],
"abstract": [
"We present Deep Voice 3, a fully-convolutional attention-based neural text-to-speech (TTS) system. Deep Voice 3 matches state-of-the-art neural speech synthesis systems in naturalness while training an order of magnitude faster. We scale Deep Voice 3 to dataset sizes unprecedented for TTS, training on more than eight hundred hours of audio from over two thousand speakers. In addition, we identify common error modes of attention-based speech synthesis networks, demonstrate how to mitigate them, and compare several different waveform synthesis methods. We also describe how to scale inference to ten million queries per day on a single GPU server."
]
}
|
1811.07497
|
2901051291
|
The geolocation of online information is an essential component in any geospatial application. While most of the previous work on geolocation has focused on Twitter, in this paper we quantify and compare the performance of text-based geolocation methods on social media data drawn from both Blogger and Twitter. We introduce a novel set of location-specific features that are both highly informative and easily interpretable, and show that we can achieve error rate reductions of up to 12.5% with respect to the best previously proposed geolocation features. We also show that despite posting longer text, Blogger users are significantly harder to geolocate than Twitter users. Additionally, we investigate the effect of training and testing on different media (cross-media predictions), or of combining multiple social media sources (multi-media predictions). Finally, we explore the geolocability of social media in relation to three user dimensions: state, gender, and industry.
|
Previous work on geolocation can be grouped into three broad categories. The first type relies on network infrastructure and uses geolocation databases to map the IP addresses of users to their geographic locations @cite_17 @cite_22 . Another set of approaches makes use of social network relations and geolocates social media users based on their friend or follower relations @cite_25 @cite_41 @cite_3 @cite_2 @cite_36 ; the intuition here is that frequent interactions tend to occur between users in close geographic proximity. Finally, the third category of methods, also adopted in this paper, relies on the textual content generated by social media users. In this section, we review this latter type of approach.
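For the third, text-based family of approaches, a minimal scikit-learn sketch is given below: geolocation is cast as text classification over location labels. The toy texts, labels, and the tf-idf plus logistic-regression model are illustrative assumptions, not the location-specific features proposed in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy data: user text -> location label (placeholder examples).
texts = ["great tailgate before the longhorns game",
         "stuck on the subway again, late for work in midtown",
         "beach day at the gulf coast, shrimp for dinner"]
labels = ["texas", "new_york", "texas"]

geolocator = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),      # word and bigram features
    LogisticRegression(max_iter=1000),
)
geolocator.fit(texts, labels)
print(geolocator.predict(["brunch near times square"]))
```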
|
{
"cite_N": [
"@cite_22",
"@cite_41",
"@cite_36",
"@cite_3",
"@cite_2",
"@cite_25",
"@cite_17"
],
"mid": [
"2130850880",
"1495515969",
"2296291818",
"2069090820",
"2057817918",
"2168346693",
"2111520480"
],
"abstract": [
"The most widely used technique for IP geolocation consists in building a database to keep the mapping between IP blocks and a geographic location. Several databases are available and are frequently used by many services and web sites in the Internet. Contrary to widespread belief, geolocation databases are far from being as reliable as they claim. In this paper, we conduct a comparison of several current geolocation databases -both commercial and free- to have an insight of the limitations in their usability. First, the vast majority of entries in the databases refer only to a few popular countries (e.g., U.S.). This creates an imbalance in the representation of countries across the IP blocks of the databases. Second, these entries do not reflect the original allocation of IP blocks, nor BGP announcements. In addition, we quantify the accuracy of geolocation databases on a large European ISP based on ground truth information. This is the first study using a ground truth showing that the overly fine granularity of database entries makes their accuracy worse, not better. Geolocation databases can claim country-level accuracy, but certainly not city-level.",
"User interaction in social networks, such as Twitter and Facebook, is increasingly becoming a source of useful information on daily events. The online monitoring of short messages posted in such networks often provides insight on the repercussions of events of several different natures, such as (in the recent past) the earthquake and tsunami in Japan, the royal wedding in Britain and the death of Osama bin Laden. Studying the origins and the propagation of messages regarding such topics helps social scientists in their quest for improving the current understanding of human relationships and interactions. However, the actual location associated to a tweet or to a Facebook message can be rather uncertain. Some tweets are posted with an automatically determined location (from an IP address), or with a user-informed location, both in text form, usually the name of a city. We observe that most Twitter users opt not to publish their location, and many do so in a cryptic way, mentioning non-existing places or providing less specific place names (such as “Brazil”). In this article, we focus on the problem of enriching the location of tweets using alternative data, particularly the social relationships between Twitter users. Our strategy involves recursively expanding the network of locatable users using following-follower relationships. Verification is achieved using cross-validation techniques, in which the location of a fraction of the users with known locations is used to determine the location of the others, thus allowing us to compare the actual location to the inferred one and verify the quality of the estimation. With an estimate of the precision of the method, it can then be applied to locationless tweets. Our intention is to infer the location of as many users as possible, in order to increase the number of tweets that can be used in spatial analyses of social phenomena. The article demonstrates the feasibility of our approach using a dataset comprising tweets that mention keywords related to dengue fever, increasing by 45 the number of locatable tweets.",
"Geolocated social media data provides a powerful source of information about place and regional human behavior. Because little social media data is geolocation-annotated, inference techniques serve an essential role for increasing the volume of annotated data. One major class of inference approaches has relied on the social network of Twitter, where the locations of a user's friends serve as evidence for that user's location. While many such inference techniques have been recently proposed, we actually know little about their relative performance, with the amount of ground truth data varying between 5 and 100 of the network, the size of the social network varying by four orders of magnitude, and little standardization in evaluation metrics. We conduct a systematic comparative analysis of nine state-of-the-art network-based methods for performing geolocation inference at the global scale, controlling for the source of ground truth data, dataset size, and temporal recency in test data. Furthermore, we identify a comprehensive set of evaluation metrics that clarify performance differences. Our analysis identifies a large performance disparity between that reported in the literature and that seen in real-world conditions. To aid reproducibility and future comparison, all implementations have been released in an open source geoinference package.",
"Location plays an essential role in our lives, bridging our online and offline worlds. This paper explores the interplay between people's location, interactions, and their social ties within a large real-world dataset. We present and evaluate Flap, a system that solves two intimately related tasks: link and location prediction in online social networks. For link prediction, Flap infers social ties by considering patterns in friendship formation, the content of people's messages, and user location. We show that while each component is a weak predictor of friendship alone, combining them results in a strong model, accurately identifying the majority of friendships. For location prediction, Flap implements a scalable probabilistic model of human mobility, where we treat users with known GPS positions as noisy sensors of the location of their friends. We explore supervised and unsupervised learning scenarios, and focus on the efficiency of both learning and inference. We evaluate Flap on a large sample of highly active users from two distinct geographical areas and show that it (1) reconstructs the entire friendship graph with high accuracy even when no edges are given; and (2) infers people's fine-grained location, even when they keep their data private and we can only access the location of their friends. Our models significantly outperform current comparable approaches to either task.",
"This paper presents an approach to geolocating users of online social networks, based solely on their 'friendship' connections. We observe that users interact more regularly with those closer to themselves and hypothesise that, in many cases, a person's social network is sufficient to reveal their location. The geolocation problem is formulated as a classification task, where the most likely city for a user without an explicit location is chosen amongst the known locations of their social ties. Our method uses an SVM classifier and a number of features that reflect different aspects and characteristics of Twitter user networks. The SVM classifier is trained and evaluated on a dataset of Twitter users with known locations. Our method outperforms a state-of-the-art method for geolocating users based on their social ties.",
"Geography and social relationships are inextricably intertwined; the people we interact with on a daily basis almost always live near us. As people spend more time online, data regarding these two dimensions -- geography and social relationships -- are becoming increasingly precise, allowing us to build reliable models to describe their interaction. These models have important implications in the design of location-based services, security intrusion detection, and social media supporting local communities. Using user-supplied address data and the network of associations between members of the Facebook social network, we can directly observe and measure the relationship between geography and friendship. Using these measurements, we introduce an algorithm that predicts the location of an individual from a sparse set of located users with performance that exceeds IP-based geolocation. This algorithm is efficient and scalable, and could be run on a network containing hundreds of millions of users.",
"The ability to pinpoint the geographic location of IP hosts is compelling for applications such as on-line advertising and network attack diagnosis. While prior methods can accurately identify the location of hosts in some regions of the Internet, they produce erroneous results when the delay or topology measurement on which they are based is limited. The hypothesis of our work is that the accuracy of IP geolocation can be improved through the creation of a flexible analytic framework that accommodates different types of geolocation information. In this paper, we describe a new framework for IP geolocation that reduces to a machine-learning classification problem. Our methodology considers a set of lightweight measurements from a set of known monitors to a target, and then classifies the location of that target based on the most probable geographic region given probability densities learned from a training set. For this study, we employ a Naive Bayes framework that has low computational complexity and enables additional environmental information to be easily added to enhance the classification process. To demonstrate the feasibility and accuracy of our approach, we test IP geolocation on over 16,000 routers given ping measurements from 78 monitors with known geographic placement. Our results show that the simple application of our method improves geolocation accuracy for over 96 of the nodes identified in our data set, with on average accuracy 70 miles closer to the true geographic location versus prior constraint-based geolocation. These results highlight the promise of our method and indicate how future expansion of the classifier can lead to further improvements in geolocation accuracy."
]
}
|